Mirror of https://github.com/samiyev/puaros.git
Synced 2025-12-27 23:06:54 +05:00

## Compare commits (26 commits)
| SHA1 |
|---|
| b953956181 |
| af094eb54a |
| 656571860e |
| a6b4c69b75 |
| 1d6c2a0e00 |
| db8a97202e |
| 0b1cc5a79a |
| 8d400c9517 |
| 9fb9beb311 |
| 5a43fbf116 |
| 669e764718 |
| 0b9b8564bf |
| 0da25d9046 |
| 7fea9a8fdb |
| b5f54fc3f8 |
| 8a2c6fdc0e |
| 2479bde9a8 |
| f6bb65f2f1 |
| 8916ce9eab |
| 24f54d4b57 |
| d038f90bd2 |
| e79874e420 |
| 1663d191ee |
| 7b4cb60f13 |
| 33d763c41b |
| 3cd97c6197 |
README.md (17 lines changed)
@@ -4,7 +4,7 @@ A TypeScript monorepo for code quality and analysis tools.

## Packages

-- **[@puaros/guardian](./packages/guardian)** - Code quality guardian for vibe coders and enterprise teams. Detects hardcoded values, circular dependencies, and architecture violations. Perfect for AI-assisted development and enforcing Clean Architecture at scale.
+- **[@puaros/guardian](./packages/guardian)** - Research-backed code quality guardian for vibe coders and enterprise teams. Detects hardcoded values, secrets, circular dependencies, architecture violations, and anemic domain models. Every rule is based on academic research, industry standards (OWASP, SonarQube), and authoritative books (Martin Fowler, Uncle Bob, Eric Evans). Perfect for AI-assisted development and enforcing Clean Architecture at scale.

## Prerequisites
@@ -147,6 +147,21 @@ The `@puaros/guardian` package is a code quality analyzer for both individual de

- **CLI Tool**: Command-line interface with the `guardian` command
- **CI/CD Integration**: JSON/Markdown output for automation pipelines

### 📚 Research-Backed Rules

Guardian's detection rules are based on decades of software engineering research and industry best practices:

- **Academic Research**: MIT Course 6.031, ScienceDirect peer-reviewed studies (2020-2023), IEEE papers on Technical Debt
- **Industry Standards**: SonarQube (400,000+ organizations), Google/Airbnb/Microsoft style guides, OWASP security standards
- **Authoritative Books**:
    - Clean Architecture (Robert C. Martin, 2017)
    - Implementing Domain-Driven Design (Vaughn Vernon, 2013)
    - Domain-Driven Design (Eric Evans, 2003)
    - Patterns of Enterprise Application Architecture (Martin Fowler, 2002)
- **Security Standards**: OWASP Secrets Management, GitHub Secret Scanning (350+ patterns)

**Every rule links to research citations** - see [Why Guardian's Rules Matter](./packages/guardian/docs/WHY.md) and [Full Research Citations](./packages/guardian/docs/RESEARCH_CITATIONS.md) for the complete academic papers, books, and expert references.

### Use Cases

**For Vibe Coders:**
@@ -5,6 +5,325 @@ All notable changes to @samiyev/guardian will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.9.1] - 2025-11-26

### Changed

- 🔄 **Refactored hardcode detector** - Migrated from regex-based to AST-based analysis:
    - Replaced regex pattern matching with tree-sitter Abstract Syntax Tree traversal
    - Improved accuracy with AST node context awareness (exports, types, tests)
    - Reduced false positives with better constant and context detection
    - Added duplicate value tracking across files for better insights
    - Implemented boolean literal detection (magic-boolean type)
    - Added value type classification (email, url, ip_address, api_key, uuid, version, color, etc.)
    - New modular architecture with specialized analyzers:
        - `AstTreeTraverser` - AST walking with "almost constants" detection
        - `DuplicateValueTracker` - Cross-file duplicate tracking
        - `AstContextChecker` - Node context analysis (reduced nesting depth)
        - `AstNumberAnalyzer`, `AstStringAnalyzer`, `AstBooleanAnalyzer` - Specialized analyzers
        - `ValuePatternMatcher` - Value type pattern detection
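To make the regex-to-AST migration concrete, here is a toy sketch of context-aware magic-number detection over a simplified AST. The node shapes and the `findMagicNumbers` function are illustrative assumptions, not Guardian's actual implementation (which walks real tree-sitter trees):

```typescript
// Toy sketch of context-aware magic-number detection over a simplified AST.
// Node shapes and names here are illustrative, not Guardian's real API.
type AstNode = {
    type: "number_literal" | "const_declaration" | "call_expression" | "program";
    value?: number;
    exported?: boolean;
    children?: AstNode[];
};

const ALLOWED = new Set([0, 1, -1]); // "semantic" numbers that are never flagged

function findMagicNumbers(node: AstNode, insideExportedConst = false): number[] {
    if (node.type === "number_literal") {
        // Context awareness: literals assigned to exported constants are fine
        return insideExportedConst || ALLOWED.has(node.value!) ? [] : [node.value!];
    }
    const nextContext =
        insideExportedConst || (node.type === "const_declaration" && node.exported === true);
    return (node.children ?? []).flatMap((c) => findMagicNumbers(c, nextContext));
}

// app.listen(3000) next to `export const DEFAULT_TIMEOUT = 5000`
const program: AstNode = {
    type: "program",
    children: [
        { type: "call_expression", children: [{ type: "number_literal", value: 3000 }] },
        { type: "const_declaration", exported: true, children: [{ type: "number_literal", value: 5000 }] },
    ],
};

export const magic = findMagicNumbers(program); // only 3000 is flagged
```

The key idea matches the bullets above: the walker carries context down the tree, so a literal that sits under an exported `const` declaration is never reported as a magic number.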
### Removed

- 🗑️ **Deprecated regex components** - Removed old regex-based detection strategies:
    - `BraceTracker.ts` - Replaced by AST context checking
    - `ExportConstantAnalyzer.ts` - Replaced by AstContextChecker
    - `MagicNumberMatcher.ts` - Replaced by AstNumberAnalyzer
    - `MagicStringMatcher.ts` - Replaced by AstStringAnalyzer

### Quality

- ✅ **All tests pass** - 629/629 tests passing (added 51 new tests)
- ✅ **Test coverage** - 87.97% statements, 96.75% functions
- ✅ **Build successful** - TypeScript compilation with no errors
- ✅ **Linter** - 0 errors, 5 acceptable warnings (complexity, params)
- ✅ **Code size** - Net reduction: -40 lines (more features, less code)

## [0.9.0] - 2025-11-26

### Added

- 🏛️ **Anemic Model Detection** - NEW feature to detect anemic domain models lacking business logic:
    - Detects entities with only getters/setters (violates DDD principles)
    - Identifies classes with public setters (breaks encapsulation)
    - Analyzes method-to-property ratio to find data-heavy, logic-light classes
    - Provides detailed suggestions: add business methods, move logic from services, encapsulate invariants
    - New `AnemicModelDetector` infrastructure component
    - New `AnemicModelViolation` value object with rich example fixes
    - New `IAnemicModelDetector` domain interface
    - Integrated into CLI with detailed violation reports
    - 12 comprehensive tests for anemic model detection

- 📦 **New shared constants** - Centralized constants for better code maintainability:
    - `CLASS_KEYWORDS` - TypeScript class and method keywords (constructor, public, private, protected)
    - `EXAMPLE_CODE_CONSTANTS` - Documentation example code strings (ORDER_STATUS_PENDING, ORDER_STATUS_APPROVED, CANNOT_APPROVE_ERROR)
    - `ANEMIC_MODEL_MESSAGES` - 8 suggestion messages for fixing anemic models

- 📚 **Example files** - Added DDD examples demonstrating anemic vs rich domain models:
    - `examples/bad/domain/entities/anemic-model-only-getters-setters.ts`
    - `examples/bad/domain/entities/anemic-model-public-setters.ts`
    - `examples/good-architecture/domain/entities/Customer.ts`
    - `examples/good-architecture/domain/entities/Order.ts`
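The contrast these example files demonstrate looks roughly like this (a condensed, hypothetical sketch, not the actual file contents):

```typescript
// Condensed sketch of the anemic-vs-rich contrast the example files show;
// the real files under examples/ are more complete.

// ❌ Anemic: only getters/setters, no business logic (what the detector flags)
export class AnemicOrder {
    private status = "pending";
    getStatus(): string { return this.status; }
    setStatus(status: string): void { this.status = status; } // public setter breaks encapsulation
}

// ✅ Rich: the invariant lives inside the entity itself
export class Order {
    private status: "pending" | "approved" = "pending";
    get currentStatus() { return this.status; }
    approve(): void {
        if (this.status !== "pending") {
            throw new Error("Only pending orders can be approved");
        }
        this.status = "approved";
    }
}

export function demo(): string {
    const order = new Order();
    order.approve();
    return order.currentStatus;
}
```

The detector flags the first shape (public setter, no business methods) and accepts the second, where the rule "only pending orders can be approved" is enforced by the entity rather than by an external service.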
### Changed

- ♻️ **Refactored hardcoded values** - Extracted all remaining hardcoded values to centralized constants:
    - Updated `AnemicModelDetector.ts` to use `CLASS_KEYWORDS` constants
    - Updated `AnemicModelViolation.ts` to use `EXAMPLE_CODE_CONSTANTS` for example fix strings
    - Replaced local constants with shared constants from `shared/constants`
    - Improved code maintainability and consistency

- 🎯 **Enhanced violation detection pipeline** - Added anemic model detection to `ExecuteDetection.ts`
- 📊 **Updated API** - Added anemic model violations to response DTO
- 🔧 **CLI improvements** - Added anemic model section to output formatting

### Quality

- ✅ **Guardian self-check** - 0 issues (was 5) - 100% clean codebase
- ✅ **All tests pass** - 578/578 tests passing (added 12 new tests)
- ✅ **Build successful** - TypeScript compilation with no errors
- ✅ **Linter clean** - 0 errors, 3 acceptable warnings (complexity, params)
- ✅ **Format verified** - All files properly formatted with 4-space indentation

## [0.8.1] - 2025-11-25

### Fixed

- 🧹 **Code quality improvements** - Fixed all 63 hardcoded value issues detected by Guardian self-check:
    - Fixed 1 CRITICAL: Removed hardcoded Slack token from documentation examples
    - Fixed 1 HIGH: Removed aws-sdk framework leak from domain layer examples
    - Fixed 4 MEDIUM: Renamed pipeline files to follow verb-noun convention
    - Fixed 57 LOW: Extracted all magic strings to reusable constants

### Added

- 📦 **New constants file** - `domain/constants/SecretExamples.ts`:
    - 32 secret keyword constants (AWS, GitHub, NPM, SSH, Slack, etc.)
    - 15 secret type name constants
    - 7 example secret values for documentation
    - Regex patterns and encoding constants

### Changed

- ♻️ **Refactored pipeline naming** - Updated use case files to follow naming conventions:
    - `DetectionPipeline.ts` → `ExecuteDetection.ts`
    - `FileCollectionStep.ts` → `CollectFiles.ts`
    - `ParsingStep.ts` → `ParseSourceFiles.ts`
    - `ResultAggregator.ts` → `AggregateResults.ts`
    - Added `Aggregate`, `Collect`, `Parse` to `USE_CASE_VERBS` list
- 🔧 **Updated 3 core files to use constants**:
    - `SecretViolation.ts`: All secret examples use constants, `getSeverity()` returns `typeof SEVERITY_LEVELS.CRITICAL`
    - `SecretDetector.ts`: All secret keywords use constants
    - `MagicStringMatcher.ts`: Regex patterns extracted to constants
- 📝 **Test updates** - Updated 2 tests to match new example fix messages

### Quality

- ✅ **Guardian self-check** - 0 issues (was 63) - 100% clean codebase
- ✅ **All tests pass** - 566/566 tests passing
- ✅ **Build successful** - TypeScript compilation with no errors
- ✅ **Linter clean** - 0 errors, 2 acceptable warnings (complexity, params)
- ✅ **Format verified** - All files properly formatted with 4-space indentation

## [0.8.0] - 2025-11-25

### Added

- 🔐 **Secret Detection** - NEW CRITICAL security feature using industry-standard Secretlint:
    - Detects 350+ types of hardcoded secrets (AWS keys, GitHub tokens, NPM tokens, SSH keys, API keys, etc.)
    - All secrets marked as **CRITICAL severity** for immediate attention
    - Context-aware remediation suggestions for each secret type
    - Integrated seamlessly with existing detectors
    - New `SecretDetector` infrastructure component using `@secretlint/node`
    - New `SecretViolation` value object with rich examples
    - New `ISecretDetector` domain interface
    - CLI output with "🔐 Found X hardcoded secrets - CRITICAL SECURITY RISK" section
    - Added dependencies: `@secretlint/node`, `@secretlint/core`, `@secretlint/types`, `@secretlint/secretlint-rule-preset-recommend`
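Secretlint supplies the detection patterns; conceptually, a single rule reduces to a pattern, a CRITICAL severity tag, and a remediation hint. A minimal stdlib-only sketch of that shape (the interface and function names below are assumptions, not Secretlint's or Guardian's API):

```typescript
// Minimal sketch of one secret-detection rule: AWS access key IDs.
// Secretlint's real preset covers 350+ patterns; this only shows the shape.
interface SecretViolation {
    type: string;
    match: string;
    severity: "CRITICAL";
    suggestion: string;
}

// AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics
const AWS_ACCESS_KEY_ID = /\bAKIA[0-9A-Z]{16}\b/g;

export function scanForAwsKeys(source: string): SecretViolation[] {
    return [...source.matchAll(AWS_ACCESS_KEY_ID)].map((m) => ({
        type: "aws-access-key-id",
        match: m[0],
        severity: "CRITICAL",
        suggestion: "Move to an environment variable or a secret manager",
    }));
}

// Uses the well-known AWS documentation example key, not a real credential
export const found = scanForAwsKeys('const key = "AKIAIOSFODNN7EXAMPLE"');
```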
### Changed

- 🔄 **Pipeline async support** - `DetectionPipeline.execute()` now async for secret detection
- 📊 **Test suite expanded** - Added 47 new tests (23 for SecretViolation, 24 for SecretDetector)
    - Total: 566 tests (was 519), 100% pass rate
    - Coverage: 93.3% statements, 83.74% branches, 98.17% functions
    - SecretViolation: 100% coverage
- 📝 **Documentation updated**:
    - README.md: Added Secret Detection section with examples
    - ROADMAP.md: Marked v0.8.0 as released
    - Updated package description to mention secrets detection

### Security

- 🛡️ **Prevents credentials in version control** - catches AWS, GitHub, NPM, SSH, Slack, GCP secrets before commit
- ⚠️ **CRITICAL violations** - all hardcoded secrets immediately flagged with highest severity
- 💡 **Smart remediation** - provides specific guidance per secret type (environment variables, secret managers, etc.)

## [0.7.9] - 2025-11-25

### Changed

- ♻️ **Refactored large detectors** - significantly improved maintainability and reduced complexity:
    - **AggregateBoundaryDetector**: Reduced from 381 to 162 lines (57% reduction)
    - **HardcodeDetector**: Reduced from 459 to 89 lines (81% reduction)
    - **RepositoryPatternDetector**: Reduced from 479 to 106 lines (78% reduction)
    - Extracted 13 focused strategy classes for single responsibilities
    - All 519 tests pass, no breaking changes
    - Zero ESLint errors (1 pre-existing warning unrelated to refactoring)
    - Improved code organization and separation of concerns

### Added

- 🏗️ **13 new strategy classes** for focused responsibilities:
    - `FolderRegistry` - Centralized DDD folder name management
    - `AggregatePathAnalyzer` - Path parsing and aggregate extraction
    - `ImportValidator` - Import validation logic
    - `BraceTracker` - Brace and bracket counting
    - `ConstantsFileChecker` - Constants file detection
    - `ExportConstantAnalyzer` - Export const analysis
    - `MagicNumberMatcher` - Magic number detection
    - `MagicStringMatcher` - Magic string detection
    - `OrmTypeMatcher` - ORM type matching
    - `MethodNameValidator` - Repository method validation
    - `RepositoryFileAnalyzer` - File role detection
    - `RepositoryViolationDetector` - Violation detection logic
- Enhanced testability with smaller, focused classes

### Improved

- 📊 **Code quality metrics**:
    - Reduced cyclomatic complexity across all three detectors
    - Better separation of concerns with strategy pattern
    - More maintainable and extensible codebase
    - Easier to add new detection patterns
    - Improved code readability and self-documentation

## [0.7.8] - 2025-11-25

### Added

- 🧪 **Comprehensive E2E test suite** - full pipeline and CLI integration tests:
    - Added `tests/e2e/AnalyzeProject.e2e.test.ts` - 21 tests for full analysis pipeline
    - Added `tests/e2e/CLI.e2e.test.ts` - 22 tests for CLI command execution and output
    - Added `tests/e2e/JSONOutput.e2e.test.ts` - 19 tests for JSON structure validation
    - Total of 62 new E2E tests covering all major use cases
    - Tests validate `examples/good-architecture/` returns zero violations
    - Tests validate `examples/bad/` detects specific violations
    - CLI smoke tests with process spawning and output verification
    - JSON serialization and structure validation for all violation types
    - Total test count increased from 457 to 519 tests
    - **100% test pass rate achieved** 🎉 (519/519 tests passing)

### Changed

- 🔧 **Improved test robustness**:
    - E2E tests handle exit codes gracefully (CLI exits with non-zero when violations found)
    - Added helper function `runCLI()` for consistent error handling
    - Made validation tests conditional for better reliability
    - Fixed metrics structure assertions to match actual implementation
    - Enhanced error handling in CLI process spawning tests
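The changelog mentions a `runCLI()` helper; test helpers of this kind typically spawn the process, capture its output, and return the exit code instead of throwing on non-zero (since the CLI deliberately exits non-zero when violations are found). A hedched sketch of that shape, which is an assumption rather than the actual test code, demonstrated against plain `node` instead of the `guardian` binary:

```typescript
// Sketch of a runCLI()-style test helper: spawn a command synchronously,
// capture stdout/stderr, and report the exit code without throwing.
import { spawnSync } from "node:child_process";

interface CliResult {
    stdout: string;
    stderr: string;
    exitCode: number;
}

export function runCLI(command: string, args: string[]): CliResult {
    const result = spawnSync(command, args, { encoding: "utf-8" });
    return {
        stdout: result.stdout ?? "",
        stderr: result.stderr ?? "",
        exitCode: result.status ?? 1, // treat a killed/missing process as failure
    };
}

// Demo: a process that prints and exits non-zero, as guardian does on violations
export const demoResult = runCLI("node", ["-e", "console.log('ok'); process.exit(3)"]);
```

Assertions can then check `demoResult.exitCode` explicitly instead of relying on the spawn call throwing.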
### Fixed

- 🐛 **Test reliability improvements**:
    - Fixed CLI tests expecting zero exit codes when violations present
    - Updated metrics assertions to use correct field names (totalFiles, totalFunctions, totalImports, layerDistribution)
    - Corrected violation structure property names in E2E tests
    - Made bad example tests conditional to handle empty results gracefully

## [0.7.7] - 2025-11-25

### Added

- 🧪 **Comprehensive test coverage for under-tested domain files**:
    - Added 31 tests for `SourceFile.ts` - coverage improved from 46% to 100%
    - Added 31 tests for `ProjectPath.ts` - coverage improved from 50% to 100%
    - Added 18 tests for `ValueObject.ts` - coverage improved from 25% to 100%
    - Added 32 tests for `RepositoryViolation.ts` - coverage improved from 58% to 92.68%
    - Total test count increased from 345 to 457 tests
    - Overall coverage improved to 95.4% statements, 86.25% branches, 96.68% functions
    - All tests pass with no breaking changes

### Changed

- 📊 **Improved code quality and maintainability**:
    - Enhanced test suite for core domain entities and value objects
    - Better coverage of edge cases and error handling
    - Increased confidence in domain layer correctness

## [0.7.6] - 2025-11-25

### Changed

- ♻️ **Refactored CLI module** - improved maintainability and separation of concerns:
    - Split 484-line `cli/index.ts` into focused modules
    - Created `cli/groupers/ViolationGrouper.ts` for severity grouping and filtering (29 lines)
    - Created `cli/formatters/OutputFormatter.ts` for violation formatting (190 lines)
    - Created `cli/formatters/StatisticsFormatter.ts` for metrics and summary (58 lines)
    - Reduced `cli/index.ts` from 484 to 260 lines (46% reduction)
    - All 345 tests pass, CLI output identical to before
    - No breaking changes
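Severity grouping of the kind `ViolationGrouper` performs can be sketched in a few lines (the types and names here are assumed for illustration, not the package's actual API):

```typescript
// Sketch of ViolationGrouper-style severity grouping: bucket a flat list of
// violations by severity so the formatter can print them in priority order.
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

interface Violation {
    message: string;
    severity: Severity;
}

export function groupBySeverity(violations: Violation[]): Record<Severity, Violation[]> {
    const groups: Record<Severity, Violation[]> = { CRITICAL: [], HIGH: [], MEDIUM: [], LOW: [] };
    for (const v of violations) {
        groups[v.severity].push(v);
    }
    return groups;
}

export const grouped = groupBySeverity([
    { message: "hardcoded secret", severity: "CRITICAL" },
    { message: "magic number 3000", severity: "LOW" },
]);
```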
## [0.7.5] - 2025-11-25

### Changed

- ♻️ **Refactored AnalyzeProject use-case** - improved maintainability and testability:
    - Split 615-line God Use-Case into focused pipeline components
    - Created `FileCollectionStep.ts` for file scanning and basic parsing (66 lines)
    - Created `ParsingStep.ts` for AST parsing and dependency graph construction (51 lines)
    - Created `DetectionPipeline.ts` for running all 7 detectors (371 lines)
    - Created `ResultAggregator.ts` for building response DTO (81 lines)
    - Reduced `AnalyzeProject.ts` from 615 to 245 lines (60% reduction)
    - All 345 tests pass, no breaking changes
    - Improved separation of concerns and single responsibility
    - Easier to test and modify individual pipeline steps

### Added

- 🤖 **AI Agent Instructions in CLI help** - dedicated section for AI coding assistants:
    - Step-by-step workflow: scan → fix → verify → expand scope
    - Recommended commands for each step (`--only-critical --limit 5`)
    - Output format description for easy parsing
    - Priority order guidance (CRITICAL → HIGH → MEDIUM → LOW)
    - Helps Claude, Copilot, Cursor, and other AI agents immediately take action

Run `guardian --help` to see the new "AI AGENT INSTRUCTIONS" section.

## [0.7.4] - 2025-11-25

### Fixed

- 🐛 **TypeScript-aware hardcode detection** - reduces false positives by 35.7%:
    - Ignore strings in TypeScript union types (`type Status = 'active' | 'inactive'`)
    - Ignore strings in interface property types (`interface { mode: 'development' | 'production' }`)
    - Ignore strings in type assertions (`as 'read' | 'write'`)
    - Ignore strings in typeof checks (`typeof x === 'string'`)
    - Ignore strings in Symbol() calls for DI tokens (`Symbol('LOGGER')`)
    - Ignore strings in dynamic import() calls (`import('../../module.js')`)
    - Exclude tokens.ts/tokens.js files completely (DI container files)
    - Tested on a real-world TypeScript project: 985 → 633 issues (352 false positives eliminated)
- ✅ **Added 13 new tests** for TypeScript type context filtering
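A crude line-level approximation of these filters, for illustration only: Guardian works on parsed syntax rather than lines, and the patterns below are assumptions, not the shipped rules.

```typescript
// Toy line-level approximation of "is this string literal in a type context?".
// The real detector inspects AST nodes; these regexes only illustrate the idea.
const TYPE_CONTEXT_PATTERNS = [
    /^\s*(export\s+)?type\s+\w+\s*=/, // type Status = 'active' | 'inactive'
    /\bas\s+'/,                       // as 'read' | 'write'
    /\btypeof\s+\w+\s*===?\s*'/,      // typeof x === 'string'
    /\bSymbol\(\s*'/,                 // Symbol('LOGGER') DI tokens
    /\bimport\(\s*'/,                 // dynamic import('../../module.js')
];

export function isTypeContext(line: string): boolean {
    return TYPE_CONTEXT_PATTERNS.some((re) => re.test(line));
}
```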
## [0.7.3] - 2025-11-25

### Fixed

- 🐛 **False positive: repository importing its own aggregate:**
    - Added `isInternalBoundedContextImport()` method to detect internal imports
    - Imports like `../aggregates/Entity` from `repositories/Repo` are now allowed
    - This correctly allows `ICodeProjectRepository` to import `CodeProject` from the same bounded context
    - Cross-aggregate imports (with multiple `../..`) are still detected as violations
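The internal-vs-cross-aggregate distinction can be approximated by counting `..` segments in the import path, which is roughly what the bullets describe. This is a toy heuristic, not the actual `isInternalBoundedContextImport()` implementation:

```typescript
// Toy version of the internal-vs-cross-aggregate distinction: one "../" from a
// sibling folder (e.g. repositories/ importing ../aggregates/) stays inside the
// bounded context, while deeper "../../" paths cross into another aggregate.
// Guardian's real check also considers folder roles, not just depth.
export function classifyImport(importPath: string): "internal" | "cross-aggregate" | "other" {
    const ups = importPath.split("/").filter((seg) => seg === "..").length;
    if (ups === 0) return "other";
    return ups === 1 ? "internal" : "cross-aggregate";
}
```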
## [0.7.2] - 2025-11-25

### Fixed

- 🐛 **False positive: `errors` folder detected as aggregate:**
    - Added `errors` and `exceptions` to `DDD_FOLDER_NAMES` constants
    - Added to `nonAggregateFolderNames` — these folders are no longer detected as aggregates
    - Added to `allowedFolderNames` — imports from `errors`/`exceptions` folders are allowed across aggregates
    - Fixes issue where `domain/code-analysis/errors/` was incorrectly identified as a separate aggregate named "errors"

## [0.7.1] - 2025-11-25

### Fixed

packages/guardian/COMPARISON.md (new file, 895 lines)

@@ -0,0 +1,895 @@
# Guardian vs Competitors: Comprehensive Comparison 🔍

**Last Updated:** 2025-01-24

This document provides an in-depth comparison of Guardian against major competitors in the static analysis and architecture enforcement space.

---

## 🎯 TL;DR - When to Use Each Tool

| Your Need | Recommended Tool | Why |
|-----------|------------------|-----|
| **TypeScript + AI coding + DDD** | ✅ **Guardian** | Only tool built for AI-assisted DDD development |
| **Multi-language + Security** | SonarQube | 35+ languages, deep security scanning |
| **Dependency visualization** | dependency-cruiser + Guardian | Best visualization + architecture rules |
| **Java architecture** | ArchUnit | Java-specific with unit test integration |
| **TypeScript complexity metrics** | FTA + Guardian | Fast metrics + architecture enforcement |
| **Python architecture** | import-linter + Guardian (future) | Python layer enforcement |

---

## 📊 Feature Comparison Matrix

### Core Capabilities

| Feature | Guardian | SonarQube | dependency-cruiser | ArchUnit | FTA | ESLint |
|---------|----------|-----------|-------------------|----------|-----|--------|
| **Languages** | JS/TS | 35+ | JS/TS/Vue | Java | TS/JS | JS/TS |
| **Setup Complexity** | ⚡ Simple | 🐌 Complex | ⚡ Simple | ⚙️ Medium | ⚡ Simple | ⚡ Simple |
| **Price** | 🆓 Free | 💰 Freemium | 🆓 Free | 🆓 Free | 🆓 Free | 🆓 Free |
| **GitHub Stars** | - | - | 6.2k | 3.1k | - | 24k+ |

### Detection Capabilities

| Feature | Guardian | SonarQube | dependency-cruiser | ArchUnit | FTA | ESLint |
|---------|----------|-----------|-------------------|----------|-----|--------|
| **Hardcode Detection** | ✅✅ (with AI tips) | ⚠️ (secrets only) | ❌ | ❌ | ❌ | ❌ |
| **Circular Dependencies** | ✅ | ✅ | ✅✅ (visual) | ✅ | ❌ | ✅ |
| **Architecture Layers** | ✅✅ (DDD/Clean) | ⚠️ (generic) | ✅ (via rules) | ✅✅ | ❌ | ⚠️ |
| **Framework Leak** | ✅✅ UNIQUE | ❌ | ⚠️ (via rules) | ⚠️ | ❌ | ❌ |
| **Entity Exposure** | ✅✅ UNIQUE | ❌ | ❌ | ❌ | ❌ | ❌ |
| **Naming Conventions** | ✅ (DDD-specific) | ✅ (generic) | ❌ | ✅ | ❌ | ✅ |
| **Repository Pattern** | ✅✅ UNIQUE | ❌ | ❌ | ⚠️ | ❌ | ❌ |
| **Dependency Direction** | ✅✅ | ❌ | ✅ (via rules) | ✅ | ❌ | ❌ |
| **Security (SAST)** | ❌ | ✅✅ | ❌ | ❌ | ❌ | ⚠️ |
| **Dependency Risks (SCA)** | ❌ | ✅✅ | ❌ | ❌ | ❌ | ❌ |
| **Complexity Metrics** | ❌ | ✅ | ❌ | ❌ | ✅✅ | ⚠️ |
| **Code Duplication** | ❌ | ✅✅ | ❌ | ❌ | ❌ | ❌ |

### Developer Experience

| Feature | Guardian | SonarQube | dependency-cruiser | ArchUnit | FTA | ESLint |
|---------|----------|-----------|-------------------|----------|-----|--------|
| **CLI** | ✅ | ✅ | ✅ | ❌ (lib) | ✅ | ✅ |
| **Configuration** | ✅ (v0.6+) | ✅✅ | ✅ | ✅ | ⚠️ | ✅✅ |
| **Visualization** | ✅ (v0.7+) | ✅✅ (dashboard) | ✅✅ (graphs) | ❌ | ⚠️ | ❌ |
| **Auto-Fix** | ✅✅ (v0.9+) UNIQUE | ❌ | ❌ | ❌ | ❌ | ✅ |
| **AI Workflow** | ✅✅ UNIQUE | ❌ | ❌ | ❌ | ❌ | ❌ |
| **CI/CD Integration** | ✅ (v0.8+) | ✅✅ | ✅ | ✅ | ⚠️ | ✅✅ |
| **IDE Extensions** | 🔜 (v1.0+) | ✅ | ❌ | ❌ | ⚠️ | ✅✅ |
| **Metrics Dashboard** | ✅ (v0.10+) | ✅✅ | ⚠️ | ❌ | ✅ | ❌ |

**Legend:**

- ✅✅ = Excellent support
- ✅ = Good support
- ⚠️ = Limited/partial support
- ❌ = Not available
- 🔜 = Planned/Coming soon

---
## 🔥 Guardian's Unique Advantages

Guardian has **7 unique features** that no competitor offers:

### 1. ✨ Hardcode Detection with AI Suggestions

**Guardian:**
```typescript
// Detected:
app.listen(3000)

// Suggestion:
💡 Extract to: DEFAULT_PORT
📁 Location: infrastructure/config/constants.ts
🤖 AI Prompt: "Extract port 3000 to DEFAULT_PORT constant in config"
```

**Competitors:**
- SonarQube: Only detects hardcoded secrets (API keys), not magic numbers
- Others: No hardcode detection at all

### 2. 🔌 Framework Leak Detection

**Guardian:**
```typescript
// domain/entities/User.ts
import { Request } from 'express' // ❌ VIOLATION!

// Detected: Framework leak in domain layer
// Suggestion: Use dependency injection via interfaces
```

**Competitors:**
- ArchUnit: Can check via custom rules (not built-in)
- Others: Not available

### 3. 🎭 Entity Exposure Detection

**Guardian:**
```typescript
// ❌ Bad: Domain entity exposed
async getUser(): Promise<User> { }

// ✅ Good: Use DTOs
async getUser(): Promise<UserDto> { }

// Guardian detects this automatically!
```

**Competitors:**
- None have this built-in

### 4. 📚 Repository Pattern Validation

**Guardian:**
```typescript
// Detects ORM types in domain interfaces:
interface IUserRepository {
    findOne(query: { where: ... }) // ❌ Prisma-specific!
}

// Detects concrete repos in use cases:
constructor(private prisma: PrismaClient) {} // ❌ VIOLATION!
```

**Competitors:**
- None validate the repository pattern

### 5. 🤖 AI-First Workflow

**Guardian:**
```bash
# Generate AI-friendly fix prompt
guardian check ./src --format ai-prompt > fix.txt

# Feed to Claude/GPT:
"Fix these Guardian violations: $(cat fix.txt)"

# AI fixes → Run Guardian again → Ship it!
```

**Competitors:**
- Generic output, not optimized for AI assistants

### 6. 🛠️ Auto-Fix for Architecture (v0.9+)

**Guardian:**
```bash
# Automatically extract hardcodes to constants
guardian fix ./src --auto

# Rename files to match conventions
guardian fix naming ./src --auto

# Interactive mode
guardian fix ./src --interactive
```

**Competitors:**
- ESLint has `--fix` but only for syntax
- None fix architecture violations

### 7. 🎯 DDD Pattern Detection (30+)

**Guardian Roadmap:**
- Aggregate boundaries
- Anemic domain model
- Domain events
- Value Object immutability
- CQRS violations
- Saga pattern
- Ubiquitous language
- And 23+ more DDD patterns!

**Competitors:**
- Generic architecture checks only
- No DDD-specific patterns

---
## 📈 Detailed Tool Comparisons

## vs SonarQube

### When SonarQube Wins

✅ **Multi-language projects**
```
Java + Python + TypeScript → Use SonarQube
TypeScript only → Consider Guardian
```

✅ **Security-critical applications**
```
SonarQube: SAST, SCA, OWASP Top 10, CVE detection
Guardian: Architecture only (security coming later)
```

✅ **Large enterprise with compliance**
```
SonarQube: Compliance reports, audit trails, enterprise support
Guardian: Lightweight, developer-focused
```

✅ **Existing SonarQube investment**
```
Already using SonarQube? Add Guardian for DDD-specific checks
```

### When Guardian Wins

✅ **TypeScript + AI coding workflow**
```typescript
// AI generates code → Guardian checks → AI fixes → Ship
// 10x faster than manual review
```

✅ **Clean Architecture / DDD enforcement**
```typescript
// Guardian understands DDD out-of-the-box
// SonarQube requires custom rules
```

✅ **Fast setup (< 5 minutes)**
```bash
npm install -g @samiyev/guardian
guardian check ./src
# Done! (vs hours of SonarQube setup)
```

✅ **Hardcode detection with context**
```typescript
// Guardian knows the difference between:
const port = 3000 // ❌ Should be constant
const increment = 1 // ✅ Allowed (semantic)
```

### Side-by-Side Example

**Scenario:** Detect hardcoded port in Express app

```typescript
// src/server.ts
app.listen(3000)
```

**SonarQube:**
```
❌ No violation (not a secret)
```

**Guardian:**
```
✅ Hardcode detected:
   Type: magic-number
   Value: 3000
   💡 Suggested: DEFAULT_PORT
   📁 Location: infrastructure/config/constants.ts
   🤖 AI Fix: "Extract 3000 to DEFAULT_PORT constant"
```

---
## vs dependency-cruiser

### When dependency-cruiser Wins

✅ **Visualization priority**
```bash
# Best-in-class dependency graphs
depcruise src --output-type dot | dot -T svg > graph.svg
```

✅ **Custom dependency rules**
```javascript
// Highly flexible rule system
forbidden: [
    {
        from: { path: '^src/domain' },
        to: { path: '^src/infrastructure' }
    }
]
```

✅ **Multi-framework support**
```
JS, TS, Vue, Svelte, JSX, CoffeeScript
```

### When Guardian Wins

✅ **DDD/Clean Architecture out-of-the-box**
```typescript
// Guardian knows these patterns:
// - Domain/Application/Infrastructure layers
// - Entity exposure
// - Repository pattern
// - Framework leaks

// dependency-cruiser: Write custom rules for each
```

✅ **Hardcode detection**
```typescript
// Guardian finds:
setTimeout(() => {}, 5000) // Magic number
const url = "http://..." // Magic string

// dependency-cruiser: Doesn't check this
```

✅ **AI workflow integration**
```bash
guardian check ./src --format ai-prompt
# Optimized for Claude/GPT

depcruise src
# Generic output
```

### Complementary Usage

**Best approach:** Use both!

```bash
# Guardian for architecture + hardcode
guardian check ./src

# dependency-cruiser for visualization
depcruise src --output-type dot | dot -T svg > architecture.svg
```

**Coming in Guardian v0.7.0:**
```bash
# Guardian will have built-in visualization!
guardian visualize ./src --output architecture.svg
```

---
## vs ArchUnit (Java)

### When ArchUnit Wins

✅ **Java projects**
```java
// ArchUnit is built for Java
@ArchTest
void domainShouldNotDependOnInfrastructure(JavaClasses classes) {
    noClasses().that().resideInPackage("..domain..")
        .should().dependOnClassesThat().resideInPackage("..infrastructure..")
        .check(classes);
}
```

✅ **Test-based architecture validation**
```java
// Architecture rules = unit tests
// Runs in your CI with other tests
```

✅ **Mature Java ecosystem**
```
Spring Boot, Hibernate, JPA patterns
Built-in rules for layered/onion architecture
```

### When Guardian Wins

✅ **TypeScript/JavaScript projects**
```typescript
// Guardian is built for TypeScript
// ArchUnit doesn't support TS
```

✅ **AI coding workflow**
```bash
# Guardian → AI → Fix → Ship
# ArchUnit is test-based (slower feedback)
```

✅ **Zero-config DDD**
```bash
guardian check ./src
# Works immediately with DDD structure

# ArchUnit requires writing tests for each rule
```

### Philosophical Difference

**ArchUnit:**
```java
// Architecture = Tests
// You write explicit tests for each rule
```

**Guardian:**
```bash
# Architecture = Linter
# Pre-configured DDD rules out-of-the-box
```

---

## vs FTA (Fast TypeScript Analyzer)

### When FTA Wins

✅ **Complexity metrics focus**

```bash
# FTA provides:
# - Cyclomatic complexity
# - Halstead metrics
# - Line counts
# - Technical debt estimation
```

✅ **Performance (Rust-based)**

```
FTA: 1600 files/second
Guardian: ~500 files/second (Node.js)
```

✅ **Simplicity**

```bash
# FTA does one thing well: metrics
fta src/
```

### When Guardian Wins

✅ **Architecture enforcement**

```typescript
// Guardian checks:
// - Layer violations
// - Framework leaks
// - Circular dependencies
// - Repository pattern

// FTA: Only measures complexity, no architecture checks
```

✅ **Hardcode detection**

```typescript
// Guardian finds magic numbers/strings
// FTA doesn't check this
```

✅ **AI workflow**

```bash
# Guardian provides actionable suggestions
# FTA provides metrics only
```

### Complementary Usage

**Best approach:** Use both!

```bash
# Guardian for architecture
guardian check ./src

# FTA for complexity metrics
fta src/ --threshold complexity:15
```

**Coming in Guardian v0.10.0:**

```bash
# Guardian will include complexity metrics!
guardian metrics ./src --include-complexity
```

---

## vs ESLint + Plugins

### When ESLint Wins

✅ **General code quality**

```javascript
// Best for:
// - Code style
// - Common bugs
// - TypeScript errors
// - React/Vue specific rules
```

✅ **Huge ecosystem**

```bash
# 10,000+ plugins
eslint-plugin-react
eslint-plugin-vue
eslint-plugin-security
# ...and many more
```

✅ **Auto-fix for syntax**

```bash
eslint --fix
# Fixes semicolons, quotes, formatting, etc.
```

### When Guardian Wins

✅ **Architecture enforcement**

```typescript
// ESLint doesn't understand:
// - Clean Architecture layers
// - DDD patterns
// - Framework leaks
// - Entity exposure

// Guardian does!
```

✅ **Hardcode detection with context**

```typescript
// ESLint plugins check patterns
// Guardian understands semantic context
```

✅ **AI workflow integration**

```bash
# Guardian optimized for AI assistants
# ESLint generic output
```

### Complementary Usage

**Best approach:** Use both!

```bash
# ESLint for code quality
eslint src/

# Guardian for architecture
guardian check ./src
```

**Many teams run both in CI:**

```yaml
# .github/workflows/quality.yml
- name: ESLint
  run: npm run lint

- name: Guardian
  run: guardian check ./src --fail-on error
```

---

## vs import-linter (Python)

### When import-linter Wins

✅ **Python projects**

```ini
# .importlinter
[importlinter]
root_package = myproject

[importlinter:contract:1]
name = Layers contract
type = layers
# Highest layer first: infrastructure may import application, which may import domain
layers =
    myproject.infrastructure
    myproject.application
    myproject.domain
```

✅ **Mature Python ecosystem**

```python
# Django, Flask, FastAPI integration
```

### When Guardian Wins

✅ **TypeScript/JavaScript**

```typescript
// Guardian is for TS/JS
// import-linter is Python-only
```

✅ **More than import checking**

```typescript
// Guardian checks:
// - Hardcode
// - Entity exposure
// - Repository pattern
// - Framework leaks

// import-linter: Only imports
```

### Future Integration

**Guardian v2.0+ (Planned):**

```bash
# Multi-language support coming
guardian check ./python-src --language python
guardian check ./ts-src --language typescript
```

---

## 💰 Cost Comparison

| Tool | Free Tier | Paid Plans | Enterprise |
|------|-----------|------------|------------|
| **Guardian** | ✅ MIT License (100% free) | - | - |
| **SonarQube** | ✅ Community Edition | Developer: $150/yr | Custom pricing |
| **dependency-cruiser** | ✅ MIT License | - | - |
| **ArchUnit** | ✅ Apache 2.0 | - | - |
| **FTA** | ✅ Open Source | - | - |
| **ESLint** | ✅ MIT License | - | - |

**Guardian will always be free and open-source (MIT License)**

---

## 🚀 Setup Time Comparison

| Tool | Setup Time | Configuration Required |
|------|------------|------------------------|
| **Guardian** | ⚡ 2 minutes | ❌ Zero-config (DDD) |
| **SonarQube** | 🐌 2-4 hours | ✅ Extensive setup |
| **dependency-cruiser** | ⚡ 5 minutes | ⚠️ Rules configuration |
| **ArchUnit** | ⚙️ 30 minutes | ✅ Write test rules |
| **FTA** | ⚡ 1 minute | ❌ Zero-config |
| **ESLint** | ⚡ 10 minutes | ⚠️ Plugin configuration |

**Guardian Setup:**

```bash
# 1. Install (30 seconds)
npm install -g @samiyev/guardian

# 2. Run (90 seconds)
cd your-project
guardian check ./src

# Done! 🎉
```

---

## 📊 Real-World Performance

### Analysis Speed (1000 TypeScript files)

| Tool | Time | Notes |
|------|------|-------|
| **FTA** | ~0.6s | ⚡ Fastest (Rust) |
| **Guardian** | ~2s | Fast (Node.js, tree-sitter) |
| **dependency-cruiser** | ~3s | Fast |
| **ESLint** | ~5s | Depends on rules |
| **SonarQube** | ~15s | Slower (comprehensive) |

### Memory Usage

| Tool | RAM | Notes |
|------|-----|-------|
| **Guardian** | ~150MB | Efficient |
| **FTA** | ~50MB | Minimal (Rust) |
| **dependency-cruiser** | ~200MB | Moderate |
| **ESLint** | ~300MB | Varies by plugins |
| **SonarQube** | ~2GB | Heavy (server) |

---

## 🎯 Use Case Recommendations

### Scenario 1: TypeScript Startup Using AI Coding

**Best Stack:**

```bash
✅ Guardian (architecture + hardcode)
✅ ESLint (code quality)
✅ Prettier (formatting)
```

**Why:**
- Fast setup
- AI workflow integration
- Zero-config DDD
- Catches AI mistakes (hardcode)

### Scenario 2: Enterprise Multi-Language

**Best Stack:**

```bash
✅ SonarQube (security + multi-language)
✅ Guardian (TypeScript DDD specialization)
✅ ArchUnit (Java architecture)
```

**Why:**
- Comprehensive coverage
- Security scanning
- Language-specific depth

### Scenario 3: Clean Architecture Refactoring

**Best Stack:**

```bash
✅ Guardian (architecture enforcement)
✅ dependency-cruiser (visualization)
✅ Guardian v0.9+ (auto-fix)
```

**Why:**
- Visualize current state
- Detect violations
- Auto-fix issues

### Scenario 4: Python + TypeScript Monorepo

**Best Stack:**

```bash
✅ Guardian (TypeScript)
✅ import-linter (Python)
✅ SonarQube (security, both languages)
```

**Why:**
- Language-specific depth
- Unified security scanning

---

## 🏆 Winner by Category

| Category | Winner | Runner-up |
|----------|--------|-----------|
| **TypeScript Architecture** | 🥇 Guardian | dependency-cruiser |
| **Multi-Language** | 🥇 SonarQube | - |
| **Visualization** | 🥇 dependency-cruiser | SonarQube |
| **AI Workflow** | 🥇 Guardian | - (no competitor) |
| **Security** | 🥇 SonarQube | - |
| **Hardcode Detection** | 🥇 Guardian | - (no competitor) |
| **DDD Patterns** | 🥇 Guardian | ArchUnit (Java) |
| **Auto-Fix** | 🥇 ESLint (syntax) | Guardian v0.9+ (architecture) |
| **Complexity Metrics** | 🥇 FTA | SonarQube |
| **Setup Speed** | 🥇 FTA | Guardian |

---

## 🔮 Future Roadmap Comparison

### Guardian v1.0.0 (Q4 2026)
- ✅ Configuration & presets (v0.6)
- ✅ Visualization (v0.7)
- ✅ CI/CD kit (v0.8)
- ✅ Auto-fix (v0.9) **UNIQUE!**
- ✅ Metrics dashboard (v0.10)
- ✅ 30+ DDD patterns (v0.11-v0.32)
- ✅ VS Code extension
- ✅ JetBrains plugin

### Competitors
- **SonarQube**: Incremental improvements, AI-powered fixes (experimental)
- **dependency-cruiser**: Stable, no major changes planned
- **ArchUnit**: Java focus, incremental improvements
- **FTA**: Adding more metrics
- **ESLint**: Flat config, performance improvements

**Guardian's Advantage:** Only tool actively expanding DDD/architecture detection

---

## 💡 Migration Guides

### From SonarQube to Guardian

**When to migrate:**
- TypeScript-only project
- Want faster iteration
- Need DDD-specific checks
- Don't need multi-language/security

**How to migrate:**

```bash
# Keep SonarQube for security
# Add Guardian for architecture
npm install -g @samiyev/guardian
guardian check ./src

# CI/CD: Run both
# SonarQube (security) → Guardian (architecture)
```

### From ESLint-only to ESLint + Guardian

**Why add Guardian:**

```typescript
// ESLint checks syntax
// Guardian checks architecture
```

**How to add:**

```bash
# Keep ESLint
npm run lint

# Add Guardian
guardian check ./src

# Both in CI:
npm run lint && guardian check ./src
```

### From dependency-cruiser to Guardian

**Why migrate:**
- Need more than circular deps
- Want hardcode detection
- Need DDD patterns
- Want auto-fix (v0.9+)

**How to migrate:**

```bash
# Replace:
depcruise src --config .dependency-cruiser.js

# With:
guardian check ./src

# Or keep both:
# dependency-cruiser → visualization
# Guardian → architecture + hardcode
```

---

## 📚 Additional Resources

### Guardian
- [GitHub Repository](https://github.com/samiyev/puaros)
- [Documentation](https://puaros.ailabs.uz)
- [npm Package](https://www.npmjs.com/package/@samiyev/guardian)

### Competitors
- [SonarQube](https://www.sonarsource.com/products/sonarqube/)
- [dependency-cruiser](https://github.com/sverweij/dependency-cruiser)
- [ArchUnit](https://www.archunit.org/)
- [FTA](https://ftaproject.dev/)
- [import-linter](https://import-linter.readthedocs.io/)

---

## 🤝 Community & Support

| Tool | Community | Support |
|------|-----------|---------|
| **Guardian** | GitHub Issues | Community (planned: Discord) |
| **SonarQube** | Community Forum | Commercial support available |
| **dependency-cruiser** | GitHub Issues | Community |
| **ArchUnit** | GitHub Issues | Community |
| **ESLint** | Discord, Twitter | Community |

---

**Guardian's Position in the Market:**

> **"The AI-First Architecture Guardian for TypeScript teams practicing DDD/Clean Architecture"**

**Guardian is NOT:**
- ❌ A replacement for SonarQube's security scanning
- ❌ A replacement for ESLint's code quality checks
- ❌ A multi-language tool (yet)

**Guardian IS:**
- ✅ The best tool for TypeScript DDD/Clean Architecture
- ✅ The only tool optimized for AI-assisted coding
- ✅ The only tool with intelligent hardcode detection
- ✅ The only tool with auto-fix for architecture (v0.9+)

---

**Questions? Feedback?**

- 📧 Email: fozilbek.samiyev@gmail.com
- 🐙 GitHub: https://github.com/samiyev/puaros/issues
- 🌐 Website: https://puaros.ailabs.uz
packages/guardian/COMPETITIVE_ANALYSIS_SUMMARY.md (new file, 323 lines)
@@ -0,0 +1,323 @@

# Competitive Analysis & Roadmap - Summary

**Date:** 2025-01-24
**Prepared for:** Puaros Guardian
**Documents Created:**
1. ROADMAP_NEW.md - Updated roadmap with reprioritized features
2. COMPARISON.md - Comprehensive competitor comparison
3. docs/v0.6.0-CONFIGURATION-SPEC.md - Configuration feature specification

---

## 🎯 Executive Summary

Guardian has **5 unique features** that no competitor offers, positioning it as the **only tool built for AI-assisted DDD/Clean Architecture development**. However, to achieve enterprise adoption, we need to first match competitors' baseline features (configuration, visualization, CI/CD, metrics).

### Current Position (v0.5.1)

**Strengths:**
- ✅ Hardcode detection with AI suggestions (UNIQUE)
- ✅ Framework leak detection (UNIQUE)
- ✅ Entity exposure detection (UNIQUE)
- ✅ Repository pattern validation (UNIQUE)
- ✅ DDD-specific naming conventions (UNIQUE)

**Gaps:**
- ❌ No configuration file support
- ❌ No visualization/graphs
- ❌ No ready-to-use CI/CD templates
- ❌ No metrics/quality score
- ❌ No auto-fix capabilities

---

## 📊 Competitive Landscape

### Main Competitors

| Tool | Strength | Weakness | Market Position |
|------|----------|----------|-----------------|
| **SonarQube** | Multi-language + Security | Complex setup, expensive | Enterprise leader |
| **dependency-cruiser** | Best visualization | No hardcode/DDD | Dependency specialist |
| **ArchUnit** | Java architecture | Java-only | Java ecosystem |
| **FTA** | Fast metrics (Rust) | No architecture checks | Metrics tool |
| **ESLint** | Huge ecosystem | No architecture | Code quality standard |

### Guardian's Unique Position

> **"The AI-First Architecture Guardian for TypeScript teams practicing DDD/Clean Architecture"**

**Market Gap Filled:**
- No tool optimizes for AI-assisted coding workflow
- No tool deeply understands DDD patterns (except ArchUnit for Java)
- No tool combines hardcode detection + architecture enforcement

---

## 🚀 Strategic Roadmap

### Phase 1: Market Parity (v0.6-v0.10) - Q1-Q2 2026

**Goal:** Match competitors' baseline features

| Version | Feature | Why Critical | Competitor |
|---------|---------|--------------|------------|
| v0.6.0 | Configuration & Presets | All competitors have this | ESLint, SonarQube |
| v0.7.0 | Visualization | dependency-cruiser's main advantage | dependency-cruiser |
| v0.8.0 | CI/CD Integration Kit | Enterprise requirement | SonarQube |
| v0.9.0 | **Auto-Fix (UNIQUE!)** | Game-changer, no one has this | None |
| v0.10.0 | Metrics & Quality Score | Enterprise adoption | SonarQube |

**After v0.10.0:** Guardian competes with SonarQube/dependency-cruiser on features

### Phase 2: DDD Specialization (v0.11-v0.32) - Q3-Q4 2026

**Goal:** Deepen DDD/Clean Architecture expertise

30+ DDD pattern detectors:
- Aggregate boundaries
- Anemic domain model
- Domain events
- Value Object immutability
- CQRS validation
- Saga pattern
- Anti-Corruption Layer
- Ubiquitous Language
- And 22+ more...

**After Phase 2:** Guardian = THE tool for DDD/Clean Architecture

### Phase 3: Enterprise Ecosystem (v1.0+) - Q4 2026+

**Goal:** Full enterprise platform

- VS Code extension
- JetBrains plugin
- Web dashboard
- Team analytics
- Multi-language support (Python, C#, Java)

---

## 🔥 Critical Changes to Current Roadmap

### Old Roadmap Issues

❌ **v0.6.0 was "Aggregate Boundaries"** → Too early for DDD-specific features
❌ **v0.12.0 was "Configuration"** → Way too late! Critical feature postponed
❌ **Missing:** Visualization, CI/CD, Auto-fix, Metrics
❌ **Too many consecutive DDD features** → Need market parity first

### New Roadmap Priorities

✅ **v0.6.0 = Configuration (MOVED UP)** → Critical for adoption
✅ **v0.7.0 = Visualization (NEW)** → Compete with dependency-cruiser
✅ **v0.8.0 = CI/CD Kit (NEW)** → Enterprise requirement
✅ **v0.9.0 = Auto-Fix (NEW, UNIQUE!)** → Game-changing differentiator
✅ **v0.10.0 = Metrics (NEW)** → Compete with SonarQube
✅ **v0.11+ = DDD Features** → After market parity

---

## 💡 Key Recommendations

### Immediate Actions (Next 2 Weeks)

1. **Review & Approve New Roadmap**
   - Read ROADMAP_NEW.md
   - Approve priority changes
   - Create GitHub milestones

2. **Start v0.6.0 Configuration**
   - Read v0.6.0-CONFIGURATION-SPEC.md
   - Create implementation tasks
   - Start Phase 1 development

3. **Update Documentation**
   - Update main README.md with comparison table
   - Add "Guardian vs Competitors" section
   - Link to COMPARISON.md

### Next 3 Months (Q1 2026)

4. **Complete v0.6.0 (Configuration)**
   - 8-week timeline
   - Beta test with community
   - Stable release

5. **Start v0.7.0 (Visualization)**
   - Design graph system
   - Choose visualization library
   - Prototype SVG/Mermaid output

6. **Marketing & Positioning**
   - Create comparison blog post
   - Submit to Product Hunt
   - Share on Reddit/HackerNews

### Next 6 Months (Q1-Q2 2026)

7. **Complete Market Parity (v0.6-v0.10)**
   - Configuration ✅
   - Visualization ✅
   - CI/CD Integration ✅
   - Auto-Fix ✅ (UNIQUE!)
   - Metrics ✅

8. **Community Growth**
   - 1000+ GitHub stars
   - 100+ weekly npm installs
   - 10+ enterprise adopters

---

## 📈 Success Metrics

### v0.10.0 (Market Parity Achieved) - June 2026

**Feature Parity:**
- ✅ Configuration support (compete with ESLint)
- ✅ Visualization (compete with dependency-cruiser)
- ✅ CI/CD integration (compete with SonarQube)
- ✅ Auto-fix (UNIQUE! Game-changer)
- ✅ Metrics dashboard (compete with SonarQube)

**Adoption Metrics:**
- 1,000+ GitHub stars
- 100+ weekly npm installs
- 50+ projects with guardian.config.js
- 10+ enterprise teams

### v1.0.0 (Enterprise Ready) - December 2026

**Feature Completeness:**
- ✅ All baseline features
- ✅ 30+ DDD pattern detectors
- ✅ IDE extensions (VS Code, JetBrains)
- ✅ Web dashboard
- ✅ Team analytics

**Market Position:**
- #1 tool for TypeScript DDD/Clean Architecture
- Top 3 in static analysis for TypeScript
- Known in enterprise as "the AI code reviewer"

---

## 🎯 Positioning Strategy

### Target Segments

1. **Primary:** TypeScript developers using AI coding assistants (GitHub Copilot, Cursor, Windsurf, Claude, ChatGPT, Cline)
2. **Secondary:** Teams implementing DDD/Clean Architecture
3. **Tertiary:** Startups/scale-ups needing fast quality enforcement

### Messaging

**Tagline:** "The AI-First Architecture Guardian"

**Key Messages:**
- "Catches the #1 AI mistake: hardcoded values everywhere"
- "Enforces Clean Architecture that AI often ignores"
- "Closes the AI feedback loop for cleaner code"
- "The only tool with auto-fix for architecture" (v0.9+)

### Differentiation

**Guardian ≠ SonarQube:** We're specialized for TypeScript DDD, not multi-language security
**Guardian ≠ dependency-cruiser:** We detect patterns, not just dependencies
**Guardian ≠ ESLint:** We enforce architecture, not syntax

**Guardian = ESLint for architecture + AI code reviewer**

---

## 📚 Document Guide

### ROADMAP_NEW.md

**Purpose:** Complete technical roadmap with reprioritized features
**Audience:** Development team, contributors
**Key Sections:**
- Current state analysis
- Phase 1: Market Parity (v0.6-v0.10)
- Phase 2: DDD Specialization (v0.11-v0.32)
- Phase 3: Enterprise Ecosystem (v1.0+)

### COMPARISON.md

**Purpose:** Marketing-focused comparison with all competitors
**Audience:** Users, potential adopters, marketing
**Key Sections:**
- Feature comparison matrix
- Detailed tool comparisons
- When to use each tool
- Use case recommendations
- Winner by category

### v0.6.0-CONFIGURATION-SPEC.md

**Purpose:** Technical specification for Configuration feature
**Audience:** Development team
**Key Sections:**
- Configuration file format
- Preset system design
- Rule configuration
- Implementation plan (8 weeks)
- Testing strategy
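
As a rough illustration of the kind of file the spec describes, a `guardian.config.js` might look like this. This sketch is purely hypothetical: the field names below are illustrative, and the actual schema is whatever v0.6.0-CONFIGURATION-SPEC.md defines.

```javascript
// Hypothetical guardian.config.js sketch; field names are illustrative only.
module.exports = {
    preset: "ddd",
    rules: {
        "hardcode": "error",
        "circular-dependencies": "error",
        "framework-leak": "warn"
    },
    ignore: ["**/*.spec.ts", "dist/**"]
}
```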

- Testing strategy

---

## 🎬 Next Steps

### Week 1-2: Planning & Kickoff

- [ ] Review all three documents
- [ ] Approve new roadmap priorities
- [ ] Create GitHub milestones for v0.6.0-v0.10.0
- [ ] Create implementation issues for v0.6.0
- [ ] Update main README.md with comparison table

### Week 3-10: v0.6.0 Development

- [ ] Phase 1: Core Configuration (Week 3-4)
- [ ] Phase 2: Rule Configuration (Week 4-5)
- [ ] Phase 3: Preset System (Week 5-6)
- [ ] Phase 4: Ignore Patterns (Week 6-7)
- [ ] Phase 5: CLI Integration (Week 7-8)
- [ ] Phase 6: Documentation (Week 8-9)
- [ ] Phase 7: Beta & Release (Week 9-10)

### Post-v0.6.0

- [ ] Start v0.7.0 (Visualization) planning
- [ ] Marketing push (blog, Product Hunt, etc.)
- [ ] Community feedback gathering

---

## ❓ Questions?

**For technical questions:**
- Email: fozilbek.samiyev@gmail.com
- GitHub Issues: https://github.com/samiyev/puaros/issues

**For strategic decisions:**
- Review sessions: Schedule with team
- Roadmap adjustments: Create GitHub discussion

---

## 📝 Changelog

**2025-01-24:** Initial competitive analysis and roadmap revision
- Created comprehensive competitor comparison
- Reprioritized roadmap (Configuration moved to v0.6.0)
- Added market parity phase (v0.6-v0.10)
- Designed v0.6.0 Configuration specification

---

**Status:** ✅ Analysis complete, ready for implementation

**Confidence Level:** HIGH - Analysis based on thorough competitor research and market positioning

@@ -72,14 +72,41 @@ Code quality guardian for vibe coders and enterprise teams - because AI writes f
- Prevents "new Repository()" anti-pattern
- 📚 *Based on: Martin Fowler's Repository Pattern, DDD (Evans 2003)* → [Why?](./docs/WHY.md#repository-pattern)

🔒 **Aggregate Boundary Validation** ✨ NEW
🔒 **Aggregate Boundary Validation**
- Detects direct entity references across DDD aggregates
- Enforces reference-by-ID or Value Object pattern
- Prevents tight coupling between aggregates
- Supports multiple folder structures (domain/aggregates/*, domain/*, domain/entities/*)
- Filters allowed imports (value-objects, events, repositories, services)
- Critical severity for maintaining aggregate independence
- 📚 *Based on: Domain-Driven Design (Evans 2003), Implementing DDD (Vernon 2013)* → [Why?](./docs/WHY.md#aggregate-boundaries)
- 📚 *Based on: Domain-Driven Design (Evans 2003), Implementing DDD (Vernon 2013)* → [Why?](./docs/WHY.md#aggregate-boundary-validation)
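
The reference-by-ID rule this check enforces can be sketched in a few lines; the `Order`/`CustomerId` names below are illustrative, not Guardian's API:

```typescript
// Anti-pattern (what the detector flags): Order holds the Customer
// entity directly, coupling the two aggregates:
//   class Order { constructor(public customer: Customer) {} }

// Preferred: reference the other aggregate by identity only.
class CustomerId {
    constructor(public readonly value: string) {}
}

class Order {
    constructor(
        public readonly id: string,
        public readonly customerId: CustomerId, // reference by ID, not by entity
    ) {}
}

const order = new Order("order-1", new CustomerId("customer-42"))
```

Because `Order` only knows the customer's identity, each aggregate can be loaded, modified, and persisted independently.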

🔐 **Secret Detection** ✨ NEW in v0.8.0
- Detects 350+ types of hardcoded secrets using industry-standard Secretlint
- Catches AWS keys, GitHub tokens, NPM tokens, SSH keys, API keys, and more
- All secrets marked as **CRITICAL severity** - immediate security risk
- Context-aware remediation suggestions for each secret type
- Prevents credentials from reaching version control
- Integrates seamlessly with existing detectors
- 📚 *Based on: OWASP Secrets Management, GitHub Secret Scanning (350+ patterns), security standards* → [Why?](./docs/WHY.md#secret-detection)
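
To make the idea concrete, here is a simplified sketch of the kind of pattern matching secret scanners perform. The single regex (the well-known AWS access key ID shape) and the `findSecrets` helper are illustrative only; Guardian itself delegates to Secretlint's 350+ curated patterns:

```typescript
// Simplified sketch: real scanners such as Secretlint ship hundreds of
// curated patterns; this shows the idea with a single one.
const AWS_ACCESS_KEY_ID = /AKIA[0-9A-Z]{16}/g

function findSecrets(source: string): string[] {
    return source.match(AWS_ACCESS_KEY_ID) ?? []
}

// The value below is AWS's documented example key, not a real credential.
const leaked = findSecrets('const key = "AKIAIOSFODNN7EXAMPLE"')
```

A hit like this is always CRITICAL: the fix is to move the value to an environment variable or secret store, and rotate the exposed credential.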

🩺 **Anemic Domain Model Detection** ✨ NEW in v0.9.0
- Detects entities with only getters/setters (data bags without behavior)
- Identifies public setters anti-pattern in domain entities
- Calculates methods-to-properties ratio for behavioral analysis
- Enforces rich domain models over anemic models
- Suggests moving business logic from services to entities
- Medium severity - architectural code smell
- 📚 *Based on: Martin Fowler's "Anemic Domain Model" (2003), DDD (Evans 2003), Transaction Script vs Domain Model patterns* → [Why?](./docs/WHY.md#anemic-domain-model-detection)
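
The anemic vs. rich distinction can be sketched as follows; the `Order` example is illustrative, not taken from Guardian's detector:

```typescript
// Anemic model: a data bag with only getters/setters and no behavior.
class AnemicOrder {
    private items: number[] = []
    getItems(): number[] { return this.items }
    setItems(items: number[]): void { this.items = items }
}

// Rich model: state changes go through behavior that protects invariants.
class Order {
    private items: number[] = []

    addItem(price: number): void {
        if (price <= 0) throw new Error("price must be positive")
        this.items.push(price)
    }

    total(): number {
        return this.items.reduce((sum, price) => sum + price, 0)
    }
}

const bag = new AnemicOrder()
bag.setItems([10, 5]) // nothing stops invalid state from being set here

const order = new Order()
order.addItem(10)
order.addItem(5)
```

The anemic version forces validation and calculations into surrounding services; the rich version keeps that logic next to the data it governs.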

🎯 **Severity-Based Prioritization**
- Automatic sorting by severity: CRITICAL → HIGH → MEDIUM → LOW
- Filter by severity level: `--only-critical` or `--min-severity high`
- Focus on what matters most: secrets and circular dependencies first
- Visual severity indicators with color-coded labels (🔴🟠🟡🟢)
- Smart categorization based on impact to production
- Enables gradual technical debt reduction
- 📚 *Based on: SonarQube severity classification, IEEE/ScienceDirect research on Technical Debt prioritization* → [Why?](./docs/WHY.md#severity-based-prioritization)
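
Severity-ordered output can be sketched with a rank map; the `Violation` shape below is illustrative, not Guardian's actual type:

```typescript
// Rank map mirroring the CRITICAL → HIGH → MEDIUM → LOW ordering.
type Severity = "critical" | "high" | "medium" | "low"

const rank: Record<Severity, number> = { critical: 0, high: 1, medium: 2, low: 3 }

interface Violation { rule: string; severity: Severity }

function sortBySeverity(violations: Violation[]): Violation[] {
    return [...violations].sort((a, b) => rank[a.severity] - rank[b.severity])
}

const sorted = sortBySeverity([
    { rule: "magic-number", severity: "medium" },
    { rule: "hardcoded-secret", severity: "critical" },
    { rule: "layer-violation", severity: "high" },
])
```

Filters like `--min-severity high` then reduce to dropping every violation whose rank exceeds the chosen threshold.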

🏗️ **Clean Architecture Enforcement**
- Built with DDD principles
@@ -366,6 +393,15 @@ const result = await analyzeProject({
})

console.log(`Found ${result.hardcodeViolations.length} hardcoded values`)
console.log(`Found ${result.secretViolations.length} hardcoded secrets 🔐`)

// Check for critical security issues first!
result.secretViolations.forEach((violation) => {
    console.log(`🔐 CRITICAL: ${violation.file}:${violation.line}`)
    console.log(`   Secret Type: ${violation.secretType}`)
    console.log(`   ${violation.message}`)
    console.log(`   ⚠️ Rotate this secret immediately!`)
})

result.hardcodeViolations.forEach((violation) => {
    console.log(`${violation.file}:${violation.line}`)
@@ -394,9 +430,9 @@ npx @samiyev/guardian check ./src --verbose
npx @samiyev/guardian check ./src --no-hardcode # Skip hardcode detection
npx @samiyev/guardian check ./src --no-architecture # Skip architecture checks

# Filter by severity
npx @samiyev/guardian check ./src --min-severity high # Show high, critical only
npx @samiyev/guardian check ./src --only-critical # Show only critical issues
# Filter by severity (perfect for finding secrets first!)
npx @samiyev/guardian check ./src --only-critical # Show only critical issues (secrets, circular deps)
npx @samiyev/guardian check ./src --min-severity high # Show high and critical only

# Limit detailed output (useful for large codebases)
npx @samiyev/guardian check ./src --limit 10 # Show first 10 violations per category

@@ -2,9 +2,22 @@

This document outlines the current features and future plans for @puaros/guardian.

## Current Version: 0.6.0 ✅ RELEASED
## Current Version: 0.9.0 ✅ RELEASED

**Released:** 2025-11-24
**Released:** 2025-11-26

### What's New in 0.9.0

- 🏛️ **Anemic Model Detection** - NEW feature to detect anemic domain models lacking business logic
- ✅ **100% clean codebase** - Guardian now passes its own self-check with 0 issues
- 📦 **New shared constants** - Added CLASS_KEYWORDS and EXAMPLE_CODE_CONSTANTS
- ✅ **All 578 tests passing** - Added 12 new tests for anemic model detection

---

## Previous Version: 0.8.1 ✅ RELEASED

**Released:** 2025-11-25

### Features Included in 0.1.0

@@ -301,7 +314,249 @@ class Order {

---

### Version 0.8.0 - Anemic Domain Model Detection 🩺
### Version 0.7.5 - Refactor AnalyzeProject Use-Case 🔧 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** HIGH
**Scope:** Single session (~128K tokens)

Split `AnalyzeProject.ts` (615 lines) into focused pipeline components.

**Problem:**
- God Use-Case with 615 lines
- Mixing: file scanning, parsing, detection, aggregation
- Hard to test and modify individual steps

**Solution:**
```
application/use-cases/
├── AnalyzeProject.ts              # Orchestrator (245 lines)
├── pipeline/
│   ├── FileCollectionStep.ts      # File scanning (66 lines)
│   ├── ParsingStep.ts             # AST + dependency graph (51 lines)
│   ├── DetectionPipeline.ts       # All 7 detectors (371 lines)
│   └── ResultAggregator.ts        # Build response DTO (81 lines)
```

**Deliverables:**
- ✅ Extract 4 pipeline components
- ✅ Reduce `AnalyzeProject.ts` from 615 to 245 lines (60% reduction)
- ✅ All 345 tests pass, no breaking changes
- ✅ Publish to npm

---
### Version 0.7.6 - Refactor CLI Module 🔧 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** MEDIUM
**Scope:** Single session (~128K tokens)

Split `cli/index.ts` (484 lines) into focused formatters.

**Problem:**
- CLI file has 484 lines
- Mixing: command setup, formatting, grouping, statistics

**Solution:**
```
cli/
├── index.ts                       # Commands only (260 lines)
├── formatters/
│   ├── OutputFormatter.ts         # Violation formatting (190 lines)
│   └── StatisticsFormatter.ts     # Metrics & summary (58 lines)
├── groupers/
│   └── ViolationGrouper.ts        # Sorting & grouping (29 lines)
```

**Deliverables:**
- ✅ Extract formatters and groupers
- ✅ Reduce `cli/index.ts` from 484 to 260 lines (46% reduction)
- ✅ CLI output identical to before
- ✅ All 345 tests pass, no breaking changes
- ✅ Publish to npm

---

### Version 0.7.7 - Improve Test Coverage 🧪 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** MEDIUM
**Scope:** Single session (~128K tokens)

Increase coverage for under-tested domain files.

**Results:**
| File | Before | After |
|------|--------|-------|
| SourceFile.ts | 46% | 100% ✅ |
| ProjectPath.ts | 50% | 100% ✅ |
| ValueObject.ts | 25% | 100% ✅ |
| RepositoryViolation.ts | 58% | 92.68% ✅ |

**Deliverables:**
- ✅ SourceFile.ts → 100% (31 tests)
- ✅ ProjectPath.ts → 100% (31 tests)
- ✅ ValueObject.ts → 100% (18 tests)
- ✅ RepositoryViolation.ts → 92.68% (32 tests)
- ✅ All 457 tests passing
- ✅ Overall coverage: 95.4% statements, 86.25% branches, 96.68% functions
- ✅ Publish to npm

---

### Version 0.7.8 - Add E2E Tests 🧪 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** MEDIUM
**Scope:** Single session (~128K tokens)

Add integration tests for full pipeline and CLI.

**Deliverables:**
- ✅ E2E test: `AnalyzeProject` full pipeline (21 tests)
- ✅ CLI smoke test (spawn process, check output) (22 tests)
- ✅ Test `examples/good-architecture/` → 0 violations
- ✅ Test `examples/bad/` → specific violations
- ✅ Test JSON output format (19 tests)
- ✅ 519 total tests (519 passing, **100% pass rate** 🎉)
- ✅ Comprehensive E2E coverage for API and CLI
- ✅ 3 new E2E test files with full pipeline coverage
- ✅ Publish to npm

---
### Version 0.7.9 - Refactor Large Detectors 🔧 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** LOW
**Scope:** Single session (~128K tokens)

Refactored the largest detectors to reduce complexity and improve maintainability.

**Results:**
| Detector | Before | After | Reduction |
|----------|--------|-------|-----------|
| AggregateBoundaryDetector | 381 lines | 162 lines | 57% ✅ |
| HardcodeDetector | 459 lines | 89 lines | 81% ✅ |
| RepositoryPatternDetector | 479 lines | 106 lines | 78% ✅ |

**Implemented Features:**
- ✅ Extracted 13 strategy classes for focused responsibilities
- ✅ Reduced file sizes by 57-81%
- ✅ Improved code organization and maintainability
- ✅ All 519 tests passing
- ✅ Zero ESLint errors, 1 pre-existing warning
- ✅ No breaking changes

**New Strategy Classes:**
- `FolderRegistry` - Centralized DDD folder name management
- `AggregatePathAnalyzer` - Path parsing and aggregate extraction
- `ImportValidator` - Import validation logic
- `BraceTracker` - Brace and bracket counting
- `ConstantsFileChecker` - Constants file detection
- `ExportConstantAnalyzer` - Export const analysis
- `MagicNumberMatcher` - Magic number detection
- `MagicStringMatcher` - Magic string detection
- `OrmTypeMatcher` - ORM type matching
- `MethodNameValidator` - Repository method validation
- `RepositoryFileAnalyzer` - File role detection
- `RepositoryViolationDetector` - Violation detection logic

---

### Version 0.8.0 - Secret Detection 🔐 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** CRITICAL

Detect hardcoded secrets (API keys, tokens, credentials) using the industry-standard Secretlint library.

**🎯 SecretDetector - NEW standalone detector:**

```typescript
// ❌ CRITICAL: Hardcoded AWS credentials
const AWS_KEY = "AKIA1234567890ABCDEF" // VIOLATION!
const AWS_SECRET = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" // VIOLATION!

// ❌ CRITICAL: Hardcoded GitHub token
const GITHUB_TOKEN = "ghp_1234567890abcdefghijklmnopqrstuv" // VIOLATION!

// ❌ CRITICAL: SSH Private Key in code
const privateKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...` // VIOLATION!

// ❌ CRITICAL: NPM token
//registry.npmjs.org/:_authToken=npm_abc123xyz // VIOLATION!

// ✅ GOOD: Use environment variables
const AWS_KEY = process.env.AWS_ACCESS_KEY_ID
const AWS_SECRET = process.env.AWS_SECRET_ACCESS_KEY
const GITHUB_TOKEN = process.env.GITHUB_TOKEN
```

**Delivered Features:**
- ✅ **SecretDetector** - Standalone detector (separate from HardcodeDetector)
- ✅ **Secretlint Integration** - Industry-standard library (@secretlint/node)
- ✅ **350+ Secret Patterns** - AWS, GitHub, NPM, SSH, GCP, Slack, Basic Auth, etc.
- ✅ **CRITICAL Severity** - All secret violations marked as critical
- ✅ **Smart Suggestions** - Context-aware remediation per secret type
- ✅ **Clean Architecture** - New ISecretDetector interface, SecretViolation value object
- ✅ **CLI Integration** - New "🔐 Secrets" section in output
- ✅ **Parallel Execution** - Runs alongside existing detectors

**Secret Types Detected:**
- AWS Access Keys & Secret Keys
- GitHub Tokens (ghp_, github_pat_, gho_, etc.)
- NPM tokens in .npmrc and code
- SSH Private Keys
- GCP Service Account credentials
- Slack tokens (xoxb-, xoxp-, etc.)
- Basic Auth credentials
- JWT tokens
- Private encryption keys

**Architecture:**
```typescript
// New domain layer
interface ISecretDetector {
    detectAll(code: string, filePath: string): Promise<SecretViolation[]>
}

class SecretViolation {
    file: string
    line: number
    secretType: string // AWS, GitHub, NPM, etc.
    message: string
    severity: "critical"
    suggestion: string // Context-aware guidance
}

// New infrastructure implementation
class SecretDetector implements ISecretDetector {
    // Uses @secretlint/node internally
}
```
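A minimal sketch of how a caller might consume `ISecretDetector` output, grouping violations by secret type the way a "🔐 Secrets" CLI section could. The plain-object shape and the `groupBySecretType` helper are illustrative assumptions, not the published API:

```typescript
// Illustrative sketch only; field names mirror the SecretViolation shape above.
interface SecretViolationLike {
    file: string;
    line: number;
    secretType: string;
    message: string;
}

// Group violations by secret type so a reporter can print one section per type.
function groupBySecretType(
    violations: SecretViolationLike[],
): Map<string, SecretViolationLike[]> {
    const groups = new Map<string, SecretViolationLike[]>();
    for (const v of violations) {
        const bucket = groups.get(v.secretType) ?? [];
        bucket.push(v);
        groups.set(v.secretType, bucket);
    }
    return groups;
}
```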
**Why Secretlint?**
- ✅ Actively maintained (updates weekly)
- ✅ TypeScript native
- ✅ Pluggable architecture
- ✅ Low false positives
- ✅ Industry standard

**Why NOT custom implementation?**
- ❌ No good npm library for magic numbers/strings
- ❌ Our HardcodeDetector is better than existing solutions
- ✅ Secretlint is perfect for secrets (don't reinvent the wheel)
- ✅ Two focused detectors better than one bloated detector

**Impact:**
Guardian will now catch critical security issues BEFORE they reach production, complementing existing magic number/string detection.

---

### Version 0.9.0 - Anemic Domain Model Detection 🩺
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -342,7 +597,7 @@ class Order {

---

### Version 0.8.0 - Domain Event Usage Validation 📢
### Version 0.10.0 - Domain Event Usage Validation 📢
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -381,7 +636,7 @@ class Order {

---

### Version 0.9.0 - Value Object Immutability Check 🔐
### Version 0.11.0 - Value Object Immutability Check 🔐
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -424,7 +679,7 @@ class Email {

---

### Version 0.10.0 - Use Case Single Responsibility 🎯
### Version 0.12.0 - Use Case Single Responsibility 🎯
**Target:** Q2 2026
**Priority:** LOW

@@ -461,7 +716,7 @@ class SendWelcomeEmail {

---

### Version 0.11.0 - Interface Segregation Validation 🔌
### Version 0.13.0 - Interface Segregation Validation 🔌
**Target:** Q2 2026
**Priority:** LOW

@@ -506,7 +761,7 @@ interface IUserExporter {

---

### Version 0.12.0 - Port-Adapter Pattern Validation 🔌
### Version 0.14.0 - Port-Adapter Pattern Validation 🔌
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -545,7 +800,7 @@ class TwilioAdapter implements INotificationPort {

---

### Version 0.13.0 - Configuration File Support ⚙️
### Version 0.15.0 - Configuration File Support ⚙️
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -596,7 +851,7 @@ export default {

---

### Version 0.14.0 - Command Query Separation (CQS/CQRS) 📝
### Version 0.16.0 - Command Query Separation (CQS/CQRS) 📝
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -657,7 +912,7 @@ class GetUser { // Query

---

### Version 0.15.0 - Factory Pattern Validation 🏭
### Version 0.17.0 - Factory Pattern Validation 🏭
**Target:** Q3 2026
**Priority:** LOW

@@ -740,7 +995,7 @@ class Order {

---

### Version 0.16.0 - Specification Pattern Detection 🔍
### Version 0.18.0 - Specification Pattern Detection 🔍
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -812,7 +1067,7 @@ class ApproveOrder {

---

### Version 0.17.0 - Layered Service Anti-pattern Detection ⚠️
### Version 0.19.0 - Layered Service Anti-pattern Detection ⚠️
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -889,7 +1144,7 @@ class OrderService {

---

### Version 0.18.0 - Bounded Context Leak Detection 🚧
### Version 0.20.0 - Bounded Context Leak Detection 🚧
**Target:** Q3 2026
**Priority:** LOW

@@ -954,7 +1209,7 @@ class ProductPriceChangedHandler {

---

### Version 0.19.0 - Transaction Script vs Domain Model Detection 📜
### Version 0.21.0 - Transaction Script vs Domain Model Detection 📜
**Target:** Q3 2026
**Priority:** LOW

@@ -1021,7 +1276,7 @@ class Order {

---

### Version 0.20.0 - Persistence Ignorance Validation 💾
### Version 0.22.0 - Persistence Ignorance Validation 💾
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -1107,7 +1362,7 @@ class UserEntityMapper {

---

### Version 0.21.0 - Null Object Pattern Detection 🎭
### Version 0.23.0 - Null Object Pattern Detection 🎭
**Target:** Q3 2026
**Priority:** LOW

@@ -1189,7 +1444,7 @@ class ProcessOrder {

---

### Version 0.22.0 - Primitive Obsession in Methods 🔢
### Version 0.24.0 - Primitive Obsession in Methods 🔢
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -1256,7 +1511,7 @@ class Order {

---

### Version 0.23.0 - Service Locator Anti-pattern 🔍
### Version 0.25.0 - Service Locator Anti-pattern 🔍
**Target:** Q4 2026
**Priority:** MEDIUM

@@ -1316,7 +1571,7 @@ class CreateUser {

---

### Version 0.24.0 - Double Dispatch Pattern Validation 🎯
### Version 0.26.0 - Double Dispatch Pattern Validation 🎯
**Target:** Q4 2026
**Priority:** LOW

@@ -1393,7 +1648,7 @@ class ShippingCostCalculator implements IOrderItemVisitor {

---

### Version 0.25.0 - Entity Identity Validation 🆔
### Version 0.27.0 - Entity Identity Validation 🆔
**Target:** Q4 2026
**Priority:** MEDIUM

@@ -1486,7 +1741,7 @@ class UserId {

---

### Version 0.26.0 - Saga Pattern Detection 🔄
### Version 0.28.0 - Saga Pattern Detection 🔄
**Target:** Q4 2026
**Priority:** LOW

@@ -1584,7 +1839,7 @@ abstract class SagaStep {

---

### Version 0.27.0 - Anti-Corruption Layer Detection 🛡️
### Version 0.29.0 - Anti-Corruption Layer Detection 🛡️
**Target:** Q4 2026
**Priority:** MEDIUM

@@ -1670,7 +1925,7 @@ interface IOrderSyncPort {

---

### Version 0.28.0 - Ubiquitous Language Validation 📖
### Version 0.30.0 - Ubiquitous Language Validation 📖
**Target:** Q4 2026
**Priority:** HIGH

@@ -1857,5 +2112,5 @@ Until we reach 1.0.0, minor version bumps (0.x.0) may include breaking changes a

---

**Last Updated:** 2025-11-24
**Current Version:** 0.6.0
**Last Updated:** 2025-11-25
**Current Version:** 0.7.7
906
packages/guardian/ROADMAP_NEW.md
Normal file
@@ -0,0 +1,906 @@
# Guardian Roadmap 🗺️

**Last Updated:** 2025-01-24
**Current Version:** 0.5.1

This document outlines the current features and strategic roadmap for @puaros/guardian, prioritized based on market competition analysis and enterprise adoption requirements.

---

## 📊 Current State (v0.5.1) ✅

### ✨ Unique Competitive Advantages

Guardian currently has **5 unique features** that competitors don't offer:

| Feature | Status | Competitors |
|---------|--------|-------------|
| **Hardcode Detection + AI Suggestions** | ✅ Released | ❌ None |
| **Framework Leak Detection** | ✅ Released | ❌ None |
| **Entity Exposure Detection** | ✅ Released (v0.3.0) | ❌ None |
| **Dependency Direction Enforcement** | ✅ Released (v0.4.0) | ⚠️ dependency-cruiser (via rules) |
| **Repository Pattern Validation** | ✅ Released (v0.5.0) | ❌ None |

### 🛠️ Core Features (v0.1.0-v0.5.0)

**Detection Capabilities:**
- ✅ Hardcode detection (magic numbers, magic strings) with smart suggestions
- ✅ Circular dependency detection
- ✅ Naming convention enforcement (DDD layer-based rules)
- ✅ Clean Architecture layer violations
- ✅ Framework leak detection (domain importing frameworks)
- ✅ Entity exposure in API responses (v0.3.0)
- ✅ Dependency direction validation (v0.4.0)
- ✅ Repository pattern validation (v0.5.0)

**Developer Experience:**
- ✅ CLI interface with `guardian check` command
- ✅ Smart constant name suggestions
- ✅ Layer distribution analysis
- ✅ Detailed violation reports with file:line:column
- ✅ Context snippets for each issue

**Quality & Testing:**
- ✅ 194 tests across 7 test files (all passing)
- ✅ 80%+ code coverage on all metrics
- ✅ Self-analysis: 0 violations (100% clean codebase)

---

## 🎯 Strategic Roadmap Overview

### Phase 1: Market Parity (v0.6-v0.10) - Q1-Q2 2026
**Goal:** Match competitors' baseline features to enable enterprise adoption

- Configuration & Presets
- Visualization & Dependency Graphs
- CI/CD Integration Kit
- Auto-Fix & Code Generation (UNIQUE!)
- Metrics & Quality Score

### Phase 2: DDD Specialization (v0.11-v0.27) - Q3-Q4 2026
**Goal:** Deepen DDD/Clean Architecture expertise

- Advanced DDD pattern detection (25+ features)
- Aggregate boundaries, Domain Events, Value Objects
- CQRS, Saga Pattern, Anti-Corruption Layer
- Ubiquitous Language validation

### Phase 3: Enterprise Ecosystem (v1.0+) - Q4 2026+
**Goal:** Full-featured enterprise platform

- VS Code extension
- JetBrains plugin
- Web dashboard
- Team analytics
- Multi-language support

---

## 📅 Detailed Roadmap

## Version 0.6.0 - Configuration & Presets ⚙️
**Target:** Q1 2026 (January-February)
**Priority:** 🔥 CRITICAL

> **Why Critical:** All competitors (SonarQube, ESLint, dependency-cruiser) have configuration. Without this, Guardian cannot be customized for different teams/projects.

### Features

#### 1. Configuration File Support

```javascript
// guardian.config.js (primary)
export default {
    // Zero-config presets
    preset: 'clean-architecture', // or 'ddd', 'hexagonal', 'onion'

    // Rule configuration
    rules: {
        'hardcode/magic-numbers': 'error',
        'hardcode/magic-strings': 'warn',
        'architecture/layer-violation': 'error',
        'architecture/framework-leak': 'error',
        'architecture/entity-exposure': 'error',
        'circular-dependency': 'error',
        'naming-convention': 'warn',
        'dependency-direction': 'error',
        'repository-pattern': 'error',
    },

    // Custom layer paths
    layers: {
        domain: 'src/core/domain',
        application: 'src/core/application',
        infrastructure: 'src/adapters',
        shared: 'src/shared',
    },

    // Exclusions
    exclude: [
        '**/*.test.ts',
        '**/*.spec.ts',
        'scripts/',
        'migrations/',
        'node_modules/',
    ],

    // Per-rule ignores
    ignore: {
        'hardcode/magic-numbers': {
            'src/config/constants.ts': [3000, 8080],
        },
    },
}
```

#### 2. Built-in Presets

```javascript
// Preset: clean-architecture (default)
preset: 'clean-architecture'
// Enables: layer-violation, dependency-direction, naming-convention

// Preset: ddd
preset: 'ddd'
// Enables all DDD patterns: aggregates, value-objects, domain-events

// Preset: hexagonal (Ports & Adapters)
preset: 'hexagonal'
// Validates port/adapter separation

// Preset: minimal (for prototyping)
preset: 'minimal'
// Only critical rules: hardcode, circular-deps
```

#### 3. Framework-Specific Presets

```javascript
// NestJS
preset: 'nestjs-clean-architecture'

// Express
preset: 'express-clean-architecture'

// Next.js
preset: 'nextjs-clean-architecture'
```

#### 4. Configuration Discovery

Support multiple config file formats:
- `guardian.config.js` (ES modules)
- `guardian.config.cjs` (CommonJS)
- `.guardianrc` (JSON)
- `.guardianrc.json`
- `package.json` (`guardian` field)
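The discovery order above amounts to a first-match lookup over an ordered candidate list. A hypothetical sketch (the candidate array mirrors the bullets; the `fileExists` callback stands in for a real filesystem check and is an assumption, not Guardian's API):

```typescript
// Hypothetical sketch of config discovery, not the shipped implementation.
const CONFIG_CANDIDATES = [
    "guardian.config.js",
    "guardian.config.cjs",
    ".guardianrc",
    ".guardianrc.json",
    "package.json", // would additionally require a "guardian" field
];

// Return the first candidate that exists, or undefined if none do.
function discoverConfig(
    fileExists: (name: string) => boolean,
): string | undefined {
    return CONFIG_CANDIDATES.find(fileExists);
}
```

Keeping the lookup pure (the filesystem check is injected) makes the precedence order trivial to unit-test.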
#### 5. CLI Override

```bash
# Override config from CLI
guardian check ./src --rule hardcode/magic-numbers=off

# Use specific config file
guardian check ./src --config custom-config.js

# Generate config
guardian init --preset clean-architecture
```

### Implementation Tasks
- [ ] Create config parser and validator
- [ ] Implement preset system
- [ ] Add config discovery logic
- [ ] Update AnalyzeProject use case to accept config
- [ ] CLI integration for config override
- [ ] Add `guardian init` command
- [ ] Documentation and examples
- [ ] Tests (config parsing, presets, overrides)

---

## Version 0.7.0 - Visualization & Dependency Graphs 🎨
**Target:** Q1 2026 (March)
**Priority:** 🔥 HIGH

> **Why High:** dependency-cruiser's main advantage is visualization. Guardian needs this to compete.

### Features

#### 1. Dependency Graph Visualization

```bash
# Generate SVG graph
guardian visualize ./src --output architecture.svg

# Interactive HTML
guardian visualize ./src --format html --output report.html

# Mermaid diagram for docs
guardian graph ./src --format mermaid > ARCHITECTURE.md

# ASCII tree for terminal
guardian visualize ./src --format ascii
```

#### 2. Layer Dependency Diagram

```mermaid
graph TD
    I[Infrastructure Layer] --> A[Application Layer]
    I --> D[Domain Layer]
    A --> D
    D --> S[Shared]
    A --> S
    I --> S

    style D fill:#4CAF50
    style A fill:#2196F3
    style I fill:#FF9800
    style S fill:#9E9E9E
```

#### 3. Violation Highlighting

Visualize violations on graph:
- 🔴 Circular dependencies (red arrows)
- ⚠️ Framework leaks (yellow highlights)
- 🚫 Wrong dependency direction (dashed red arrows)
- ✅ Correct dependencies (green arrows)

#### 4. Metrics Overlay

```bash
guardian visualize ./src --show-metrics

# Shows on each node:
# - File count per layer
# - Hardcode violations count
# - Complexity score
```

#### 5. Export Formats

- SVG (for docs/website)
- PNG (for presentations)
- HTML (interactive, zoomable)
- Mermaid (for markdown docs)
- DOT (Graphviz format)
- JSON (for custom processing)

### Implementation Tasks
- [ ] Implement graph generation engine
- [ ] Add SVG/PNG renderer
- [ ] Create Mermaid diagram generator
- [ ] Build HTML interactive viewer
- [ ] Add violation highlighting
- [ ] Metrics overlay system
- [ ] CLI commands (`visualize`, `graph`)
- [ ] Documentation and examples
- [ ] Tests (graph generation, formats)

---

## Version 0.8.0 - CI/CD Integration Kit 🚀
**Target:** Q2 2026 (April)
**Priority:** 🔥 HIGH

> **Why High:** Enterprise requires CI/CD integration. SonarQube succeeds because of this.

### Features

#### 1. GitHub Actions

```yaml
# .github/workflows/guardian.yml (ready-to-use template)
name: Guardian Quality Check

on: [push, pull_request]

jobs:
  guardian:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3

      - name: Guardian Analysis
        uses: puaros/guardian-action@v1
        with:
          path: './src'
          fail-on: 'error'
          report-format: 'markdown'

      - name: Comment PR
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        with:
          script: |
            // Auto-comment violations on PR
```

#### 2. GitLab CI Template

```yaml
# .gitlab-ci.yml
include:
  - template: Guardian.gitlab-ci.yml

guardian_check:
  stage: test
  extends: .guardian
  variables:
    GUARDIAN_FAIL_ON: "error"
    GUARDIAN_FORMAT: "markdown"
```

#### 3. Quality Gate

```bash
# Fail build on violations
guardian check ./src --fail-on error
guardian check ./src --fail-on warning

# Threshold-based
guardian check ./src --max-violations 10
guardian check ./src --max-hardcode 5
```
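The quality-gate flags above boil down to mapping violation counts and thresholds onto a process exit code. A hedged sketch of that logic (the `GateOptions` shape and `gateExitCode` helper are invented for illustration, not Guardian's implementation):

```typescript
// Illustrative sketch of quality-gate logic.
interface GateOptions {
    failOn: "error" | "warning"; // --fail-on
    maxViolations?: number;      // --max-violations
}

interface Counts {
    errors: number;
    warnings: number;
}

// 0 = pass, 1 = fail the CI build.
function gateExitCode(counts: Counts, opts: GateOptions): number {
    const total = counts.errors + counts.warnings;
    if (opts.maxViolations !== undefined && total > opts.maxViolations) return 1;
    if (counts.errors > 0) return 1;
    if (opts.failOn === "warning" && counts.warnings > 0) return 1;
    return 0;
}
```

A non-zero exit code is what makes the GitHub/GitLab job above turn red.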
#### 4. PR Auto-Comments

Automatically comment on PRs with:
- Summary of violations
- Comparison with base branch
- Quality score change
- Actionable suggestions

```markdown
## 🛡️ Guardian Report

**Quality Score:** 87/100 (⬆️ +3 from main)

### Violations Found: 5

#### 🔴 Critical (2)
- `src/api/server.ts:15` - Hardcoded port 3000
- `src/domain/User.ts:10` - Framework leak (Express)

#### ⚠️ Warnings (3)
- `src/services/UserService.ts` - Naming convention
- ...

[View Full Report](link)
```

#### 5. Pre-commit Hook

```bash
# Install via npx
npx guardian install-hooks

# Creates .husky/pre-commit
#!/bin/sh
guardian check --staged --fail-on error
```

#### 6. Status Checks

Integrate with GitHub/GitLab status checks:
- ✅ No violations
- ⚠️ Warnings only
- ❌ Errors found

### Implementation Tasks
- [ ] Create GitHub Action
- [ ] Create GitLab CI template
- [ ] Implement quality gate logic
- [ ] Build PR comment generator
- [ ] Pre-commit hook installer
- [ ] Status check integration
- [ ] Bitbucket Pipelines support
- [ ] Documentation and examples
- [ ] Tests (CI/CD scenarios)

---

## Version 0.9.0 - Auto-Fix & Code Generation 🤖
**Target:** Q2 2026 (May)
**Priority:** 🚀 GAME-CHANGER (UNIQUE!)

> **Why Game-Changer:** No competitor has intelligent auto-fix for architecture. This makes Guardian unique!

### Features

#### 1. Auto-Fix Hardcode

```bash
# Fix all hardcode violations automatically
guardian fix ./src --auto

# Preview changes
guardian fix ./src --dry-run

# Fix specific types
guardian fix ./src --type hardcode
guardian fix ./src --type naming
```

**Example:**

```typescript
// Before
const timeout = 5000
app.listen(3000)

// After (auto-generated constants.ts)
export const DEFAULT_TIMEOUT_MS = 5000
export const DEFAULT_PORT = 3000

// After (fixed code)
import { DEFAULT_TIMEOUT_MS, DEFAULT_PORT } from './constants'
const timeout = DEFAULT_TIMEOUT_MS
app.listen(DEFAULT_PORT)
```

#### 2. Generate Constants File

```bash
# Extract all hardcodes to constants
guardian generate constants ./src --output src/config/constants.ts

# Generated file:
// src/config/constants.ts
export const DEFAULT_TIMEOUT_MS = 5000
export const DEFAULT_PORT = 3000
export const MAX_RETRIES = 3
export const API_BASE_URL = 'http://localhost:8080'
```

#### 3. Fix Naming Violations

```bash
# Rename files to match conventions
guardian fix naming ./src --auto

# Before: src/application/use-cases/user.ts
# After: src/application/use-cases/CreateUser.ts
```

#### 4. AI-Friendly Fix Prompts

```bash
# Generate prompt for AI assistant
guardian check ./src --format ai-prompt > fix-prompt.txt

# Output (optimized for Claude/GPT):
"""
Fix the following Guardian violations:

1. HARDCODE (src/api/server.ts:15)
   - Replace: app.listen(3000)
   - With: Extract 3000 to DEFAULT_PORT constant
   - Location: Create src/config/constants.ts

2. FRAMEWORK_LEAK (src/domain/User.ts:5)
   - Remove: import { Request } from 'express'
   - Reason: Domain layer cannot import frameworks
   - Suggestion: Use dependency injection via interfaces

[Complete fix suggestions...]
"""

# Then feed to Claude:
# cat fix-prompt.txt | pbcopy
# Paste into Claude: "Fix these Guardian violations"
```

#### 5. Interactive Fix Mode

```bash
# Interactive fix selection
guardian fix ./src --interactive

# Prompts:
# ? Fix hardcode in server.ts:15 (3000)? (Y/n)
# ? Suggested constant name: DEFAULT_PORT
# [Edit name] [Skip] [Fix All]
```

#### 6. Refactoring Commands

```bash
# Break circular dependency
guardian refactor circular ./src/services/UserService.ts
# Suggests: Extract shared interface

# Fix layer violation
guardian refactor layer ./src/domain/entities/User.ts
# Suggests: Move framework imports to infrastructure
```

### Implementation Tasks
- [ ] Implement auto-fix engine (AST transformation)
- [ ] Constants extractor and generator
- [ ] File renaming system
- [ ] AI prompt generator
- [ ] Interactive fix mode
- [ ] Refactoring suggestions
- [ ] Safe rollback mechanism
- [ ] Documentation and examples
- [ ] Tests (fix scenarios, edge cases)

---

## Version 0.10.0 - Metrics & Quality Score 📊
**Target:** Q2 2026 (June)
**Priority:** 🔥 HIGH

> **Why High:** Enterprise needs metrics to justify investment. SonarQube's dashboard is a major selling point.

### Features

#### 1. Quality Score (0-100)

```bash
guardian score ./src

# Output:
# 🛡️ Guardian Quality Score: 87/100 (Good)
#
# Category Breakdown:
# ✅ Architecture: 95/100 (Excellent)
# ⚠️ Hardcode: 78/100 (Needs Improvement)
# ✅ Naming: 92/100 (Excellent)
# ✅ Dependencies: 89/100 (Good)
```

**Score Calculation:**
- Architecture violations: -5 per error
- Hardcode violations: -1 per occurrence
- Circular dependencies: -10 per cycle
- Naming violations: -2 per error
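Under the penalty weights listed above, the overall score could be computed roughly as follows; this is a minimal sketch that assumes a 100-point baseline and clamping at 0 (both assumptions, since the roadmap does not pin down the exact formula):

```typescript
// Illustrative sketch using the penalty weights from the list above.
interface ViolationCounts {
    architecture: number; // -5 per error
    hardcode: number;     // -1 per occurrence
    circular: number;     // -10 per cycle
    naming: number;       // -2 per error
}

// Assumed: start from 100 and never go below 0.
function computeQualityScore(c: ViolationCounts): number {
    const penalty =
        c.architecture * 5 + c.hardcode * 1 + c.circular * 10 + c.naming * 2;
    return Math.max(0, 100 - penalty);
}
```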
#### 2. Metrics Dashboard (JSON/HTML)

```bash
# Export metrics
guardian metrics ./src --format json > metrics.json
guardian metrics ./src --format html > dashboard.html

# Metrics included:
{
  "qualityScore": 87,
  "violations": {
    "hardcode": 12,
    "circular": 0,
    "architecture": 2,
    "naming": 5
  },
  "metrics": {
    "totalFiles": 45,
    "totalLOC": 3500,
    "hardcodePerKLOC": 3.4,
    "averageFilesPerLayer": 11.25
  },
  "trends": {
    "scoreChange": "+5",
    "violationsChange": "-8"
  }
}
```
#### 3. Trend Analysis
|
||||
|
||||
```bash
|
||||
# Compare with main branch
|
||||
guardian metrics ./src --compare-with main
|
||||
|
||||
# Output:
|
||||
# Quality Score: 87/100 (⬆️ +3 from main)
|
||||
#
|
||||
# Changes:
|
||||
# ✅ Hardcode violations: 12 (⬇️ -5)
|
||||
# ⚠️ Naming violations: 5 (⬆️ +2)
|
||||
# ✅ Circular deps: 0 (⬇️ -1)
|
||||
```
|
||||
|
||||
#### 4. Historical Tracking
|
||||
|
||||
```bash
|
||||
# Store metrics history
|
||||
guardian metrics ./src --save
|
||||
|
||||
# View trends
|
||||
guardian trends --last 30d
|
||||
|
||||
# Output: ASCII graph showing quality score over time
|
||||
```
|
||||
|
||||
#### 5. Export for Dashboards
|
||||
|
||||
```bash
|
||||
# Prometheus format
|
||||
guardian metrics ./src --format prometheus
|
||||
|
||||
# Grafana JSON
|
||||
guardian metrics ./src --format grafana
|
||||
|
||||
# CSV for Excel
|
||||
guardian metrics ./src --format csv
|
||||
```
|
||||
|
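For the Prometheus target, a minimal exposition-format serializer might look like the sketch below (the `guardian_` metric prefix and gauge type are assumptions, not the exporter's actual output):

```typescript
// Serializes flat numeric metrics into Prometheus exposition format.
// Metric naming here is an illustrative assumption.
function toPrometheus(metrics: Record<string, number>): string {
    return Object.entries(metrics)
        .map(([name, value]) => `# TYPE guardian_${name} gauge\nguardian_${name} ${value}`)
        .join("\n")
}
```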
#### 6. Badge Generation

```bash
# Generate badge for README
guardian badge ./src --output badge.svg

# Markdown badge
![Guardian Score](https://img.shields.io/badge/guardian-87%2F100-brightgreen)
```

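A badge generator could be as small as a templated SVG, color-coded by score; a sketch (sizes, colors, and thresholds below are arbitrary assumptions):

```typescript
// Renders a minimal two-panel SVG score badge; layout constants are arbitrary.
function scoreBadge(score: number): string {
    const color = score >= 85 ? "#4c1" : score >= 60 ? "#dfb317" : "#e05d44"
    return (
        `<svg xmlns="http://www.w3.org/2000/svg" width="120" height="20">` +
        `<rect width="70" height="20" fill="#555"/>` +
        `<rect x="70" width="50" height="20" fill="${color}"/>` +
        `<text x="6" y="14" fill="#fff" font-size="11">guardian</text>` +
        `<text x="76" y="14" fill="#fff" font-size="11">${score}/100</text>` +
        `</svg>`
    )
}
```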
### Implementation Tasks
- [ ] Quality score calculation algorithm
- [ ] Metrics collection system
- [ ] Trend analysis engine
- [ ] JSON/HTML/Prometheus exporters
- [ ] Historical data storage
- [ ] Badge generator
- [ ] CLI commands (`score`, `metrics`, `trends`, `badge`)
- [ ] Documentation and examples
- [ ] Tests (metrics calculation, exports)

---

## Version 0.11.0+ - DDD Specialization 🏗️
**Target:** Q3-Q4 2026
**Priority:** MEDIUM (After Market Parity)

Now we can focus on Guardian's unique DDD/Clean Architecture specialization:

### v0.11.0 - Aggregate Boundary Validation 🔒
- Detect entity references across aggregates
- Enforce ID-only references between aggregates
- Validate aggregate root access patterns

### v0.12.0 - Anemic Domain Model Detection 🩺
- Detect entities with only getters/setters
- Count methods vs properties ratio
- Suggest moving logic from services to entities

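To picture what this rule targets, here is the kind of contrast such a detector would draw (class names are invented for the example):

```typescript
// Anemic: a data bag; the business rule lives in some service elsewhere.
class AnemicOrder {
    private total = 0
    getTotal(): number { return this.total }
    setTotal(total: number): void { this.total = total }
}

// Rich: the invariant and the calculation live on the entity itself.
class Order {
    private total = 0
    addItem(price: number, quantity: number): void {
        if (price < 0 || quantity <= 0) throw new Error("invalid line item")
        this.total += price * quantity
    }
    getTotal(): number { return this.total }
}
```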
### v0.13.0 - Domain Event Validation 📢
- Validate event publishing pattern
- Check events inherit from DomainEvent base
- Detect direct infrastructure calls from entities

### v0.14.0 - Value Object Immutability 🔐
- Ensure Value Objects have readonly fields
- Detect public setters
- Verify equals() method exists

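A Value Object that would pass all three checks might look like this sketch (`Money` is the standard textbook example, not a Guardian API):

```typescript
// Immutable Value Object: readonly fields, no setters, structural equals().
class Money {
    constructor(
        public readonly amount: number,
        public readonly currency: string,
    ) {}

    equals(other: Money): boolean {
        return this.amount === other.amount && this.currency === other.currency
    }
}
```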
### v0.15.0 - Use Case Single Responsibility 🎯
- Check Use Case has single public method (execute)
- Detect multiple responsibilities
- Suggest splitting large Use Cases

### v0.16.0 - Interface Segregation 🔌
- Count methods per interface (> 10 = warning)
- Check method cohesion
- Suggest interface splitting

### v0.17.0 - Port-Adapter Pattern 🔌
- Check Ports (interfaces) are in application/domain
- Verify Adapters are in infrastructure
- Detect external library imports in use cases

### v0.18.0 - Command Query Separation (CQS) 📝
- Detect methods that both change state and return data
- Check Use Case names for CQS violations
- Validate Command Use Cases return void

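The first bullet can be pictured with a small counter (illustrative only): the combined call is the violation, the separated pair is the fix:

```typescript
class Counter {
    private value = 0

    // CQS violation: mutates state AND returns data in one call.
    incrementAndGet(): number { return ++this.value }

    // Separated: the command returns void...
    increment(): void { this.value++ }

    // ...and the query has no side effects.
    current(): number { return this.value }
}
```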
### v0.19.0 - Factory Pattern Validation 🏭
- Detect complex logic in entity constructors
- Check for `new Entity()` calls in use cases
- Suggest extracting construction to Factory

### v0.20.0 - Specification Pattern Detection 🔍
- Detect complex business rules in use cases
- Validate Specification classes in domain
- Suggest extracting rules to Specifications

### v0.21.0 - Layered Service Anti-pattern ⚠️
- Detect service methods operating on a single entity
- Validate entities have behavior methods
- Suggest moving service methods to entities

### v0.22.0 - Bounded Context Leak Detection 🚧
- Detect entity imports across contexts
- Validate only ID references between contexts
- Verify event-based integration

### v0.23.0 - Transaction Script Detection 📜
- Detect procedural logic in use cases
- Check use case length (> 30-50 lines = warning)
- Suggest moving logic to domain entities

### v0.24.0 - Persistence Ignorance 💾
- Detect ORM decorators in domain entities
- Check for ORM library imports in domain
- Suggest persistence ignorance pattern

### v0.25.0 - Null Object Pattern Detection 🎭
- Count null checks in use cases
- Suggest Null Object pattern
- Detect repositories returning null vs Null Object

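A sketch of the suggested fix: instead of returning `null`, a repository returns a harmless default (the `GUEST` object and `findUser` helper are hypothetical):

```typescript
interface User { name: string; canEdit(): boolean }

// Null Object: safe defaults, so callers need no null checks.
const GUEST: User = { name: "guest", canEdit: () => false }

function findUser(id: string, store: Map<string, User>): User {
    return store.get(id) ?? GUEST
}
```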
### v0.26.0 - Primitive Obsession Detection 🔢
- Detect methods with > 3 primitive parameters
- Check for common Value Object candidates
- Suggest creating Value Objects

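For instance, a signature like `createUser(name, email, street, city, zip)` carries five primitives and would be flagged, while grouping the related primitives into a Value Object passes (names are illustrative):

```typescript
// The address primitives grouped into one Value Object candidate.
interface Address {
    street: string
    city: string
    zip: string
}

function createUser(name: string, email: string, address: Address) {
    return { name, email, address }
}
```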
### v0.27.0 - Service Locator Anti-pattern 🔍
- Detect global ServiceLocator/Registry classes
- Validate constructor injection
- Suggest DI container usage

### v0.28.0 - Double Dispatch Pattern 🎯
- Detect frequent instanceof or type checking
- Check for long if-else/switch by type
- Suggest Visitor pattern

### v0.29.0 - Entity Identity Validation 🆔
- Detect public mutable ID fields
- Validate ID is Value Object
- Check for equals() method implementation

### v0.30.0 - Saga Pattern Detection 🔄
- Detect multiple external calls without compensation
- Validate compensating transactions
- Suggest Saga pattern for distributed operations

### v0.31.0 - Anti-Corruption Layer Detection 🛡️
- Detect direct legacy library imports
- Check for domain adaptation to external APIs
- Validate translator/adapter layer exists

### v0.32.0 - Ubiquitous Language Validation 📖
**Priority: HIGH**
- Detect synonyms for same concepts (User/Customer/Client)
- Check inconsistent verbs (Create/Register/SignUp)
- Require Ubiquitous Language glossary

---

## Version 1.0.0 - Stable Release 🚀
**Target:** Q4 2026 (December)
**Priority:** 🔥 CRITICAL

Production-ready stable release with ecosystem:

### Core Features
- ✅ All detection features stabilized
- ✅ Configuration & presets
- ✅ Visualization & graphs
- ✅ CI/CD integration
- ✅ Auto-fix & code generation
- ✅ Metrics & quality score
- ✅ 30+ DDD pattern detectors

### Ecosystem

#### VS Code Extension
- Real-time detection as you type
- Inline suggestions and quick fixes
- Problem panel integration
- Code actions for auto-fix

#### JetBrains Plugin
- IntelliJ IDEA, WebStorm support
- Inspection integration
- Quick fixes

#### Web Dashboard
- Team quality metrics
- Historical trends
- Per-developer analytics
- Project comparison

#### GitHub Integration
- GitHub App
- Code scanning integration
- Dependency insights
- Security alerts for architecture violations

---

## 💡 Future Ideas (Post-1.0.0)

### Multi-Language Support
- Python (Django/Flask + DDD)
- C# (.NET + Clean Architecture)
- Java (Spring Boot + DDD)
- Go (Clean Architecture)

### AI-Powered Features
- LLM-based fix suggestions
- AI generates code for complex refactorings
- Claude/GPT API integration
- Natural language architecture queries

### Team Analytics
- Per-developer quality metrics
- Team quality trends dashboard
- Technical debt tracking
- Leaderboards (gamification)

### Security Features
- Secrets detection (API keys, passwords)
- SQL injection pattern detection
- XSS vulnerability patterns
- Dependency vulnerability scanning

### Code Quality Metrics
- Maintainability index
- Technical debt estimation
- Code duplication detection
- Complexity trends

---

## 🎯 Success Criteria

### v0.10.0 (Market Parity Achieved)
- ✅ Configuration support (compete with ESLint)
- ✅ Visualization (compete with dependency-cruiser)
- ✅ CI/CD integration (compete with SonarQube)
- ✅ Auto-fix (UNIQUE! Game-changer)
- ✅ Metrics dashboard (compete with SonarQube)

### v1.0.0 (Enterprise Ready)
- ✅ 1000+ GitHub stars
- ✅ 100+ npm installs/week
- ✅ 10+ enterprise adopters
- ✅ 99%+ test coverage
- ✅ Complete documentation
- ✅ IDE extensions available

---

## 📊 Competitive Positioning

| Feature | Guardian v1.0 | SonarQube | dependency-cruiser | ArchUnit | FTA |
|---------|---------------|-----------|--------------------|----------|-----|
| TypeScript Focus | ✅✅ | ⚠️ | ✅✅ | ❌ | ✅✅ |
| Hardcode + AI Tips | ✅✅ UNIQUE | ⚠️ | ❌ | ❌ | ❌ |
| Architecture (DDD) | ✅✅ UNIQUE | ⚠️ | ⚠️ | ✅ | ❌ |
| Visualization | ✅ | ✅ | ✅✅ | ❌ | ⚠️ |
| Auto-Fix | ✅✅ UNIQUE | ❌ | ❌ | ❌ | ❌ |
| Configuration | ✅ | ✅✅ | ✅ | ✅ | ⚠️ |
| CI/CD | ✅ | ✅✅ | ✅ | ✅ | ⚠️ |
| Metrics | ✅ | ✅✅ | ⚠️ | ❌ | ✅✅ |
| Security (SAST) | ❌ | ✅✅ | ❌ | ❌ | ❌ |
| Multi-language | ❌ | ✅✅ | ⚠️ | ⚠️ | ❌ |

**Guardian's Position:** The AI-First Architecture Guardian for TypeScript/DDD teams

---

## 🤝 Contributing

Want to help build Guardian? Check out:
- [GitHub Issues](https://github.com/samiyev/puaros/issues)
- [CONTRIBUTING.md](../../CONTRIBUTING.md)
- [Discord Community](#) (coming soon)

---

## 📈 Versioning

Guardian follows [Semantic Versioning](https://semver.org/):
- **MAJOR** (1.0.0) - Breaking changes
- **MINOR** (0.x.0) - New features, backwards compatible
- **PATCH** (0.x.y) - Bug fixes

Until 1.0.0, minor versions may include breaking changes as we iterate on the API.

---

8. [General Software Quality Standards](#8-general-software-quality-standards)
9. [Code Complexity Metrics](#9-code-complexity-metrics)
10. [Additional Authoritative Sources](#10-additional-authoritative-sources)
11. [Anemic Domain Model Detection](#11-anemic-domain-model-detection)
12. [Aggregate Boundary Validation (DDD Tactical Patterns)](#12-aggregate-boundary-validation-ddd-tactical-patterns)
13. [Secret Detection & Security](#13-secret-detection--security)
14. [Severity-Based Prioritization & Technical Debt](#14-severity-based-prioritization--technical-debt)

---

## 11. Anemic Domain Model Detection

### Martin Fowler's Original Blog Post (2003)

**Blog Post: "Anemic Domain Model"** (November 25, 2003)
- Author: Martin Fowler
- Published: November 25, 2003
- Described as an anti-pattern related to domain-driven design and application architecture
- Basic symptom: domain objects have hardly any behavior, making them little more than bags of getters and setters
- Reference: [Martin Fowler - Anemic Domain Model](https://martinfowler.com/bliki/AnemicDomainModel.html)

**Key Problems Identified:**
- "The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing"
- "There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have"
- "The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects"
- "This is contrary to the basic idea of object-oriented design; which is to combine data and process together"

**Why It's an Anti-pattern:**
- Fowler argues that anemic domain models incur all of the costs of a domain model without yielding any of the benefits
- The logic that belongs in a domain object is domain logic: validations, calculations, business rules
- Separating data from behavior violates core OOP principles
- Reference: [Wikipedia - Anemic Domain Model](https://en.wikipedia.org/wiki/Anemic_domain_model)

### Rich Domain Model vs Transaction Script

**Martin Fowler: Transaction Script Pattern**
- Transaction Script organizes business logic by procedures, where each procedure handles a single request
- Good for simple logic with not-null checks and basic calculations
- Reference: [Martin Fowler - Transaction Script](https://martinfowler.com/eaaCatalog/transactionScript.html)

**When to Use a Rich Domain Model:**
- When you have complicated and ever-changing business rules involving validation, calculations, and derivations
- An object model handles complex domain logic better than procedural scripts
- Reference: [InformIT - Domain Logic Patterns](https://www.informit.com/articles/article.aspx?p=1398617&seqNum=2)

**Comparison:**
- Transaction Script is better for simple logic
- Domain Model is better when things get complicated with complex business rules
- You can refactor from Transaction Script to Domain Model, but it's a harder change
- Reference: [Medium - Transaction Script vs Domain Model](https://medium.com/@vibstudio_7040/transaction-script-active-record-and-domain-model-the-good-the-bad-and-the-ugly-c5b80a733305)

### Domain-Driven Design Context

**Eric Evans: Domain-Driven Design** (2003)
- Entities should have both identity and behavior
- Rich domain models place business logic within domain entities
- Anemic models violate DDD principles by separating data from behavior
- Reference: Already covered in Section 10 - [Domain-Driven Design Book](#domain-driven-design)

**Community Discussion:**
- Some argue anemic models can follow SOLID design principles
- However, the consensus among DDD practitioners aligns with Fowler's anti-pattern view
- Reference: [Stack Overflow - Anemic Domain Model Anti-Pattern](https://stackoverflow.com/questions/6293981/concrete-examples-on-why-the-anemic-domain-model-is-considered-an-anti-pattern)

---

## 12. Aggregate Boundary Validation (DDD Tactical Patterns)

### Eric Evans: Domain-Driven Design (2003)

**Original Book Definition:**
- Aggregate: "A cluster of associated objects that we treat as a unit for the purpose of data changes"
- An aggregate defines a consistency boundary around one or more entities
- Exactly one entity in an aggregate is the root
- Reference: [Microsoft Learn - Tactical DDD](https://learn.microsoft.com/en-us/azure/architecture/microservices/model/tactical-ddd)

**DDD Reference Document** (2015)
- Official Domain-Driven Design Reference by Eric Evans
- Contains comprehensive definitions of Aggregates and boundaries
- Reference: [Domain Language - DDD Reference PDF](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)

### Vaughn Vernon: Implementing Domain-Driven Design (2013)

**Chapter 10: Aggregates** (Page 347)
- Author: Vaughn Vernon
- Publisher: Addison-Wesley
- ISBN: 978-0321834577
- Available at: [Amazon - Implementing DDD](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)

**Key Rules from the Chapter:**
- **Rule: Model True Invariants in Consistency Boundaries**
- **Rule: Design Small Aggregates**
- **Rule: Reference Other Aggregates by Identity**
- **Rule: Use Eventual Consistency Outside the Boundary**

**Effective Aggregate Design Series:**
- Three-part essay series by Vaughn Vernon
- Available as downloadable PDFs
- Licensed under Creative Commons Attribution-NoDerivs 3.0
- Reference: [Kalele - Effective Aggregate Design](https://kalele.io/effective-aggregate-design/)

**Appendix A: Aggregates and Event Sourcing:**
- Additional coverage of aggregate patterns
- Practical implementation guidance
- Reference: Available in the book

### Tactical DDD Patterns

**Microsoft Azure Architecture Center:**
- "Using tactical DDD to design microservices"
- Official Microsoft documentation on aggregate boundaries
- Comprehensive guide for microservices architecture
- Reference: [Microsoft Learn - Tactical DDD](https://learn.microsoft.com/en-us/azure/architecture/microservices/model/tactical-ddd)

**SOCADK Design Practice Repository:**
- Summaries of artifacts, templates, and techniques for tactical DDD
- Practical examples of aggregate boundary enforcement
- Reference: [SOCADK - Tactical DDD](https://socadk.github.io/design-practice-repository/activities/DPR-TacticDDD.html)

### Why Aggregate Boundaries Matter

**Transactional Boundary:**
- What makes it an aggregate is the transactional boundary
- Changes to an aggregate must be atomic
- Ensures consistency within the boundary
- Reference: [Medium - Mastering Aggregate Design](https://medium.com/ssense-tech/ddd-beyond-the-basics-mastering-aggregate-design-26591e218c8c)

**Cross-Aggregate References:**
- Aggregates should only reference other aggregates by ID, not by direct entity references
- Prevents tight coupling between aggregates
- Maintains clear boundaries
- Reference: [Lev Gorodinski - Two Sides of DDD](http://gorodinski.com/blog/2013/03/11/the-two-sides-of-domain-driven-design/)

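Vernon's "reference other aggregates by identity" rule fits in a few lines (the types are invented for the example):

```typescript
type CustomerId = string

// Order references the Customer aggregate by ID only, never by entity.
// A field like `customer: Customer` here would cross the aggregate boundary.
class Order {
    constructor(
        public readonly id: string,
        public readonly customerId: CustomerId,
    ) {}
}
```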
---

## 13. Secret Detection & Security

### OWASP Standards

**OWASP Secrets Management Cheat Sheet**
- Official OWASP best practices and guidelines for secrets management
- Comprehensive coverage of the risks of hardcoded credentials
- Reference: [OWASP - Secrets Management](https://cheatsheetseries.owasp.org/cheatsheets/Secrets_Management_Cheat_Sheet.html)

**OWASP DevSecOps Guideline**
- Section on Secrets Management (v-0.2)
- Integration with CI/CD pipelines
- Reference: [OWASP - DevSecOps Secrets](https://owasp.org/www-project-devsecops-guideline/latest/01a-Secrets-Management)

**OWASP Password Management: Hardcoded Password**
- Vulnerability documentation on hardcoded passwords
- "It is never a good idea to hardcode a password"
- Hardcoding makes fixing the problem extremely difficult
- Reference: [OWASP - Hardcoded Password Vulnerability](https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password)

### Key Security Principles

**Don't Hardcode Secrets:**
- Secrets should not be hardcoded
- Should not be stored unencrypted
- Should not be stored in source code
- Reference: [OWASP Secrets Management Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Secrets_Management_Cheat_Sheet.html)

**Centralized Management:**
- Growing need to centralize storage, provisioning, auditing, rotation, and management of secrets
- Control access and prevent secrets from leaking
- Use purpose-built tools for encryption at rest
- Reference: [OWASP SAMM - Secret Management](https://owaspsamm.org/model/implementation/secure-deployment/stream-b/)

**Prevention Tools:**
- Use pre-commit hooks to prevent secrets from entering the codebase
- Automated scanning in CI/CD pipelines
- Reference: [GitHub OWASP Secrets Management](https://github.com/dominikdesmit/owasp-secrets-management)

### GitHub Secret Scanning

**Official GitHub Documentation:**
- About Secret Scanning: automated detection of secrets in repositories
- Scans for patterns and heuristics matching known types of secrets
- Reference: [GitHub Docs - Secret Scanning](https://docs.github.com/code-security/secret-scanning/about-secret-scanning)

**How It Works:**
- Automatically scans repository contents for sensitive data (API keys, passwords, tokens)
- Scans commits, issues, and pull requests continuously
- Real-time alerts to repository administrators
- Reference: [GitHub Docs - Keeping Secrets Secure](https://docs.github.com/en/code-security/secret-scanning)

**AI-Powered Detection:**
- Copilot Secret Scanning uses large language models (LLMs)
- Identifies unstructured secrets (generic passwords) in source code
- Enhances detection beyond pattern matching
- Reference: [GitHub Docs - Copilot Secret Scanning](https://docs.github.com/en/code-security/secret-scanning/copilot-secret-scanning)

**Supported Patterns:**
- 350+ secret patterns detected
- AWS, GitHub, NPM, SSH, GCP, Slack, Basic Auth, JWT tokens
- Reference: [GitHub Docs - Supported Patterns](https://docs.github.com/en/code-security/secret-scanning/introduction/supported-secret-scanning-patterns)

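In miniature, pattern-based scanning reduces to regexes like the two below (illustrative only; real scanners ship hundreds of rules plus entropy and LLM heuristics):

```typescript
// Two illustrative detectors; production scanners use far larger rule sets.
const SECRET_PATTERNS: Record<string, RegExp> = {
    awsAccessKeyId: /AKIA[0-9A-Z]{16}/,
    githubToken: /ghp_[A-Za-z0-9]{36}/,
}

function findSecrets(source: string): string[] {
    return Object.entries(SECRET_PATTERNS)
        .filter(([, pattern]) => pattern.test(source))
        .map(([name]) => name)
}
```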
### Mobile Security

**OWASP Mobile Security:**
- "Secrets security is the most important issue for mobile applications"
- The only safe way: keep secrets off the client side entirely
- Move sensitive information to the backend
- Reference: [GitGuardian - OWASP Top 10 Mobile](https://blog.gitguardian.com/owasp-top-10-for-mobile-secrets/)

### Third-Party Tools

**GitGuardian:**
- Secrets security and non-human identity governance
- Enterprise-grade secret detection
- Reference: [GitGuardian Official Site](https://www.gitguardian.com/)

**Yelp detect-secrets:**
- Open-source, enterprise-friendly secret detection
- Prevents secrets from entering the codebase
- Reference: [GitHub - Yelp detect-secrets](https://github.com/Yelp/detect-secrets)

---

## 14. Severity-Based Prioritization & Technical Debt

### Academic Research on Technical Debt Prioritization

**Systematic Literature Review** (2020)
- Title: "A systematic literature review on Technical Debt prioritization"
- Analyzed 557 unique papers, included 44 primary studies
- Finding: "Technical Debt prioritization research is preliminary and there is no consensus on what the important factors are and how to measure them"
- Reference: [ScienceDirect - TD Prioritization](https://www.sciencedirect.com/science/article/pii/S016412122030220X)

**IEEE Conference Paper** (2021)
- Title: "Technical Debt Prioritization: Taxonomy, Methods, Results, and Practical Characteristics"
- Systematic mapping review of 112 studies, resulting in 51 unique papers
- Classified methods in a two-level taxonomy with 10 categories
- Reference: [IEEE Xplore - TD Prioritization](https://ieeexplore.ieee.org/document/9582595/)

**Identifying Severity of Technical Debt** (2023)
- Journal: Software Quality Journal
- Title: "Identifying the severity of technical debt issues based on semantic and structural information"
- Problem: "Existing studies mainly focus on detecting TD through source code or comments but usually ignore the severity degree of TD issues"
- Proposed an approach combining semantic and structural information
- Reference: [Springer - TD Severity](https://link.springer.com/article/10.1007/s11219-023-09651-3)

### SonarQube Severity Classification

**Current Severity Levels** (SonarQube 10.2+)
- Severity levels: **info, low, medium, high, and blocker**
- Reference: [SonarQube Docs - Metrics Definition](https://docs.sonarsource.com/sonarqube-server/user-guide/code-metrics/metrics-definition)

**High/Blocker Severity:**
- An issue with a significant probability of severe unintended consequences
- Should be fixed immediately
- Includes bugs leading to production crashes
- Security flaws allowing attackers to extract sensitive data or execute malicious code
- Reference: [SonarQube Docs - Metrics](https://docs.sonarsource.com/sonarqube-server/10.8/user-guide/code-metrics/metrics-definition)

**Medium Severity:**
- A quality flaw that can highly impact developer productivity
- Uncovered code, duplicated blocks, unused parameters
- Reference: [SonarQube Documentation](https://docs.sonarsource.com/sonarqube-server/10.8/user-guide/code-metrics/metrics-definition)

**Low Severity:**
- A quality flaw with slight impact on developer productivity
- Lines too long, switch statements with few cases
- Reference: [SonarQube Documentation](https://docs.sonarsource.com/sonarqube-server/10.8/user-guide/code-metrics/metrics-definition)

**Info Severity:**
- No expected impact on the application
- Informational purposes only
- Reference: [SonarQube Documentation](https://docs.sonarsource.com/sonarqube-server/10.8/user-guide/code-metrics/metrics-definition)

### Legacy SonarQube Classification (pre-10.2)

**Five Severity Levels:**
- **BLOCKER**: Bug with a high probability of impacting behavior in production (memory leak, unclosed JDBC connection)
- **CRITICAL**: Bug with a low probability of impacting production behavior, or a security flaw (empty catch block, SQL injection)
- **MAJOR**: Quality flaw highly impacting developer productivity (uncovered code, duplicated blocks, unused parameters)
- **MINOR**: Quality flaw slightly impacting developer productivity (lines too long, switch statements with fewer than 3 cases)
- **INFO**: Informational only
- Reference: [SonarQube Community - Severity Categories](https://community.sonarsource.com/t/sonarqube-severity-categories/115287)

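In practice, a severity scheme like this boils down to an ordering used for triage; a sketch (the numeric ranks are arbitrary assumptions, only the ordering matters):

```typescript
type Severity = "info" | "low" | "medium" | "high" | "blocker"

// Arbitrary ranks; only the relative ordering matters for triage.
const SEVERITY_RANK: Record<Severity, number> = {
    info: 0, low: 1, medium: 2, high: 3, blocker: 4,
}

function sortBySeverity<T extends { severity: Severity }>(issues: T[]): T[] {
    return [...issues].sort((a, b) => SEVERITY_RANK[b.severity] - SEVERITY_RANK[a.severity])
}
```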
### Research on Impact and Effectiveness

**Empirical Study** (2020)
- Title: "Some SonarQube issues have a significant but small effect on faults and changes"
- Published in: ScienceDirect (Information and Software Technology)
- Large-scale empirical study on SonarQube issue impact
- Reference: [ScienceDirect - SonarQube Issues](https://www.sciencedirect.com/science/article/abs/pii/S0164121220301734)

**Machine Learning for Prioritization** (2024)
- Recent approaches: "Development teams could integrate models into CI/CD pipelines"
- Automatically flag potential TD issues during code reviews
- Prioritize based on severity
- Reference: [arXiv - Technical Debt Management](https://arxiv.org/html/2403.06484v1)

### Multiple-Case Study

**Aligning TD with Business Objectives** (2018)
- Title: "Aligning Technical Debt Prioritization with Business Objectives: A Multiple-Case Study"
- Demonstrates the importance of priority-based technical debt management
- Reference: [ResearchGate - TD Business Alignment](https://www.researchgate.net/publication/328903587_Aligning_Technical_Debt_Prioritization_with_Business_Objectives_A_Multiple-Case_Study)

---

## Conclusion

The code quality detection rules implemented in Guardian are firmly grounded in:

1. **Academic Research**: Peer-reviewed papers on software maintainability, complexity metrics, code quality, technical debt prioritization, and severity classification
2. **Industry Standards**: ISO/IEC 25010, SonarQube rules, OWASP security guidelines, Google and Airbnb style guides
3. **Authoritative Books**:
    - Robert C. Martin's "Clean Architecture" (2017)
    - Vaughn Vernon's "Implementing Domain-Driven Design" (2013)
    - Eric Evans' "Domain-Driven Design" (2003)
    - Martin Fowler's "Patterns of Enterprise Application Architecture" (2002)
    - Martin Fowler's "Refactoring" (1999, 2018)
    - Steve McConnell's "Code Complete" (1993, 2004)
4. **Expert Guidance**: Martin Fowler, Robert C. Martin (Uncle Bob), Eric Evans, Vaughn Vernon, Alistair Cockburn, Kent Beck
5. **Security Standards**: OWASP Secrets Management, GitHub Secret Scanning, GitGuardian best practices
6. **Open Source Tools**: ArchUnit, SonarQube, ESLint, Secretlint - widely adopted in enterprise environments

These rules represent decades of software engineering wisdom, empirical research, security best practices, and battle-tested practices from the world's leading software organizations and thought leaders.

---

**Document Version**: 1.1
**Last Updated**: 2025-11-26

**Questions or want to contribute research?**
- 📧 Email: fozilbek.samiyev@gmail.com
- 🐙 GitHub: https://github.com/samiyev/puaros/issues

---

- [Entity Exposure](#entity-exposure)
- [Repository Pattern](#repository-pattern)
- [Naming Conventions](#naming-conventions)
- [Anemic Domain Model Detection](#anemic-domain-model-detection)
- [Aggregate Boundary Validation](#aggregate-boundary-validation)
- [Secret Detection](#secret-detection)
- [Severity-Based Prioritization](#severity-based-prioritization)
- [Full Research Citations](#full-research-citations)

---

## Anemic Domain Model Detection

### Why it matters

Anemic domain models violate core OOP principles:
- ❌ **No behavior** - Entities become data bags with only getters/setters
- ❌ **Logic in services** - Business logic scattered across service layers
- ❌ **Violates OOP** - Separates data from behavior
- ❌ **Higher complexity** - Loses the benefits of domain modeling

||||
### Who says so?
|
||||
|
||||
**Martin Fowler's Original Anti-Pattern:**
|
||||
- **Blog Post: "Anemic Domain Model"** (November 25, 2003)
|
||||
> "The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space... The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects."
|
||||
- Published over 20 years ago, still relevant today
|
||||
- [Read Fowler's post](https://martinfowler.com/bliki/AnemicDomainModel.html)
|
||||
|
||||
**Why It's an Anti-pattern:**
|
||||
> "This is contrary to the basic idea of object-oriented design; which is to combine data and process together."
|
||||
- Incurs all costs of domain model without any benefits
|
||||
- Logic should be in domain objects: validations, calculations, business rules
|
||||
- [Wikipedia - Anemic Domain Model](https://en.wikipedia.org/wiki/Anemic_domain_model)
|
||||
|
||||
**Rich Domain Model vs Transaction Script:**
|
||||
- **Transaction Script**: Good for simple logic (Fowler, 2002)
|
||||
- **Rich Domain Model**: Better for complex, ever-changing business rules
|
||||
- Can refactor from Transaction Script to Domain Model, but it's harder than starting right
|
||||
- [Martin Fowler - Transaction Script](https://martinfowler.com/eaaCatalog/transactionScript.html)
|
||||
|
||||
**Domain-Driven Design Context:**
|
||||
- **Eric Evans (2003)**: Entities should have both identity AND behavior
|
||||
- Anemic models violate DDD by separating data from behavior
|
||||
- [Stack Overflow discussion](https://stackoverflow.com/questions/6293981/concrete-examples-on-why-the-anemic-domain-model-is-considered-an-anti-pattern)
|
||||
|
||||
[Read full research →](./RESEARCH_CITATIONS.md#11-anemic-domain-model-detection)
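
Fowler's symptom test can be sketched in a few lines. The `Invoice` classes below are hypothetical illustrations written for this section, not code from Guardian's codebase:

```typescript
// Anemic: a bare data bag - every rule about invoices must live somewhere else,
// typically scattered across service classes.
class AnemicInvoice {
    amount = 0
    paid = false
}

// Rich: the invariant "an invoice can only be paid once" lives with the data.
class Invoice {
    private paid = false

    constructor(private readonly amount: number) {
        if (amount <= 0) throw new Error("Amount must be positive")
    }

    pay(): void {
        if (this.paid) throw new Error("Invoice already paid")
        this.paid = true
    }

    isPaid(): boolean {
        return this.paid
    }
}

const invoice = new Invoice(100)
invoice.pay()
console.log(invoice.isPaid()) // true
```

A class shaped like `AnemicInvoice` sitting in a domain layer is the getter/setter-only pattern this rule targets.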
---

## Aggregate Boundary Validation

### Why it matters

Proper aggregate boundaries ensure:
- ✅ **Consistency** - Atomic changes within boundaries
- ✅ **Low coupling** - Aggregates are loosely connected
- ✅ **Clear transactions** - One aggregate = one transaction
- ✅ **Maintainability** - Boundaries prevent complexity spread

### The Rules

**Vaughn Vernon's Four Rules (2013):**
1. **Model True Invariants in Consistency Boundaries**
2. **Design Small Aggregates**
3. **Reference Other Aggregates by Identity**
4. **Use Eventual Consistency Outside the Boundary**

### Who says so?

**Eric Evans: Domain-Driven Design (2003)**
- **Original Definition**:
  > "A cluster of associated objects that we treat as a unit for the purpose of data changes"
- An aggregate defines a consistency boundary
- Exactly one entity is the aggregate root
- [Microsoft Learn - Tactical DDD](https://learn.microsoft.com/en-us/azure/architecture/microservices/model/tactical-ddd)

**Vaughn Vernon: Implementing Domain-Driven Design (2013)**
- **Chapter 10: Aggregates** (Page 347)
- ISBN: 978-0321834577
- Comprehensive rules for aggregate design
- Three-part essay series: "Effective Aggregate Design"
- [Available at Kalele](https://kalele.io/effective-aggregate-design/)

**Why Boundaries Matter:**
- **Transactional Boundary**: Changes must be atomic
- **Reference by ID**: No direct entity references across aggregates
- **Prevents tight coupling**: Maintains clear boundaries
- [Medium - Mastering Aggregate Design](https://medium.com/ssense-tech/ddd-beyond-the-basics-mastering-aggregate-design-26591e218c8c)

**Microsoft Azure Documentation:**
- Official guidance for microservices architecture
- Comprehensive aggregate boundary patterns
- [Microsoft Learn - Tactical DDD](https://learn.microsoft.com/en-us/azure/architecture/microservices/model/tactical-ddd)

[Read full research →](./RESEARCH_CITATIONS.md#12-aggregate-boundary-validation-ddd-tactical-patterns)
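
Vernon's rule 3, reference other aggregates by identity, can be sketched as follows. `Customer` and `Order` here are illustrative names, not Guardian's own types:

```typescript
// Holding a full Customer object inside Order would couple the two aggregates'
// lifecycles and invite cross-boundary modifications in one transaction.
class Customer {
    constructor(public readonly id: string) {}
}

class Order {
    // The boundary-safe alternative: store only the customer's identity.
    // The Customer aggregate is loaded separately when its data is needed.
    constructor(
        public readonly id: string,
        public readonly customerId: string,
    ) {}
}

const customer = new Customer("cust-1")
const order = new Order("order-1", customer.id)
console.log(order.customerId) // "cust-1"
```

Because `Order` never holds a `Customer` reference, each aggregate can be persisted, loaded, and made consistent on its own.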
---

## Secret Detection

### Why it matters

Hardcoded secrets create critical security risks:
- 🔴 **Data breaches** - Exposed credentials lead to unauthorized access
- 🔴 **Production incidents** - Leaked tokens cause service disruptions
- 🔴 **Compliance violations** - GDPR, PCI-DSS, and SOC 2 requirements
- 🔴 **Impossible to rotate** - Secrets baked into code are difficult to change

### Who says so?

**OWASP Security Standards:**
- **OWASP Secrets Management Cheat Sheet**
  > "Secrets should not be hardcoded, should not be unencrypted, and should not be stored in source code."
- Official best practices from the OWASP Foundation
- [Read the cheat sheet](https://cheatsheetseries.owasp.org/cheatsheets/Secrets_Management_Cheat_Sheet.html)

- **OWASP Hardcoded Password Vulnerability**
  > "It is never a good idea to hardcode a password, as it allows all of the project's developers to view the password and makes fixing the problem extremely difficult."
- [OWASP Documentation](https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password)

**GitHub Secret Scanning:**
- **Official GitHub Documentation**
- Automatically scans 350+ secret patterns
- Detects AWS, GitHub, NPM, SSH, GCP, and Slack tokens
- AI-powered detection with Copilot Secret Scanning
- [GitHub Docs](https://docs.github.com/code-security/secret-scanning/about-secret-scanning)

**Key Security Principles:**
- **Centralized Management**: Use purpose-built secret management tools
- **Prevention Tools**: Pre-commit hooks keep secrets from entering the codebase
- **Encryption at Rest**: Never store secrets in plaintext
- [OWASP SAMM - Secret Management](https://owaspsamm.org/model/implementation/secure-deployment/stream-b/)

**Mobile Security:**
- OWASP: secrets security is the most important issue for mobile applications
- The only safe approach: keep secrets off the client side entirely
- [GitGuardian - OWASP Top 10 Mobile](https://blog.gitguardian.com/owasp-top-10-for-mobile-secrets/)

[Read full research →](./RESEARCH_CITATIONS.md#13-secret-detection--security)
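
The pattern-matching core of these scanners can be sketched in a few lines. The two regexes below are illustrative only; production scanners such as GitHub's maintain 350+ curated patterns:

```typescript
// Each entry pairs a secret type with a detection regex.
// AKIA + 16 chars is the documented AWS access key ID shape;
// ghp_ + 36 chars is the GitHub personal access token shape.
const SECRET_PATTERNS: Array<{ type: string; pattern: RegExp }> = [
    { type: "AWS Access Key ID", pattern: /AKIA[0-9A-Z]{16}/ },
    { type: "GitHub Personal Access Token", pattern: /ghp_[A-Za-z0-9]{36}/ },
]

function findSecrets(source: string): string[] {
    return SECRET_PATTERNS.filter(({ pattern }) => pattern.test(source)).map(
        ({ type }) => type,
    )
}

// AWS's published example key, safe to use in documentation.
const leaky = 'const key = "AKIAIOSFODNN7EXAMPLE"'
console.log(findSecrets(leaky)) // [ "AWS Access Key ID" ]
console.log(findSecrets('const key = process.env.AWS_KEY')) // []
```

The second call shows the remediation the rule suggests: reading the credential from the environment leaves nothing for the regex to match.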
---

## Severity-Based Prioritization

### Why it matters

Severity classification enables:
- ✅ **Focus on critical issues** - Fix what matters most first
- ✅ **Reduced technical debt** - Prioritize based on impact
- ✅ **Better CI/CD integration** - Fail builds on critical issues only
- ✅ **Team efficiency** - Don't waste time on low-impact issues

### Who says so?

**Academic Research:**
- **Systematic Literature Review (2020)**
    - Title: "A systematic literature review on Technical Debt prioritization"
    - Analyzed 557 papers, included 44 primary studies
    - Finding: a consensus on severity factors is still needed
    - [ScienceDirect](https://www.sciencedirect.com/science/article/pii/S016412122030220X)

- **IEEE Conference Paper (2021)**
    - "Technical Debt Prioritization: Taxonomy, Methods, Results"
    - Reviewed 112 studies
    - Classified methods into 10 categories
    - [IEEE Xplore](https://ieeexplore.ieee.org/document/9582595/)

- **Software Quality Journal (2023)**
    - "Identifying the severity of technical debt issues"
    - Problem: most studies ignore the degree of severity
    - Proposed a combined semantic + structural approach
    - [Springer](https://link.springer.com/article/10.1007/s11219-023-09651-3)

**SonarQube Industry Standard:**
- **Current Classification (10.2+)**:
    - **Blocker/High**: Severe unintended consequences, fix immediately
    - **Medium**: Impacts developer productivity
    - **Low**: Slight impact on productivity
    - **Info**: No expected impact
- [SonarQube Docs](https://docs.sonarsource.com/sonarqube-server/user-guide/code-metrics/metrics-definition)

**Real-World Impact:**
- Development teams integrate severity models into their CI/CD pipelines
- Potential technical-debt issues are flagged automatically during code reviews
- Fixes are prioritized by severity
- [arXiv - Technical Debt Management](https://arxiv.org/html/2403.06484v1)

**Business Alignment:**
- "Aligning Technical Debt Prioritization with Business Objectives" (2018)
- A multiple-case study demonstrating the importance of business-aligned prioritization
- [ResearchGate](https://www.researchgate.net/publication/328903587_Aligning_Technical_Debt_Prioritization_with_Business_Objectives_A_Multiple-Case_Study)

[Read full research →](./RESEARCH_CITATIONS.md#14-severity-based-prioritization--technical-debt)
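
The "fail builds on critical issues only" idea reduces to ranking findings before gating. The rank values and threshold below are an illustrative sketch, not Guardian's internal `SEVERITY_ORDER`:

```typescript
type Severity = "critical" | "high" | "medium" | "low" | "info"

// Lower rank = more urgent. Illustrative values only.
const SEVERITY_RANK: Record<Severity, number> = {
    critical: 0,
    high: 1,
    medium: 2,
    low: 3,
    info: 4,
}

interface Violation {
    rule: string
    severity: Severity
}

// Most urgent findings first, so reports surface what matters most.
function sortBySeverity(violations: Violation[]): Violation[] {
    return [...violations].sort(
        (a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity],
    )
}

// CI gate: fail only when something at or above the threshold is present.
function shouldFailBuild(violations: Violation[], threshold: Severity = "high"): boolean {
    return violations.some((v) => SEVERITY_RANK[v.severity] <= SEVERITY_RANK[threshold])
}

const findings: Violation[] = [
    { rule: "magic-number", severity: "low" },
    { rule: "secret-exposure", severity: "critical" },
]
console.log(sortBySeverity(findings)[0].rule) // "secret-exposure"
console.log(shouldFailBuild(findings)) // true
```

With the threshold at `"high"`, a project full of low-impact findings still builds green while a single leaked secret blocks the pipeline.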
---

## Full Research Citations

For complete academic papers, books, and authoritative sources, see:
@@ -354,8 +544,9 @@ Guardian's rules align with international standards:

Guardian's rules are backed by:

✅ **6 Seminal Books** (1993-2017)
- Clean Architecture (Robert C. Martin, 2017)
- Implementing Domain-Driven Design (Vaughn Vernon, 2013)
- Domain-Driven Design (Eric Evans, 2003)
- Patterns of Enterprise Application Architecture (Martin Fowler, 2002)
- Refactoring (Martin Fowler, 1999)
@@ -363,9 +554,16 @@ Guardian's rules are backed by:

✅ **Academic Research** (1976-2024)
- MIT Course 6.031
- ScienceDirect peer-reviewed studies (2020-2023)
- IEEE Conference papers on Technical Debt
- Software Quality Journal (2023)
- Cyclomatic Complexity (Thomas McCabe, 1976)

✅ **Security Standards**
- OWASP Secrets Management Cheat Sheet
- GitHub Secret Scanning (350+ patterns)
- OWASP Top 10 for Mobile

✅ **International Standards**
- ISO/IEC 25010:2011

@@ -373,10 +571,11 @@ Guardian's rules are backed by:
- Google, Microsoft, Airbnb style guides
- SonarQube (400,000+ organizations)
- AWS documentation
- GitHub security practices

✅ **Thought Leaders**
- Martin Fowler, Robert C. Martin (Uncle Bob), Eric Evans
- Vaughn Vernon, Alistair Cockburn, Kent Beck, Thomas McCabe

---
@@ -388,4 +587,4 @@ Guardian's rules are backed by:
---

*Last updated: 2025-11-26*
packages/guardian/docs/v0.6.0-CONFIGURATION-SPEC.md (new file, 1176 lines)
File diff suppressed because it is too large
@@ -0,0 +1,38 @@
/**
 * BAD EXAMPLE: Anemic Domain Model
 *
 * This Order class only has getters and setters without any business logic.
 * All business logic is likely scattered in services (procedural approach).
 *
 * This violates Domain-Driven Design principles.
 */

class Order {
    private status: string
    private total: number
    private items: any[]

    getStatus(): string {
        return this.status
    }

    setStatus(status: string): void {
        this.status = status
    }

    getTotal(): number {
        return this.total
    }

    setTotal(total: number): void {
        this.total = total
    }

    getItems(): any[] {
        return this.items
    }

    setItems(items: any[]): void {
        this.items = items
    }
}
@@ -0,0 +1,34 @@
/**
 * BAD EXAMPLE: Anemic Domain Model with Public Setters
 *
 * This User class has public setters, which is an anti-pattern in DDD.
 * Public setters allow uncontrolled state changes without validation or business rules.
 *
 * This violates Domain-Driven Design principles and encapsulation.
 */

class User {
    private email: string
    private password: string
    private status: string

    public setEmail(email: string): void {
        this.email = email
    }

    public getEmail(): string {
        return this.email
    }

    public setPassword(password: string): void {
        this.password = password
    }

    public setStatus(status: string): void {
        this.status = status
    }

    public getStatus(): string {
        return this.status
    }
}
@@ -0,0 +1,139 @@
/**
 * GOOD EXAMPLE: Rich Domain Model with Business Logic
 *
 * This Customer class encapsulates business rules and state transitions.
 * No public setters - all changes go through business methods.
 *
 * This follows Domain-Driven Design and encapsulation principles.
 */

interface Address {
    street: string
    city: string
    country: string
    postalCode: string
}

interface DomainEvent {
    type: string
    data: any
}

class Customer {
    private readonly id: string
    private email: string
    private isActive: boolean
    private loyaltyPoints: number
    private address: Address | null
    private readonly events: DomainEvent[] = []

    constructor(id: string, email: string) {
        this.id = id
        this.email = email
        this.isActive = true
        this.loyaltyPoints = 0
        this.address = null
    }

    public activate(): void {
        if (this.isActive) {
            throw new Error("Customer is already active")
        }
        this.isActive = true
        this.events.push({
            type: "CustomerActivated",
            data: { customerId: this.id },
        })
    }

    public deactivate(reason: string): void {
        if (!this.isActive) {
            throw new Error("Customer is already inactive")
        }
        this.isActive = false
        this.events.push({
            type: "CustomerDeactivated",
            data: { customerId: this.id, reason },
        })
    }

    public changeEmail(newEmail: string): void {
        if (!this.isValidEmail(newEmail)) {
            throw new Error("Invalid email format")
        }
        if (this.email === newEmail) {
            return
        }
        const oldEmail = this.email
        this.email = newEmail
        this.events.push({
            type: "EmailChanged",
            data: { customerId: this.id, oldEmail, newEmail },
        })
    }

    public updateAddress(address: Address): void {
        if (!this.isValidAddress(address)) {
            throw new Error("Invalid address")
        }
        this.address = address
        this.events.push({
            type: "AddressUpdated",
            data: { customerId: this.id },
        })
    }

    public addLoyaltyPoints(points: number): void {
        if (points <= 0) {
            throw new Error("Points must be positive")
        }
        if (!this.isActive) {
            throw new Error("Cannot add points to inactive customer")
        }
        this.loyaltyPoints += points
        this.events.push({
            type: "LoyaltyPointsAdded",
            data: { customerId: this.id, points },
        })
    }

    public redeemLoyaltyPoints(points: number): void {
        if (points <= 0) {
            throw new Error("Points must be positive")
        }
        if (this.loyaltyPoints < points) {
            throw new Error("Insufficient loyalty points")
        }
        this.loyaltyPoints -= points
        this.events.push({
            type: "LoyaltyPointsRedeemed",
            data: { customerId: this.id, points },
        })
    }

    public getEmail(): string {
        return this.email
    }

    public getLoyaltyPoints(): number {
        return this.loyaltyPoints
    }

    public getAddress(): Address | null {
        return this.address ? { ...this.address } : null
    }

    public getEvents(): DomainEvent[] {
        return [...this.events]
    }

    private isValidEmail(email: string): boolean {
        return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)
    }

    private isValidAddress(address: Address): boolean {
        return !!address.street && !!address.city && !!address.country && !!address.postalCode
    }
}

export { Customer }
@@ -0,0 +1,104 @@
/**
 * GOOD EXAMPLE: Rich Domain Model
 *
 * This Order class contains business logic and enforces business rules.
 * State changes are made through business methods, not setters.
 *
 * This follows Domain-Driven Design principles.
 */

type OrderStatus = "pending" | "approved" | "rejected" | "shipped"

interface OrderItem {
    productId: string
    quantity: number
    price: number
}

interface DomainEvent {
    type: string
    data: any
}

class Order {
    private readonly id: string
    private status: OrderStatus
    private items: OrderItem[]
    private readonly events: DomainEvent[] = []

    constructor(id: string, items: OrderItem[]) {
        this.id = id
        this.status = "pending"
        this.items = items
    }

    public approve(): void {
        if (!this.canBeApproved()) {
            throw new Error("Cannot approve order in current state")
        }
        this.status = "approved"
        this.events.push({
            type: "OrderApproved",
            data: { orderId: this.id },
        })
    }

    public reject(reason: string): void {
        if (!this.canBeRejected()) {
            throw new Error("Cannot reject order in current state")
        }
        this.status = "rejected"
        this.events.push({
            type: "OrderRejected",
            data: { orderId: this.id, reason },
        })
    }

    public ship(): void {
        if (!this.canBeShipped()) {
            throw new Error("Order must be approved before shipping")
        }
        this.status = "shipped"
        this.events.push({
            type: "OrderShipped",
            data: { orderId: this.id },
        })
    }

    public addItem(item: OrderItem): void {
        if (this.status !== "pending") {
            throw new Error("Cannot modify approved or shipped order")
        }
        this.items.push(item)
    }

    public calculateTotal(): number {
        return this.items.reduce((sum, item) => sum + item.price * item.quantity, 0)
    }

    public getStatus(): OrderStatus {
        return this.status
    }

    public getItems(): OrderItem[] {
        return [...this.items]
    }

    public getEvents(): DomainEvent[] {
        return [...this.events]
    }

    private canBeApproved(): boolean {
        return this.status === "pending" && this.items.length > 0
    }

    private canBeRejected(): boolean {
        return this.status === "pending"
    }

    private canBeShipped(): boolean {
        return this.status === "approved"
    }
}

export { Order }
@@ -1,7 +1,7 @@
{
  "name": "@samiyev/guardian",
  "version": "0.7.1",
  "description": "Research-backed code quality guardian for AI-assisted development. Detects hardcodes, circular deps, framework leaks, entity exposure, and 8 architecture violations. Enforces Clean Architecture/DDD principles. Works with GitHub Copilot, Cursor, Windsurf, Claude, ChatGPT, Cline, and any AI coding tool.",
  "version": "0.9.0",
  "description": "Research-backed code quality guardian for AI-assisted development. Detects hardcodes, secrets, circular deps, framework leaks, entity exposure, and 9 architecture violations. Enforces Clean Architecture/DDD principles. Works with GitHub Copilot, Cursor, Windsurf, Claude, ChatGPT, Cline, and any AI coding tool.",
  "keywords": [
    "puaros",
    "guardian",
@@ -82,6 +82,10 @@
    "guardian": "./bin/guardian.js"
  },
  "dependencies": {
    "@secretlint/core": "^11.2.5",
    "@secretlint/node": "^11.2.5",
    "@secretlint/secretlint-rule-preset-recommend": "^11.2.5",
    "@secretlint/types": "^11.2.5",
    "commander": "^12.1.0",
    "simple-git": "^3.30.0",
    "tree-sitter": "^0.21.1",

@@ -12,6 +12,9 @@ import { IEntityExposureDetector } from "./domain/services/IEntityExposureDetect
import { IDependencyDirectionDetector } from "./domain/services/IDependencyDirectionDetector"
import { IRepositoryPatternDetector } from "./domain/services/RepositoryPatternDetectorService"
import { IAggregateBoundaryDetector } from "./domain/services/IAggregateBoundaryDetector"
import { ISecretDetector } from "./domain/services/ISecretDetector"
import { IAnemicModelDetector } from "./domain/services/IAnemicModelDetector"
import { IDuplicateValueTracker } from "./domain/services/IDuplicateValueTracker"
import { FileScanner } from "./infrastructure/scanners/FileScanner"
import { CodeParser } from "./infrastructure/parsers/CodeParser"
import { HardcodeDetector } from "./infrastructure/analyzers/HardcodeDetector"
@@ -21,6 +24,9 @@ import { EntityExposureDetector } from "./infrastructure/analyzers/EntityExposur
import { DependencyDirectionDetector } from "./infrastructure/analyzers/DependencyDirectionDetector"
import { RepositoryPatternDetector } from "./infrastructure/analyzers/RepositoryPatternDetector"
import { AggregateBoundaryDetector } from "./infrastructure/analyzers/AggregateBoundaryDetector"
import { SecretDetector } from "./infrastructure/analyzers/SecretDetector"
import { AnemicModelDetector } from "./infrastructure/analyzers/AnemicModelDetector"
import { DuplicateValueTracker } from "./infrastructure/analyzers/DuplicateValueTracker"
import { ERROR_MESSAGES } from "./shared/constants"

/**
@@ -79,6 +85,9 @@ export async function analyzeProject(
        new DependencyDirectionDetector()
    const repositoryPatternDetector: IRepositoryPatternDetector = new RepositoryPatternDetector()
    const aggregateBoundaryDetector: IAggregateBoundaryDetector = new AggregateBoundaryDetector()
    const secretDetector: ISecretDetector = new SecretDetector()
    const anemicModelDetector: IAnemicModelDetector = new AnemicModelDetector()
    const duplicateValueTracker: IDuplicateValueTracker = new DuplicateValueTracker()
    const useCase = new AnalyzeProject(
        fileScanner,
        codeParser,
@@ -89,6 +98,9 @@ export async function analyzeProject(
        dependencyDirectionDetector,
        repositoryPatternDetector,
        aggregateBoundaryDetector,
        secretDetector,
        anemicModelDetector,
        duplicateValueTracker,
    )

    const result = await useCase.execute(options)
@@ -112,5 +124,6 @@ export type {
    DependencyDirectionViolation,
    RepositoryPatternViolation,
    AggregateBoundaryViolation,
    AnemicModelViolation,
    ProjectMetrics,
} from "./application/use-cases/AnalyzeProject"

@@ -9,20 +9,22 @@ import { IEntityExposureDetector } from "../../domain/services/IEntityExposureDe
import { IDependencyDirectionDetector } from "../../domain/services/IDependencyDirectionDetector"
import { IRepositoryPatternDetector } from "../../domain/services/RepositoryPatternDetectorService"
import { IAggregateBoundaryDetector } from "../../domain/services/IAggregateBoundaryDetector"
import { ISecretDetector } from "../../domain/services/ISecretDetector"
import { IAnemicModelDetector } from "../../domain/services/IAnemicModelDetector"
import { IDuplicateValueTracker } from "../../domain/services/IDuplicateValueTracker"
import { SourceFile } from "../../domain/entities/SourceFile"
import { DependencyGraph } from "../../domain/entities/DependencyGraph"
import { ProjectPath } from "../../domain/value-objects/ProjectPath"
import { CollectFiles } from "./pipeline/CollectFiles"
import { ParseSourceFiles } from "./pipeline/ParseSourceFiles"
import { ExecuteDetection } from "./pipeline/ExecuteDetection"
import { AggregateResults } from "./pipeline/AggregateResults"
import {
    ERROR_MESSAGES,
    HARDCODE_TYPES,
    LAYERS,
    NAMING_VIOLATION_TYPES,
    REGEX_PATTERNS,
    REPOSITORY_VIOLATION_TYPES,
    RULES,
    SEVERITY_ORDER,
    type SeverityLevel,
    VIOLATION_SEVERITY_MAP,
} from "../../shared/constants"

export interface AnalyzeProjectRequest {
@@ -43,6 +45,8 @@ export interface AnalyzeProjectResponse {
    dependencyDirectionViolations: DependencyDirectionViolation[]
    repositoryPatternViolations: RepositoryPatternViolation[]
    aggregateBoundaryViolations: AggregateBoundaryViolation[]
    secretViolations: SecretViolation[]
    anemicModelViolations: AnemicModelViolation[]
    metrics: ProjectMetrics
}

@@ -59,8 +63,9 @@ export interface HardcodeViolation {
    type:
        | typeof HARDCODE_TYPES.MAGIC_NUMBER
        | typeof HARDCODE_TYPES.MAGIC_STRING
        | typeof HARDCODE_TYPES.MAGIC_BOOLEAN
        | typeof HARDCODE_TYPES.MAGIC_CONFIG
    value: string | number
    value: string | number | boolean
    file: string
    line: number
    column: number
@@ -164,6 +169,32 @@ export interface AggregateBoundaryViolation {
    severity: SeverityLevel
}

export interface SecretViolation {
    rule: typeof RULES.SECRET_EXPOSURE
    secretType: string
    file: string
    line: number
    column: number
    message: string
    suggestion: string
    severity: SeverityLevel
}

export interface AnemicModelViolation {
    rule: typeof RULES.ANEMIC_MODEL
    className: string
    file: string
    layer: string
    line?: number
    methodCount: number
    propertyCount: number
    hasOnlyGettersSetters: boolean
    hasPublicSetters: boolean
    message: string
    suggestion: string
    severity: SeverityLevel
}

export interface ProjectMetrics {
    totalFiles: number
    totalFunctions: number
@@ -173,442 +204,80 @@ export interface ProjectMetrics {

/**
 * Main use case for analyzing a project's codebase
 * Orchestrates the analysis pipeline through focused components
 */
export class AnalyzeProject extends UseCase<
    AnalyzeProjectRequest,
    ResponseDto<AnalyzeProjectResponse>
> {
    private readonly fileCollectionStep: CollectFiles
    private readonly parsingStep: ParseSourceFiles
    private readonly detectionPipeline: ExecuteDetection
    private readonly resultAggregator: AggregateResults

    constructor(
        private readonly fileScanner: IFileScanner,
        private readonly codeParser: ICodeParser,
        private readonly hardcodeDetector: IHardcodeDetector,
        private readonly namingConventionDetector: INamingConventionDetector,
        private readonly frameworkLeakDetector: IFrameworkLeakDetector,
        private readonly entityExposureDetector: IEntityExposureDetector,
        private readonly dependencyDirectionDetector: IDependencyDirectionDetector,
        private readonly repositoryPatternDetector: IRepositoryPatternDetector,
        private readonly aggregateBoundaryDetector: IAggregateBoundaryDetector,
        fileScanner: IFileScanner,
        codeParser: ICodeParser,
        hardcodeDetector: IHardcodeDetector,
        namingConventionDetector: INamingConventionDetector,
        frameworkLeakDetector: IFrameworkLeakDetector,
        entityExposureDetector: IEntityExposureDetector,
        dependencyDirectionDetector: IDependencyDirectionDetector,
        repositoryPatternDetector: IRepositoryPatternDetector,
        aggregateBoundaryDetector: IAggregateBoundaryDetector,
        secretDetector: ISecretDetector,
        anemicModelDetector: IAnemicModelDetector,
        duplicateValueTracker: IDuplicateValueTracker,
    ) {
        super()
        this.fileCollectionStep = new CollectFiles(fileScanner)
        this.parsingStep = new ParseSourceFiles(codeParser)
        this.detectionPipeline = new ExecuteDetection(
            hardcodeDetector,
            namingConventionDetector,
            frameworkLeakDetector,
            entityExposureDetector,
            dependencyDirectionDetector,
            repositoryPatternDetector,
            aggregateBoundaryDetector,
            secretDetector,
            anemicModelDetector,
            duplicateValueTracker,
        )
        this.resultAggregator = new AggregateResults()
    }

    public async execute(
        request: AnalyzeProjectRequest,
    ): Promise<ResponseDto<AnalyzeProjectResponse>> {
        try {
            const filePaths = await this.fileScanner.scan({
            const { sourceFiles } = await this.fileCollectionStep.execute({
                rootDir: request.rootDir,
                include: request.include,
                exclude: request.exclude,
            })

            const sourceFiles: SourceFile[] = []
            const dependencyGraph = new DependencyGraph()
            let totalFunctions = 0

            for (const filePath of filePaths) {
                const content = await this.fileScanner.readFile(filePath)
                const projectPath = ProjectPath.create(filePath, request.rootDir)

                const imports = this.extractImports(content)
                const exports = this.extractExports(content)

                const sourceFile = new SourceFile(projectPath, content, imports, exports)

                sourceFiles.push(sourceFile)
                dependencyGraph.addFile(sourceFile)

                if (projectPath.isTypeScript()) {
                    const tree = this.codeParser.parseTypeScript(content)
                    const functions = this.codeParser.extractFunctions(tree)
                    totalFunctions += functions.length
                }

                for (const imp of imports) {
                    dependencyGraph.addDependency(
                        projectPath.relative,
                        this.resolveImportPath(imp, filePath, request.rootDir),
                    )
                }
            }

            const violations = this.sortBySeverity(this.detectViolations(sourceFiles))
            const hardcodeViolations = this.sortBySeverity(this.detectHardcode(sourceFiles))
            const circularDependencyViolations = this.sortBySeverity(
                this.detectCircularDependencies(dependencyGraph),
            )
            const namingViolations = this.sortBySeverity(this.detectNamingConventions(sourceFiles))
            const frameworkLeakViolations = this.sortBySeverity(
                this.detectFrameworkLeaks(sourceFiles),
            )
            const entityExposureViolations = this.sortBySeverity(
                this.detectEntityExposures(sourceFiles),
            )
            const dependencyDirectionViolations = this.sortBySeverity(
                this.detectDependencyDirections(sourceFiles),
            )
            const repositoryPatternViolations = this.sortBySeverity(
                this.detectRepositoryPatternViolations(sourceFiles),
            )
            const aggregateBoundaryViolations = this.sortBySeverity(
                this.detectAggregateBoundaryViolations(sourceFiles),
            )
            const metrics = this.calculateMetrics(sourceFiles, totalFunctions, dependencyGraph)

            return ResponseDto.ok({
                files: sourceFiles,
                dependencyGraph,
                violations,
                hardcodeViolations,
                circularDependencyViolations,
                namingViolations,
                frameworkLeakViolations,
                entityExposureViolations,
                dependencyDirectionViolations,
                repositoryPatternViolations,
                aggregateBoundaryViolations,
                metrics,
            const { dependencyGraph, totalFunctions } = this.parsingStep.execute({
                sourceFiles,
                rootDir: request.rootDir,
            })

            const detectionResult = await this.detectionPipeline.execute({
                sourceFiles,
                dependencyGraph,
            })

            const response = this.resultAggregator.execute({
                sourceFiles,
                dependencyGraph,
                totalFunctions,
                ...detectionResult,
            })

            return ResponseDto.ok(response)
        } catch (error) {
            const errorMessage = `${ERROR_MESSAGES.FAILED_TO_ANALYZE}: ${error instanceof Error ? error.message : String(error)}`
            return ResponseDto.fail(errorMessage)
        }
    }

    private extractImports(content: string): string[] {
        const imports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.IMPORT_STATEMENT.exec(content)) !== null) {
            imports.push(match[1])
        }

        return imports
    }

    private extractExports(content: string): string[] {
        const exports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.EXPORT_STATEMENT.exec(content)) !== null) {
            exports.push(match[1])
        }

        return exports
    }

    private resolveImportPath(importPath: string, _currentFile: string, _rootDir: string): string {
        if (importPath.startsWith(".")) {
            return importPath
        }
        return importPath
    }

    private detectViolations(sourceFiles: SourceFile[]): ArchitectureViolation[] {
        const violations: ArchitectureViolation[] = []

        const layerRules: Record<string, string[]> = {
[LAYERS.DOMAIN]: [LAYERS.SHARED],
|
||||
[LAYERS.APPLICATION]: [LAYERS.DOMAIN, LAYERS.SHARED],
|
||||
[LAYERS.INFRASTRUCTURE]: [LAYERS.DOMAIN, LAYERS.APPLICATION, LAYERS.SHARED],
|
||||
[LAYERS.SHARED]: [],
|
||||
}
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
if (!file.layer) {
|
||||
continue
|
||||
}
|
||||
|
||||
const allowedLayers = layerRules[file.layer]
|
||||
|
||||
for (const imp of file.imports) {
|
||||
const importedLayer = this.detectLayerFromImport(imp)
|
||||
|
||||
if (
|
||||
importedLayer &&
|
||||
importedLayer !== file.layer &&
|
||||
!allowedLayers.includes(importedLayer)
|
||||
) {
|
||||
violations.push({
|
||||
rule: RULES.CLEAN_ARCHITECTURE,
|
||||
message: `Layer "${file.layer}" cannot import from "${importedLayer}"`,
|
||||
file: file.path.relative,
|
||||
severity: VIOLATION_SEVERITY_MAP.ARCHITECTURE,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectLayerFromImport(importPath: string): string | undefined {
|
||||
const layers = Object.values(LAYERS)
|
||||
|
||||
for (const layer of layers) {
|
||||
if (importPath.toLowerCase().includes(layer)) {
|
||||
return layer
|
||||
}
|
||||
}
|
||||
|
||||
return undefined
|
||||
}
|
||||
|
||||
private detectHardcode(sourceFiles: SourceFile[]): HardcodeViolation[] {
|
||||
const violations: HardcodeViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const hardcodedValues = this.hardcodeDetector.detectAll(
|
||||
file.content,
|
||||
file.path.relative,
|
||||
)
|
||||
|
||||
for (const hardcoded of hardcodedValues) {
|
||||
violations.push({
|
||||
rule: RULES.HARDCODED_VALUE,
|
||||
type: hardcoded.type,
|
||||
value: hardcoded.value,
|
||||
file: file.path.relative,
|
||||
line: hardcoded.line,
|
||||
column: hardcoded.column,
|
||||
context: hardcoded.context,
|
||||
suggestion: {
|
||||
constantName: hardcoded.suggestConstantName(),
|
||||
location: hardcoded.suggestLocation(file.layer),
|
||||
},
|
||||
severity: VIOLATION_SEVERITY_MAP.HARDCODE,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectCircularDependencies(
|
||||
dependencyGraph: DependencyGraph,
|
||||
): CircularDependencyViolation[] {
|
||||
const violations: CircularDependencyViolation[] = []
|
||||
const cycles = dependencyGraph.findCycles()
|
||||
|
||||
for (const cycle of cycles) {
|
||||
const cycleChain = [...cycle, cycle[0]].join(" → ")
|
||||
violations.push({
|
||||
rule: RULES.CIRCULAR_DEPENDENCY,
|
||||
message: `Circular dependency detected: ${cycleChain}`,
|
||||
cycle,
|
||||
severity: VIOLATION_SEVERITY_MAP.CIRCULAR_DEPENDENCY,
|
||||
})
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectNamingConventions(sourceFiles: SourceFile[]): NamingConventionViolation[] {
|
||||
const violations: NamingConventionViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const namingViolations = this.namingConventionDetector.detectViolations(
|
||||
file.path.filename,
|
||||
file.layer,
|
||||
file.path.relative,
|
||||
)
|
||||
|
||||
for (const violation of namingViolations) {
|
||||
violations.push({
|
||||
rule: RULES.NAMING_CONVENTION,
|
||||
type: violation.violationType,
|
||||
fileName: violation.fileName,
|
||||
layer: violation.layer,
|
||||
file: violation.filePath,
|
||||
expected: violation.expected,
|
||||
actual: violation.actual,
|
||||
message: violation.getMessage(),
|
||||
suggestion: violation.suggestion,
|
||||
severity: VIOLATION_SEVERITY_MAP.NAMING_CONVENTION,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectFrameworkLeaks(sourceFiles: SourceFile[]): FrameworkLeakViolation[] {
|
||||
const violations: FrameworkLeakViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const leaks = this.frameworkLeakDetector.detectLeaks(
|
||||
file.imports,
|
||||
file.path.relative,
|
||||
file.layer,
|
||||
)
|
||||
|
||||
for (const leak of leaks) {
|
||||
violations.push({
|
||||
rule: RULES.FRAMEWORK_LEAK,
|
||||
packageName: leak.packageName,
|
||||
category: leak.category,
|
||||
categoryDescription: leak.getCategoryDescription(),
|
||||
file: file.path.relative,
|
||||
layer: leak.layer,
|
||||
line: leak.line,
|
||||
message: leak.getMessage(),
|
||||
suggestion: leak.getSuggestion(),
|
||||
severity: VIOLATION_SEVERITY_MAP.FRAMEWORK_LEAK,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectEntityExposures(sourceFiles: SourceFile[]): EntityExposureViolation[] {
|
||||
const violations: EntityExposureViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const exposures = this.entityExposureDetector.detectExposures(
|
||||
file.content,
|
||||
file.path.relative,
|
||||
file.layer,
|
||||
)
|
||||
|
||||
for (const exposure of exposures) {
|
||||
violations.push({
|
||||
rule: RULES.ENTITY_EXPOSURE,
|
||||
entityName: exposure.entityName,
|
||||
returnType: exposure.returnType,
|
||||
file: file.path.relative,
|
||||
layer: exposure.layer,
|
||||
line: exposure.line,
|
||||
methodName: exposure.methodName,
|
||||
message: exposure.getMessage(),
|
||||
suggestion: exposure.getSuggestion(),
|
||||
severity: VIOLATION_SEVERITY_MAP.ENTITY_EXPOSURE,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectDependencyDirections(sourceFiles: SourceFile[]): DependencyDirectionViolation[] {
|
||||
const violations: DependencyDirectionViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const directionViolations = this.dependencyDirectionDetector.detectViolations(
|
||||
file.content,
|
||||
file.path.relative,
|
||||
file.layer,
|
||||
)
|
||||
|
||||
for (const violation of directionViolations) {
|
||||
violations.push({
|
||||
rule: RULES.DEPENDENCY_DIRECTION,
|
||||
fromLayer: violation.fromLayer,
|
||||
toLayer: violation.toLayer,
|
||||
importPath: violation.importPath,
|
||||
file: file.path.relative,
|
||||
line: violation.line,
|
||||
message: violation.getMessage(),
|
||||
suggestion: violation.getSuggestion(),
|
||||
severity: VIOLATION_SEVERITY_MAP.DEPENDENCY_DIRECTION,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectRepositoryPatternViolations(
|
||||
sourceFiles: SourceFile[],
|
||||
): RepositoryPatternViolation[] {
|
||||
const violations: RepositoryPatternViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const patternViolations = this.repositoryPatternDetector.detectViolations(
|
||||
file.content,
|
||||
file.path.relative,
|
||||
file.layer,
|
||||
)
|
||||
|
||||
for (const violation of patternViolations) {
|
||||
violations.push({
|
||||
rule: RULES.REPOSITORY_PATTERN,
|
||||
violationType: violation.violationType as
|
||||
| typeof REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE
|
||||
| typeof REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE
|
||||
| typeof REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE
|
||||
| typeof REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
|
||||
file: file.path.relative,
|
||||
layer: violation.layer,
|
||||
line: violation.line,
|
||||
details: violation.details,
|
||||
message: violation.getMessage(),
|
||||
suggestion: violation.getSuggestion(),
|
||||
severity: VIOLATION_SEVERITY_MAP.REPOSITORY_PATTERN,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private detectAggregateBoundaryViolations(
|
||||
sourceFiles: SourceFile[],
|
||||
): AggregateBoundaryViolation[] {
|
||||
const violations: AggregateBoundaryViolation[] = []
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
const boundaryViolations = this.aggregateBoundaryDetector.detectViolations(
|
||||
file.content,
|
||||
file.path.relative,
|
||||
file.layer,
|
||||
)
|
||||
|
||||
for (const violation of boundaryViolations) {
|
||||
violations.push({
|
||||
rule: RULES.AGGREGATE_BOUNDARY,
|
||||
fromAggregate: violation.fromAggregate,
|
||||
toAggregate: violation.toAggregate,
|
||||
entityName: violation.entityName,
|
||||
importPath: violation.importPath,
|
||||
file: file.path.relative,
|
||||
line: violation.line,
|
||||
message: violation.getMessage(),
|
||||
suggestion: violation.getSuggestion(),
|
||||
severity: VIOLATION_SEVERITY_MAP.AGGREGATE_BOUNDARY,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
private calculateMetrics(
|
||||
sourceFiles: SourceFile[],
|
||||
totalFunctions: number,
|
||||
_dependencyGraph: DependencyGraph,
|
||||
): ProjectMetrics {
|
||||
const layerDistribution: Record<string, number> = {}
|
||||
let totalImports = 0
|
||||
|
||||
for (const file of sourceFiles) {
|
||||
if (file.layer) {
|
||||
layerDistribution[file.layer] = (layerDistribution[file.layer] || 0) + 1
|
||||
}
|
||||
totalImports += file.imports.length
|
||||
}
|
||||
|
||||
return {
|
||||
totalFiles: sourceFiles.length,
|
||||
totalFunctions,
|
||||
totalImports,
|
||||
layerDistribution,
|
||||
}
|
||||
}
|
||||
|
||||
private sortBySeverity<T extends { severity: SeverityLevel }>(violations: T[]): T[] {
|
||||
return violations.sort((a, b) => {
|
||||
return SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
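The `sortBySeverity` helper above orders violations by a numeric severity ranking. A minimal self-contained sketch of the same idea, assuming a `SEVERITY_ORDER` shape (the real map lives in `shared/constants` and is not shown in this diff):

```typescript
// Assumed shape of SEVERITY_ORDER: lower number = more severe,
// so critical-level violations sort to the front of the report.
const SEVERITY_ORDER: Record<string, number> = {
    critical: 0,
    high: 1,
    medium: 2,
    low: 3,
}

function sortBySeverity<T extends { severity: string }>(violations: T[]): T[] {
    return violations.sort((a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity])
}

const sorted = sortBySeverity([
    { severity: "low" },
    { severity: "critical" },
    { severity: "high" },
])
console.log(sorted.map((v) => v.severity)) // ["critical", "high", "low"]
```

Because `Array.prototype.sort` is stable, violations that share a severity keep their original relative order.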
@@ -0,0 +1,87 @@
import { SourceFile } from "../../../domain/entities/SourceFile"
import { DependencyGraph } from "../../../domain/entities/DependencyGraph"
import type {
    AggregateBoundaryViolation,
    AnalyzeProjectResponse,
    AnemicModelViolation,
    ArchitectureViolation,
    CircularDependencyViolation,
    DependencyDirectionViolation,
    EntityExposureViolation,
    FrameworkLeakViolation,
    HardcodeViolation,
    NamingConventionViolation,
    ProjectMetrics,
    RepositoryPatternViolation,
    SecretViolation,
} from "../AnalyzeProject"

export interface AggregationRequest {
    sourceFiles: SourceFile[]
    dependencyGraph: DependencyGraph
    totalFunctions: number
    violations: ArchitectureViolation[]
    hardcodeViolations: HardcodeViolation[]
    circularDependencyViolations: CircularDependencyViolation[]
    namingViolations: NamingConventionViolation[]
    frameworkLeakViolations: FrameworkLeakViolation[]
    entityExposureViolations: EntityExposureViolation[]
    dependencyDirectionViolations: DependencyDirectionViolation[]
    repositoryPatternViolations: RepositoryPatternViolation[]
    aggregateBoundaryViolations: AggregateBoundaryViolation[]
    secretViolations: SecretViolation[]
    anemicModelViolations: AnemicModelViolation[]
}

/**
 * Pipeline step responsible for building final response DTO
 */
export class AggregateResults {
    public execute(request: AggregationRequest): AnalyzeProjectResponse {
        const metrics = this.calculateMetrics(
            request.sourceFiles,
            request.totalFunctions,
            request.dependencyGraph,
        )

        return {
            files: request.sourceFiles,
            dependencyGraph: request.dependencyGraph,
            violations: request.violations,
            hardcodeViolations: request.hardcodeViolations,
            circularDependencyViolations: request.circularDependencyViolations,
            namingViolations: request.namingViolations,
            frameworkLeakViolations: request.frameworkLeakViolations,
            entityExposureViolations: request.entityExposureViolations,
            dependencyDirectionViolations: request.dependencyDirectionViolations,
            repositoryPatternViolations: request.repositoryPatternViolations,
            aggregateBoundaryViolations: request.aggregateBoundaryViolations,
            secretViolations: request.secretViolations,
            anemicModelViolations: request.anemicModelViolations,
            metrics,
        }
    }

    private calculateMetrics(
        sourceFiles: SourceFile[],
        totalFunctions: number,
        _dependencyGraph: DependencyGraph,
    ): ProjectMetrics {
        const layerDistribution: Record<string, number> = {}
        let totalImports = 0

        for (const file of sourceFiles) {
            if (file.layer) {
                layerDistribution[file.layer] = (layerDistribution[file.layer] || 0) + 1
            }
            totalImports += file.imports.length
        }

        return {
            totalFiles: sourceFiles.length,
            totalFunctions,
            totalImports,
            layerDistribution,
        }
    }
}
@@ -0,0 +1,66 @@
import { IFileScanner } from "../../../domain/services/IFileScanner"
import { SourceFile } from "../../../domain/entities/SourceFile"
import { ProjectPath } from "../../../domain/value-objects/ProjectPath"
import { REGEX_PATTERNS } from "../../../shared/constants"

export interface FileCollectionRequest {
    rootDir: string
    include?: string[]
    exclude?: string[]
}

export interface FileCollectionResult {
    sourceFiles: SourceFile[]
}

/**
 * Pipeline step responsible for file collection and basic parsing
 */
export class CollectFiles {
    constructor(private readonly fileScanner: IFileScanner) {}

    public async execute(request: FileCollectionRequest): Promise<FileCollectionResult> {
        const filePaths = await this.fileScanner.scan({
            rootDir: request.rootDir,
            include: request.include,
            exclude: request.exclude,
        })

        const sourceFiles: SourceFile[] = []

        for (const filePath of filePaths) {
            const content = await this.fileScanner.readFile(filePath)
            const projectPath = ProjectPath.create(filePath, request.rootDir)

            const imports = this.extractImports(content)
            const exports = this.extractExports(content)

            const sourceFile = new SourceFile(projectPath, content, imports, exports)
            sourceFiles.push(sourceFile)
        }

        return { sourceFiles }
    }

    private extractImports(content: string): string[] {
        const imports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.IMPORT_STATEMENT.exec(content)) !== null) {
            imports.push(match[1])
        }

        return imports
    }

    private extractExports(content: string): string[] {
        const exports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.EXPORT_STATEMENT.exec(content)) !== null) {
            exports.push(match[1])
        }

        return exports
    }
}
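The import extraction in `CollectFiles` loops a shared regex with `exec()`. A minimal self-contained sketch of that loop; the pattern here is only a stand-in for `REGEX_PATTERNS.IMPORT_STATEMENT` (the real pattern lives in `shared/constants` and is not shown in this diff):

```typescript
// Stand-in for REGEX_PATTERNS.IMPORT_STATEMENT (assumed shape). Note the /g
// flag: exec() advances lastIndex between calls, so a shared global regex
// must be reset before scanning each new file.
const IMPORT_STATEMENT = /import\s+(?:[\w*{}\s,]+\s+from\s+)?["']([^"']+)["']/g

function extractImports(content: string): string[] {
    const imports: string[] = []
    IMPORT_STATEMENT.lastIndex = 0 // reset state left over from a previous file
    let match: RegExpExecArray | null
    while ((match = IMPORT_STATEMENT.exec(content)) !== null) {
        imports.push(match[1]) // capture group 1 holds the module specifier
    }
    return imports
}

// extracts the specifiers "./a" and "./b"
console.log(extractImports('import { x } from "./a"\nimport "./b"'))
```

Resetting `lastIndex` (or constructing the regex fresh per call) is what makes a `/g` regex safe to reuse across files.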
@@ -0,0 +1,480 @@
import { IHardcodeDetector } from "../../../domain/services/IHardcodeDetector"
import { INamingConventionDetector } from "../../../domain/services/INamingConventionDetector"
import { IFrameworkLeakDetector } from "../../../domain/services/IFrameworkLeakDetector"
import { IEntityExposureDetector } from "../../../domain/services/IEntityExposureDetector"
import { IDependencyDirectionDetector } from "../../../domain/services/IDependencyDirectionDetector"
import { IRepositoryPatternDetector } from "../../../domain/services/RepositoryPatternDetectorService"
import { IAggregateBoundaryDetector } from "../../../domain/services/IAggregateBoundaryDetector"
import { ISecretDetector } from "../../../domain/services/ISecretDetector"
import { IAnemicModelDetector } from "../../../domain/services/IAnemicModelDetector"
import { IDuplicateValueTracker } from "../../../domain/services/IDuplicateValueTracker"
import { SourceFile } from "../../../domain/entities/SourceFile"
import { DependencyGraph } from "../../../domain/entities/DependencyGraph"
import { HardcodedValue } from "../../../domain/value-objects/HardcodedValue"
import {
    LAYERS,
    REPOSITORY_VIOLATION_TYPES,
    RULES,
    SEVERITY_ORDER,
    type SeverityLevel,
    VIOLATION_SEVERITY_MAP,
} from "../../../shared/constants"
import type {
    AggregateBoundaryViolation,
    AnemicModelViolation,
    ArchitectureViolation,
    CircularDependencyViolation,
    DependencyDirectionViolation,
    EntityExposureViolation,
    FrameworkLeakViolation,
    HardcodeViolation,
    NamingConventionViolation,
    RepositoryPatternViolation,
    SecretViolation,
} from "../AnalyzeProject"

export interface DetectionRequest {
    sourceFiles: SourceFile[]
    dependencyGraph: DependencyGraph
}

export interface DetectionResult {
    violations: ArchitectureViolation[]
    hardcodeViolations: HardcodeViolation[]
    circularDependencyViolations: CircularDependencyViolation[]
    namingViolations: NamingConventionViolation[]
    frameworkLeakViolations: FrameworkLeakViolation[]
    entityExposureViolations: EntityExposureViolation[]
    dependencyDirectionViolations: DependencyDirectionViolation[]
    repositoryPatternViolations: RepositoryPatternViolation[]
    aggregateBoundaryViolations: AggregateBoundaryViolation[]
    secretViolations: SecretViolation[]
    anemicModelViolations: AnemicModelViolation[]
}

/**
 * Pipeline step responsible for running all detectors
 */
export class ExecuteDetection {
    constructor(
        private readonly hardcodeDetector: IHardcodeDetector,
        private readonly namingConventionDetector: INamingConventionDetector,
        private readonly frameworkLeakDetector: IFrameworkLeakDetector,
        private readonly entityExposureDetector: IEntityExposureDetector,
        private readonly dependencyDirectionDetector: IDependencyDirectionDetector,
        private readonly repositoryPatternDetector: IRepositoryPatternDetector,
        private readonly aggregateBoundaryDetector: IAggregateBoundaryDetector,
        private readonly secretDetector: ISecretDetector,
        private readonly anemicModelDetector: IAnemicModelDetector,
        private readonly duplicateValueTracker: IDuplicateValueTracker,
    ) {}

    public async execute(request: DetectionRequest): Promise<DetectionResult> {
        const secretViolations = await this.detectSecrets(request.sourceFiles)

        return {
            violations: this.sortBySeverity(this.detectViolations(request.sourceFiles)),
            hardcodeViolations: this.sortBySeverity(this.detectHardcode(request.sourceFiles)),
            circularDependencyViolations: this.sortBySeverity(
                this.detectCircularDependencies(request.dependencyGraph),
            ),
            namingViolations: this.sortBySeverity(
                this.detectNamingConventions(request.sourceFiles),
            ),
            frameworkLeakViolations: this.sortBySeverity(
                this.detectFrameworkLeaks(request.sourceFiles),
            ),
            entityExposureViolations: this.sortBySeverity(
                this.detectEntityExposures(request.sourceFiles),
            ),
            dependencyDirectionViolations: this.sortBySeverity(
                this.detectDependencyDirections(request.sourceFiles),
            ),
            repositoryPatternViolations: this.sortBySeverity(
                this.detectRepositoryPatternViolations(request.sourceFiles),
            ),
            aggregateBoundaryViolations: this.sortBySeverity(
                this.detectAggregateBoundaryViolations(request.sourceFiles),
            ),
            secretViolations: this.sortBySeverity(secretViolations),
            anemicModelViolations: this.sortBySeverity(
                this.detectAnemicModels(request.sourceFiles),
            ),
        }
    }

    private detectViolations(sourceFiles: SourceFile[]): ArchitectureViolation[] {
        const violations: ArchitectureViolation[] = []

        const layerRules: Record<string, string[]> = {
            [LAYERS.DOMAIN]: [LAYERS.SHARED],
            [LAYERS.APPLICATION]: [LAYERS.DOMAIN, LAYERS.SHARED],
            [LAYERS.INFRASTRUCTURE]: [LAYERS.DOMAIN, LAYERS.APPLICATION, LAYERS.SHARED],
            [LAYERS.SHARED]: [],
        }

        for (const file of sourceFiles) {
            if (!file.layer) {
                continue
            }

            const allowedLayers = layerRules[file.layer]

            for (const imp of file.imports) {
                const importedLayer = this.detectLayerFromImport(imp)

                if (
                    importedLayer &&
                    importedLayer !== file.layer &&
                    !allowedLayers.includes(importedLayer)
                ) {
                    violations.push({
                        rule: RULES.CLEAN_ARCHITECTURE,
                        message: `Layer "${file.layer}" cannot import from "${importedLayer}"`,
                        file: file.path.relative,
                        severity: VIOLATION_SEVERITY_MAP.ARCHITECTURE,
                    })
                }
            }
        }

        return violations
    }

    private detectLayerFromImport(importPath: string): string | undefined {
        const layers = Object.values(LAYERS)

        for (const layer of layers) {
            if (importPath.toLowerCase().includes(layer)) {
                return layer
            }
        }

        return undefined
    }

    private detectHardcode(sourceFiles: SourceFile[]): HardcodeViolation[] {
        const allHardcodedValues: {
            value: HardcodedValue
            file: SourceFile
        }[] = []

        for (const file of sourceFiles) {
            const hardcodedValues = this.hardcodeDetector.detectAll(
                file.content,
                file.path.relative,
            )

            for (const hardcoded of hardcodedValues) {
                allHardcodedValues.push({ value: hardcoded, file })
            }
        }

        this.duplicateValueTracker.clear()
        for (const { value, file } of allHardcodedValues) {
            this.duplicateValueTracker.track(value, file.path.relative)
        }

        const violations: HardcodeViolation[] = []
        for (const { value, file } of allHardcodedValues) {
            const duplicateLocations = this.duplicateValueTracker.getDuplicateLocations(
                value.value,
                value.type,
            )
            const enrichedValue = duplicateLocations
                ? HardcodedValue.create(
                      value.value,
                      value.type,
                      value.line,
                      value.column,
                      value.context,
                      value.valueType,
                      duplicateLocations.filter((loc) => loc.file !== file.path.relative),
                  )
                : value

            if (enrichedValue.shouldSkip(file.layer)) {
                continue
            }

            violations.push({
                rule: RULES.HARDCODED_VALUE,
                type: enrichedValue.type,
                value: enrichedValue.value,
                file: file.path.relative,
                line: enrichedValue.line,
                column: enrichedValue.column,
                context: enrichedValue.context,
                suggestion: {
                    constantName: enrichedValue.suggestConstantName(),
                    location: enrichedValue.suggestLocation(file.layer),
                },
                severity: VIOLATION_SEVERITY_MAP.HARDCODE,
            })
        }

        return violations
    }

    private detectCircularDependencies(
        dependencyGraph: DependencyGraph,
    ): CircularDependencyViolation[] {
        const violations: CircularDependencyViolation[] = []
        const cycles = dependencyGraph.findCycles()

        for (const cycle of cycles) {
            const cycleChain = [...cycle, cycle[0]].join(" → ")
            violations.push({
                rule: RULES.CIRCULAR_DEPENDENCY,
                message: `Circular dependency detected: ${cycleChain}`,
                cycle,
                severity: VIOLATION_SEVERITY_MAP.CIRCULAR_DEPENDENCY,
            })
        }

        return violations
    }

    private detectNamingConventions(sourceFiles: SourceFile[]): NamingConventionViolation[] {
        const violations: NamingConventionViolation[] = []

        for (const file of sourceFiles) {
            const namingViolations = this.namingConventionDetector.detectViolations(
                file.path.filename,
                file.layer,
                file.path.relative,
            )

            for (const violation of namingViolations) {
                violations.push({
                    rule: RULES.NAMING_CONVENTION,
                    type: violation.violationType,
                    fileName: violation.fileName,
                    layer: violation.layer,
                    file: violation.filePath,
                    expected: violation.expected,
                    actual: violation.actual,
                    message: violation.getMessage(),
                    suggestion: violation.suggestion,
                    severity: VIOLATION_SEVERITY_MAP.NAMING_CONVENTION,
                })
            }
        }

        return violations
    }

    private detectFrameworkLeaks(sourceFiles: SourceFile[]): FrameworkLeakViolation[] {
        const violations: FrameworkLeakViolation[] = []

        for (const file of sourceFiles) {
            const leaks = this.frameworkLeakDetector.detectLeaks(
                file.imports,
                file.path.relative,
                file.layer,
            )

            for (const leak of leaks) {
                violations.push({
                    rule: RULES.FRAMEWORK_LEAK,
                    packageName: leak.packageName,
                    category: leak.category,
                    categoryDescription: leak.getCategoryDescription(),
                    file: file.path.relative,
                    layer: leak.layer,
                    line: leak.line,
                    message: leak.getMessage(),
                    suggestion: leak.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.FRAMEWORK_LEAK,
                })
            }
        }

        return violations
    }

    private detectEntityExposures(sourceFiles: SourceFile[]): EntityExposureViolation[] {
        const violations: EntityExposureViolation[] = []

        for (const file of sourceFiles) {
            const exposures = this.entityExposureDetector.detectExposures(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const exposure of exposures) {
                violations.push({
                    rule: RULES.ENTITY_EXPOSURE,
                    entityName: exposure.entityName,
                    returnType: exposure.returnType,
                    file: file.path.relative,
                    layer: exposure.layer,
                    line: exposure.line,
                    methodName: exposure.methodName,
                    message: exposure.getMessage(),
                    suggestion: exposure.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.ENTITY_EXPOSURE,
                })
            }
        }

        return violations
    }

    private detectDependencyDirections(sourceFiles: SourceFile[]): DependencyDirectionViolation[] {
        const violations: DependencyDirectionViolation[] = []

        for (const file of sourceFiles) {
            const directionViolations = this.dependencyDirectionDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of directionViolations) {
                violations.push({
                    rule: RULES.DEPENDENCY_DIRECTION,
                    fromLayer: violation.fromLayer,
                    toLayer: violation.toLayer,
                    importPath: violation.importPath,
                    file: file.path.relative,
                    line: violation.line,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.DEPENDENCY_DIRECTION,
                })
            }
        }

        return violations
    }

    private detectRepositoryPatternViolations(
        sourceFiles: SourceFile[],
    ): RepositoryPatternViolation[] {
        const violations: RepositoryPatternViolation[] = []

        for (const file of sourceFiles) {
            const patternViolations = this.repositoryPatternDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of patternViolations) {
                violations.push({
                    rule: RULES.REPOSITORY_PATTERN,
                    violationType: violation.violationType as
                        | typeof REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE
                        | typeof REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE
                        | typeof REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE
                        | typeof REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                    file: file.path.relative,
                    layer: violation.layer,
                    line: violation.line,
                    details: violation.details,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.REPOSITORY_PATTERN,
                })
            }
        }

        return violations
    }

    private detectAggregateBoundaryViolations(
        sourceFiles: SourceFile[],
    ): AggregateBoundaryViolation[] {
        const violations: AggregateBoundaryViolation[] = []

        for (const file of sourceFiles) {
            const boundaryViolations = this.aggregateBoundaryDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of boundaryViolations) {
                violations.push({
                    rule: RULES.AGGREGATE_BOUNDARY,
                    fromAggregate: violation.fromAggregate,
                    toAggregate: violation.toAggregate,
                    entityName: violation.entityName,
                    importPath: violation.importPath,
                    file: file.path.relative,
                    line: violation.line,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.AGGREGATE_BOUNDARY,
                })
            }
        }

        return violations
    }

    private async detectSecrets(sourceFiles: SourceFile[]): Promise<SecretViolation[]> {
        const violations: SecretViolation[] = []

        for (const file of sourceFiles) {
            const secretViolations = await this.secretDetector.detectAll(
                file.content,
                file.path.relative,
            )

            for (const secret of secretViolations) {
                violations.push({
                    rule: RULES.SECRET_EXPOSURE,
                    secretType: secret.secretType,
                    file: file.path.relative,
                    line: secret.line,
                    column: secret.column,
                    message: secret.getMessage(),
                    suggestion: secret.getSuggestion(),
                    severity: "critical",
                })
            }
        }

        return violations
    }

    private detectAnemicModels(sourceFiles: SourceFile[]): AnemicModelViolation[] {
        const violations: AnemicModelViolation[] = []

        for (const file of sourceFiles) {
            const anemicModels = this.anemicModelDetector.detectAnemicModels(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const anemicModel of anemicModels) {
                violations.push({
                    rule: RULES.ANEMIC_MODEL,
                    className: anemicModel.className,
                    file: file.path.relative,
                    layer: anemicModel.layer,
                    line: anemicModel.line,
                    methodCount: anemicModel.methodCount,
                    propertyCount: anemicModel.propertyCount,
                    hasOnlyGettersSetters: anemicModel.hasOnlyGettersSetters,
                    hasPublicSetters: anemicModel.hasPublicSetters,
                    message: anemicModel.getMessage(),
                    suggestion: anemicModel.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.ANEMIC_MODEL,
                })
            }
        }

        return violations
    }

    private sortBySeverity<T extends { severity: SeverityLevel }>(violations: T[]): T[] {
        return violations.sort((a, b) => {
            return SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]
        })
    }
}
@@ -0,0 +1,51 @@
import { ICodeParser } from "../../../domain/services/ICodeParser"
import { SourceFile } from "../../../domain/entities/SourceFile"
import { DependencyGraph } from "../../../domain/entities/DependencyGraph"

export interface ParsingRequest {
    sourceFiles: SourceFile[]
    rootDir: string
}

export interface ParsingResult {
    dependencyGraph: DependencyGraph
    totalFunctions: number
}

/**
 * Pipeline step responsible for AST parsing and dependency graph construction
 */
export class ParseSourceFiles {
    constructor(private readonly codeParser: ICodeParser) {}

    public execute(request: ParsingRequest): ParsingResult {
        const dependencyGraph = new DependencyGraph()
        let totalFunctions = 0

        for (const sourceFile of request.sourceFiles) {
            dependencyGraph.addFile(sourceFile)

            if (sourceFile.path.isTypeScript()) {
                const tree = this.codeParser.parseTypeScript(sourceFile.content)
                const functions = this.codeParser.extractFunctions(tree)
                totalFunctions += functions.length
            }

            for (const imp of sourceFile.imports) {
                dependencyGraph.addDependency(
                    sourceFile.path.relative,
                    this.resolveImportPath(imp, sourceFile.path.relative, request.rootDir),
                )
            }
        }

        return { dependencyGraph, totalFunctions }
    }

    private resolveImportPath(importPath: string, _currentFile: string, _rootDir: string): string {
        if (importPath.startsWith(".")) {
            return importPath
        }
        return importPath
    }
}
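The `execute` loop above is just "register every file, then record one graph edge per import". A minimal self-contained sketch of that walk, using simplified stand-in types (`MiniSourceFile`, `MiniDependencyGraph` are hypothetical, not the package's real `SourceFile`/`DependencyGraph` interfaces):

```typescript
// Hypothetical, simplified stand-ins for the package's file and graph types.
interface MiniSourceFile {
    path: string
    imports: string[]
}

class MiniDependencyGraph {
    private readonly edges = new Map<string, string[]>()

    addFile(file: MiniSourceFile): void {
        if (!this.edges.has(file.path)) {
            this.edges.set(file.path, [])
        }
    }

    addDependency(from: string, to: string): void {
        const deps = this.edges.get(from) ?? []
        deps.push(to)
        this.edges.set(from, deps)
    }

    dependencyCount(): number {
        let total = 0
        for (const deps of this.edges.values()) {
            total += deps.length
        }
        return total
    }
}

// Same shape as ParseSourceFiles.execute: add every file to the graph,
// then record one dependency edge per import.
function buildGraph(files: MiniSourceFile[]): MiniDependencyGraph {
    const graph = new MiniDependencyGraph()
    for (const file of files) {
        graph.addFile(file)
        for (const imp of file.imports) {
            graph.addDependency(file.path, imp)
        }
    }
    return graph
}

const graph = buildGraph([
    { path: "a.ts", imports: ["./b", "./c"] },
    { path: "b.ts", imports: ["./c"] },
])
console.log(graph.dependencyCount()) // 3
```

Note that the real step also counts functions via the parser; this sketch only covers the graph-building half.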
@@ -150,4 +150,30 @@ export const CLI_HELP_TEXT = {
    FIX_REPOSITORY:
        " Repository pattern → Create IUserRepository in domain, implement in infra\n\n",
    FOOTER: "Each violation includes a 💡 Suggestion with specific fix instructions.\n",
    AI_AGENT_HEADER: "AI AGENT INSTRUCTIONS:\n",
    AI_AGENT_INTRO:
        " When an AI coding assistant (Claude, Copilot, Cursor, etc.) uses Guardian:\n\n",
    AI_AGENT_STEP1: " STEP 1: Run initial scan\n",
    AI_AGENT_STEP1_CMD: " $ guardian check ./src --only-critical --limit 5\n\n",
    AI_AGENT_STEP2: " STEP 2: For each violation in output:\n",
    AI_AGENT_STEP2_DETAIL:
        " - Read the file at reported location (file:line:column)\n" +
        " - Apply the 💡 Suggestion provided\n" +
        " - The suggestion contains exact fix instructions\n\n",
    AI_AGENT_STEP3: " STEP 3: After fixing, verify:\n",
    AI_AGENT_STEP3_CMD: " $ guardian check ./src --only-critical\n\n",
    AI_AGENT_STEP4: " STEP 4: Expand scope progressively:\n",
    AI_AGENT_STEP4_CMDS:
        " $ guardian check ./src --min-severity high # Fix HIGH issues\n" +
        " $ guardian check ./src --min-severity medium # Fix MEDIUM issues\n" +
        " $ guardian check ./src # Full scan\n\n",
    AI_AGENT_OUTPUT: " OUTPUT FORMAT (parse this):\n",
    AI_AGENT_OUTPUT_DETAIL:
        " <index>. <file>:<line>:<column>\n" +
        " Severity: <emoji> <LEVEL>\n" +
        " Type: <violation-type>\n" +
        " Value: <problematic-value>\n" +
        " Context: <code-snippet>\n" +
        " 💡 Suggestion: <exact-fix-instruction>\n\n",
    AI_AGENT_PRIORITY: " PRIORITY ORDER: CRITICAL → HIGH → MEDIUM → LOW\n\n",
} as const
235 packages/guardian/src/cli/formatters/OutputFormatter.ts Normal file
@@ -0,0 +1,235 @@
import { SEVERITY_LEVELS, type SeverityLevel } from "../../shared/constants"
import type {
    AggregateBoundaryViolation,
    AnemicModelViolation,
    ArchitectureViolation,
    CircularDependencyViolation,
    DependencyDirectionViolation,
    EntityExposureViolation,
    FrameworkLeakViolation,
    HardcodeViolation,
    NamingConventionViolation,
    RepositoryPatternViolation,
    SecretViolation,
} from "../../application/use-cases/AnalyzeProject"
import { SEVERITY_DISPLAY_LABELS, SEVERITY_SECTION_HEADERS } from "../constants"
import { ViolationGrouper } from "../groupers/ViolationGrouper"

const SEVERITY_LABELS: Record<SeverityLevel, string> = {
    [SEVERITY_LEVELS.CRITICAL]: SEVERITY_DISPLAY_LABELS.CRITICAL,
    [SEVERITY_LEVELS.HIGH]: SEVERITY_DISPLAY_LABELS.HIGH,
    [SEVERITY_LEVELS.MEDIUM]: SEVERITY_DISPLAY_LABELS.MEDIUM,
    [SEVERITY_LEVELS.LOW]: SEVERITY_DISPLAY_LABELS.LOW,
}

const SEVERITY_HEADER: Record<SeverityLevel, string> = {
    [SEVERITY_LEVELS.CRITICAL]: SEVERITY_SECTION_HEADERS.CRITICAL,
    [SEVERITY_LEVELS.HIGH]: SEVERITY_SECTION_HEADERS.HIGH,
    [SEVERITY_LEVELS.MEDIUM]: SEVERITY_SECTION_HEADERS.MEDIUM,
    [SEVERITY_LEVELS.LOW]: SEVERITY_SECTION_HEADERS.LOW,
}

export class OutputFormatter {
    private readonly grouper = new ViolationGrouper()

    displayGroupedViolations<T extends { severity: SeverityLevel }>(
        violations: T[],
        displayFn: (v: T, index: number) => void,
        limit?: number,
    ): void {
        const grouped = this.grouper.groupBySeverity(violations)
        const severities: SeverityLevel[] = [
            SEVERITY_LEVELS.CRITICAL,
            SEVERITY_LEVELS.HIGH,
            SEVERITY_LEVELS.MEDIUM,
            SEVERITY_LEVELS.LOW,
        ]

        let totalDisplayed = 0
        const totalAvailable = violations.length

        for (const severity of severities) {
            const items = grouped.get(severity)
            if (items && items.length > 0) {
                console.warn(SEVERITY_HEADER[severity])
                console.warn(`Found ${String(items.length)} issue(s)\n`)

                const itemsToDisplay =
                    limit !== undefined ? items.slice(0, limit - totalDisplayed) : items
                itemsToDisplay.forEach((item, index) => {
                    displayFn(item, totalDisplayed + index)
                })
                totalDisplayed += itemsToDisplay.length

                if (limit !== undefined && totalDisplayed >= limit) {
                    break
                }
            }
        }

        if (limit !== undefined && totalAvailable > limit) {
            console.warn(
                `\n⚠️ Showing first ${String(limit)} of ${String(totalAvailable)} issues (use --limit to adjust)\n`,
            )
        }
    }

    formatArchitectureViolation(v: ArchitectureViolation, index: number): void {
        console.log(`${String(index + 1)}. ${v.file}`)
        console.log(` Severity: ${SEVERITY_LABELS[v.severity]}`)
        console.log(` Rule: ${v.rule}`)
        console.log(` ${v.message}`)
        console.log("")
    }

    formatCircularDependency(cd: CircularDependencyViolation, index: number): void {
        console.log(`${String(index + 1)}. ${cd.message}`)
        console.log(` Severity: ${SEVERITY_LABELS[cd.severity]}`)
        console.log(" Cycle path:")
        cd.cycle.forEach((file, i) => {
            console.log(` ${String(i + 1)}. ${file}`)
        })
        console.log(` ${String(cd.cycle.length + 1)}. ${cd.cycle[0]} (back to start)`)
        console.log("")
    }

    formatNamingViolation(nc: NamingConventionViolation, index: number): void {
        console.log(`${String(index + 1)}. ${nc.file}`)
        console.log(` Severity: ${SEVERITY_LABELS[nc.severity]}`)
        console.log(` File: ${nc.fileName}`)
        console.log(` Layer: ${nc.layer}`)
        console.log(` Type: ${nc.type}`)
        console.log(` Message: ${nc.message}`)
        if (nc.suggestion) {
            console.log(` 💡 Suggestion: ${nc.suggestion}`)
        }
        console.log("")
    }

    formatFrameworkLeak(fl: FrameworkLeakViolation, index: number): void {
        console.log(`${String(index + 1)}. ${fl.file}`)
        console.log(` Severity: ${SEVERITY_LABELS[fl.severity]}`)
        console.log(` Package: ${fl.packageName}`)
        console.log(` Category: ${fl.categoryDescription}`)
        console.log(` Layer: ${fl.layer}`)
        console.log(` Rule: ${fl.rule}`)
        console.log(` ${fl.message}`)
        console.log(` 💡 Suggestion: ${fl.suggestion}`)
        console.log("")
    }

    formatEntityExposure(ee: EntityExposureViolation, index: number): void {
        const location = ee.line ? `${ee.file}:${String(ee.line)}` : ee.file
        console.log(`${String(index + 1)}. ${location}`)
        console.log(` Severity: ${SEVERITY_LABELS[ee.severity]}`)
        console.log(` Entity: ${ee.entityName}`)
        console.log(` Return Type: ${ee.returnType}`)
        if (ee.methodName) {
            console.log(` Method: ${ee.methodName}`)
        }
        console.log(` Layer: ${ee.layer}`)
        console.log(` Rule: ${ee.rule}`)
        console.log(` ${ee.message}`)
        console.log(" 💡 Suggestion:")
        ee.suggestion.split("\n").forEach((line) => {
            if (line.trim()) {
                console.log(` ${line}`)
            }
        })
        console.log("")
    }

    formatDependencyDirection(dd: DependencyDirectionViolation, index: number): void {
        console.log(`${String(index + 1)}. ${dd.file}`)
        console.log(` Severity: ${SEVERITY_LABELS[dd.severity]}`)
        console.log(` From Layer: ${dd.fromLayer}`)
        console.log(` To Layer: ${dd.toLayer}`)
        console.log(` Import: ${dd.importPath}`)
        console.log(` ${dd.message}`)
        console.log(` 💡 Suggestion: ${dd.suggestion}`)
        console.log("")
    }

    formatRepositoryPattern(rp: RepositoryPatternViolation, index: number): void {
        console.log(`${String(index + 1)}. ${rp.file}`)
        console.log(` Severity: ${SEVERITY_LABELS[rp.severity]}`)
        console.log(` Layer: ${rp.layer}`)
        console.log(` Type: ${rp.violationType}`)
        console.log(` Details: ${rp.details}`)
        console.log(` ${rp.message}`)
        console.log(` 💡 Suggestion: ${rp.suggestion}`)
        console.log("")
    }

    formatAggregateBoundary(ab: AggregateBoundaryViolation, index: number): void {
        const location = ab.line ? `${ab.file}:${String(ab.line)}` : ab.file
        console.log(`${String(index + 1)}. ${location}`)
        console.log(` Severity: ${SEVERITY_LABELS[ab.severity]}`)
        console.log(` From Aggregate: ${ab.fromAggregate}`)
        console.log(` To Aggregate: ${ab.toAggregate}`)
        console.log(` Entity: ${ab.entityName}`)
        console.log(` Import: ${ab.importPath}`)
        console.log(` ${ab.message}`)
        console.log(" 💡 Suggestion:")
        ab.suggestion.split("\n").forEach((line) => {
            if (line.trim()) {
                console.log(` ${line}`)
            }
        })
        console.log("")
    }

    formatSecretViolation(sv: SecretViolation, index: number): void {
        const location = `${sv.file}:${String(sv.line)}:${String(sv.column)}`
        console.log(`${String(index + 1)}. ${location}`)
        console.log(` Severity: ${SEVERITY_LABELS[sv.severity]} ⚠️`)
        console.log(` Secret Type: ${sv.secretType}`)
        console.log(` ${sv.message}`)
        console.log(" 🔐 CRITICAL: Rotate this secret immediately!")
        console.log(" 💡 Suggestion:")
        sv.suggestion.split("\n").forEach((line) => {
            if (line.trim()) {
                console.log(` ${line}`)
            }
        })
        console.log("")
    }

    formatHardcodeViolation(hc: HardcodeViolation, index: number): void {
        console.log(`${String(index + 1)}. ${hc.file}:${String(hc.line)}:${String(hc.column)}`)
        console.log(` Severity: ${SEVERITY_LABELS[hc.severity]}`)
        console.log(` Type: ${hc.type}`)
        console.log(` Value: ${JSON.stringify(hc.value)}`)
        console.log(` Context: ${hc.context.trim()}`)
        console.log(` 💡 Suggested: ${hc.suggestion.constantName}`)
        console.log(` 📁 Location: ${hc.suggestion.location}`)
        console.log("")
    }

    formatAnemicModelViolation(am: AnemicModelViolation, index: number): void {
        const location = am.line ? `${am.file}:${String(am.line)}` : am.file
        console.log(`${String(index + 1)}. ${location}`)
        console.log(` Severity: ${SEVERITY_LABELS[am.severity]}`)
        console.log(` Class: ${am.className}`)
        console.log(` Layer: ${am.layer}`)
        console.log(
            ` Methods: ${String(am.methodCount)} | Properties: ${String(am.propertyCount)}`,
        )

        if (am.hasPublicSetters) {
            console.log(" ⚠️ Has public setters (DDD anti-pattern)")
        }
        if (am.hasOnlyGettersSetters) {
            console.log(" ⚠️ Only getters/setters (no business logic)")
        }

        console.log(` ${am.message}`)
        console.log(" 💡 Suggestion:")
        am.suggestion.split("\n").forEach((line) => {
            if (line.trim()) {
                console.log(` ${line}`)
            }
        })
        console.log("")
    }
}
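The core of `displayGroupedViolations` is the severity-ordered walk with a running `limit` budget: each bucket is sliced to whatever budget remains, and the loop stops once the budget is spent. A minimal self-contained sketch of just that walk (the `Sev` type and sample data are illustrative, not the package's real constants):

```typescript
// Severities walked from most to least severe, as in the formatter above.
type Sev = "critical" | "high" | "medium" | "low"
const SEVERITIES: Sev[] = ["critical", "high", "medium", "low"]

// Group by severity, then display at most `limit` items in severity order.
// Returns how many items were actually shown.
function displayLimited<T extends { severity: Sev }>(
    violations: T[],
    limit: number | undefined,
    show: (v: T) => void,
): number {
    const grouped = new Map<Sev, T[]>()
    for (const v of violations) {
        const bucket = grouped.get(v.severity) ?? []
        bucket.push(v)
        grouped.set(v.severity, bucket)
    }

    let shown = 0
    for (const severity of SEVERITIES) {
        const items = grouped.get(severity) ?? []
        // Slice each bucket to the remaining budget, then stop when spent.
        const slice = limit !== undefined ? items.slice(0, limit - shown) : items
        slice.forEach(show)
        shown += slice.length
        if (limit !== undefined && shown >= limit) break
    }
    return shown
}

const out: string[] = []
displayLimited(
    [
        { id: "low-1", severity: "low" as Sev },
        { id: "crit-1", severity: "critical" as Sev },
        { id: "high-1", severity: "high" as Sev },
    ],
    2,
    (v) => out.push(v.id),
)
console.log(out) // critical first, then high; the low item is cut by the limit
```

With `limit = 2`, the critical and high items are shown and the low item is dropped, regardless of input order.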
59 packages/guardian/src/cli/formatters/StatisticsFormatter.ts Normal file
@@ -0,0 +1,59 @@
import { CLI_LABELS, CLI_MESSAGES } from "../constants"

interface ProjectMetrics {
    totalFiles: number
    totalFunctions: number
    totalImports: number
    layerDistribution: Record<string, number>
}

export class StatisticsFormatter {
    displayMetrics(metrics: ProjectMetrics): void {
        console.log(CLI_MESSAGES.METRICS_HEADER)
        console.log(` ${CLI_LABELS.FILES_ANALYZED} ${String(metrics.totalFiles)}`)
        console.log(` ${CLI_LABELS.TOTAL_FUNCTIONS} ${String(metrics.totalFunctions)}`)
        console.log(` ${CLI_LABELS.TOTAL_IMPORTS} ${String(metrics.totalImports)}`)

        if (Object.keys(metrics.layerDistribution).length > 0) {
            console.log(CLI_MESSAGES.LAYER_DISTRIBUTION_HEADER)
            for (const [layer, count] of Object.entries(metrics.layerDistribution)) {
                console.log(` ${layer}: ${String(count)} ${CLI_LABELS.FILES}`)
            }
        }
    }

    displaySummary(totalIssues: number, verbose: boolean): void {
        if (totalIssues === 0) {
            console.log(CLI_MESSAGES.NO_ISSUES)
            process.exit(0)
        } else {
            console.log(
                `${CLI_MESSAGES.ISSUES_TOTAL} ${String(totalIssues)} ${CLI_LABELS.ISSUES_TOTAL}`,
            )
            console.log(CLI_MESSAGES.TIP)

            if (verbose) {
                console.log(CLI_MESSAGES.HELP_FOOTER)
            }

            process.exit(1)
        }
    }

    displaySeverityFilterMessage(onlyCritical: boolean, minSeverity?: string): void {
        if (onlyCritical) {
            console.log("\n🔴 Filtering: Showing only CRITICAL severity issues\n")
        } else if (minSeverity) {
            console.log(
                `\n⚠️ Filtering: Showing ${minSeverity.toUpperCase()} severity and above\n`,
            )
        }
    }

    displayError(message: string): void {
        console.error(`\n❌ ${CLI_MESSAGES.ERROR_PREFIX}`)
        console.error(message)
        console.error("")
        process.exit(1)
    }
}
29 packages/guardian/src/cli/groupers/ViolationGrouper.ts Normal file
@@ -0,0 +1,29 @@
import { SEVERITY_ORDER, type SeverityLevel } from "../../shared/constants"

export class ViolationGrouper {
    groupBySeverity<T extends { severity: SeverityLevel }>(
        violations: T[],
    ): Map<SeverityLevel, T[]> {
        const grouped = new Map<SeverityLevel, T[]>()

        for (const violation of violations) {
            const existing = grouped.get(violation.severity) ?? []
            existing.push(violation)
            grouped.set(violation.severity, existing)
        }

        return grouped
    }

    filterBySeverity<T extends { severity: SeverityLevel }>(
        violations: T[],
        minSeverity?: SeverityLevel,
    ): T[] {
        if (!minSeverity) {
            return violations
        }

        const minSeverityOrder = SEVERITY_ORDER[minSeverity]
        return violations.filter((v) => SEVERITY_ORDER[v.severity] <= minSeverityOrder)
    }
}
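`filterBySeverity` works because `SEVERITY_ORDER` assigns smaller numbers to more severe levels, so "min severity and above" becomes a `<=` comparison. A self-contained sketch with an assumed ordering (the `ORDER` values here are illustrative, not the package's actual `SEVERITY_ORDER` constant):

```typescript
// Assumed ordering: lower number = more severe, matching the `<=`
// comparison used by ViolationGrouper.filterBySeverity.
type Severity = "critical" | "high" | "medium" | "low"
const ORDER: Record<Severity, number> = { critical: 0, high: 1, medium: 2, low: 3 }

function filterBySeverity<T extends { severity: Severity }>(
    violations: T[],
    minSeverity?: Severity,
): T[] {
    if (!minSeverity) return violations
    const max = ORDER[minSeverity]
    return violations.filter((v) => ORDER[v.severity] <= max)
}

const sample = [
    { file: "a.ts", severity: "low" as Severity },
    { file: "b.ts", severity: "critical" as Severity },
    { file: "c.ts", severity: "high" as Severity },
]

// Keep HIGH and above: CRITICAL and HIGH survive, LOW is dropped.
console.log(filterBySeverity(sample, "high").map((v) => v.file)) // [ 'b.ts', 'c.ts' ]
```

Omitting `minSeverity` returns the input unchanged, which is what lets the CLI skip filtering entirely when no `--min-severity` flag is given.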
@@ -11,92 +11,11 @@ import {
    CLI_MESSAGES,
    CLI_OPTIONS,
    DEFAULT_EXCLUDES,
    SEVERITY_DISPLAY_LABELS,
    SEVERITY_SECTION_HEADERS,
} from "./constants"
import { SEVERITY_LEVELS, SEVERITY_ORDER, type SeverityLevel } from "../shared/constants"

const SEVERITY_LABELS: Record<SeverityLevel, string> = {
    [SEVERITY_LEVELS.CRITICAL]: SEVERITY_DISPLAY_LABELS.CRITICAL,
    [SEVERITY_LEVELS.HIGH]: SEVERITY_DISPLAY_LABELS.HIGH,
    [SEVERITY_LEVELS.MEDIUM]: SEVERITY_DISPLAY_LABELS.MEDIUM,
    [SEVERITY_LEVELS.LOW]: SEVERITY_DISPLAY_LABELS.LOW,
}

const SEVERITY_HEADER: Record<SeverityLevel, string> = {
    [SEVERITY_LEVELS.CRITICAL]: SEVERITY_SECTION_HEADERS.CRITICAL,
    [SEVERITY_LEVELS.HIGH]: SEVERITY_SECTION_HEADERS.HIGH,
    [SEVERITY_LEVELS.MEDIUM]: SEVERITY_SECTION_HEADERS.MEDIUM,
    [SEVERITY_LEVELS.LOW]: SEVERITY_SECTION_HEADERS.LOW,
}

function groupBySeverity<T extends { severity: SeverityLevel }>(
    violations: T[],
): Map<SeverityLevel, T[]> {
    const grouped = new Map<SeverityLevel, T[]>()

    for (const violation of violations) {
        const existing = grouped.get(violation.severity) ?? []
        existing.push(violation)
        grouped.set(violation.severity, existing)
    }

    return grouped
}

function filterBySeverity<T extends { severity: SeverityLevel }>(
    violations: T[],
    minSeverity?: SeverityLevel,
): T[] {
    if (!minSeverity) {
        return violations
    }

    const minSeverityOrder = SEVERITY_ORDER[minSeverity]
    return violations.filter((v) => SEVERITY_ORDER[v.severity] <= minSeverityOrder)
}

function displayGroupedViolations<T extends { severity: SeverityLevel }>(
    violations: T[],
    displayFn: (v: T, index: number) => void,
    limit?: number,
): void {
    const grouped = groupBySeverity(violations)
    const severities: SeverityLevel[] = [
        SEVERITY_LEVELS.CRITICAL,
        SEVERITY_LEVELS.HIGH,
        SEVERITY_LEVELS.MEDIUM,
        SEVERITY_LEVELS.LOW,
    ]

    let totalDisplayed = 0
    const totalAvailable = violations.length

    for (const severity of severities) {
        const items = grouped.get(severity)
        if (items && items.length > 0) {
            console.warn(SEVERITY_HEADER[severity])
            console.warn(`Found ${String(items.length)} issue(s)\n`)

            const itemsToDisplay =
                limit !== undefined ? items.slice(0, limit - totalDisplayed) : items
            itemsToDisplay.forEach((item, index) => {
                displayFn(item, totalDisplayed + index)
            })
            totalDisplayed += itemsToDisplay.length

            if (limit !== undefined && totalDisplayed >= limit) {
                break
            }
        }
    }

    if (limit !== undefined && totalAvailable > limit) {
        console.warn(
            `\n⚠️ Showing first ${String(limit)} of ${String(totalAvailable)} issues (use --limit to adjust)\n`,
        )
    }
}
import { SEVERITY_LEVELS, type SeverityLevel } from "../shared/constants"
import { ViolationGrouper } from "./groupers/ViolationGrouper"
import { OutputFormatter } from "./formatters/OutputFormatter"
import { StatisticsFormatter } from "./formatters/StatisticsFormatter"

const program = new Command()
@@ -122,7 +41,20 @@ program
        CLI_HELP_TEXT.FIX_ENTITY +
        CLI_HELP_TEXT.FIX_DEPENDENCY +
        CLI_HELP_TEXT.FIX_REPOSITORY +
        CLI_HELP_TEXT.FOOTER,
        CLI_HELP_TEXT.FOOTER +
        CLI_HELP_TEXT.AI_AGENT_HEADER +
        CLI_HELP_TEXT.AI_AGENT_INTRO +
        CLI_HELP_TEXT.AI_AGENT_STEP1 +
        CLI_HELP_TEXT.AI_AGENT_STEP1_CMD +
        CLI_HELP_TEXT.AI_AGENT_STEP2 +
        CLI_HELP_TEXT.AI_AGENT_STEP2_DETAIL +
        CLI_HELP_TEXT.AI_AGENT_STEP3 +
        CLI_HELP_TEXT.AI_AGENT_STEP3_CMD +
        CLI_HELP_TEXT.AI_AGENT_STEP4 +
        CLI_HELP_TEXT.AI_AGENT_STEP4_CMDS +
        CLI_HELP_TEXT.AI_AGENT_OUTPUT +
        CLI_HELP_TEXT.AI_AGENT_OUTPUT_DETAIL +
        CLI_HELP_TEXT.AI_AGENT_PRIORITY,
    )

program
@@ -137,6 +69,10 @@ program
    .option(CLI_OPTIONS.ONLY_CRITICAL, CLI_DESCRIPTIONS.ONLY_CRITICAL_OPTION, false)
    .option(CLI_OPTIONS.LIMIT, CLI_DESCRIPTIONS.LIMIT_OPTION)
    .action(async (path: string, options) => {
        const grouper = new ViolationGrouper()
        const outputFormatter = new OutputFormatter()
        const statsFormatter = new StatisticsFormatter()

        try {
            console.log(CLI_MESSAGES.ANALYZING)

@@ -156,6 +92,8 @@ program
                dependencyDirectionViolations,
                repositoryPatternViolations,
                aggregateBoundaryViolations,
                secretViolations,
                anemicModelViolations,
            } = result

            const minSeverity: SeverityLevel | undefined = options.onlyCritical
@@ -169,270 +107,187 @@ program
|
||||
: undefined
|
||||
|
||||
if (minSeverity) {
|
||||
violations = filterBySeverity(violations, minSeverity)
|
||||
hardcodeViolations = filterBySeverity(hardcodeViolations, minSeverity)
|
||||
circularDependencyViolations = filterBySeverity(
|
||||
violations = grouper.filterBySeverity(violations, minSeverity)
|
||||
hardcodeViolations = grouper.filterBySeverity(hardcodeViolations, minSeverity)
|
||||
circularDependencyViolations = grouper.filterBySeverity(
|
||||
circularDependencyViolations,
|
||||
minSeverity,
|
||||
)
|
||||
namingViolations = filterBySeverity(namingViolations, minSeverity)
|
||||
frameworkLeakViolations = filterBySeverity(frameworkLeakViolations, minSeverity)
|
||||
entityExposureViolations = filterBySeverity(entityExposureViolations, minSeverity)
|
||||
dependencyDirectionViolations = filterBySeverity(
|
||||
namingViolations = grouper.filterBySeverity(namingViolations, minSeverity)
|
||||
frameworkLeakViolations = grouper.filterBySeverity(
|
||||
frameworkLeakViolations,
|
||||
minSeverity,
|
||||
)
|
||||
entityExposureViolations = grouper.filterBySeverity(
|
||||
entityExposureViolations,
|
||||
minSeverity,
|
||||
)
|
||||
dependencyDirectionViolations = grouper.filterBySeverity(
|
||||
dependencyDirectionViolations,
|
||||
minSeverity,
|
||||
)
|
||||
repositoryPatternViolations = filterBySeverity(
|
||||
repositoryPatternViolations = grouper.filterBySeverity(
|
||||
repositoryPatternViolations,
|
||||
minSeverity,
|
||||
)
|
||||
aggregateBoundaryViolations = filterBySeverity(
|
||||
aggregateBoundaryViolations = grouper.filterBySeverity(
|
||||
aggregateBoundaryViolations,
|
||||
minSeverity,
|
||||
)
|
||||
secretViolations = grouper.filterBySeverity(secretViolations, minSeverity)
|
||||
anemicModelViolations = grouper.filterBySeverity(anemicModelViolations, minSeverity)
|
||||
|
||||
if (options.onlyCritical) {
|
||||
console.log("\n🔴 Filtering: Showing only CRITICAL severity issues\n")
|
||||
} else {
|
||||
console.log(
|
||||
`\n⚠️ Filtering: Showing ${minSeverity.toUpperCase()} severity and above\n`,
|
||||
)
|
||||
}
|
||||
statsFormatter.displaySeverityFilterMessage(
|
||||
options.onlyCritical,
|
||||
options.minSeverity,
|
||||
)
|
||||
}
|
||||
|
||||
// Display metrics
|
||||
console.log(CLI_MESSAGES.METRICS_HEADER)
|
||||
console.log(` ${CLI_LABELS.FILES_ANALYZED} ${String(metrics.totalFiles)}`)
|
||||
console.log(` ${CLI_LABELS.TOTAL_FUNCTIONS} ${String(metrics.totalFunctions)}`)
|
||||
console.log(` ${CLI_LABELS.TOTAL_IMPORTS} ${String(metrics.totalImports)}`)
|
||||
statsFormatter.displayMetrics(metrics)
|
||||
|
||||
if (Object.keys(metrics.layerDistribution).length > 0) {
|
||||
console.log(CLI_MESSAGES.LAYER_DISTRIBUTION_HEADER)
|
||||
for (const [layer, count] of Object.entries(metrics.layerDistribution)) {
|
||||
console.log(` ${layer}: ${String(count)} ${CLI_LABELS.FILES}`)
|
||||
}
|
||||
}
|
||||
|
||||
// Architecture violations
|
||||
if (options.architecture && violations.length > 0) {
|
||||
console.log(
|
||||
`\n${CLI_MESSAGES.VIOLATIONS_HEADER} ${String(violations.length)} ${CLI_LABELS.ARCHITECTURE_VIOLATIONS}`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
violations,
|
||||
(v, index) => {
|
||||
console.log(`${String(index + 1)}. ${v.file}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[v.severity]}`)
|
||||
console.log(` Rule: ${v.rule}`)
|
||||
console.log(` ${v.message}`)
|
||||
console.log("")
|
||||
(v, i) => {
|
||||
outputFormatter.formatArchitectureViolation(v, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
// Circular dependency violations
|
||||
if (options.architecture && circularDependencyViolations.length > 0) {
|
||||
console.log(
|
||||
`\n${CLI_MESSAGES.CIRCULAR_DEPS_HEADER} ${String(circularDependencyViolations.length)} ${CLI_LABELS.CIRCULAR_DEPENDENCIES}`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
circularDependencyViolations,
|
||||
(cd, index) => {
|
||||
console.log(`${String(index + 1)}. ${cd.message}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[cd.severity]}`)
|
||||
console.log(" Cycle path:")
|
||||
cd.cycle.forEach((file, i) => {
|
||||
console.log(` ${String(i + 1)}. ${file}`)
|
||||
})
|
||||
console.log(
|
||||
` ${String(cd.cycle.length + 1)}. ${cd.cycle[0]} (back to start)`,
|
||||
)
|
||||
console.log("")
|
||||
(cd, i) => {
|
||||
outputFormatter.formatCircularDependency(cd, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
// Naming convention violations
|
||||
if (options.architecture && namingViolations.length > 0) {
|
||||
console.log(
|
||||
`\n${CLI_MESSAGES.NAMING_VIOLATIONS_HEADER} ${String(namingViolations.length)} ${CLI_LABELS.NAMING_VIOLATIONS}`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
namingViolations,
|
||||
(nc, index) => {
|
||||
console.log(`${String(index + 1)}. ${nc.file}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[nc.severity]}`)
|
||||
console.log(` File: ${nc.fileName}`)
|
||||
console.log(` Layer: ${nc.layer}`)
|
||||
console.log(` Type: ${nc.type}`)
|
||||
console.log(` Message: ${nc.message}`)
|
||||
if (nc.suggestion) {
|
||||
console.log(` 💡 Suggestion: ${nc.suggestion}`)
|
||||
}
|
||||
console.log("")
|
||||
(nc, i) => {
|
||||
outputFormatter.formatNamingViolation(nc, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
// Framework leak violations
|
||||
if (options.architecture && frameworkLeakViolations.length > 0) {
|
||||
console.log(
|
||||
`\n🏗️ Found ${String(frameworkLeakViolations.length)} framework leak(s)`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
frameworkLeakViolations,
|
||||
(fl, index) => {
|
||||
console.log(`${String(index + 1)}. ${fl.file}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[fl.severity]}`)
|
||||
console.log(` Package: ${fl.packageName}`)
|
||||
console.log(` Category: ${fl.categoryDescription}`)
|
||||
console.log(` Layer: ${fl.layer}`)
|
||||
console.log(` Rule: ${fl.rule}`)
|
||||
console.log(` ${fl.message}`)
|
||||
console.log(` 💡 Suggestion: ${fl.suggestion}`)
|
||||
console.log("")
|
||||
(fl, i) => {
|
||||
outputFormatter.formatFrameworkLeak(fl, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
// Entity exposure violations
|
||||
if (options.architecture && entityExposureViolations.length > 0) {
|
||||
console.log(
|
||||
`\n🎭 Found ${String(entityExposureViolations.length)} entity exposure(s)`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
entityExposureViolations,
|
||||
(ee, index) => {
|
||||
const location = ee.line ? `${ee.file}:${String(ee.line)}` : ee.file
|
||||
console.log(`${String(index + 1)}. ${location}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[ee.severity]}`)
|
||||
console.log(` Entity: ${ee.entityName}`)
|
||||
console.log(` Return Type: ${ee.returnType}`)
|
||||
if (ee.methodName) {
|
||||
console.log(` Method: ${ee.methodName}`)
|
||||
}
|
||||
console.log(` Layer: ${ee.layer}`)
|
||||
console.log(` Rule: ${ee.rule}`)
|
||||
console.log(` ${ee.message}`)
|
||||
console.log(" 💡 Suggestion:")
|
||||
ee.suggestion.split("\n").forEach((line) => {
|
||||
if (line.trim()) {
|
||||
console.log(` ${line}`)
|
||||
}
|
||||
})
|
||||
console.log("")
|
||||
(ee, i) => {
|
||||
outputFormatter.formatEntityExposure(ee, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
// Dependency direction violations
|
||||
if (options.architecture && dependencyDirectionViolations.length > 0) {
|
||||
console.log(
|
||||
`\n⚠️ Found ${String(dependencyDirectionViolations.length)} dependency direction violation(s)`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
dependencyDirectionViolations,
|
||||
(dd, index) => {
|
||||
console.log(`${String(index + 1)}. ${dd.file}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[dd.severity]}`)
|
||||
console.log(` From Layer: ${dd.fromLayer}`)
|
||||
console.log(` To Layer: ${dd.toLayer}`)
|
||||
console.log(` Import: ${dd.importPath}`)
|
||||
console.log(` ${dd.message}`)
|
||||
console.log(` 💡 Suggestion: ${dd.suggestion}`)
|
||||
console.log("")
|
||||
(dd, i) => {
|
||||
outputFormatter.formatDependencyDirection(dd, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
// Repository pattern violations
|
||||
if (options.architecture && repositoryPatternViolations.length > 0) {
|
||||
console.log(
|
||||
`\n📦 Found ${String(repositoryPatternViolations.length)} repository pattern violation(s)`,
|
||||
)
|
||||
|
||||
displayGroupedViolations(
|
||||
outputFormatter.displayGroupedViolations(
|
||||
repositoryPatternViolations,
|
||||
(rp, index) => {
|
||||
console.log(`${String(index + 1)}. ${rp.file}`)
|
||||
console.log(` Severity: ${SEVERITY_LABELS[rp.severity]}`)
|
||||
console.log(` Layer: ${rp.layer}`)
|
||||
console.log(` Type: ${rp.violationType}`)
|
||||
console.log(` Details: ${rp.details}`)
|
||||
console.log(` ${rp.message}`)
|
||||
console.log(` 💡 Suggestion: ${rp.suggestion}`)
|
||||
console.log("")
|
||||
(rp, i) => {
|
||||
outputFormatter.formatRepositoryPattern(rp, i)
|
||||
},
|
||||
limit,
|
||||
)
|
||||
}
|
||||
|
||||
 // Aggregate boundary violations
 if (options.architecture && aggregateBoundaryViolations.length > 0) {
     console.log(
         `\n🔒 Found ${String(aggregateBoundaryViolations.length)} aggregate boundary violation(s)`,
     )

-    displayGroupedViolations(
+    outputFormatter.displayGroupedViolations(
         aggregateBoundaryViolations,
-        (ab, index) => {
-            const location = ab.line ? `${ab.file}:${String(ab.line)}` : ab.file
-            console.log(`${String(index + 1)}. ${location}`)
-            console.log(` Severity: ${SEVERITY_LABELS[ab.severity]}`)
-            console.log(` From Aggregate: ${ab.fromAggregate}`)
-            console.log(` To Aggregate: ${ab.toAggregate}`)
-            console.log(` Entity: ${ab.entityName}`)
-            console.log(` Import: ${ab.importPath}`)
-            console.log(` ${ab.message}`)
-            console.log(" 💡 Suggestion:")
-            ab.suggestion.split("\n").forEach((line) => {
-                if (line.trim()) {
-                    console.log(` ${line}`)
-                }
-            })
-            console.log("")
+        (ab, i) => {
+            outputFormatter.formatAggregateBoundary(ab, i)
         },
         limit,
     )
 }

+if (secretViolations.length > 0) {
+    console.log(
+        `\n🔐 Found ${String(secretViolations.length)} hardcoded secret(s) - CRITICAL SECURITY RISK`,
+    )
+    outputFormatter.displayGroupedViolations(
+        secretViolations,
+        (sv, i) => {
+            outputFormatter.formatSecretViolation(sv, i)
+        },
+        limit,
+    )
+}

+if (anemicModelViolations.length > 0) {
+    console.log(
+        `\n🩺 Found ${String(anemicModelViolations.length)} anemic domain model(s)`,
+    )
+    outputFormatter.displayGroupedViolations(
+        anemicModelViolations,
+        (am, i) => {
+            outputFormatter.formatAnemicModelViolation(am, i)
+        },
+        limit,
+    )
+}

 // Hardcode violations
 if (options.hardcode && hardcodeViolations.length > 0) {
     console.log(
         `\n${CLI_MESSAGES.HARDCODE_VIOLATIONS_HEADER} ${String(hardcodeViolations.length)} ${CLI_LABELS.HARDCODE_VIOLATIONS}`,
     )

-    displayGroupedViolations(
+    outputFormatter.displayGroupedViolations(
         hardcodeViolations,
-        (hc, index) => {
-            console.log(
-                `${String(index + 1)}. ${hc.file}:${String(hc.line)}:${String(hc.column)}`,
-            )
-            console.log(` Severity: ${SEVERITY_LABELS[hc.severity]}`)
-            console.log(` Type: ${hc.type}`)
-            console.log(` Value: ${JSON.stringify(hc.value)}`)
-            console.log(` Context: ${hc.context.trim()}`)
-            console.log(` 💡 Suggested: ${hc.suggestion.constantName}`)
-            console.log(` 📁 Location: ${hc.suggestion.location}`)
-            console.log("")
+        (hc, i) => {
+            outputFormatter.formatHardcodeViolation(hc, i)
         },
         limit,
     )
 }

 // Summary
 const totalIssues =
     violations.length +
     hardcodeViolations.length +
@@ -442,28 +297,13 @@ program
     entityExposureViolations.length +
     dependencyDirectionViolations.length +
     repositoryPatternViolations.length +
-    aggregateBoundaryViolations.length
+    aggregateBoundaryViolations.length +
+    secretViolations.length +
+    anemicModelViolations.length

-if (totalIssues === 0) {
-    console.log(CLI_MESSAGES.NO_ISSUES)
-    process.exit(0)
-} else {
-    console.log(
-        `${CLI_MESSAGES.ISSUES_TOTAL} ${String(totalIssues)} ${CLI_LABELS.ISSUES_TOTAL}`,
-    )
-    console.log(CLI_MESSAGES.TIP)
-
-    if (options.verbose) {
-        console.log(CLI_MESSAGES.HELP_FOOTER)
-    }
-
-    process.exit(1)
-}
+statsFormatter.displaySummary(totalIssues, options.verbose)
 } catch (error) {
-    console.error(`\n❌ ${CLI_MESSAGES.ERROR_PREFIX}`)
-    console.error(error instanceof Error ? error.message : String(error))
-    console.error("")
-    process.exit(1)
+    statsFormatter.displayError(error instanceof Error ? error.message : String(error))
 }
 })

@@ -60,3 +60,23 @@ export const AGGREGATE_VIOLATION_MESSAGES = {
     AVOID_DIRECT_REFERENCE: "3. Avoid direct entity references to maintain aggregate independence",
     MAINTAIN_INDEPENDENCE: "4. Each aggregate should be independently modifiable and deployable",
 }
+
+export const SECRET_VIOLATION_MESSAGES = {
+    USE_ENV_VARIABLES: "1. Use environment variables for sensitive data (process.env.API_KEY)",
+    USE_SECRET_MANAGER:
+        "2. Use secret management services (AWS Secrets Manager, HashiCorp Vault, etc.)",
+    NEVER_COMMIT_SECRETS: "3. Never commit secrets to version control",
+    ROTATE_IF_EXPOSED: "4. If secret was committed, rotate it immediately",
+    USE_GITIGNORE: "5. Add secret files to .gitignore (.env, credentials.json, etc.)",
+}
+
+export const ANEMIC_MODEL_MESSAGES = {
+    REMOVE_PUBLIC_SETTERS: "1. Remove public setters - they allow uncontrolled state changes",
+    USE_METHODS_FOR_CHANGES: "2. Use business methods instead (approve(), cancel(), addItem())",
+    ENCAPSULATE_INVARIANTS: "3. Encapsulate business rules and invariants in methods",
+    ADD_BUSINESS_METHODS: "1. Add business logic methods to the entity",
+    MOVE_LOGIC_FROM_SERVICES:
+        "2. Move business logic from services to domain entities where it belongs",
+    ENCAPSULATE_BUSINESS_RULES: "3. Encapsulate business rules inside entity methods",
+    USE_DOMAIN_EVENTS: "4. Use domain events to communicate state changes",
+}

packages/guardian/src/domain/constants/SecretExamples.ts (new file, 79 lines)
@@ -0,0 +1,79 @@
/**
 * Secret detection constants
 * All hardcoded strings related to secret detection and examples
 */

export const SECRET_KEYWORDS = {
    AWS: "aws",
    GITHUB: "github",
    NPM: "npm",
    SSH: "ssh",
    PRIVATE_KEY: "private key",
    SLACK: "slack",
    API_KEY: "api key",
    APIKEY: "apikey",
    ACCESS_KEY: "access key",
    SECRET: "secret",
    TOKEN: "token",
    PASSWORD: "password",
    USER: "user",
    BOT: "bot",
    RSA: "rsa",
    DSA: "dsa",
    ECDSA: "ecdsa",
    ED25519: "ed25519",
    BASICAUTH: "basicauth",
    GCP: "gcp",
    GOOGLE: "google",
    PRIVATEKEY: "privatekey",
    PERSONAL_ACCESS_TOKEN: "personal access token",
    OAUTH: "oauth",
} as const

export const SECRET_TYPE_NAMES = {
    AWS_ACCESS_KEY: "AWS Access Key",
    AWS_SECRET_KEY: "AWS Secret Key",
    AWS_CREDENTIAL: "AWS Credential",
    GITHUB_PERSONAL_ACCESS_TOKEN: "GitHub Personal Access Token",
    GITHUB_OAUTH_TOKEN: "GitHub OAuth Token",
    GITHUB_TOKEN: "GitHub Token",
    NPM_TOKEN: "NPM Token",
    GCP_SERVICE_ACCOUNT_KEY: "GCP Service Account Key",
    SSH_RSA_PRIVATE_KEY: "SSH RSA Private Key",
    SSH_DSA_PRIVATE_KEY: "SSH DSA Private Key",
    SSH_ECDSA_PRIVATE_KEY: "SSH ECDSA Private Key",
    SSH_ED25519_PRIVATE_KEY: "SSH Ed25519 Private Key",
    SSH_PRIVATE_KEY: "SSH Private Key",
    SLACK_BOT_TOKEN: "Slack Bot Token",
    SLACK_USER_TOKEN: "Slack User Token",
    SLACK_TOKEN: "Slack Token",
    BASIC_AUTH_CREDENTIALS: "Basic Authentication Credentials",
    API_KEY: "API Key",
    AUTHENTICATION_TOKEN: "Authentication Token",
    PASSWORD: "Password",
    SECRET: "Secret",
    SENSITIVE_DATA: "Sensitive Data",
} as const

export const SECRET_EXAMPLE_VALUES = {
    AWS_ACCESS_KEY_ID: "AKIA1234567890ABCDEF",
    AWS_SECRET_ACCESS_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    GITHUB_TOKEN: "ghp_1234567890abcdefghijklmnopqrstuv",
    NPM_TOKEN: "npm_abc123xyz",
    SLACK_TOKEN: "xoxb-<token-here>",
    API_KEY: "sk_live_XXXXXXXXXXXXXXXXXXXX_example_key",
    HARDCODED_SECRET: "hardcoded-secret-value",
} as const

export const FILE_ENCODING = {
    UTF8: "utf-8",
} as const

export const REGEX_ESCAPE_PATTERN = {
    DOLLAR_AMPERSAND: "\\$&",
} as const

export const DYNAMIC_IMPORT_PATTERN_PARTS = {
    QUOTE_START: '"`][^',
    QUOTE_END: "`]+['\"",
} as const

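The `REGEX_ESCAPE_PATTERN.DOLLAR_AMPERSAND` constant above (`"\\$&"`) is the standard JavaScript `String.prototype.replace` replacement string that re-inserts the whole match, here prefixed with a backslash, so that matched metacharacters become literal. A minimal sketch of how such a constant is typically used — the `escapeRegExp` helper below is hypothetical, not part of this diff:

```typescript
// "$&" in a replacement string means "the matched text"; "\\$&" therefore
// replaces each regex metacharacter with a backslash-escaped copy of itself.
const DOLLAR_AMPERSAND = "\\$&"

function escapeRegExp(value: string): string {
    // Escape every regex metacharacter so the value can be matched literally.
    return value.replace(/[.*+?^${}()|[\]\\]/g, DOLLAR_AMPERSAND)
}

// Build a literal matcher for a value containing metacharacters:
const pattern = new RegExp(escapeRegExp("sk_live_a.b+c"))
console.log(pattern.test("key = sk_live_a.b+c")) // true
```

This is the usual trick for turning user-supplied or detected secret values into safe literal regex patterns without a third-party escape library.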
@@ -0,0 +1,29 @@
import { AnemicModelViolation } from "../value-objects/AnemicModelViolation"

/**
 * Interface for detecting anemic domain model violations in the codebase
 *
 * Anemic domain models are entities that contain only getters/setters
 * without business logic. This anti-pattern violates Domain-Driven Design
 * principles and leads to procedural code scattered in services.
 */
export interface IAnemicModelDetector {
    /**
     * Detects anemic model violations in the given code
     *
     * Analyzes classes in domain/entities to identify:
     * - Classes with only getters and setters (no business logic)
     * - Classes with public setters (DDD anti-pattern)
     * - Classes with low method-to-property ratio
     *
     * @param code - Source code to analyze
     * @param filePath - Path to the file being analyzed
     * @param layer - The architectural layer of the file (domain, application, infrastructure, shared)
     * @returns Array of detected anemic model violations
     */
    detectAnemicModels(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): AnemicModelViolation[]
}

@@ -0,0 +1,55 @@
import { HardcodedValue } from "../value-objects/HardcodedValue"

export interface ValueLocation {
    file: string
    line: number
    context: string
}

export interface DuplicateInfo {
    value: string | number | boolean
    locations: ValueLocation[]
    count: number
}

/**
 * Interface for tracking duplicate hardcoded values across files
 *
 * Helps identify values that are used in multiple places
 * and should be extracted to a shared constant.
 */
export interface IDuplicateValueTracker {
    /**
     * Adds a hardcoded value to tracking
     */
    track(violation: HardcodedValue, filePath: string): void

    /**
     * Gets all duplicate values (values used in 2+ places)
     */
    getDuplicates(): DuplicateInfo[]

    /**
     * Gets duplicate locations for a specific value
     */
    getDuplicateLocations(value: string | number | boolean, type: string): ValueLocation[] | null

    /**
     * Checks if a value is duplicated
     */
    isDuplicate(value: string | number | boolean, type: string): boolean

    /**
     * Gets statistics about duplicates
     */
    getStats(): {
        totalValues: number
        duplicateValues: number
        duplicatePercentage: number
    }

    /**
     * Clears all tracked values
     */
    clear(): void
}

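The tracker contract above can be sketched as a small in-memory implementation. This is a hypothetical simplification for illustration only: it keys raw values by `type:value` instead of accepting the `HardcodedValue` value object that the real `IDuplicateValueTracker` takes.

```typescript
// Minimal in-memory sketch of the duplicate-tracking contract.
interface Loc { file: string; line: number }

class DuplicateValueTracker {
    // Map from "type:value" key to every location where it was seen.
    private readonly seen = new Map<string, Loc[]>()

    track(value: string | number | boolean, type: string, file: string, line: number): void {
        const key = `${type}:${String(value)}`
        const locs = this.seen.get(key) ?? []
        locs.push({ file, line })
        this.seen.set(key, locs)
    }

    // A value is a duplicate once it has been tracked in 2+ places.
    isDuplicate(value: string | number | boolean, type: string): boolean {
        return (this.seen.get(`${type}:${String(value)}`)?.length ?? 0) >= 2
    }

    getStats(): { totalValues: number; duplicateValues: number; duplicatePercentage: number } {
        const total = this.seen.size
        const dup = [...this.seen.values()].filter((l) => l.length >= 2).length
        return {
            totalValues: total,
            duplicateValues: dup,
            duplicatePercentage: total === 0 ? 0 : (dup / total) * 100,
        }
    }
}
```

Keying by both type and value keeps, say, the string `"3000"` distinct from the magic number `3000`, which matches the `(value, type)` pairs in the interface's signatures.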
packages/guardian/src/domain/services/ISecretDetector.ts (new file, 34 lines)
@@ -0,0 +1,34 @@
import { SecretViolation } from "../value-objects/SecretViolation"

/**
 * Interface for detecting hardcoded secrets in source code
 *
 * Detects sensitive data like API keys, tokens, passwords, and credentials
 * that should never be hardcoded in source code. Uses industry-standard
 * Secretlint library for pattern matching.
 *
 * All detected secrets are marked as CRITICAL severity violations.
 *
 * @example
 * ```typescript
 * const detector: ISecretDetector = new SecretDetector()
 * const violations = await detector.detectAll(
 *     'const AWS_KEY = "AKIA1234567890ABCDEF"',
 *     'src/config/aws.ts'
 * )
 *
 * violations.forEach(v => {
 *     console.log(v.getMessage()) // "Hardcoded AWS Access Key detected"
 * })
 * ```
 */
export interface ISecretDetector {
    /**
     * Detect all types of hardcoded secrets in the provided code
     *
     * @param code - Source code to analyze
     * @param filePath - Path to the file being analyzed
     * @returns Array of secret violations found
     */
    detectAll(code: string, filePath: string): Promise<SecretViolation[]>
}

@@ -0,0 +1,240 @@
import { ValueObject } from "./ValueObject"
import { ANEMIC_MODEL_MESSAGES } from "../constants/Messages"
import { EXAMPLE_CODE_CONSTANTS } from "../../shared/constants"

interface AnemicModelViolationProps {
    readonly className: string
    readonly filePath: string
    readonly layer: string
    readonly line?: number
    readonly methodCount: number
    readonly propertyCount: number
    readonly hasOnlyGettersSetters: boolean
    readonly hasPublicSetters: boolean
}

/**
 * Represents an anemic domain model violation in the codebase
 *
 * Anemic domain model occurs when entities have only getters/setters
 * without business logic. This violates Domain-Driven Design principles
 * and leads to procedural code instead of object-oriented design.
 *
 * @example
 * ```typescript
 * // Bad: Anemic model with only getters/setters
 * const violation = AnemicModelViolation.create(
 *     'Order',
 *     'src/domain/entities/Order.ts',
 *     'domain',
 *     10,
 *     4,
 *     2,
 *     true,
 *     true
 * )
 *
 * console.log(violation.getMessage())
 * // "Class 'Order' is anemic: 4 methods (all getters/setters) for 2 properties"
 * ```
 */
export class AnemicModelViolation extends ValueObject<AnemicModelViolationProps> {
    private constructor(props: AnemicModelViolationProps) {
        super(props)
    }

    public static create(
        className: string,
        filePath: string,
        layer: string,
        line: number | undefined,
        methodCount: number,
        propertyCount: number,
        hasOnlyGettersSetters: boolean,
        hasPublicSetters: boolean,
    ): AnemicModelViolation {
        return new AnemicModelViolation({
            className,
            filePath,
            layer,
            line,
            methodCount,
            propertyCount,
            hasOnlyGettersSetters,
            hasPublicSetters,
        })
    }

    public get className(): string {
        return this.props.className
    }

    public get filePath(): string {
        return this.props.filePath
    }

    public get layer(): string {
        return this.props.layer
    }

    public get line(): number | undefined {
        return this.props.line
    }

    public get methodCount(): number {
        return this.props.methodCount
    }

    public get propertyCount(): number {
        return this.props.propertyCount
    }

    public get hasOnlyGettersSetters(): boolean {
        return this.props.hasOnlyGettersSetters
    }

    public get hasPublicSetters(): boolean {
        return this.props.hasPublicSetters
    }

    public getMessage(): string {
        if (this.props.hasPublicSetters) {
            return `Class '${this.props.className}' has public setters (anti-pattern in DDD)`
        }

        if (this.props.hasOnlyGettersSetters) {
            return `Class '${this.props.className}' is anemic: ${String(this.props.methodCount)} methods (all getters/setters) for ${String(this.props.propertyCount)} properties`
        }

        const ratio = this.props.methodCount / Math.max(this.props.propertyCount, 1)
        return `Class '${this.props.className}' appears anemic: low method-to-property ratio (${ratio.toFixed(1)}:1)`
    }

    public getSuggestion(): string {
        const suggestions: string[] = []

        if (this.props.hasPublicSetters) {
            suggestions.push(ANEMIC_MODEL_MESSAGES.REMOVE_PUBLIC_SETTERS)
            suggestions.push(ANEMIC_MODEL_MESSAGES.USE_METHODS_FOR_CHANGES)
            suggestions.push(ANEMIC_MODEL_MESSAGES.ENCAPSULATE_INVARIANTS)
        }

        if (this.props.hasOnlyGettersSetters || this.props.methodCount < 2) {
            suggestions.push(ANEMIC_MODEL_MESSAGES.ADD_BUSINESS_METHODS)
            suggestions.push(ANEMIC_MODEL_MESSAGES.MOVE_LOGIC_FROM_SERVICES)
            suggestions.push(ANEMIC_MODEL_MESSAGES.ENCAPSULATE_BUSINESS_RULES)
            suggestions.push(ANEMIC_MODEL_MESSAGES.USE_DOMAIN_EVENTS)
        }

        return suggestions.join("\n")
    }

    public getExampleFix(): string {
        if (this.props.hasPublicSetters) {
            return `
// ❌ Bad: Public setters allow uncontrolled state changes
class ${this.props.className} {
    private status: string

    public setStatus(status: string): void {
        this.status = status // No validation!
    }

    public getStatus(): string {
        return this.status
    }
}

// ✅ Good: Business methods with validation
class ${this.props.className} {
    private status: OrderStatus

    public approve(): void {
        if (!this.canBeApproved()) {
            throw new CannotApproveOrderError()
        }
        this.status = OrderStatus.APPROVED
        this.events.push(new OrderApprovedEvent(this.id))
    }

    public reject(reason: string): void {
        if (!this.canBeRejected()) {
            throw new CannotRejectOrderError()
        }
        this.status = OrderStatus.REJECTED
        this.rejectionReason = reason
        this.events.push(new OrderRejectedEvent(this.id, reason))
    }

    public getStatus(): OrderStatus {
        return this.status
    }

    private canBeApproved(): boolean {
        return this.status === OrderStatus.PENDING && this.hasItems()
    }
}`
        }

        return `
// ❌ Bad: Anemic model (only getters/setters)
class ${this.props.className} {
    getStatus() { return this.status }
    setStatus(status: string) { this.status = status }

    getTotal() { return this.total }
    setTotal(total: number) { this.total = total }
}

class OrderService {
    approve(order: ${this.props.className}): void {
        if (order.getStatus() !== '${EXAMPLE_CODE_CONSTANTS.ORDER_STATUS_PENDING}') {
            throw new Error('${EXAMPLE_CODE_CONSTANTS.CANNOT_APPROVE_ERROR}')
        }
        order.setStatus('${EXAMPLE_CODE_CONSTANTS.ORDER_STATUS_APPROVED}')
    }
}

// ✅ Good: Rich domain model with business logic
class ${this.props.className} {
    private readonly id: OrderId
    private status: OrderStatus
    private items: OrderItem[]
    private events: DomainEvent[] = []

    public approve(): void {
        if (!this.isPending()) {
            throw new CannotApproveOrderError()
        }
        this.status = OrderStatus.APPROVED
        this.events.push(new OrderApprovedEvent(this.id))
    }

    public calculateTotal(): Money {
        return this.items.reduce(
            (sum, item) => sum.add(item.getPrice()),
            Money.zero()
        )
    }

    public addItem(item: OrderItem): void {
        if (this.isApproved()) {
            throw new CannotModifyApprovedOrderError()
        }
        this.items.push(item)
    }

    public getStatus(): OrderStatus {
        return this.status
    }

    private isPending(): boolean {
        return this.status === OrderStatus.PENDING
    }

    private isApproved(): boolean {
        return this.status === OrderStatus.APPROVED
    }
}`
    }
}

@@ -1,15 +1,40 @@
 import { ValueObject } from "./ValueObject"
-import { HARDCODE_TYPES } from "../../shared/constants/rules"
+import { DETECTION_PATTERNS, HARDCODE_TYPES } from "../../shared/constants/rules"
 import { CONSTANT_NAMES, LOCATIONS, SUGGESTION_KEYWORDS } from "../constants/Suggestions"

 export type HardcodeType = (typeof HARDCODE_TYPES)[keyof typeof HARDCODE_TYPES]

+export type ValueType =
+    | "email"
+    | "url"
+    | "ip_address"
+    | "file_path"
+    | "date"
+    | "api_key"
+    | "uuid"
+    | "version"
+    | "color"
+    | "mac_address"
+    | "base64"
+    | "config"
+    | "generic"
+
+export type ValueImportance = "critical" | "high" | "medium" | "low"
+
+export interface DuplicateLocation {
+    file: string
+    line: number
+}
+
 interface HardcodedValueProps {
-    readonly value: string | number
+    readonly value: string | number | boolean
     readonly type: HardcodeType
+    readonly valueType?: ValueType
     readonly line: number
     readonly column: number
     readonly context: string
+    readonly duplicateLocations?: DuplicateLocation[]
+    readonly withinFileUsageCount?: number
 }

 /**
@@ -21,22 +46,28 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {
     }

     public static create(
-        value: string | number,
+        value: string | number | boolean,
         type: HardcodeType,
         line: number,
         column: number,
         context: string,
+        valueType?: ValueType,
+        duplicateLocations?: DuplicateLocation[],
+        withinFileUsageCount?: number,
     ): HardcodedValue {
         return new HardcodedValue({
             value,
             type,
+            valueType,
             line,
             column,
             context,
+            duplicateLocations,
+            withinFileUsageCount,
         })
     }

-    public get value(): string | number {
+    public get value(): string | number | boolean {
         return this.props.value
     }

@@ -56,6 +87,28 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {
         return this.props.context
     }

+    public get valueType(): ValueType | undefined {
+        return this.props.valueType
+    }
+
+    public get duplicateLocations(): DuplicateLocation[] | undefined {
+        return this.props.duplicateLocations
+    }
+
+    public get withinFileUsageCount(): number | undefined {
+        return this.props.withinFileUsageCount
+    }
+
+    public hasDuplicates(): boolean {
+        return (
+            this.props.duplicateLocations !== undefined && this.props.duplicateLocations.length > 0
+        )
+    }
+
+    public isAlmostConstant(): boolean {
+        return this.props.withinFileUsageCount !== undefined && this.props.withinFileUsageCount >= 2
+    }
+
     public isMagicNumber(): boolean {
         return this.props.type === HARDCODE_TYPES.MAGIC_NUMBER
     }
@@ -106,6 +159,154 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {
     private suggestStringConstantName(): string {
         const value = String(this.props.value)
+        const context = this.props.context.toLowerCase()
+        const valueType = this.props.valueType
+
+        if (valueType === "email") {
+            if (context.includes("admin")) {
+                return "ADMIN_EMAIL"
+            }
+            if (context.includes("support")) {
+                return "SUPPORT_EMAIL"
+            }
+            if (context.includes("noreply") || context.includes("no-reply")) {
+                return "NOREPLY_EMAIL"
+            }
+            return "DEFAULT_EMAIL"
+        }
+
+        if (valueType === "api_key") {
+            if (context.includes("secret")) {
+                return "API_SECRET_KEY"
+            }
+            if (context.includes("public")) {
+                return "API_PUBLIC_KEY"
+            }
+            return "API_KEY"
+        }
+
+        if (valueType === "url") {
+            if (context.includes("api")) {
+                return "API_BASE_URL"
+            }
+            if (context.includes("database") || context.includes("db")) {
+                return "DATABASE_URL"
+            }
+            if (context.includes("mongo")) {
+                return "MONGODB_CONNECTION_STRING"
+            }
+            if (context.includes("postgres") || context.includes("pg")) {
+                return "POSTGRES_URL"
+            }
+            return "BASE_URL"
+        }
+
+        if (valueType === "ip_address") {
+            if (context.includes("server")) {
+                return "SERVER_IP"
+            }
+            if (context.includes("database") || context.includes("db")) {
+                return "DATABASE_HOST"
+            }
+            if (context.includes("redis")) {
+                return "REDIS_HOST"
+            }
+            return "HOST_IP"
+        }
+
+        if (valueType === "file_path") {
+            if (context.includes("log")) {
+                return "LOG_FILE_PATH"
+            }
+            if (context.includes("config")) {
+                return "CONFIG_FILE_PATH"
+            }
+            if (context.includes("data")) {
+                return "DATA_DIR_PATH"
+            }
+            if (context.includes("temp")) {
+                return "TEMP_DIR_PATH"
+            }
+            return "FILE_PATH"
+        }
+
+        if (valueType === "date") {
+            if (context.includes("deadline")) {
+                return "DEADLINE"
+            }
+            if (context.includes("start")) {
+                return "START_DATE"
+            }
+            if (context.includes("end")) {
+                return "END_DATE"
+            }
+            if (context.includes("expir")) {
+                return "EXPIRATION_DATE"
+            }
+            return "DEFAULT_DATE"
+        }
+
+        if (valueType === "uuid") {
+            if (context.includes("id") || context.includes("identifier")) {
+                return "DEFAULT_ID"
+            }
+            if (context.includes("request")) {
+                return "REQUEST_ID"
+            }
+            if (context.includes("session")) {
+                return "SESSION_ID"
+            }
+            return "UUID_CONSTANT"
+        }
+
+        if (valueType === "version") {
+            if (context.includes("api")) {
+                return "API_VERSION"
+            }
+            if (context.includes("app")) {
+                return "APP_VERSION"
+            }
+            return "VERSION"
+        }
+
+        if (valueType === "color") {
+            if (context.includes("primary")) {
+                return "PRIMARY_COLOR"
+            }
+            if (context.includes("secondary")) {
+                return "SECONDARY_COLOR"
+            }
+            if (context.includes("background")) {
+                return "BACKGROUND_COLOR"
+            }
+            return "COLOR_CONSTANT"
+        }
+
+        if (valueType === "mac_address") {
+            return "MAC_ADDRESS"
+        }
+
+        if (valueType === "base64") {
+            if (context.includes("token")) {
+                return "ENCODED_TOKEN"
+            }
+            if (context.includes("key")) {
+                return "ENCODED_KEY"
+            }
+            return "BASE64_VALUE"
+        }
+
+        if (valueType === "config") {
+            if (context.includes("endpoint")) {
+                return "API_ENDPOINT"
+            }
+            if (context.includes("route")) {
+                return "ROUTE_PATH"
+            }
+            if (context.includes("connection")) {
+                return "CONNECTION_STRING"
+            }
+            return "CONFIG_VALUE"
+        }
+
         if (value.includes(SUGGESTION_KEYWORDS.HTTP)) {
             return CONSTANT_NAMES.API_BASE_URL
@@ -135,6 +336,23 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {
         }

         const context = this.props.context.toLowerCase()
+        const valueType = this.props.valueType
+
+        if (valueType === "api_key" || valueType === "url" || valueType === "ip_address") {
+            return "src/config/environment.ts"
+        }
+
+        if (valueType === "email") {
+            return "src/config/contacts.ts"
+        }
+
+        if (valueType === "file_path") {
+            return "src/config/paths.ts"
+        }
+
+        if (valueType === "date") {
+            return "src/config/dates.ts"
+        }
+
         if (
             context.includes(SUGGESTION_KEYWORDS.ENTITY) ||
@@ -153,4 +371,122 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {

         return LOCATIONS.SHARED_CONSTANTS
     }
+
+    public getDetailedSuggestion(currentLayer?: string): string {
+        const constantName = this.suggestConstantName()
+        const location = this.suggestLocation(currentLayer)
+        const valueTypeLabel = this.valueType ? ` (${this.valueType})` : ""
+
+        let suggestion = `Extract${valueTypeLabel} to constant ${constantName} in ${location}`
+
+        if (this.isAlmostConstant() && this.withinFileUsageCount) {
+            suggestion += `. This value appears ${String(this.withinFileUsageCount)} times in this file`
+        }
+
+        if (this.hasDuplicates() && this.duplicateLocations) {
+            const count = this.duplicateLocations.length
+            const fileList = this.duplicateLocations
+                .slice(0, 3)
+                .map((loc) => `${loc.file}:${String(loc.line)}`)
+                .join(", ")
+
+            const more = count > 3 ? ` and ${String(count - 3)} more` : ""
+            suggestion += `. Also duplicated in ${String(count)} other file(s): ${fileList}${more}`
+        }
+
+        return suggestion
+    }
+
+    /**
+     * Analyzes variable name and context to determine importance
+     */
+    public getImportance(): ValueImportance {
+        const context = this.props.context.toLowerCase()
+        const valueType = this.props.valueType
+
+        if (valueType === "api_key") {
+            return "critical"
+        }
+
+        const criticalKeywords = [
+            ...DETECTION_PATTERNS.SENSITIVE_KEYWORDS,
+            ...DETECTION_PATTERNS.BUSINESS_KEYWORDS,
+            "key",
+            "age",
+        ]
+
+        if (criticalKeywords.some((keyword) => context.includes(keyword))) {
+            return "critical"
+        }
+
+        const highKeywords = [...DETECTION_PATTERNS.TECHNICAL_KEYWORDS, "db", "api"]
+
+        if (highKeywords.some((keyword) => context.includes(keyword))) {
+            return "high"
+        }
+
+        if (valueType === "url" || valueType === "ip_address" || valueType === "email") {
+            return "high"
+        }
+
+        const mediumKeywords = DETECTION_PATTERNS.MEDIUM_KEYWORDS
+
+        if (mediumKeywords.some((keyword) => context.includes(keyword))) {
+            return "medium"
+        }
+
+        const lowKeywords = DETECTION_PATTERNS.UI_KEYWORDS
+
+        if (lowKeywords.some((keyword) => context.includes(keyword))) {
+            return "low"
+        }
+
+        return "medium"
+    }
+
+    /**
+     * Checks if this violation should be skipped based on layer strictness
+     *
+     * Different layers have different tolerance levels:
+     * - domain: strictest (no hardcoded values allowed)
+     * - application: strict (only low importance allowed)
+     * - infrastructure: moderate (low and some medium allowed)
+     * - cli: lenient (UI constants allowed)
+     */
+    public shouldSkip(layer?: string): boolean {
+        if (!layer) {
+            return false
+        }
+
+        const importance = this.getImportance()
+
+        if (layer === "domain") {
+            return false
+        }
+
+        if (layer === "application") {
+            return false
+        }
+
+        if (layer === "infrastructure") {
+            return importance === "low" && this.isUIConstant()
+        }
+
+        if (layer === "cli") {
+            return importance === "low" && this.isUIConstant()
+        }
+
+        return false
+    }
+
+    /**
+     * Checks if this value is a UI-related constant
+     */
+    private isUIConstant(): boolean {
+        const context = this.props.context.toLowerCase()
+
+        const uiKeywords = DETECTION_PATTERNS.UI_KEYWORDS
+
+        return uiKeywords.some((keyword) => context.includes(keyword))
+    }
 }

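The layer-strictness rule added in `shouldSkip` above can be sketched standalone: domain and application layers never skip, while infrastructure and cli skip only low-importance UI constants. This is a minimal illustration with plain parameters, not the `HardcodedValue` method itself:

```typescript
// Hedged sketch of the layer-tolerance logic described in the shouldSkip docblock.
type Importance = "critical" | "high" | "medium" | "low"

function shouldSkip(
    layer: string | undefined,
    importance: Importance,
    isUIConstant: boolean,
): boolean {
    // Unknown, domain, and application layers: report every violation.
    if (!layer || layer === "domain" || layer === "application") {
        return false
    }
    // Outer layers tolerate low-importance UI constants (colors, labels, etc.).
    if (layer === "infrastructure" || layer === "cli") {
        return importance === "low" && isUIConstant
    }
    return false
}
```

The asymmetry is deliberate: the closer a file sits to the domain core, the less tolerance Guardian shows for hardcoded values.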
204
packages/guardian/src/domain/value-objects/SecretViolation.ts
Normal file
204
packages/guardian/src/domain/value-objects/SecretViolation.ts
Normal file
@@ -0,0 +1,204 @@
import { ValueObject } from "./ValueObject"
import { SECRET_VIOLATION_MESSAGES } from "../constants/Messages"
import { SEVERITY_LEVELS } from "../../shared/constants"
import { FILE_ENCODING, SECRET_EXAMPLE_VALUES, SECRET_KEYWORDS } from "../constants/SecretExamples"

interface SecretViolationProps {
    readonly file: string
    readonly line: number
    readonly column: number
    readonly secretType: string
    readonly matchedPattern: string
}

/**
 * Represents a secret exposure violation in the codebase
 *
 * Secret violations occur when sensitive data like API keys, tokens, passwords,
 * or credentials are hardcoded in the source code instead of being stored
 * in secure environment variables or secret management systems.
 *
 * All secret violations are marked as CRITICAL severity because they represent
 * serious security risks that could lead to unauthorized access, data breaches,
 * or service compromise.
 *
 * @example
 * ```typescript
 * const violation = SecretViolation.create(
 *     'src/config/aws.ts',
 *     10,
 *     15,
 *     'AWS Access Key',
 *     'AKIA1234567890ABCDEF'
 * )
 *
 * console.log(violation.getMessage())
 * // "Hardcoded AWS Access Key detected"
 *
 * console.log(violation.getSeverity())
 * // "critical"
 * ```
 */
export class SecretViolation extends ValueObject<SecretViolationProps> {
    private constructor(props: SecretViolationProps) {
        super(props)
    }

    public static create(
        file: string,
        line: number,
        column: number,
        secretType: string,
        matchedPattern: string,
    ): SecretViolation {
        return new SecretViolation({
            file,
            line,
            column,
            secretType,
            matchedPattern,
        })
    }

    public get file(): string {
        return this.props.file
    }

    public get line(): number {
        return this.props.line
    }

    public get column(): number {
        return this.props.column
    }

    public get secretType(): string {
        return this.props.secretType
    }

    public get matchedPattern(): string {
        return this.props.matchedPattern
    }

    public getMessage(): string {
        return `Hardcoded ${this.props.secretType} detected`
    }

    public getSuggestion(): string {
        const suggestions: string[] = [
            SECRET_VIOLATION_MESSAGES.USE_ENV_VARIABLES,
            SECRET_VIOLATION_MESSAGES.USE_SECRET_MANAGER,
            SECRET_VIOLATION_MESSAGES.NEVER_COMMIT_SECRETS,
            SECRET_VIOLATION_MESSAGES.ROTATE_IF_EXPOSED,
            SECRET_VIOLATION_MESSAGES.USE_GITIGNORE,
        ]

        return suggestions.join("\n")
    }

    public getExampleFix(): string {
        return this.getExampleFixForSecretType(this.props.secretType)
    }

    public getSeverity(): typeof SEVERITY_LEVELS.CRITICAL {
        return SEVERITY_LEVELS.CRITICAL
    }

    private getExampleFixForSecretType(secretType: string): string {
        const lowerType = secretType.toLowerCase()

        if (lowerType.includes(SECRET_KEYWORDS.AWS)) {
            return `
// ❌ Bad: Hardcoded AWS credentials
const AWS_ACCESS_KEY_ID = "${SECRET_EXAMPLE_VALUES.AWS_ACCESS_KEY_ID}"
const AWS_SECRET_ACCESS_KEY = "${SECRET_EXAMPLE_VALUES.AWS_SECRET_ACCESS_KEY}"

// ✅ Good: Use environment variables
const AWS_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID
const AWS_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY

// ✅ Good: Use credentials provider (in infrastructure layer)
// Load credentials from environment or credentials file`
        }

        if (lowerType.includes(SECRET_KEYWORDS.GITHUB)) {
            return `
// ❌ Bad: Hardcoded GitHub token
const GITHUB_TOKEN = "${SECRET_EXAMPLE_VALUES.GITHUB_TOKEN}"

// ✅ Good: Use environment variables
const GITHUB_TOKEN = process.env.GITHUB_TOKEN

// ✅ Good: GitHub Apps with temporary tokens
// Use GitHub Apps for automated workflows instead of personal access tokens`
        }

        if (lowerType.includes(SECRET_KEYWORDS.NPM)) {
            return `
// ❌ Bad: Hardcoded NPM token in code
const NPM_TOKEN = "${SECRET_EXAMPLE_VALUES.NPM_TOKEN}"

// ✅ Good: Use .npmrc file (add to .gitignore)
// .npmrc
//registry.npmjs.org/:_authToken=\${NPM_TOKEN}

// ✅ Good: Use environment variable
const NPM_TOKEN = process.env.NPM_TOKEN`
        }

        if (
            lowerType.includes(SECRET_KEYWORDS.SSH) ||
            lowerType.includes(SECRET_KEYWORDS.PRIVATE_KEY)
        ) {
            return `
// ❌ Bad: Hardcoded SSH private key
const privateKey = \`-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...\`

// ✅ Good: Load from secure file (not in repository)
import fs from "fs"
const privateKey = fs.readFileSync(process.env.SSH_KEY_PATH, "${FILE_ENCODING.UTF8}")

// ✅ Good: Use SSH agent
// Configure SSH agent to handle keys securely`
        }

        if (lowerType.includes(SECRET_KEYWORDS.SLACK)) {
            return `
// ❌ Bad: Hardcoded Slack token
const SLACK_TOKEN = "${SECRET_EXAMPLE_VALUES.SLACK_TOKEN}"

// ✅ Good: Use environment variables
const SLACK_TOKEN = process.env.SLACK_BOT_TOKEN

// ✅ Good: Use OAuth flow for user tokens
// Implement OAuth 2.0 flow instead of hardcoding tokens`
        }

        if (
            lowerType.includes(SECRET_KEYWORDS.API_KEY) ||
            lowerType.includes(SECRET_KEYWORDS.APIKEY)
        ) {
            return `
// ❌ Bad: Hardcoded API key
const API_KEY = "${SECRET_EXAMPLE_VALUES.API_KEY}"

// ✅ Good: Use environment variables
const API_KEY = process.env.API_KEY

// ✅ Good: Use secret management service (in infrastructure layer)
// AWS Secrets Manager, HashiCorp Vault, Azure Key Vault
// Implement secret retrieval in infrastructure and inject via DI`
        }

        return `
// ❌ Bad: Hardcoded secret
const SECRET = "${SECRET_EXAMPLE_VALUES.HARDCODED_SECRET}"

// ✅ Good: Use environment variables
const SECRET = process.env.SECRET_KEY

// ✅ Good: Use secret management
// AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, etc.`
    }
}
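The example fixes above cover remediation; the detection side of such a rule can be sketched as a small line scanner. This is a minimal illustration under stated assumptions: the `AKIA…` regex and the `scanForAwsKeys` name are stand-ins for demonstration, not the pattern set Guardian actually ships.

```typescript
// Illustrative sketch: scan source text line by line for AWS-style access
// key IDs (the pattern here is an assumption, not Guardian's real rule set).
const AWS_KEY_PATTERN = /AKIA[0-9A-Z]{16}/g

interface SecretMatch {
    line: number
    column: number
    matched: string
}

function scanForAwsKeys(source: string): SecretMatch[] {
    const matches: SecretMatch[] = []
    source.split("\n").forEach((text, index) => {
        // matchAll with a /g regex yields every occurrence with its offset
        for (const m of text.matchAll(AWS_KEY_PATTERN)) {
            matches.push({
                line: index + 1,
                column: (m.index ?? 0) + 1,
                matched: m[0],
            })
        }
    })
    return matches
}
```

Each match carries the 1-based line and column, which is the same shape of data `SecretViolation.create` expects.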
@@ -1,8 +1,9 @@
import { IAggregateBoundaryDetector } from "../../domain/services/IAggregateBoundaryDetector"
import { AggregateBoundaryViolation } from "../../domain/value-objects/AggregateBoundaryViolation"
import { LAYERS } from "../../shared/constants/rules"
import { IMPORT_PATTERNS } from "../constants/paths"
import { DDD_FOLDER_NAMES } from "../constants/detectorPatterns"
import { AggregatePathAnalyzer } from "../strategies/AggregatePathAnalyzer"
import { FolderRegistry } from "../strategies/FolderRegistry"
import { ImportValidator } from "../strategies/ImportValidator"

/**
 * Detects aggregate boundary violations in Domain-Driven Design
@@ -38,38 +39,15 @@ import { DDD_FOLDER_NAMES } from "../constants/detectorPatterns"
 * ```
 */
export class AggregateBoundaryDetector implements IAggregateBoundaryDetector {
    private readonly entityFolderNames = new Set<string>([
        DDD_FOLDER_NAMES.ENTITIES,
        DDD_FOLDER_NAMES.AGGREGATES,
    ])
    private readonly valueObjectFolderNames = new Set<string>([
        DDD_FOLDER_NAMES.VALUE_OBJECTS,
        DDD_FOLDER_NAMES.VO,
    ])
    private readonly allowedFolderNames = new Set<string>([
        DDD_FOLDER_NAMES.VALUE_OBJECTS,
        DDD_FOLDER_NAMES.VO,
        DDD_FOLDER_NAMES.EVENTS,
        DDD_FOLDER_NAMES.DOMAIN_EVENTS,
        DDD_FOLDER_NAMES.REPOSITORIES,
        DDD_FOLDER_NAMES.SERVICES,
        DDD_FOLDER_NAMES.SPECIFICATIONS,
    ])
    private readonly nonAggregateFolderNames = new Set<string>([
        DDD_FOLDER_NAMES.VALUE_OBJECTS,
        DDD_FOLDER_NAMES.VO,
        DDD_FOLDER_NAMES.EVENTS,
        DDD_FOLDER_NAMES.DOMAIN_EVENTS,
        DDD_FOLDER_NAMES.REPOSITORIES,
        DDD_FOLDER_NAMES.SERVICES,
        DDD_FOLDER_NAMES.SPECIFICATIONS,
        DDD_FOLDER_NAMES.ENTITIES,
        DDD_FOLDER_NAMES.CONSTANTS,
        DDD_FOLDER_NAMES.SHARED,
        DDD_FOLDER_NAMES.FACTORIES,
        DDD_FOLDER_NAMES.PORTS,
        DDD_FOLDER_NAMES.INTERFACES,
    ])
    private readonly folderRegistry: FolderRegistry
    private readonly pathAnalyzer: AggregatePathAnalyzer
    private readonly importValidator: ImportValidator

    constructor() {
        this.folderRegistry = new FolderRegistry()
        this.pathAnalyzer = new AggregatePathAnalyzer(this.folderRegistry)
        this.importValidator = new ImportValidator(this.folderRegistry, this.pathAnalyzer)
    }

    /**
     * Detects aggregate boundary violations in the given code
@@ -91,41 +69,12 @@ export class AggregateBoundaryDetector implements IAggregateBoundaryDetector {
            return []
        }

        const currentAggregate = this.extractAggregateFromPath(filePath)
        const currentAggregate = this.pathAnalyzer.extractAggregateFromPath(filePath)
        if (!currentAggregate) {
            return []
        }

        const violations: AggregateBoundaryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            const imports = this.extractImports(line)
            for (const importPath of imports) {
                if (this.isAggregateBoundaryViolation(importPath, currentAggregate)) {
                    const targetAggregate = this.extractAggregateFromImport(importPath)
                    const entityName = this.extractEntityName(importPath)

                    if (targetAggregate && entityName) {
                        violations.push(
                            AggregateBoundaryViolation.create(
                                currentAggregate,
                                targetAggregate,
                                entityName,
                                importPath,
                                filePath,
                                lineNumber,
                            ),
                        )
                    }
                }
            }
        }

        return violations
        return this.analyzeImports(code, filePath, currentAggregate)
    }

    /**
@@ -140,37 +89,7 @@ export class AggregateBoundaryDetector implements IAggregateBoundaryDetector {
     * @returns The aggregate name if found, undefined otherwise
     */
    public extractAggregateFromPath(filePath: string): string | undefined {
        const normalizedPath = filePath.toLowerCase().replace(/\\/g, "/")

        const domainMatch = /(?:^|\/)(domain)\//.exec(normalizedPath)
        if (!domainMatch) {
            return undefined
        }

        const domainEndIndex = domainMatch.index + domainMatch[0].length
        const pathAfterDomain = normalizedPath.substring(domainEndIndex)
        const segments = pathAfterDomain.split("/").filter(Boolean)

        if (segments.length < 2) {
            return undefined
        }

        if (this.entityFolderNames.has(segments[0])) {
            if (segments.length < 3) {
                return undefined
            }
            const aggregate = segments[1]
            if (this.nonAggregateFolderNames.has(aggregate)) {
                return undefined
            }
            return aggregate
        }

        const aggregate = segments[0]
        if (this.nonAggregateFolderNames.has(aggregate)) {
            return undefined
        }
        return aggregate
        return this.pathAnalyzer.extractAggregateFromPath(filePath)
    }

    /**
@@ -181,162 +100,68 @@
     * @returns True if the import crosses aggregate boundaries inappropriately
     */
    public isAggregateBoundaryViolation(importPath: string, currentAggregate: string): boolean {
        const normalizedPath = importPath.replace(IMPORT_PATTERNS.QUOTE, "").toLowerCase()

        if (!normalizedPath.includes("/")) {
            return false
        }

        if (!normalizedPath.startsWith(".") && !normalizedPath.startsWith("/")) {
            return false
        }

        const targetAggregate = this.extractAggregateFromImport(normalizedPath)
        if (!targetAggregate || targetAggregate === currentAggregate) {
            return false
        }

        if (this.isAllowedImport(normalizedPath)) {
            return false
        }

        return this.seemsLikeEntityImport(normalizedPath)
        return this.importValidator.isViolation(importPath, currentAggregate)
    }

    /**
     * Checks if the import path is from an allowed folder (value-objects, events, etc.)
     * Analyzes all imports in code and detects violations
     */
    private isAllowedImport(normalizedPath: string): boolean {
        for (const folderName of this.allowedFolderNames) {
            if (normalizedPath.includes(`/${folderName}/`)) {
                return true
            }
        }
        return false
    }
    private analyzeImports(
        code: string,
        filePath: string,
        currentAggregate: string,
    ): AggregateBoundaryViolation[] {
        const violations: AggregateBoundaryViolation[] = []
        const lines = code.split("\n")

    /**
     * Checks if the import seems to be an entity (not a value object, event, etc.)
     *
     * Note: normalizedPath is already lowercased, so we check if the first character
     * is a letter (indicating it was likely PascalCase originally)
     */
    private seemsLikeEntityImport(normalizedPath: string): boolean {
        const pathParts = normalizedPath.split("/")
        const lastPart = pathParts[pathParts.length - 1]
        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

        if (!lastPart) {
            return false
        }

        const filename = lastPart.replace(/\.(ts|js)$/, "")

        if (filename.length > 0 && /^[a-z][a-z]/.exec(filename)) {
            return true
        }

        return false
    }

    /**
     * Extracts the aggregate name from an import path
     *
     * Handles both absolute and relative paths:
     * - ../user/User → user
     * - ../../domain/user/User → user
     * - ../user/value-objects/UserId → user (but filtered as value object)
     */
    private extractAggregateFromImport(importPath: string): string | undefined {
        const normalizedPath = importPath.replace(IMPORT_PATTERNS.QUOTE, "").toLowerCase()

        const segments = normalizedPath.split("/").filter((seg) => seg !== ".." && seg !== ".")

        if (segments.length === 0) {
            return undefined
        }

        for (let i = 0; i < segments.length; i++) {
            if (
                segments[i] === DDD_FOLDER_NAMES.DOMAIN ||
                segments[i] === DDD_FOLDER_NAMES.AGGREGATES
            ) {
                if (i + 1 < segments.length) {
                    if (
                        this.entityFolderNames.has(segments[i + 1]) ||
                        segments[i + 1] === DDD_FOLDER_NAMES.AGGREGATES
                    ) {
                        if (i + 2 < segments.length) {
                            return segments[i + 2]
                        }
                    } else {
                        return segments[i + 1]
                    }
            const imports = this.importValidator.extractImports(line)
            for (const importPath of imports) {
                const violation = this.checkImport(
                    importPath,
                    currentAggregate,
                    filePath,
                    lineNumber,
                )
                if (violation) {
                    violations.push(violation)
                }
            }
        }

        if (segments.length >= 2) {
            const secondLastSegment = segments[segments.length - 2]
        return violations
    }

            if (
                !this.entityFolderNames.has(secondLastSegment) &&
                !this.valueObjectFolderNames.has(secondLastSegment) &&
                !this.allowedFolderNames.has(secondLastSegment) &&
                secondLastSegment !== DDD_FOLDER_NAMES.DOMAIN
            ) {
                return secondLastSegment
            }
        }

        if (segments.length === 1) {
    /**
     * Checks a single import for boundary violations
     */
    private checkImport(
        importPath: string,
        currentAggregate: string,
        filePath: string,
        lineNumber: number,
    ): AggregateBoundaryViolation | undefined {
        if (!this.importValidator.isViolation(importPath, currentAggregate)) {
            return undefined
        }

        return undefined
    }
        const targetAggregate = this.pathAnalyzer.extractAggregateFromImport(importPath)
        const entityName = this.pathAnalyzer.extractEntityName(importPath)

    /**
     * Extracts the entity name from an import path
     */
    private extractEntityName(importPath: string): string | undefined {
        const normalizedPath = importPath.replace(IMPORT_PATTERNS.QUOTE, "")
        const segments = normalizedPath.split("/")
        const lastSegment = segments[segments.length - 1]

        if (lastSegment) {
            return lastSegment.replace(/\.(ts|js)$/, "")
        if (targetAggregate && entityName) {
            return AggregateBoundaryViolation.create(
                currentAggregate,
                targetAggregate,
                entityName,
                importPath,
                filePath,
                lineNumber,
            )
        }

        return undefined
    }

    /**
     * Extracts import paths from a line of code
     *
     * Handles various import statement formats:
     * - import { X } from 'path'
     * - import X from 'path'
     * - import * as X from 'path'
     * - const X = require('path')
     *
     * @param line - A line of code to analyze
     * @returns Array of import paths found in the line
     */
    private extractImports(line: string): string[] {
        const imports: string[] = []

        let match = IMPORT_PATTERNS.ES_IMPORT.exec(line)
        while (match) {
            imports.push(match[1])
            match = IMPORT_PATTERNS.ES_IMPORT.exec(line)
        }

        match = IMPORT_PATTERNS.REQUIRE.exec(line)
        while (match) {
            imports.push(match[1])
            match = IMPORT_PATTERNS.REQUIRE.exec(line)
        }

        return imports
    }
}
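The path-based aggregate lookup in the diff above (find the `domain/` segment, optionally skip an `entities/` or `aggregates/` folder, reject non-aggregate folders) can be sketched as a standalone function. This is a hedged reduction for illustration: the folder-name sets and the `aggregateFromPath` name are assumptions, not the `AggregatePathAnalyzer` API.

```typescript
// Illustrative folder classifications (assumed, simplified).
const NON_AGGREGATE_FOLDERS = new Set(["value-objects", "events", "repositories", "services"])
const ENTITY_FOLDERS = new Set(["entities", "aggregates"])

function aggregateFromPath(filePath: string): string | undefined {
    const normalized = filePath.toLowerCase().replace(/\\/g, "/")
    const match = /(?:^|\/)domain\//.exec(normalized)
    if (!match) return undefined
    const segments = normalized
        .slice(match.index + match[0].length)
        .split("/")
        .filter(Boolean)
    // An aggregate needs at least "<aggregate>/<File>.ts" under domain/
    if (segments.length < 2) return undefined
    let aggregate = segments[0]
    if (ENTITY_FOLDERS.has(segments[0])) {
        // domain/entities/<aggregate>/<File>.ts form: skip the wrapper folder
        if (segments.length < 3) return undefined
        aggregate = segments[1]
    }
    return NON_AGGREGATE_FOLDERS.has(aggregate) ? undefined : aggregate
}
```

So `src/domain/order/Order.ts` and `src/domain/entities/order/Order.ts` both resolve to `order`, while `src/domain/value-objects/Money.ts` resolves to no aggregate at all.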
@@ -0,0 +1,318 @@
import { IAnemicModelDetector } from "../../domain/services/IAnemicModelDetector"
import { AnemicModelViolation } from "../../domain/value-objects/AnemicModelViolation"
import { CLASS_KEYWORDS } from "../../shared/constants"
import { ANALYZER_DEFAULTS, ANEMIC_MODEL_FLAGS, LAYERS } from "../../shared/constants/rules"

/**
 * Detects anemic domain model violations
 *
 * This detector identifies entities that lack business logic and contain
 * only getters/setters. Anemic models violate Domain-Driven Design principles.
 *
 * @example
 * ```typescript
 * const detector = new AnemicModelDetector()
 *
 * // Detect anemic models in entity file
 * const code = `
 * class Order {
 *     getStatus() { return this.status }
 *     setStatus(status: string) { this.status = status }
 *     getTotal() { return this.total }
 *     setTotal(total: number) { this.total = total }
 * }
 * `
 * const violations = detector.detectAnemicModels(
 *     code,
 *     'src/domain/entities/Order.ts',
 *     'domain'
 * )
 *
 * // violations will contain anemic model violation
 * console.log(violations.length) // 1
 * console.log(violations[0].className) // 'Order'
 * ```
 */
export class AnemicModelDetector implements IAnemicModelDetector {
    private readonly entityPatterns = [/\/entities\//, /\/aggregates\//]
    private readonly excludePatterns = [
        /\.test\.ts$/,
        /\.spec\.ts$/,
        /Dto\.ts$/,
        /Request\.ts$/,
        /Response\.ts$/,
        /Mapper\.ts$/,
    ]

    /**
     * Detects anemic model violations in the given code
     */
    public detectAnemicModels(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): AnemicModelViolation[] {
        if (!this.shouldAnalyze(filePath, layer)) {
            return []
        }

        const violations: AnemicModelViolation[] = []
        const classes = this.extractClasses(code)

        for (const classInfo of classes) {
            const violation = this.analyzeClass(classInfo, filePath, layer || LAYERS.DOMAIN)
            if (violation) {
                violations.push(violation)
            }
        }

        return violations
    }

    /**
     * Checks if file should be analyzed
     */
    private shouldAnalyze(filePath: string, layer: string | undefined): boolean {
        if (layer !== LAYERS.DOMAIN) {
            return false
        }

        if (this.excludePatterns.some((pattern) => pattern.test(filePath))) {
            return false
        }

        return this.entityPatterns.some((pattern) => pattern.test(filePath))
    }

    /**
     * Extracts class information from code
     */
    private extractClasses(code: string): ClassInfo[] {
        const classes: ClassInfo[] = []
        const lines = code.split("\n")
        let currentClass: { name: string; startLine: number; startIndex: number } | null = null
        let braceCount = 0
        let classBody = ""

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]

            if (!currentClass) {
                const classRegex = /^\s*(?:export\s+)?(?:abstract\s+)?class\s+(\w+)/
                const classMatch = classRegex.exec(line)
                if (classMatch) {
                    currentClass = {
                        name: classMatch[1],
                        startLine: i + 1,
                        startIndex: lines.slice(0, i).join("\n").length,
                    }
                    braceCount = 0
                    classBody = ""
                }
            }

            if (currentClass) {
                for (const char of line) {
                    if (char === "{") {
                        braceCount++
                    } else if (char === "}") {
                        braceCount--
                    }
                }

                if (braceCount > 0) {
                    classBody = `${classBody}${line}\n`
                } else if (braceCount === 0 && classBody.length > 0) {
                    const properties = this.extractProperties(classBody)
                    const methods = this.extractMethods(classBody)

                    classes.push({
                        className: currentClass.name,
                        lineNumber: currentClass.startLine,
                        properties,
                        methods,
                    })

                    currentClass = null
                    classBody = ""
                }
            }
        }

        return classes
    }

    /**
     * Extracts properties from class body
     */
    private extractProperties(classBody: string): PropertyInfo[] {
        const properties: PropertyInfo[] = []
        const propertyRegex = /(?:private|protected|public|readonly)*\s*(\w+)(?:\?)?:\s*\w+/g

        let match
        while ((match = propertyRegex.exec(classBody)) !== null) {
            const propertyName = match[1]

            if (!this.isMethodSignature(match[0])) {
                properties.push({ name: propertyName })
            }
        }

        return properties
    }

    /**
     * Extracts methods from class body
     */
    private extractMethods(classBody: string): MethodInfo[] {
        const methods: MethodInfo[] = []
        const methodRegex =
            /(public|private|protected)?\s*(get|set)?\s+(\w+)\s*\([^)]*\)(?:\s*:\s*\w+)?/g

        let match
        while ((match = methodRegex.exec(classBody)) !== null) {
            const visibility = match[1] || CLASS_KEYWORDS.PUBLIC
            const accessor = match[2]
            const methodName = match[3]

            if (methodName === CLASS_KEYWORDS.CONSTRUCTOR) {
                continue
            }

            const isGetter = accessor === "get" || this.isGetterMethod(methodName)
            const isSetter = accessor === "set" || this.isSetterMethod(methodName, classBody)
            const isPublic = visibility === CLASS_KEYWORDS.PUBLIC || !visibility

            methods.push({
                name: methodName,
                isGetter,
                isSetter,
                isPublic,
                isBusinessLogic: !isGetter && !isSetter,
            })
        }

        return methods
    }

    /**
     * Analyzes class for anemic model violations
     */
    private analyzeClass(
        classInfo: ClassInfo,
        filePath: string,
        layer: string,
    ): AnemicModelViolation | null {
        const { className, lineNumber, properties, methods } = classInfo

        if (properties.length === 0 && methods.length === 0) {
            return null
        }

        const businessMethods = methods.filter((m) => m.isBusinessLogic)
        const hasOnlyGettersSetters = businessMethods.length === 0 && methods.length > 0
        const hasPublicSetters = methods.some((m) => m.isSetter && m.isPublic)

        const methodCount = methods.length
        const propertyCount = properties.length

        if (hasPublicSetters) {
            return AnemicModelViolation.create(
                className,
                filePath,
                layer,
                lineNumber,
                methodCount,
                propertyCount,
                ANEMIC_MODEL_FLAGS.HAS_ONLY_GETTERS_SETTERS_FALSE,
                ANEMIC_MODEL_FLAGS.HAS_PUBLIC_SETTERS_TRUE,
            )
        }

        if (hasOnlyGettersSetters && methodCount >= 2 && propertyCount > 0) {
            return AnemicModelViolation.create(
                className,
                filePath,
                layer,
                lineNumber,
                methodCount,
                propertyCount,
                ANEMIC_MODEL_FLAGS.HAS_ONLY_GETTERS_SETTERS_TRUE,
                ANEMIC_MODEL_FLAGS.HAS_PUBLIC_SETTERS_FALSE,
            )
        }

        const methodToPropertyRatio = methodCount / Math.max(propertyCount, 1)
        if (
            propertyCount > 0 &&
            businessMethods.length < 2 &&
            methodToPropertyRatio < 1.0 &&
            methodCount > 0
        ) {
            return AnemicModelViolation.create(
                className,
                filePath,
                layer,
                lineNumber,
                methodCount,
                propertyCount,
                ANALYZER_DEFAULTS.HAS_ONLY_GETTERS_SETTERS,
                ANALYZER_DEFAULTS.HAS_PUBLIC_SETTERS,
            )
        }

        return null
    }

    /**
     * Checks if method name is a getter pattern
     */
    private isGetterMethod(methodName: string): boolean {
        return (
            methodName.startsWith("get") ||
            methodName.startsWith("is") ||
            methodName.startsWith("has")
        )
    }

    /**
     * Checks if method is a setter pattern
     */
    private isSetterMethod(methodName: string, _classBody: string): boolean {
        return methodName.startsWith("set")
    }

    /**
     * Checks if property declaration is actually a method signature
     */
    private isMethodSignature(propertyDeclaration: string): boolean {
        return propertyDeclaration.includes("(") && propertyDeclaration.includes(")")
    }

    /**
     * Gets line number for a position in code
     */
    private getLineNumber(code: string, position: number): number {
        const lines = code.substring(0, position).split("\n")
        return lines.length
    }
}

interface ClassInfo {
    className: string
    lineNumber: number
    properties: PropertyInfo[]
    methods: MethodInfo[]
}

interface PropertyInfo {
    name: string
}

interface MethodInfo {
    name: string
    isGetter: boolean
    isSetter: boolean
    isPublic: boolean
    isBusinessLogic: boolean
}
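The core of the detector above is a ratio check between accessor methods and business methods. That heuristic can be sketched in isolation on pre-extracted method names; the `looksAnemic` name and the uppercase-letter refinement in the regex are illustrative assumptions, not the detector's exact implementation.

```typescript
// Hedged sketch of the getter/setter heuristic: a class with properties,
// two or more methods, and zero non-accessor methods looks anemic.
function looksAnemic(methodNames: string[], propertyCount: number): boolean {
    // Accessor naming patterns: getX/setX/isX/hasX (uppercase check is an
    // added assumption to avoid matching names like "settle" or "issue").
    const accessors = methodNames.filter((name) => /^(get|set|is|has)[A-Z]/.test(name))
    const businessMethodCount = methodNames.length - accessors.length
    return businessMethodCount === 0 && methodNames.length >= 2 && propertyCount > 0
}
```

A class exposing only `getStatus`/`setStatus`-style pairs trips the check; adding one behavior method such as `approve` clears it.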
@@ -0,0 +1,104 @@
import Parser from "tree-sitter"
import { HardcodedValue } from "../../domain/value-objects/HardcodedValue"
import { AstBooleanAnalyzer } from "../strategies/AstBooleanAnalyzer"
import { AstConfigObjectAnalyzer } from "../strategies/AstConfigObjectAnalyzer"
import { AstNumberAnalyzer } from "../strategies/AstNumberAnalyzer"
import { AstStringAnalyzer } from "../strategies/AstStringAnalyzer"

/**
 * AST tree traverser for detecting hardcoded values
 *
 * Walks through the Abstract Syntax Tree and uses analyzers
 * to detect hardcoded numbers, strings, booleans, and configuration objects.
 * Also tracks value usage to identify "almost constants" - values used 2+ times.
 */
export class AstTreeTraverser {
    constructor(
        private readonly numberAnalyzer: AstNumberAnalyzer,
        private readonly stringAnalyzer: AstStringAnalyzer,
        private readonly booleanAnalyzer: AstBooleanAnalyzer,
        private readonly configObjectAnalyzer: AstConfigObjectAnalyzer,
    ) {}

    /**
     * Traverses the AST tree and collects hardcoded values
     */
    public traverse(tree: Parser.Tree, sourceCode: string): HardcodedValue[] {
        const results: HardcodedValue[] = []
        const lines = sourceCode.split("\n")
        const cursor = tree.walk()

        this.visit(cursor, lines, results)

        this.markAlmostConstants(results)

        return results
    }

    /**
     * Marks values that appear multiple times in the same file
     */
    private markAlmostConstants(results: HardcodedValue[]): void {
        const valueUsage = new Map<string, number>()

        for (const result of results) {
            const key = `${result.type}:${String(result.value)}`
            valueUsage.set(key, (valueUsage.get(key) || 0) + 1)
        }

        for (let i = 0; i < results.length; i++) {
            const result = results[i]
            const key = `${result.type}:${String(result.value)}`
            const count = valueUsage.get(key) || 0

            if (count >= 2 && !result.withinFileUsageCount) {
                results[i] = HardcodedValue.create(
                    result.value,
                    result.type,
                    result.line,
                    result.column,
                    result.context,
                    result.valueType,
                    result.duplicateLocations,
                    count,
                )
            }
        }
    }

    /**
     * Recursively visits AST nodes
     */
    private visit(cursor: Parser.TreeCursor, lines: string[], results: HardcodedValue[]): void {
        const node = cursor.currentNode

        if (node.type === "object") {
            const violation = this.configObjectAnalyzer.analyze(node, lines)
            if (violation) {
                results.push(violation)
            }
        } else if (node.type === "number") {
            const violation = this.numberAnalyzer.analyze(node, lines)
            if (violation) {
                results.push(violation)
            }
        } else if (node.type === "string") {
            const violation = this.stringAnalyzer.analyze(node, lines)
            if (violation) {
                results.push(violation)
            }
        } else if (node.type === "true" || node.type === "false") {
            const violation = this.booleanAnalyzer.analyze(node, lines)
            if (violation) {
                results.push(violation)
            }
        }

        if (cursor.gotoFirstChild()) {
            do {
                this.visit(cursor, lines, results)
            } while (cursor.gotoNextSibling())
            cursor.gotoParent()
        }
    }
}
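The "almost constants" pass above boils down to counting `(type, value)` pairs and flagging those seen twice or more. A standalone sketch of that counting step, with assumed names and shapes (not the traverser's actual types):

```typescript
// Minimal illustration of the almost-constants pass: tally occurrences of
// each type:value key and keep only those that occur at least twice.
interface FoundValue {
    type: string
    value: string | number | boolean
}

function almostConstants(found: FoundValue[]): Map<string, number> {
    const counts = new Map<string, number>()
    for (const f of found) {
        const key = `${f.type}:${String(f.value)}`
        counts.set(key, (counts.get(key) ?? 0) + 1)
    }
    // Deleting already-visited entries during Map iteration is safe in JS
    for (const [key, count] of counts) {
        if (count < 2) counts.delete(key)
    }
    return counts
}
```

The real traverser then rebuilds each flagged `HardcodedValue` with its usage count attached rather than returning a map.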
@@ -0,0 +1,122 @@
import { HardcodedValue } from "../../domain/value-objects/HardcodedValue"
import type {
    DuplicateInfo,
    IDuplicateValueTracker,
    ValueLocation,
} from "../../domain/services/IDuplicateValueTracker"

/**
 * Tracks duplicate hardcoded values across files
 *
 * Helps identify values that are used in multiple places
 * and should be extracted to a shared constant.
 */
export class DuplicateValueTracker implements IDuplicateValueTracker {
    private readonly valueMap = new Map<string, ValueLocation[]>()

    /**
     * Adds a hardcoded value to tracking
     */
    public track(violation: HardcodedValue, filePath: string): void {
        const key = this.createKey(violation.value, violation.type)
        const location: ValueLocation = {
            file: filePath,
            line: violation.line,
            context: violation.context,
        }

        const locations = this.valueMap.get(key)
        if (!locations) {
            this.valueMap.set(key, [location])
        } else {
            locations.push(location)
        }
    }

    /**
     * Gets all duplicate values (values used in 2+ places)
     */
    public getDuplicates(): DuplicateInfo[] {
        const duplicates: DuplicateInfo[] = []

        for (const [key, locations] of this.valueMap.entries()) {
            if (locations.length >= 2) {
                const { value } = this.parseKey(key)
                duplicates.push({
                    value,
                    locations,
                    count: locations.length,
                })
            }
        }

        return duplicates.sort((a, b) => b.count - a.count)
    }

    /**
     * Gets duplicate locations for a specific value
     */
    public getDuplicateLocations(
        value: string | number | boolean,
        type: string,
    ): ValueLocation[] | null {
        const key = this.createKey(value, type)
        const locations = this.valueMap.get(key)

        if (!locations || locations.length < 2) {
            return null
        }

        return locations
    }

    /**
     * Checks if a value is duplicated
     */
    public isDuplicate(value: string | number | boolean, type: string): boolean {
        const key = this.createKey(value, type)
        const locations = this.valueMap.get(key)
        return locations ? locations.length >= 2 : false
    }

    /**
     * Creates a unique key for a value
     */
    private createKey(value: string | number | boolean, type: string): string {
        return `${type}:${String(value)}`
    }

    /**
     * Parses a key back to value and type
     */
    private parseKey(key: string): { value: string; type: string } {
        const [type, ...valueParts] = key.split(":")
        return { value: valueParts.join(":"), type }
    }

    /**
     * Gets statistics about duplicates
     */
    public getStats(): {
        totalValues: number
        duplicateValues: number
        duplicatePercentage: number
    } {
        const totalValues = this.valueMap.size
        const duplicateValues = this.getDuplicates().length
        const duplicatePercentage = totalValues > 0 ? (duplicateValues / totalValues) * 100 : 0

        return {
            totalValues,
            duplicateValues,
            duplicatePercentage,
        }
    }

    /**
     * Clears all tracked values
     */
    public clear(): void {
        this.valueMap.clear()
    }
}
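The tracker's core idea — keying each value by `type:value` so equal literals of different kinds don't collide — can be sketched standalone. This is a hypothetical minimal version, independent of Guardian's domain types (`MiniDuplicateTracker` and `Loc` are illustrative names, not part of the package):

```typescript
// Minimal, hypothetical sketch of DuplicateValueTracker's key-based idea.
// Guardian's real class additionally records context and exposes stats.
type Loc = { file: string; line: number }

class MiniDuplicateTracker {
    private readonly map = new Map<string, Loc[]>()

    // The key combines type and value, so the number 3000 and the
    // string "3000" are tracked separately.
    track(value: string | number | boolean, type: string, loc: Loc): void {
        const key = `${type}:${String(value)}`
        const locations = this.map.get(key)
        if (locations) {
            locations.push(loc)
        } else {
            this.map.set(key, [loc])
        }
    }

    // A value counts as duplicated once it is seen in two or more places.
    isDuplicate(value: string | number | boolean, type: string): boolean {
        return (this.map.get(`${type}:${String(value)}`) ?? []).length >= 2
    }
}
```

Note the `parseKey` round-trip in the real class: because `String(value)` may itself contain `:` (URLs, for instance), it splits on the first `:` only and rejoins the rest.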
@@ -1,14 +1,28 @@
import Parser from "tree-sitter"
import { IHardcodeDetector } from "../../domain/services/IHardcodeDetector"
import { HardcodedValue } from "../../domain/value-objects/HardcodedValue"
import { ALLOWED_NUMBERS, CODE_PATTERNS, DETECTION_KEYWORDS } from "../constants/defaults"
import { HARDCODE_TYPES } from "../../shared/constants"
import { CodeParser } from "../parsers/CodeParser"
import { AstBooleanAnalyzer } from "../strategies/AstBooleanAnalyzer"
import { AstConfigObjectAnalyzer } from "../strategies/AstConfigObjectAnalyzer"
import { AstContextChecker } from "../strategies/AstContextChecker"
import { AstNumberAnalyzer } from "../strategies/AstNumberAnalyzer"
import { AstStringAnalyzer } from "../strategies/AstStringAnalyzer"
import { ConstantsFileChecker } from "../strategies/ConstantsFileChecker"
import { AstTreeTraverser } from "./AstTreeTraverser"

/**
 * Detects hardcoded values (magic numbers and strings) in TypeScript/JavaScript code
 *
 * This detector identifies configuration values, URLs, timeouts, ports, and other
 * constants that should be extracted to configuration files. It uses pattern matching
 * and context analysis to reduce false positives.
 * This detector uses Abstract Syntax Tree (AST) analysis via tree-sitter to identify
 * configuration values, URLs, timeouts, ports, and other constants that should be
 * extracted to configuration files. AST-based detection provides more accurate context
 * understanding and reduces false positives compared to regex-based approaches.
 *
 * The detector uses a modular architecture with specialized components:
 * - AstContextChecker: Checks if nodes are in specific contexts (exports, types, etc.)
 * - AstNumberAnalyzer: Analyzes number literals to detect magic numbers
 * - AstStringAnalyzer: Analyzes string literals to detect magic strings
 * - AstTreeTraverser: Traverses the AST and coordinates analyzers
 *
 * @example
 * ```typescript
@@ -22,9 +36,27 @@ import { HARDCODE_TYPES } from "../../shared/constants"
 * ```
 */
export class HardcodeDetector implements IHardcodeDetector {
    private readonly ALLOWED_NUMBERS = ALLOWED_NUMBERS
    private readonly constantsChecker: ConstantsFileChecker
    private readonly parser: CodeParser
    private readonly traverser: AstTreeTraverser

    private readonly ALLOWED_STRING_PATTERNS = [/^[a-z]$/i, /^\/$/, /^\\$/, /^\s+$/, /^,$/, /^\.$/]
    constructor() {
        this.constantsChecker = new ConstantsFileChecker()
        this.parser = new CodeParser()

        const contextChecker = new AstContextChecker()
        const numberAnalyzer = new AstNumberAnalyzer(contextChecker)
        const stringAnalyzer = new AstStringAnalyzer(contextChecker)
        const booleanAnalyzer = new AstBooleanAnalyzer(contextChecker)
        const configObjectAnalyzer = new AstConfigObjectAnalyzer(contextChecker)

        this.traverser = new AstTreeTraverser(
            numberAnalyzer,
            stringAnalyzer,
            booleanAnalyzer,
            configObjectAnalyzer,
        )
    }

    /**
     * Detects all hardcoded values (both numbers and strings) in the given code
@@ -34,358 +66,57 @@ export class HardcodeDetector implements IHardcodeDetector {
     * @returns Array of detected hardcoded values with suggestions
     */
    public detectAll(code: string, filePath: string): HardcodedValue[] {
        if (this.isConstantsFile(filePath)) {
        if (this.constantsChecker.isConstantsFile(filePath)) {
            return []
        }
        const magicNumbers = this.detectMagicNumbers(code, filePath)
        const magicStrings = this.detectMagicStrings(code, filePath)
        return [...magicNumbers, ...magicStrings]

        const tree = this.parseCode(code, filePath)
        return this.traverser.traverse(tree, code)
    }

    /**
     * Check if a file is a constants definition file
     */
    private isConstantsFile(filePath: string): boolean {
        const _fileName = filePath.split("/").pop() ?? ""
        const constantsPatterns = [
            /^constants?\.(ts|js)$/i,
            /constants?\/.*\.(ts|js)$/i,
            /\/(constants|config|settings|defaults)\.ts$/i,
        ]
        return constantsPatterns.some((pattern) => pattern.test(filePath))
    }

    /**
     * Check if a line is inside an exported constant definition
     */
    private isInExportedConstant(lines: string[], lineIndex: number): boolean {
        const currentLineTrimmed = lines[lineIndex].trim()

        if (this.isSingleLineExportConst(currentLineTrimmed)) {
            return true
        }

        const exportConstStart = this.findExportConstStart(lines, lineIndex)
        if (exportConstStart === -1) {
            return false
        }

        const { braces, brackets } = this.countUnclosedBraces(lines, exportConstStart, lineIndex)
        return braces > 0 || brackets > 0
    }

    /**
     * Check if a line is a single-line export const declaration
     */
    private isSingleLineExportConst(line: string): boolean {
        if (!line.startsWith(CODE_PATTERNS.EXPORT_CONST)) {
            return false
        }

        const hasObjectOrArray =
            line.includes(CODE_PATTERNS.OBJECT_START) || line.includes(CODE_PATTERNS.ARRAY_START)

        if (hasObjectOrArray) {
            const hasAsConstEnding =
                line.includes(CODE_PATTERNS.AS_CONST_OBJECT) ||
                line.includes(CODE_PATTERNS.AS_CONST_ARRAY) ||
                line.includes(CODE_PATTERNS.AS_CONST_END_SEMICOLON_OBJECT) ||
                line.includes(CODE_PATTERNS.AS_CONST_END_SEMICOLON_ARRAY)

            return hasAsConstEnding
        }

        return line.includes(CODE_PATTERNS.AS_CONST)
    }

    /**
     * Find the starting line of an export const declaration
     */
    private findExportConstStart(lines: string[], lineIndex: number): number {
        for (let currentLine = lineIndex; currentLine >= 0; currentLine--) {
            const trimmed = lines[currentLine].trim()

            const isExportConst =
                trimmed.startsWith(CODE_PATTERNS.EXPORT_CONST) &&
                (trimmed.includes(CODE_PATTERNS.OBJECT_START) ||
                    trimmed.includes(CODE_PATTERNS.ARRAY_START))

            if (isExportConst) {
                return currentLine
            }

            const isTopLevelStatement =
                currentLine < lineIndex &&
                (trimmed.startsWith(CODE_PATTERNS.EXPORT) ||
                    trimmed.startsWith(CODE_PATTERNS.IMPORT))

            if (isTopLevelStatement) {
                break
            }
        }

        return -1
    }

    /**
     * Count unclosed braces and brackets between two line indices
     */
    private countUnclosedBraces(
        lines: string[],
        startLine: number,
        endLine: number,
    ): { braces: number; brackets: number } {
        let braces = 0
        let brackets = 0

        for (let i = startLine; i <= endLine; i++) {
            const line = lines[i]
            let inString = false
            let stringChar = ""

            for (let j = 0; j < line.length; j++) {
                const char = line[j]
                const prevChar = j > 0 ? line[j - 1] : ""

                if ((char === "'" || char === '"' || char === "`") && prevChar !== "\\") {
                    if (!inString) {
                        inString = true
                        stringChar = char
                    } else if (char === stringChar) {
                        inString = false
                        stringChar = ""
                    }
                }

                if (!inString) {
                    if (char === "{") {
                        braces++
                    } else if (char === "}") {
                        braces--
                    } else if (char === "[") {
                        brackets++
                    } else if (char === "]") {
                        brackets--
                    }
                }
            }
        }

        return { braces, brackets }
    }

    /**
     * Detects magic numbers in code (timeouts, ports, limits, retries, etc.)
     *
     * Skips allowed numbers (-1, 0, 1, 2, 10, 100, 1000) and values in exported constants
     * Detects magic numbers in code
     *
     * @param code - Source code to analyze
     * @param _filePath - File path (currently unused, reserved for future use)
     * @param filePath - File path (used for constants file check)
     * @returns Array of detected magic numbers
     */
    public detectMagicNumbers(code: string, _filePath: string): HardcodedValue[] {
        const results: HardcodedValue[] = []
        const lines = code.split("\n")
    public detectMagicNumbers(code: string, filePath: string): HardcodedValue[] {
        if (this.constantsChecker.isConstantsFile(filePath)) {
            return []
        }

        const numberPatterns = [
            /(?:setTimeout|setInterval)\s*\(\s*[^,]+,\s*(\d+)/g,
            /(?:maxRetries|retries|attempts)\s*[=:]\s*(\d+)/gi,
            /(?:limit|max|min)\s*[=:]\s*(\d+)/gi,
            /(?:port|PORT)\s*[=:]\s*(\d+)/g,
            /(?:delay|timeout|TIMEOUT)\s*[=:]\s*(\d+)/gi,
        ]

        lines.forEach((line, lineIndex) => {
            if (line.trim().startsWith("//") || line.trim().startsWith("*")) {
                return
            }

            // Skip lines inside exported constants
            if (this.isInExportedConstant(lines, lineIndex)) {
                return
            }

            numberPatterns.forEach((pattern) => {
                let match
                const regex = new RegExp(pattern)

                while ((match = regex.exec(line)) !== null) {
                    const value = parseInt(match[1], 10)

                    if (!this.ALLOWED_NUMBERS.has(value)) {
                        results.push(
                            HardcodedValue.create(
                                value,
                                HARDCODE_TYPES.MAGIC_NUMBER,
                                lineIndex + 1,
                                match.index,
                                line.trim(),
                            ),
                        )
                    }
                }
            })

            const genericNumberRegex = /\b(\d{3,})\b/g
            let match

            while ((match = genericNumberRegex.exec(line)) !== null) {
                const value = parseInt(match[1], 10)

                if (
                    !this.ALLOWED_NUMBERS.has(value) &&
                    !this.isInComment(line, match.index) &&
                    !this.isInString(line, match.index)
                ) {
                    const context = this.extractContext(line, match.index)
                    if (this.looksLikeMagicNumber(context)) {
                        results.push(
                            HardcodedValue.create(
                                value,
                                HARDCODE_TYPES.MAGIC_NUMBER,
                                lineIndex + 1,
                                match.index,
                                line.trim(),
                            ),
                        )
                    }
                }
            }
        })

        return results
        const tree = this.parseCode(code, filePath)
        const allViolations = this.traverser.traverse(tree, code)
        return allViolations.filter((v) => v.isMagicNumber())
    }

    /**
     * Detects magic strings in code (URLs, connection strings, error messages, etc.)
     *
     * Skips short strings (≤3 chars), console logs, test descriptions, imports,
     * and values in exported constants
     * Detects magic strings in code
     *
     * @param code - Source code to analyze
     * @param _filePath - File path (currently unused, reserved for future use)
     * @param filePath - File path (used for constants file check)
     * @returns Array of detected magic strings
     */
    public detectMagicStrings(code: string, _filePath: string): HardcodedValue[] {
        const results: HardcodedValue[] = []
        const lines = code.split("\n")

        const stringRegex = /(['"`])(?:(?!\1).)+\1/g

        lines.forEach((line, lineIndex) => {
            if (
                line.trim().startsWith("//") ||
                line.trim().startsWith("*") ||
                line.includes("import ") ||
                line.includes("from ")
            ) {
                return
            }

            // Skip lines inside exported constants
            if (this.isInExportedConstant(lines, lineIndex)) {
                return
            }

            let match
            const regex = new RegExp(stringRegex)

            while ((match = regex.exec(line)) !== null) {
                const fullMatch = match[0]
                const value = fullMatch.slice(1, -1)

                // Skip template literals (backtick strings with ${} interpolation)
                if (fullMatch.startsWith("`") || value.includes("${")) {
                    continue
                }

                if (!this.isAllowedString(value) && this.looksLikeMagicString(line, value)) {
                    results.push(
                        HardcodedValue.create(
                            value,
                            HARDCODE_TYPES.MAGIC_STRING,
                            lineIndex + 1,
                            match.index,
                            line.trim(),
                        ),
                    )
                }
            }
        })

        return results
    }

    private isAllowedString(str: string): boolean {
        if (str.length <= 1) {
            return true
    public detectMagicStrings(code: string, filePath: string): HardcodedValue[] {
        if (this.constantsChecker.isConstantsFile(filePath)) {
            return []
        }

        return this.ALLOWED_STRING_PATTERNS.some((pattern) => pattern.test(str))
        const tree = this.parseCode(code, filePath)
        const allViolations = this.traverser.traverse(tree, code)
        return allViolations.filter((v) => v.isMagicString())
    }

    private looksLikeMagicString(line: string, value: string): boolean {
        const lowerLine = line.toLowerCase()

        if (
            lowerLine.includes(DETECTION_KEYWORDS.TEST) ||
            lowerLine.includes(DETECTION_KEYWORDS.DESCRIBE)
        ) {
            return false
    /**
     * Parses code based on file extension
     */
    private parseCode(code: string, filePath: string): Parser.Tree {
        if (filePath.endsWith(".tsx")) {
            return this.parser.parseTsx(code)
        } else if (filePath.endsWith(".ts")) {
            return this.parser.parseTypeScript(code)
        }

        if (
            lowerLine.includes(DETECTION_KEYWORDS.CONSOLE_LOG) ||
            lowerLine.includes(DETECTION_KEYWORDS.CONSOLE_ERROR)
        ) {
            return false
        }

        if (value.includes(DETECTION_KEYWORDS.HTTP) || value.includes(DETECTION_KEYWORDS.API)) {
            return true
        }

        if (/^\d{2,}$/.test(value)) {
            return false
        }

        return value.length > 3
    }

    private looksLikeMagicNumber(context: string): boolean {
        const lowerContext = context.toLowerCase()

        const configKeywords = [
            DETECTION_KEYWORDS.TIMEOUT,
            DETECTION_KEYWORDS.DELAY,
            DETECTION_KEYWORDS.RETRY,
            DETECTION_KEYWORDS.LIMIT,
            DETECTION_KEYWORDS.MAX,
            DETECTION_KEYWORDS.MIN,
            DETECTION_KEYWORDS.PORT,
            DETECTION_KEYWORDS.INTERVAL,
        ]

        return configKeywords.some((keyword) => lowerContext.includes(keyword))
    }

    private isInComment(line: string, index: number): boolean {
        const beforeIndex = line.substring(0, index)
        return beforeIndex.includes("//") || beforeIndex.includes("/*")
    }

    private isInString(line: string, index: number): boolean {
        const beforeIndex = line.substring(0, index)
        const singleQuotes = (beforeIndex.match(/'/g) ?? []).length
        const doubleQuotes = (beforeIndex.match(/"/g) ?? []).length
        const backticks = (beforeIndex.match(/`/g) ?? []).length

        return singleQuotes % 2 !== 0 || doubleQuotes % 2 !== 0 || backticks % 2 !== 0
    }

    private extractContext(line: string, index: number): string {
        const start = Math.max(0, index - 30)
        const end = Math.min(line.length, index + 30)
        return line.substring(start, end)
        return this.parser.parseJavaScript(code)
    }
}

@@ -1,9 +1,9 @@
|
||||
import { IRepositoryPatternDetector } from "../../domain/services/RepositoryPatternDetectorService"
|
||||
import { RepositoryViolation } from "../../domain/value-objects/RepositoryViolation"
|
||||
import { LAYERS, REPOSITORY_VIOLATION_TYPES } from "../../shared/constants/rules"
|
||||
import { ORM_QUERY_METHODS } from "../constants/orm-methods"
|
||||
import { REPOSITORY_PATTERN_MESSAGES } from "../../domain/constants/Messages"
|
||||
import { REPOSITORY_METHOD_SUGGESTIONS } from "../constants/detectorPatterns"
|
||||
import { OrmTypeMatcher } from "../strategies/OrmTypeMatcher"
|
||||
import { MethodNameValidator } from "../strategies/MethodNameValidator"
|
||||
import { RepositoryFileAnalyzer } from "../strategies/RepositoryFileAnalyzer"
|
||||
import { RepositoryViolationDetector } from "../strategies/RepositoryViolationDetector"
|
||||
|
||||
/**
|
||||
* Detects Repository Pattern violations in the codebase
|
||||
@@ -36,84 +36,20 @@ import { REPOSITORY_METHOD_SUGGESTIONS } from "../constants/detectorPatterns"
|
||||
* ```
|
||||
*/
|
||||
export class RepositoryPatternDetector implements IRepositoryPatternDetector {
|
||||
private readonly ormTypePatterns = [
|
||||
/Prisma\./,
|
||||
/PrismaClient/,
|
||||
/TypeORM/,
|
||||
/@Entity/,
|
||||
/@Column/,
|
||||
/@PrimaryColumn/,
|
||||
/@PrimaryGeneratedColumn/,
|
||||
/@ManyToOne/,
|
||||
/@OneToMany/,
|
||||
/@ManyToMany/,
|
||||
/@JoinColumn/,
|
||||
/@JoinTable/,
|
||||
/Mongoose\./,
|
||||
/Schema/,
|
||||
/Model</,
|
||||
/Document/,
|
||||
/Sequelize\./,
|
||||
/DataTypes\./,
|
||||
/FindOptions/,
|
||||
/WhereOptions/,
|
||||
/IncludeOptions/,
|
||||
/QueryInterface/,
|
||||
/MikroORM/,
|
||||
/EntityManager/,
|
||||
/EntityRepository/,
|
||||
/Collection</,
|
||||
]
|
||||
private readonly ormMatcher: OrmTypeMatcher
|
||||
private readonly methodValidator: MethodNameValidator
|
||||
private readonly fileAnalyzer: RepositoryFileAnalyzer
|
||||
private readonly violationDetector: RepositoryViolationDetector
|
||||
|
||||
private readonly technicalMethodNames = ORM_QUERY_METHODS
|
||||
|
||||
private readonly domainMethodPatterns = [
|
||||
/^findBy[A-Z]/,
|
||||
/^findAll$/,
|
||||
/^find[A-Z]/,
|
||||
/^save$/,
|
||||
/^saveAll$/,
|
||||
/^create$/,
|
||||
/^update$/,
|
||||
/^delete$/,
|
||||
/^deleteBy[A-Z]/,
|
||||
/^deleteAll$/,
|
||||
/^remove$/,
|
||||
/^removeBy[A-Z]/,
|
||||
/^removeAll$/,
|
||||
/^add$/,
|
||||
/^add[A-Z]/,
|
||||
/^get[A-Z]/,
|
||||
/^getAll$/,
|
||||
/^search/,
|
||||
/^list/,
|
||||
/^has[A-Z]/,
|
||||
/^is[A-Z]/,
|
||||
/^exists$/,
|
||||
/^exists[A-Z]/,
|
||||
/^existsBy[A-Z]/,
|
||||
/^clear[A-Z]/,
|
||||
/^clearAll$/,
|
||||
/^store[A-Z]/,
|
||||
/^initialize$/,
|
||||
/^initializeCollection$/,
|
||||
/^close$/,
|
||||
/^connect$/,
|
||||
/^disconnect$/,
|
||||
/^count$/,
|
||||
/^countBy[A-Z]/,
|
||||
]
|
||||
|
||||
private readonly concreteRepositoryPatterns = [
|
||||
/PrismaUserRepository/,
|
||||
/MongoUserRepository/,
|
||||
/TypeOrmUserRepository/,
|
||||
/SequelizeUserRepository/,
|
||||
/InMemoryUserRepository/,
|
||||
/PostgresUserRepository/,
|
||||
/MySqlUserRepository/,
|
||||
/Repository(?!Interface)/,
|
||||
]
|
||||
constructor() {
|
||||
this.ormMatcher = new OrmTypeMatcher()
|
||||
this.methodValidator = new MethodNameValidator(this.ormMatcher)
|
||||
this.fileAnalyzer = new RepositoryFileAnalyzer()
|
||||
this.violationDetector = new RepositoryViolationDetector(
|
||||
this.ormMatcher,
|
||||
this.methodValidator,
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Detects all Repository Pattern violations in the given code
|
||||
@@ -125,14 +61,16 @@ export class RepositoryPatternDetector implements IRepositoryPatternDetector {
|
||||
): RepositoryViolation[] {
|
||||
const violations: RepositoryViolation[] = []
|
||||
|
||||
if (this.isRepositoryInterface(filePath, layer)) {
|
||||
violations.push(...this.detectOrmTypesInInterface(code, filePath, layer))
|
||||
violations.push(...this.detectNonDomainMethodNames(code, filePath, layer))
|
||||
if (this.fileAnalyzer.isRepositoryInterface(filePath, layer)) {
|
||||
violations.push(...this.violationDetector.detectOrmTypes(code, filePath, layer))
|
||||
violations.push(...this.violationDetector.detectNonDomainMethods(code, filePath, layer))
|
||||
}
|
||||
|
||||
if (this.isUseCase(filePath, layer)) {
|
||||
violations.push(...this.detectConcreteRepositoryUsage(code, filePath, layer))
|
||||
violations.push(...this.detectNewRepositoryInstantiation(code, filePath, layer))
|
||||
if (this.fileAnalyzer.isUseCase(filePath, layer)) {
|
||||
violations.push(
|
||||
...this.violationDetector.detectConcreteRepositoryUsage(code, filePath, layer),
|
||||
)
|
||||
violations.push(...this.violationDetector.detectNewInstantiation(code, filePath, layer))
|
||||
}
|
||||
|
||||
return violations
|
||||
@@ -142,338 +80,27 @@ export class RepositoryPatternDetector implements IRepositoryPatternDetector {
|
||||
* Checks if a type is an ORM-specific type
|
||||
*/
|
||||
public isOrmType(typeName: string): boolean {
|
||||
return this.ormTypePatterns.some((pattern) => pattern.test(typeName))
|
||||
return this.ormMatcher.isOrmType(typeName)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if a method name follows domain language conventions
|
||||
*/
|
||||
public isDomainMethodName(methodName: string): boolean {
|
||||
if ((this.technicalMethodNames as readonly string[]).includes(methodName)) {
|
||||
return false
|
||||
}
|
||||
|
||||
return this.domainMethodPatterns.some((pattern) => pattern.test(methodName))
|
||||
return this.methodValidator.isDomainMethodName(methodName)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if a file is a repository interface
|
||||
*/
|
||||
public isRepositoryInterface(filePath: string, layer: string | undefined): boolean {
|
||||
if (layer !== LAYERS.DOMAIN) {
|
||||
return false
|
||||
}
|
||||
|
||||
return /I[A-Z]\w*Repository\.ts$/.test(filePath) && /repositories?\//.test(filePath)
|
||||
return this.fileAnalyzer.isRepositoryInterface(filePath, layer)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if a file is a use case
|
||||
*/
|
||||
public isUseCase(filePath: string, layer: string | undefined): boolean {
|
||||
if (layer !== LAYERS.APPLICATION) {
|
||||
return false
|
||||
}
|
||||
|
||||
return /use-cases?\//.test(filePath) && /[A-Z][a-z]+[A-Z]\w*\.ts$/.test(filePath)
|
||||
}
|
||||
|
||||
/**
|
||||
* Detects ORM-specific types in repository interfaces
|
||||
*/
|
||||
private detectOrmTypesInInterface(
|
||||
code: string,
|
||||
filePath: string,
|
||||
layer: string | undefined,
|
||||
): RepositoryViolation[] {
|
||||
const violations: RepositoryViolation[] = []
|
||||
const lines = code.split("\n")
|
||||
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i]
|
||||
const lineNumber = i + 1
|
||||
|
||||
const methodMatch =
|
||||
/(\w+)\s*\([^)]*:\s*([^)]+)\)\s*:\s*.*?(?:Promise<([^>]+)>|([A-Z]\w+))/.exec(line)
|
||||
|
||||
if (methodMatch) {
|
||||
const params = methodMatch[2]
|
||||
const returnType = methodMatch[3] || methodMatch[4]
|
||||
|
||||
if (this.isOrmType(params)) {
|
||||
const ormType = this.extractOrmType(params)
|
||||
violations.push(
|
||||
RepositoryViolation.create(
|
||||
REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
|
||||
filePath,
|
||||
layer || LAYERS.DOMAIN,
|
||||
lineNumber,
|
||||
`Method parameter uses ORM type: ${ormType}`,
|
||||
ormType,
|
||||
),
|
||||
)
|
||||
}
|
||||
|
||||
if (returnType && this.isOrmType(returnType)) {
|
||||
const ormType = this.extractOrmType(returnType)
|
||||
violations.push(
|
||||
RepositoryViolation.create(
|
||||
REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
|
||||
filePath,
|
||||
layer || LAYERS.DOMAIN,
|
||||
lineNumber,
|
||||
`Method return type uses ORM type: ${ormType}`,
|
||||
ormType,
|
||||
),
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
for (const pattern of this.ormTypePatterns) {
|
||||
if (pattern.test(line) && !line.trim().startsWith("//")) {
|
||||
const ormType = this.extractOrmType(line)
|
||||
violations.push(
|
||||
RepositoryViolation.create(
|
||||
REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
|
||||
filePath,
|
||||
layer || LAYERS.DOMAIN,
|
||||
lineNumber,
|
||||
`Repository interface contains ORM-specific type: ${ormType}`,
|
||||
ormType,
|
||||
),
|
||||
)
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return violations
|
||||
}
|
||||
|
||||
/**
|
||||
* Suggests better domain method names based on the original method name
|
||||
*/
|
||||
private suggestDomainMethodName(methodName: string): string {
|
||||
const lowerName = methodName.toLowerCase()
|
||||
const suggestions: string[] = []
|
||||
|
||||
const suggestionMap: Record<string, string[]> = {
|
||||
query: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.SEARCH,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
|
||||
],
|
||||
select: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
|
||||
],
|
||||
insert: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.CREATE,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.ADD_ENTITY,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.STORE_ENTITY,
|
||||
],
|
||||
update: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.UPDATE,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.MODIFY_ENTITY,
|
||||
],
|
||||
upsert: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.SAVE,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.STORE_ENTITY,
|
||||
],
|
||||
remove: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.DELETE,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.REMOVE_BY_PROPERTY,
|
||||
],
|
||||
fetch: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
|
||||
],
|
||||
retrieve: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
|
||||
],
|
||||
load: [
|
||||
REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
|
||||
],
|
||||
}
|
||||
|
||||
for (const [keyword, keywords] of Object.entries(suggestionMap)) {
|
||||
if (lowerName.includes(keyword)) {
|
||||
suggestions.push(...keywords)
|
||||
}
|
||||
}
|
||||
|
||||
if (lowerName.includes("get") && lowerName.includes("all")) {
|
||||
suggestions.push(
|
||||
REPOSITORY_METHOD_SUGGESTIONS.FIND_ALL,
|
||||
REPOSITORY_METHOD_SUGGESTIONS.LIST_ALL,
|
||||
)
|
||||
}
|
||||
|
||||
if (suggestions.length === 0) {
|
||||
return REPOSITORY_METHOD_SUGGESTIONS.DEFAULT_SUGGESTION
|
||||
}
|
||||
|
||||
return `Consider: ${suggestions.slice(0, 3).join(", ")}`
|
||||
}
|
||||
|
||||
/**
|
||||
* Detects non-domain method names in repository interfaces
|
||||
*/
|
||||
private detectNonDomainMethodNames(
|
||||
code: string,
|
||||
filePath: string,
|
||||
layer: string | undefined,
|
||||
): RepositoryViolation[] {
|
||||
const violations: RepositoryViolation[] = []
|
||||
const lines = code.split("\n")
|
||||
|
||||
for (let i = 0; i < lines.length; i++) {
|
||||
const line = lines[i]
|
||||
const lineNumber = i + 1
|
||||
|
||||
const methodMatch = /^\s*(\w+)\s*\(/.exec(line)
|
||||
|
||||
if (methodMatch) {
|
||||
const methodName = methodMatch[1]
|
||||
|
||||
if (!this.isDomainMethodName(methodName) && !line.trim().startsWith("//")) {
|
||||
const suggestion = this.suggestDomainMethodName(methodName)
|
||||
violations.push(
|
||||
RepositoryViolation.create(
|
||||
REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
|
||||
filePath,
|
||||
layer || LAYERS.DOMAIN,
|
||||
lineNumber,
|
||||
                            `Method '${methodName}' uses technical name instead of domain language. ${suggestion}`,
                            undefined,
                            undefined,
                            methodName,
                        ),
                    )
                }
            }
        }

        return violations
    }

    /**
     * Detects concrete repository usage in use cases
     */
    private detectConcreteRepositoryUsage(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): RepositoryViolation[] {
        const violations: RepositoryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            const constructorParamMatch =
                /constructor\s*\([^)]*(?:private|public|protected)\s+(?:readonly\s+)?(\w+)\s*:\s*([A-Z]\w*Repository)/.exec(
                    line,
                )

            if (constructorParamMatch) {
                const repositoryType = constructorParamMatch[2]

                if (!repositoryType.startsWith("I")) {
                    violations.push(
                        RepositoryViolation.create(
                            REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                            filePath,
                            layer || LAYERS.APPLICATION,
                            lineNumber,
                            `Use case depends on concrete repository '${repositoryType}'`,
                            undefined,
                            repositoryType,
                        ),
                    )
                }
            }

            const fieldMatch =
                /(?:private|public|protected)\s+(?:readonly\s+)?(\w+)\s*:\s*([A-Z]\w*Repository)/.exec(
                    line,
                )

            if (fieldMatch) {
                const repositoryType = fieldMatch[2]

                if (
                    !repositoryType.startsWith("I") &&
                    !line.includes(REPOSITORY_PATTERN_MESSAGES.CONSTRUCTOR)
                ) {
                    violations.push(
                        RepositoryViolation.create(
                            REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                            filePath,
                            layer || LAYERS.APPLICATION,
                            lineNumber,
                            `Use case field uses concrete repository '${repositoryType}'`,
                            undefined,
                            repositoryType,
                        ),
                    )
                }
            }
        }

        return violations
    }

    /**
     * Detects 'new Repository()' instantiation in use cases
     */
    private detectNewRepositoryInstantiation(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): RepositoryViolation[] {
        const violations: RepositoryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            const newRepositoryMatch = /new\s+([A-Z]\w*Repository)\s*\(/.exec(line)

            if (newRepositoryMatch && !line.trim().startsWith("//")) {
                const repositoryName = newRepositoryMatch[1]
                violations.push(
                    RepositoryViolation.create(
                        REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
                        filePath,
                        layer || LAYERS.APPLICATION,
                        lineNumber,
                        `Use case creates repository with 'new ${repositoryName}()'`,
                        undefined,
                        repositoryName,
                    ),
                )
            }
        }

        return violations
    }

    /**
     * Extracts ORM type name from a code line
     */
    private extractOrmType(line: string): string {
        for (const pattern of this.ormTypePatterns) {
            const match = line.match(pattern)
            if (match) {
                const startIdx = match.index || 0
                const typeMatch = /[\w.]+/.exec(line.slice(startIdx))
                return typeMatch ? typeMatch[0] : REPOSITORY_PATTERN_MESSAGES.UNKNOWN_TYPE
            }
        }
        return REPOSITORY_PATTERN_MESSAGES.UNKNOWN_TYPE
    }

    /**
     * Checks whether the analyzed file is a use case
     */
    private isUseCase(filePath: string, layer: string | undefined): boolean {
        return this.fileAnalyzer.isUseCase(filePath, layer)
    }
}
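The constructor-parameter regex above decides whether a use case depends on a concrete repository purely by naming convention: any parameter typed with a name ending in `Repository` that does not start with `I` is flagged. A minimal standalone sketch of that check, using the same regex (the sample lines are hypothetical):

```typescript
// Same regex the detector applies to constructor parameters.
const CONCRETE_REPO_PARAM =
    /constructor\s*\([^)]*(?:private|public|protected)\s+(?:readonly\s+)?(\w+)\s*:\s*([A-Z]\w*Repository)/

// Returns the offending repository type, or null when the line is clean.
function flaggedRepositoryType(line: string): string | null {
    const match = CONCRETE_REPO_PARAM.exec(line)
    if (!match) {
        return null
    }
    const repositoryType = match[2]
    // Types starting with "I" are treated as interfaces and allowed.
    return repositoryType.startsWith("I") ? null : repositoryType
}

console.log(flaggedRepositoryType("constructor(private readonly orders: MongoOrderRepository) {}"))
// → "MongoOrderRepository" (violation)
console.log(flaggedRepositoryType("constructor(private readonly orders: IOrderRepository) {}"))
// → null (depends on an abstraction)
```

Note the heuristic's trade-off: any type starting with `I` passes, so a concrete class named, say, `InMemoryOrderRepository` would also slip through.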
169
packages/guardian/src/infrastructure/analyzers/SecretDetector.ts
Normal file
@@ -0,0 +1,169 @@
import { createEngine } from "@secretlint/node"
import type { SecretLintConfigDescriptor } from "@secretlint/types"
import { ISecretDetector } from "../../domain/services/ISecretDetector"
import { SecretViolation } from "../../domain/value-objects/SecretViolation"
import { SECRET_KEYWORDS, SECRET_TYPE_NAMES } from "../../domain/constants/SecretExamples"
import { EXTERNAL_PACKAGES } from "../../shared/constants/rules"

/**
 * Detects hardcoded secrets in TypeScript/JavaScript code
 *
 * Uses industry-standard Secretlint library to detect 350+ types of secrets
 * including AWS keys, GitHub tokens, NPM tokens, SSH keys, API keys, and more.
 *
 * All detected secrets are marked as CRITICAL severity because they represent
 * serious security risks that could lead to unauthorized access or data breaches.
 *
 * @example
 * ```typescript
 * const detector = new SecretDetector()
 * const code = `const AWS_KEY = "AKIA1234567890ABCDEF"`
 * const violations = await detector.detectAll(code, 'config.ts')
 * // Returns array of SecretViolation objects with CRITICAL severity
 * ```
 */
export class SecretDetector implements ISecretDetector {
    private readonly secretlintConfig: SecretLintConfigDescriptor = {
        rules: [
            {
                id: EXTERNAL_PACKAGES.SECRETLINT_PRESET,
            },
        ],
    }

    /**
     * Detects all types of hardcoded secrets in the provided code
     *
     * @param code - Source code to analyze
     * @param filePath - Path to the file being analyzed
     * @returns Promise resolving to array of secret violations
     */
    public async detectAll(code: string, filePath: string): Promise<SecretViolation[]> {
        try {
            const engine = await createEngine({
                cwd: process.cwd(),
                configFileJSON: this.secretlintConfig,
                formatter: "stylish",
                color: false,
            })

            const result = await engine.executeOnContent({
                content: code,
                filePath,
            })

            return this.parseOutputToViolations(result.output, filePath)
        } catch (_error) {
            return []
        }
    }

    private parseOutputToViolations(output: string, filePath: string): SecretViolation[] {
        const violations: SecretViolation[] = []

        if (!output || output.trim() === "") {
            return violations
        }

        const lines = output.split("\n")

        for (const line of lines) {
            const match = /^\s*(\d+):(\d+)\s+(error|warning)\s+(.+?)\s+(.+)$/.exec(line)

            if (match) {
                const [, lineNum, column, , message, ruleId] = match
                const secretType = this.extractSecretType(message, ruleId)

                const violation = SecretViolation.create(
                    filePath,
                    parseInt(lineNum, 10),
                    parseInt(column, 10),
                    secretType,
                    message,
                )

                violations.push(violation)
            }
        }

        return violations
    }

    private extractSecretType(message: string, ruleId: string): string {
        if (ruleId.includes(SECRET_KEYWORDS.AWS)) {
            if (message.toLowerCase().includes(SECRET_KEYWORDS.ACCESS_KEY)) {
                return SECRET_TYPE_NAMES.AWS_ACCESS_KEY
            }
            if (message.toLowerCase().includes(SECRET_KEYWORDS.SECRET)) {
                return SECRET_TYPE_NAMES.AWS_SECRET_KEY
            }
            return SECRET_TYPE_NAMES.AWS_CREDENTIAL
        }

        if (ruleId.includes(SECRET_KEYWORDS.GITHUB)) {
            if (message.toLowerCase().includes(SECRET_KEYWORDS.PERSONAL_ACCESS_TOKEN)) {
                return SECRET_TYPE_NAMES.GITHUB_PERSONAL_ACCESS_TOKEN
            }
            if (message.toLowerCase().includes(SECRET_KEYWORDS.OAUTH)) {
                return SECRET_TYPE_NAMES.GITHUB_OAUTH_TOKEN
            }
            return SECRET_TYPE_NAMES.GITHUB_TOKEN
        }

        if (ruleId.includes(SECRET_KEYWORDS.NPM)) {
            return SECRET_TYPE_NAMES.NPM_TOKEN
        }

        if (ruleId.includes(SECRET_KEYWORDS.GCP) || ruleId.includes(SECRET_KEYWORDS.GOOGLE)) {
            return SECRET_TYPE_NAMES.GCP_SERVICE_ACCOUNT_KEY
        }

        if (ruleId.includes(SECRET_KEYWORDS.PRIVATEKEY) || ruleId.includes(SECRET_KEYWORDS.SSH)) {
            if (message.toLowerCase().includes(SECRET_KEYWORDS.RSA)) {
                return SECRET_TYPE_NAMES.SSH_RSA_PRIVATE_KEY
            }
            if (message.toLowerCase().includes(SECRET_KEYWORDS.DSA)) {
                return SECRET_TYPE_NAMES.SSH_DSA_PRIVATE_KEY
            }
            if (message.toLowerCase().includes(SECRET_KEYWORDS.ECDSA)) {
                return SECRET_TYPE_NAMES.SSH_ECDSA_PRIVATE_KEY
            }
            if (message.toLowerCase().includes(SECRET_KEYWORDS.ED25519)) {
                return SECRET_TYPE_NAMES.SSH_ED25519_PRIVATE_KEY
            }
            return SECRET_TYPE_NAMES.SSH_PRIVATE_KEY
        }

        if (ruleId.includes(SECRET_KEYWORDS.SLACK)) {
            if (message.toLowerCase().includes(SECRET_KEYWORDS.BOT)) {
                return SECRET_TYPE_NAMES.SLACK_BOT_TOKEN
            }
            if (message.toLowerCase().includes(SECRET_KEYWORDS.USER)) {
                return SECRET_TYPE_NAMES.SLACK_USER_TOKEN
            }
            return SECRET_TYPE_NAMES.SLACK_TOKEN
        }

        if (ruleId.includes(SECRET_KEYWORDS.BASICAUTH)) {
            return SECRET_TYPE_NAMES.BASIC_AUTH_CREDENTIALS
        }

        if (message.toLowerCase().includes(SECRET_KEYWORDS.API_KEY)) {
            return SECRET_TYPE_NAMES.API_KEY
        }

        if (message.toLowerCase().includes(SECRET_KEYWORDS.TOKEN)) {
            return SECRET_TYPE_NAMES.AUTHENTICATION_TOKEN
        }

        if (message.toLowerCase().includes(SECRET_KEYWORDS.PASSWORD)) {
            return SECRET_TYPE_NAMES.PASSWORD
        }

        if (message.toLowerCase().includes(SECRET_KEYWORDS.SECRET)) {
            return SECRET_TYPE_NAMES.SECRET
        }

        return SECRET_TYPE_NAMES.SENSITIVE_DATA
    }
}
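`parseOutputToViolations` recovers positions by regex-matching each line of the formatter's text output rather than consuming structured results. A self-contained sketch of that regex against a stylish-shaped line (the sample line and rule id are invented for illustration, not taken from actual Secretlint output):

```typescript
// Same line-parsing regex as parseOutputToViolations.
const STYLISH_LINE = /^\s*(\d+):(\d+)\s+(error|warning)\s+(.+?)\s+(.+)$/

// Hypothetical stylish-formatter line: "line:col  severity  message  ruleId".
const sample = "  3:15  error  RemoveAWSSecretAccessKey  @secretlint/secretlint-rule-aws"
const parsed = STYLISH_LINE.exec(sample)

if (parsed) {
    const [, lineNum, column, severity, message, ruleId] = parsed
    console.log(lineNum, column, severity, message, ruleId)
    // → 3 15 error RemoveAWSSecretAccessKey @secretlint/secretlint-rule-aws
}
```

One behavioral detail worth knowing: because the message group `(.+?)` is lazy, a multi-word message would be split at its first whitespace run, with the remainder captured as the rule id.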
@@ -84,6 +84,8 @@ export const DDD_FOLDER_NAMES = {
    FACTORIES: "factories",
    PORTS: "ports",
    INTERFACES: "interfaces",
    ERRORS: "errors",
    EXCEPTIONS: "exceptions",
} as const

/**
@@ -0,0 +1,177 @@
import { DDD_FOLDER_NAMES } from "../constants/detectorPatterns"
import { IMPORT_PATTERNS } from "../constants/paths"
import { FolderRegistry } from "./FolderRegistry"

/**
 * Analyzes file paths and imports to extract aggregate information
 *
 * Handles path normalization, aggregate extraction, and entity name detection
 * for aggregate boundary validation.
 */
export class AggregatePathAnalyzer {
    constructor(private readonly folderRegistry: FolderRegistry) {}

    /**
     * Extracts the aggregate name from a file path
     *
     * Handles patterns like:
     * - domain/aggregates/order/Order.ts → 'order'
     * - domain/order/Order.ts → 'order'
     * - domain/entities/order/Order.ts → 'order'
     */
    public extractAggregateFromPath(filePath: string): string | undefined {
        const normalizedPath = this.normalizePath(filePath)
        const segments = this.getPathSegmentsAfterDomain(normalizedPath)

        if (!segments || segments.length < 2) {
            return undefined
        }

        return this.findAggregateInSegments(segments)
    }

    /**
     * Extracts the aggregate name from an import path
     */
    public extractAggregateFromImport(importPath: string): string | undefined {
        const normalizedPath = importPath.replace(IMPORT_PATTERNS.QUOTE, "").toLowerCase()
        const segments = normalizedPath.split("/").filter((seg) => seg !== ".." && seg !== ".")

        if (segments.length === 0) {
            return undefined
        }

        return this.findAggregateInImportSegments(segments)
    }

    /**
     * Extracts the entity name from an import path
     */
    public extractEntityName(importPath: string): string | undefined {
        const normalizedPath = importPath.replace(IMPORT_PATTERNS.QUOTE, "")
        const segments = normalizedPath.split("/")
        const lastSegment = segments[segments.length - 1]

        if (lastSegment) {
            return lastSegment.replace(/\.(ts|js)$/, "")
        }

        return undefined
    }

    /**
     * Normalizes a file path for consistent processing
     */
    private normalizePath(filePath: string): string {
        return filePath.toLowerCase().replace(/\\/g, "/")
    }

    /**
     * Gets path segments after the 'domain' folder
     */
    private getPathSegmentsAfterDomain(normalizedPath: string): string[] | undefined {
        const domainMatch = /(?:^|\/)(domain)\//.exec(normalizedPath)
        if (!domainMatch) {
            return undefined
        }

        const domainEndIndex = domainMatch.index + domainMatch[0].length
        const pathAfterDomain = normalizedPath.substring(domainEndIndex)
        return pathAfterDomain.split("/").filter(Boolean)
    }

    /**
     * Finds aggregate name in path segments after domain folder
     */
    private findAggregateInSegments(segments: string[]): string | undefined {
        if (this.folderRegistry.isEntityFolder(segments[0])) {
            return this.extractFromEntityFolder(segments)
        }

        const aggregate = segments[0]
        if (this.folderRegistry.isNonAggregateFolder(aggregate)) {
            return undefined
        }

        return aggregate
    }

    /**
     * Extracts aggregate from entity folder structure
     */
    private extractFromEntityFolder(segments: string[]): string | undefined {
        if (segments.length < 3) {
            return undefined
        }

        const aggregate = segments[1]
        if (this.folderRegistry.isNonAggregateFolder(aggregate)) {
            return undefined
        }

        return aggregate
    }

    /**
     * Finds aggregate in import path segments
     */
    private findAggregateInImportSegments(segments: string[]): string | undefined {
        const aggregateFromDomainFolder = this.findAggregateAfterDomainFolder(segments)
        if (aggregateFromDomainFolder) {
            return aggregateFromDomainFolder
        }

        return this.findAggregateFromSecondLastSegment(segments)
    }

    /**
     * Finds aggregate after 'domain' or 'aggregates' folder in import
     */
    private findAggregateAfterDomainFolder(segments: string[]): string | undefined {
        for (let i = 0; i < segments.length; i++) {
            const isDomainOrAggregatesFolder =
                segments[i] === DDD_FOLDER_NAMES.DOMAIN ||
                segments[i] === DDD_FOLDER_NAMES.AGGREGATES

            if (!isDomainOrAggregatesFolder) {
                continue
            }

            if (i + 1 >= segments.length) {
                continue
            }

            const nextSegment = segments[i + 1]
            const isEntityOrAggregateFolder =
                this.folderRegistry.isEntityFolder(nextSegment) ||
                nextSegment === DDD_FOLDER_NAMES.AGGREGATES

            if (isEntityOrAggregateFolder) {
                return i + 2 < segments.length ? segments[i + 2] : undefined
            }

            return nextSegment
        }
        return undefined
    }

    /**
     * Extracts aggregate from second-to-last segment if applicable
     */
    private findAggregateFromSecondLastSegment(segments: string[]): string | undefined {
        if (segments.length >= 2) {
            const secondLastSegment = segments[segments.length - 2]

            if (
                !this.folderRegistry.isEntityFolder(secondLastSegment) &&
                !this.folderRegistry.isValueObjectFolder(secondLastSegment) &&
                !this.folderRegistry.isAllowedFolder(secondLastSegment) &&
                secondLastSegment !== DDD_FOLDER_NAMES.DOMAIN
            ) {
                return secondLastSegment
            }
        }

        return undefined
    }
}
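The path-side extraction above boils down to: normalize separators, locate the `domain/` folder, and treat the first non-special segment after it as the aggregate candidate. A dependency-free sketch of that pipeline, mirroring `normalizePath` and `getPathSegmentsAfterDomain` (the sample paths are illustrative):

```typescript
// Returns the lowercase path segments after the first "domain/" folder,
// or undefined when the path has no domain folder.
function segmentsAfterDomain(filePath: string): string[] | undefined {
    const normalized = filePath.toLowerCase().replace(/\\/g, "/")
    const domainMatch = /(?:^|\/)(domain)\//.exec(normalized)
    if (!domainMatch) {
        return undefined
    }
    const afterDomain = normalized.substring(domainMatch.index + domainMatch[0].length)
    return afterDomain.split("/").filter(Boolean)
}

console.log(segmentsAfterDomain("src/billing/domain/order/Order.ts"))
// → ["order", "order.ts"] — segments[0] is the aggregate candidate
console.log(segmentsAfterDomain("src/billing/application/CreateOrder.ts"))
// → undefined — no domain folder in the path
```

Because the regex requires `domain` to be bounded by `/` (or start-of-string) on the left and `/` on the right, folder names that merely contain the word, such as `subdomain`, do not match.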
@@ -0,0 +1,92 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { DETECTION_VALUES } from "../../shared/constants/rules"
import { AstContextChecker } from "./AstContextChecker"

/**
 * AST-based analyzer for detecting magic booleans
 *
 * Detects boolean literals used as arguments without clear meaning.
 * Example: doSomething(true, false, true) - hard to understand
 * Better: doSomething({ sync: true, validate: false, cache: true })
 */
export class AstBooleanAnalyzer {
    constructor(private readonly contextChecker: AstContextChecker) {}

    /**
     * Analyzes a boolean node and returns a violation if it's a magic boolean
     */
    public analyze(node: Parser.SyntaxNode, lines: string[]): HardcodedValue | null {
        if (!this.shouldDetect(node)) {
            return null
        }

        const value = node.text === DETECTION_VALUES.BOOLEAN_TRUE

        return this.createViolation(node, value, lines)
    }

    /**
     * Checks if boolean should be detected
     */
    private shouldDetect(node: Parser.SyntaxNode): boolean {
        if (this.contextChecker.isInExportedConstant(node)) {
            return false
        }

        if (this.contextChecker.isInTypeContext(node)) {
            return false
        }

        if (this.contextChecker.isInTestDescription(node)) {
            return false
        }

        const parent = node.parent
        if (!parent) {
            return false
        }

        if (parent.type === "arguments") {
            return this.isInFunctionCallWithMultipleBooleans(parent)
        }

        return false
    }

    /**
     * Checks if function call has multiple boolean arguments
     */
    private isInFunctionCallWithMultipleBooleans(argsNode: Parser.SyntaxNode): boolean {
        let booleanCount = 0

        for (const child of argsNode.children) {
            if (child.type === "true" || child.type === "false") {
                booleanCount++
            }
        }

        return booleanCount >= 2
    }

    /**
     * Creates a HardcodedValue violation from a boolean node
     */
    private createViolation(
        node: Parser.SyntaxNode,
        value: boolean,
        lines: string[],
    ): HardcodedValue {
        const lineNumber = node.startPosition.row + 1
        const column = node.startPosition.column
        const context = lines[node.startPosition.row]?.trim() ?? ""

        return HardcodedValue.create(
            value,
            "MAGIC_BOOLEAN" as HardcodeType,
            lineNumber,
            column,
            context,
        )
    }
}
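The refactor this analyzer nudges toward is the one its doc comment names: replace positional boolean arguments with a named options object. A small before/after sketch (the function and flag names are invented):

```typescript
// Before: two positional booleans — the call site reveals nothing about intent.
// Under the parent-node check above, two boolean children of "arguments" trip the detector.
function createUserPositional(name: string, sendEmail: boolean, verify: boolean): string {
    return `${name}/${sendEmail}/${verify}`
}
console.log(createUserPositional("ada", true, false))
// → "ada/true/false"

// After: a named options object documents each flag at the call site.
// The boolean literals now sit inside object pairs, not directly in the
// argument list, so the parent.type === "arguments" check no longer fires.
interface CreateUserOptions {
    sendEmail: boolean
    verify: boolean
}
function createUser(name: string, options: CreateUserOptions): string {
    return `${name}/${options.sendEmail}/${options.verify}`
}
console.log(createUser("ada", { sendEmail: true, verify: false }))
// → "ada/true/false"
```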
@@ -0,0 +1,114 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { HARDCODE_TYPES } from "../../shared/constants/rules"
import { ALLOWED_NUMBERS } from "../constants/defaults"
import { AstContextChecker } from "./AstContextChecker"

/**
 * AST-based analyzer for detecting configuration objects with hardcoded values
 *
 * Detects objects that contain multiple hardcoded values that should be
 * extracted to a configuration file.
 *
 * Example:
 * const config = { timeout: 5000, retries: 3, url: "http://..." }
 */
export class AstConfigObjectAnalyzer {
    private readonly MIN_HARDCODED_VALUES = 2

    constructor(private readonly contextChecker: AstContextChecker) {}

    /**
     * Analyzes an object expression and returns a violation if it contains many hardcoded values
     */
    public analyze(node: Parser.SyntaxNode, lines: string[]): HardcodedValue | null {
        if (node.type !== "object") {
            return null
        }

        if (this.contextChecker.isInExportedConstant(node)) {
            return null
        }

        if (this.contextChecker.isInTypeContext(node)) {
            return null
        }

        const hardcodedCount = this.countHardcodedValues(node)

        if (hardcodedCount < this.MIN_HARDCODED_VALUES) {
            return null
        }

        return this.createViolation(node, hardcodedCount, lines)
    }

    /**
     * Counts hardcoded values in an object
     */
    private countHardcodedValues(objectNode: Parser.SyntaxNode): number {
        let count = 0

        for (const child of objectNode.children) {
            if (child.type === "pair") {
                const value = child.childForFieldName("value")
                if (value && this.isHardcodedValue(value)) {
                    count++
                }
            }
        }

        return count
    }

    /**
     * Checks if a node is a hardcoded value
     */
    private isHardcodedValue(node: Parser.SyntaxNode): boolean {
        if (node.type === "number") {
            const value = parseInt(node.text, 10)
            return !ALLOWED_NUMBERS.has(value) && value >= 100
        }

        if (node.type === "string") {
            const stringFragment = node.children.find((c) => c.type === "string_fragment")
            return stringFragment !== undefined && stringFragment.text.length > 3
        }

        return false
    }

    /**
     * Creates a HardcodedValue violation for a config object
     */
    private createViolation(
        node: Parser.SyntaxNode,
        hardcodedCount: number,
        lines: string[],
    ): HardcodedValue {
        const lineNumber = node.startPosition.row + 1
        const column = node.startPosition.column
        const context = lines[node.startPosition.row]?.trim() ?? ""

        const objectPreview = this.getObjectPreview(node)

        return HardcodedValue.create(
            `Configuration object with ${String(hardcodedCount)} hardcoded values: ${objectPreview}`,
            HARDCODE_TYPES.MAGIC_CONFIG as HardcodeType,
            lineNumber,
            column,
            context,
        )
    }

    /**
     * Gets a preview of the object for the violation message
     */
    private getObjectPreview(node: Parser.SyntaxNode): string {
        const text = node.text
        if (text.length <= 50) {
            return text
        }
        return `${text.substring(0, 47)}...`
    }
}
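The remedy this rule points at is extracting the literals into an exported `as const` constant, which `isInExportedConstant` then skips. A hypothetical before/after (names are invented; this assumes 5000 is not in `ALLOWED_NUMBERS`, so the number plus the URL string reach the threshold of 2 countable values):

```typescript
// Before: inline config object — two countable literals (5000 and the URL) → flagged.
const clientBefore = { timeout: 5000, retries: 3, baseUrl: "https://api.example.com" }

// After: the values live in one named constant; `as const` (with `export` in
// real code) marks it as an intentional, centrally maintained configuration.
const HTTP_DEFAULTS = {
    TIMEOUT_MS: 5000,
    RETRIES: 3,
    BASE_URL: "https://api.example.com",
} as const

const clientAfter = {
    timeout: HTTP_DEFAULTS.TIMEOUT_MS,
    retries: HTTP_DEFAULTS.RETRIES,
    baseUrl: HTTP_DEFAULTS.BASE_URL,
}

console.log(clientBefore.baseUrl === clientAfter.baseUrl)
// → true
```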
@@ -0,0 +1,256 @@
import Parser from "tree-sitter"

/**
 * AST context checker for analyzing node contexts
 *
 * Provides reusable methods to check if a node is in specific contexts
 * like exports, type declarations, function calls, etc.
 */
export class AstContextChecker {
    /**
     * Checks if node is in an exported constant with "as const"
     */
    public isInExportedConstant(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (current.type === "export_statement") {
                if (this.checkExportedConstant(current)) {
                    return true
                }
            }
            current = current.parent
        }

        return false
    }

    /**
     * Helper to check if export statement contains "as const"
     */
    private checkExportedConstant(exportNode: Parser.SyntaxNode): boolean {
        const declaration = exportNode.childForFieldName("declaration")
        if (!declaration) {
            return false
        }

        const declarator = this.findDescendant(declaration, "variable_declarator")
        if (!declarator) {
            return false
        }

        const value = declarator.childForFieldName("value")
        if (value?.type !== "as_expression") {
            return false
        }

        const asType = value.children.find((c) => c.type === "const")
        return asType !== undefined
    }

    /**
     * Checks if node is in a type context (union type, type alias, interface)
     */
    public isInTypeContext(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (
                current.type === "type_alias_declaration" ||
                current.type === "union_type" ||
                current.type === "literal_type" ||
                current.type === "interface_declaration" ||
                current.type === "type_annotation"
            ) {
                return true
            }
            current = current.parent
        }

        return false
    }

    /**
     * Checks if node is in an import statement or import() call
     */
    public isInImportStatement(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (current.type === "import_statement") {
                return true
            }

            if (current.type === "call_expression") {
                const functionNode =
                    current.childForFieldName("function") ||
                    current.children.find((c) => c.type === "identifier" || c.type === "import")

                if (
                    functionNode &&
                    (functionNode.text === "import" || functionNode.type === "import")
                ) {
                    return true
                }
            }

            current = current.parent
        }

        return false
    }

    /**
     * Checks if node is in a test description (test(), describe(), it())
     */
    public isInTestDescription(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (current.type === "call_expression") {
                const callee = current.childForFieldName("function")
                if (callee?.type === "identifier") {
                    const funcName = callee.text
                    if (
                        funcName === "test" ||
                        funcName === "describe" ||
                        funcName === "it" ||
                        funcName === "expect"
                    ) {
                        return true
                    }
                }
            }
            current = current.parent
        }

        return false
    }

    /**
     * Checks if node is in a console.log or console.error call
     */
    public isInConsoleCall(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (current.type === "call_expression") {
                const callee = current.childForFieldName("function")
                if (callee?.type === "member_expression") {
                    const object = callee.childForFieldName("object")
                    const property = callee.childForFieldName("property")

                    if (
                        object?.text === "console" &&
                        property &&
                        (property.text === "log" ||
                            property.text === "error" ||
                            property.text === "warn")
                    ) {
                        return true
                    }
                }
            }
            current = current.parent
        }

        return false
    }

    /**
     * Checks if node is in a Symbol() call
     */
    public isInSymbolCall(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (current.type === "call_expression") {
                const callee = current.childForFieldName("function")
                if (callee?.type === "identifier" && callee.text === "Symbol") {
                    return true
                }
            }
            current = current.parent
        }

        return false
    }

    /**
     * Checks if node is in a typeof check
     */
    public isInTypeofCheck(node: Parser.SyntaxNode): boolean {
        let current = node.parent

        while (current) {
            if (current.type === "binary_expression") {
                const left = current.childForFieldName("left")
                const right = current.childForFieldName("right")

                if (left?.type === "unary_expression") {
                    const operator = left.childForFieldName("operator")
                    if (operator?.text === "typeof") {
                        return true
                    }
                }

                if (right?.type === "unary_expression") {
                    const operator = right.childForFieldName("operator")
                    if (operator?.text === "typeof") {
                        return true
                    }
                }
            }
            current = current.parent
        }

        return false
    }

    /**
     * Checks if parent is a call expression with specific function names
     */
    public isInCallExpression(parent: Parser.SyntaxNode, functionNames: string[]): boolean {
        if (parent.type === "arguments") {
            const callExpr = parent.parent
            if (callExpr?.type === "call_expression") {
                const callee = callExpr.childForFieldName("function")
                if (callee?.type === "identifier") {
                    return functionNames.includes(callee.text)
                }
            }
        }
        return false
    }

    /**
     * Gets context text around a node
     */
    public getNodeContext(node: Parser.SyntaxNode): string {
        let current: Parser.SyntaxNode | null = node

        while (current && current.type !== "lexical_declaration" && current.type !== "pair") {
            current = current.parent
        }

        return current ? current.text.toLowerCase() : ""
    }

    /**
     * Finds a descendant node by type
     */
    private findDescendant(node: Parser.SyntaxNode, type: string): Parser.SyntaxNode | null {
        if (node.type === type) {
            return node
        }

        for (const child of node.children) {
            const result = this.findDescendant(child, type)
            if (result) {
                return result
            }
        }

        return null
    }
}
@@ -0,0 +1,132 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { HARDCODE_TYPES } from "../../shared/constants/rules"
import { ALLOWED_NUMBERS, DETECTION_KEYWORDS } from "../constants/defaults"
import { AstContextChecker } from "./AstContextChecker"

/**
 * AST-based analyzer for detecting magic numbers
 *
 * Analyzes number literal nodes in the AST to determine if they are
 * hardcoded values that should be extracted to constants.
 */
export class AstNumberAnalyzer {
    constructor(private readonly contextChecker: AstContextChecker) {}

    /**
     * Analyzes a number node and returns a violation if it's a magic number
     */
    public analyze(node: Parser.SyntaxNode, lines: string[]): HardcodedValue | null {
        const value = parseInt(node.text, 10)

        if (ALLOWED_NUMBERS.has(value)) {
            return null
        }

        if (this.contextChecker.isInExportedConstant(node)) {
            return null
        }

        if (!this.shouldDetect(node, value)) {
            return null
        }

        return this.createViolation(node, value, lines)
    }

    /**
     * Checks if number should be detected based on context
     */
    private shouldDetect(node: Parser.SyntaxNode, value: number): boolean {
        const parent = node.parent
        if (!parent) {
            return false
        }

        if (this.contextChecker.isInCallExpression(parent, ["setTimeout", "setInterval"])) {
            return true
        }

        if (parent.type === "variable_declarator") {
            const identifier = parent.childForFieldName("name")
            if (identifier && this.hasConfigKeyword(identifier.text.toLowerCase())) {
                return true
            }
        }

        if (parent.type === "pair") {
            const key = parent.childForFieldName("key")
            if (key && this.hasConfigKeyword(key.text.toLowerCase())) {
                return true
            }
        }

        if (value >= 100) {
            const context = this.contextChecker.getNodeContext(node)
            return this.looksLikeMagicNumber(context)
        }

        return false
    }

    /**
     * Checks if name contains configuration keywords
     */
    private hasConfigKeyword(name: string): boolean {
        const keywords = [
            DETECTION_KEYWORDS.TIMEOUT,
            DETECTION_KEYWORDS.DELAY,
            DETECTION_KEYWORDS.RETRY,
            DETECTION_KEYWORDS.LIMIT,
            DETECTION_KEYWORDS.MAX,
            DETECTION_KEYWORDS.MIN,
            DETECTION_KEYWORDS.PORT,
            DETECTION_KEYWORDS.INTERVAL,
        ]

        return (
            keywords.some((keyword) => name.includes(keyword)) ||
            name.includes("retries") ||
            name.includes("attempts")
        )
    }

    /**
     * Checks if context suggests a magic number
     */
    private looksLikeMagicNumber(context: string): boolean {
        const configKeywords = [
            DETECTION_KEYWORDS.TIMEOUT,
            DETECTION_KEYWORDS.DELAY,
            DETECTION_KEYWORDS.RETRY,
            DETECTION_KEYWORDS.LIMIT,
            DETECTION_KEYWORDS.MAX,
            DETECTION_KEYWORDS.MIN,
            DETECTION_KEYWORDS.PORT,
            DETECTION_KEYWORDS.INTERVAL,
        ]

        return configKeywords.some((keyword) => context.includes(keyword))
    }

    /**
     * Creates a HardcodedValue violation from a number node
     */
    private createViolation(
        node: Parser.SyntaxNode,
        value: number,
        lines: string[],
    ): HardcodedValue {
        const lineNumber = node.startPosition.row + 1
        const column = node.startPosition.column
        const context = lines[node.startPosition.row]?.trim() ?? ""

        return HardcodedValue.create(
            value,
            HARDCODE_TYPES.MAGIC_NUMBER as HardcodeType,
            lineNumber,
            column,
            context,
        )
    }
}
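`hasConfigKeyword` is a plain substring test over the lowercased identifier name. A standalone sketch with the keyword list inlined (the actual `DETECTION_KEYWORDS` values are assumed to be these lowercase strings):

```typescript
// Assumed keyword values; the real ones live in DETECTION_KEYWORDS.
const CONFIG_KEYWORDS = ["timeout", "delay", "retry", "limit", "max", "min", "port", "interval"]

function hasConfigKeyword(name: string): boolean {
    return (
        CONFIG_KEYWORDS.some((keyword) => name.includes(keyword)) ||
        name.includes("retries") ||
        name.includes("attempts")
    )
}

console.log(hasConfigKeyword("connectiontimeoutms")) // → true (contains "timeout")
console.log(hasConfigKeyword("username")) // → false
console.log(hasConfigKeyword("administrator")) // → true ("admin" contains "min" — a false-positive risk of substring matching)
```

The explicit `retries`/`attempts` checks exist because neither word contains any of the listed keywords as a substring ("retries" does not contain "retry").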
@@ -0,0 +1,144 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { CONFIG_KEYWORDS, DETECTION_VALUES, HARDCODE_TYPES } from "../../shared/constants/rules"
import { AstContextChecker } from "./AstContextChecker"
import { ValuePatternMatcher } from "./ValuePatternMatcher"

/**
 * AST-based analyzer for detecting magic strings
 *
 * Analyzes string literal nodes in the AST to determine if they are
 * hardcoded values that should be extracted to constants.
 *
 * Detects various types of hardcoded strings:
 * - URLs and connection strings
 * - Email addresses
 * - IP addresses
 * - File paths
 * - Dates
 * - API keys
 */
export class AstStringAnalyzer {
    private readonly patternMatcher: ValuePatternMatcher

    constructor(private readonly contextChecker: AstContextChecker) {
        this.patternMatcher = new ValuePatternMatcher()
    }

    /**
     * Analyzes a string node and returns a violation if it's a magic string
     */
    public analyze(node: Parser.SyntaxNode, lines: string[]): HardcodedValue | null {
        const stringFragment = node.children.find((child) => child.type === "string_fragment")
        if (!stringFragment) {
            return null
        }

        const value = stringFragment.text

        if (value.length <= 3) {
            return null
        }

        if (this.contextChecker.isInExportedConstant(node)) {
            return null
        }

        if (this.contextChecker.isInTypeContext(node)) {
            return null
        }

        if (this.contextChecker.isInImportStatement(node)) {
            return null
        }

        if (this.contextChecker.isInTestDescription(node)) {
            return null
        }

        if (this.contextChecker.isInConsoleCall(node)) {
            return null
        }

        if (this.contextChecker.isInSymbolCall(node)) {
            return null
        }

        if (this.contextChecker.isInTypeofCheck(node)) {
            return null
        }

        if (this.shouldDetect(node, value)) {
            return this.createViolation(node, value, lines)
        }

        return null
    }

    /**
     * Checks if string value should be detected
     */
    private shouldDetect(node: Parser.SyntaxNode, value: string): boolean {
        if (this.patternMatcher.shouldDetect(value)) {
            return true
        }

        if (this.hasConfigurationContext(node)) {
            return true
        }

        return false
    }

    /**
     * Checks if string is in a configuration-related context
     */
    private hasConfigurationContext(node: Parser.SyntaxNode): boolean {
        const context = this.contextChecker.getNodeContext(node).toLowerCase()

        const configKeywords = [
            "url",
            "uri",
            ...CONFIG_KEYWORDS.NETWORK,
            "api",
            ...CONFIG_KEYWORDS.DATABASE,
            "db",
            "env",
            ...CONFIG_KEYWORDS.SECURITY,
            "key",
            ...CONFIG_KEYWORDS.MESSAGES,
            "label",
        ]

        return configKeywords.some((keyword) => context.includes(keyword))
    }

    /**
     * Creates a HardcodedValue violation from a string node
     */
    private createViolation(
        node: Parser.SyntaxNode,
        value: string,
        lines: string[],
    ): HardcodedValue {
        const lineNumber = node.startPosition.row + 1
        const column = node.startPosition.column
        const context = lines[node.startPosition.row]?.trim() ?? ""

        const detectedType = this.patternMatcher.detectType(value)
        const valueType =
            detectedType ||
            (this.hasConfigurationContext(node)
                ? DETECTION_VALUES.TYPE_CONFIG
                : DETECTION_VALUES.TYPE_GENERIC)

        return HardcodedValue.create(
            value,
            HARDCODE_TYPES.MAGIC_STRING as HardcodeType,
            lineNumber,
            column,
            context,
            valueType,
        )
    }
}
@@ -0,0 +1,21 @@
/**
 * Checks if a file is a constants definition file
 *
 * Identifies files that should be skipped for hardcode detection
 * since they are meant to contain constant definitions.
 */
export class ConstantsFileChecker {
    private readonly constantsPatterns = [
        /^constants?\.(ts|js)$/i,
        /constants?\/.*\.(ts|js)$/i,
        /\/(constants|config|settings|defaults|tokens)\.ts$/i,
        /\/di\/tokens\.(ts|js)$/i,
    ]

    /**
     * Checks if a file path represents a constants file
     */
    public isConstantsFile(filePath: string): boolean {
        return this.constantsPatterns.some((pattern) => pattern.test(filePath))
    }
}
@@ -0,0 +1,72 @@
import { DDD_FOLDER_NAMES } from "../constants/detectorPatterns"

/**
 * Registry for DDD folder names used in aggregate boundary detection
 *
 * Centralizes folder name management for cleaner code organization
 * and easier maintenance of folder name rules.
 */
export class FolderRegistry {
    public readonly entityFolders: Set<string>
    public readonly valueObjectFolders: Set<string>
    public readonly allowedFolders: Set<string>
    public readonly nonAggregateFolders: Set<string>

    constructor() {
        this.entityFolders = new Set<string>([
            DDD_FOLDER_NAMES.ENTITIES,
            DDD_FOLDER_NAMES.AGGREGATES,
        ])

        this.valueObjectFolders = new Set<string>([
            DDD_FOLDER_NAMES.VALUE_OBJECTS,
            DDD_FOLDER_NAMES.VO,
        ])

        this.allowedFolders = new Set<string>([
            DDD_FOLDER_NAMES.VALUE_OBJECTS,
            DDD_FOLDER_NAMES.VO,
            DDD_FOLDER_NAMES.EVENTS,
            DDD_FOLDER_NAMES.DOMAIN_EVENTS,
            DDD_FOLDER_NAMES.REPOSITORIES,
            DDD_FOLDER_NAMES.SERVICES,
            DDD_FOLDER_NAMES.SPECIFICATIONS,
            DDD_FOLDER_NAMES.ERRORS,
            DDD_FOLDER_NAMES.EXCEPTIONS,
        ])

        this.nonAggregateFolders = new Set<string>([
            DDD_FOLDER_NAMES.VALUE_OBJECTS,
            DDD_FOLDER_NAMES.VO,
            DDD_FOLDER_NAMES.EVENTS,
            DDD_FOLDER_NAMES.DOMAIN_EVENTS,
            DDD_FOLDER_NAMES.REPOSITORIES,
            DDD_FOLDER_NAMES.SERVICES,
            DDD_FOLDER_NAMES.SPECIFICATIONS,
            DDD_FOLDER_NAMES.ENTITIES,
            DDD_FOLDER_NAMES.CONSTANTS,
            DDD_FOLDER_NAMES.SHARED,
            DDD_FOLDER_NAMES.FACTORIES,
            DDD_FOLDER_NAMES.PORTS,
            DDD_FOLDER_NAMES.INTERFACES,
            DDD_FOLDER_NAMES.ERRORS,
            DDD_FOLDER_NAMES.EXCEPTIONS,
        ])
    }

    public isEntityFolder(folderName: string): boolean {
        return this.entityFolders.has(folderName)
    }

    public isValueObjectFolder(folderName: string): boolean {
        return this.valueObjectFolders.has(folderName)
    }

    public isAllowedFolder(folderName: string): boolean {
        return this.allowedFolders.has(folderName)
    }

    public isNonAggregateFolder(folderName: string): boolean {
        return this.nonAggregateFolders.has(folderName)
    }
}
@@ -0,0 +1,150 @@
import { IMPORT_PATTERNS } from "../constants/paths"
import { AggregatePathAnalyzer } from "./AggregatePathAnalyzer"
import { FolderRegistry } from "./FolderRegistry"

/**
 * Validates imports for aggregate boundary violations
 *
 * Checks if imports cross aggregate boundaries inappropriately
 * and ensures proper encapsulation in DDD architecture.
 */
export class ImportValidator {
    constructor(
        private readonly folderRegistry: FolderRegistry,
        private readonly pathAnalyzer: AggregatePathAnalyzer,
    ) {}

    /**
     * Checks if an import violates aggregate boundaries
     */
    public isViolation(importPath: string, currentAggregate: string): boolean {
        const normalizedPath = this.normalizeImportPath(importPath)

        if (!this.isValidImportPath(normalizedPath)) {
            return false
        }

        if (this.isInternalBoundedContextImport(normalizedPath)) {
            return false
        }

        const targetAggregate = this.pathAnalyzer.extractAggregateFromImport(normalizedPath)
        if (!targetAggregate || targetAggregate === currentAggregate) {
            return false
        }

        if (this.isAllowedImport(normalizedPath)) {
            return false
        }

        return this.seemsLikeEntityImport(normalizedPath)
    }

    /**
     * Extracts all import paths from a line of code
     */
    public extractImports(line: string): string[] {
        const imports: string[] = []

        this.extractEsImports(line, imports)
        this.extractRequireImports(line, imports)

        return imports
    }

    /**
     * Normalizes an import path for consistent processing
     */
    private normalizeImportPath(importPath: string): string {
        return importPath.replace(IMPORT_PATTERNS.QUOTE, "").toLowerCase()
    }

    /**
     * Checks if import path is valid for analysis
     */
    private isValidImportPath(normalizedPath: string): boolean {
        if (!normalizedPath.includes("/")) {
            return false
        }

        if (!normalizedPath.startsWith(".") && !normalizedPath.startsWith("/")) {
            return false
        }

        return true
    }

    /**
     * Checks if import is internal to the same bounded context
     */
    private isInternalBoundedContextImport(normalizedPath: string): boolean {
        const parts = normalizedPath.split("/")
        const dotDotCount = parts.filter((p) => p === "..").length

        if (dotDotCount === 1) {
            const nonDotParts = parts.filter((p) => p !== ".." && p !== ".")
            if (nonDotParts.length >= 1) {
                const firstFolder = nonDotParts[0]
                if (this.folderRegistry.isEntityFolder(firstFolder)) {
                    return true
                }
            }
        }

        return false
    }

    /**
     * Checks if import is from an allowed folder
     */
    private isAllowedImport(normalizedPath: string): boolean {
        for (const folderName of this.folderRegistry.allowedFolders) {
            if (normalizedPath.includes(`/${folderName}/`)) {
                return true
            }
        }
        return false
    }

    /**
     * Checks if import seems to be an entity
     */
    private seemsLikeEntityImport(normalizedPath: string): boolean {
        const pathParts = normalizedPath.split("/")
        const lastPart = pathParts[pathParts.length - 1]

        if (!lastPart) {
            return false
        }

        const filename = lastPart.replace(/\.(ts|js)$/, "")

        if (filename.length > 0 && /^[a-z][a-z]/.exec(filename)) {
            return true
        }

        return false
    }

    /**
     * Extracts ES6 imports from a line
     */
    private extractEsImports(line: string, imports: string[]): void {
        let match = IMPORT_PATTERNS.ES_IMPORT.exec(line)
        while (match) {
            imports.push(match[1])
            match = IMPORT_PATTERNS.ES_IMPORT.exec(line)
        }
    }

    /**
     * Extracts CommonJS requires from a line
     */
    private extractRequireImports(line: string, imports: string[]): void {
        let match = IMPORT_PATTERNS.REQUIRE.exec(line)
        while (match) {
            imports.push(match[1])
            match = IMPORT_PATTERNS.REQUIRE.exec(line)
        }
    }
}
@@ -0,0 +1,134 @@
import { REPOSITORY_METHOD_SUGGESTIONS } from "../constants/detectorPatterns"
import { OrmTypeMatcher } from "./OrmTypeMatcher"

/**
 * Validates repository method names for domain language compliance
 *
 * Ensures repository methods use domain language instead of
 * technical database terminology.
 */
export class MethodNameValidator {
    private readonly domainMethodPatterns = [
        /^findBy[A-Z]/,
        /^findAll$/,
        /^find[A-Z]/,
        /^save$/,
        /^saveAll$/,
        /^create$/,
        /^update$/,
        /^delete$/,
        /^deleteBy[A-Z]/,
        /^deleteAll$/,
        /^remove$/,
        /^removeBy[A-Z]/,
        /^removeAll$/,
        /^add$/,
        /^add[A-Z]/,
        /^get[A-Z]/,
        /^getAll$/,
        /^search/,
        /^list/,
        /^has[A-Z]/,
        /^is[A-Z]/,
        /^exists$/,
        /^exists[A-Z]/,
        /^existsBy[A-Z]/,
        /^clear[A-Z]/,
        /^clearAll$/,
        /^store[A-Z]/,
        /^initialize$/,
        /^initializeCollection$/,
        /^close$/,
        /^connect$/,
        /^disconnect$/,
        /^count$/,
        /^countBy[A-Z]/,
    ]

    constructor(private readonly ormMatcher: OrmTypeMatcher) {}

    /**
     * Checks if a method name follows domain language conventions
     */
    public isDomainMethodName(methodName: string): boolean {
        if (this.ormMatcher.isTechnicalMethod(methodName)) {
            return false
        }

        return this.domainMethodPatterns.some((pattern) => pattern.test(methodName))
    }

    /**
     * Suggests better domain method names
     */
    public suggestDomainMethodName(methodName: string): string {
        const lowerName = methodName.toLowerCase()
        const suggestions: string[] = []

        this.collectSuggestions(lowerName, suggestions)

        if (lowerName.includes("get") && lowerName.includes("all")) {
            suggestions.push(
                REPOSITORY_METHOD_SUGGESTIONS.FIND_ALL,
                REPOSITORY_METHOD_SUGGESTIONS.LIST_ALL,
            )
        }

        if (suggestions.length === 0) {
            return REPOSITORY_METHOD_SUGGESTIONS.DEFAULT_SUGGESTION
        }

        return `Consider: ${suggestions.slice(0, 3).join(", ")}`
    }

    /**
     * Collects method name suggestions based on keywords
     */
    private collectSuggestions(lowerName: string, suggestions: string[]): void {
        const suggestionMap: Record<string, string[]> = {
            query: [
                REPOSITORY_METHOD_SUGGESTIONS.SEARCH,
                REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
            ],
            select: [
                REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
                REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
            ],
            insert: [
                REPOSITORY_METHOD_SUGGESTIONS.CREATE,
                REPOSITORY_METHOD_SUGGESTIONS.ADD_ENTITY,
                REPOSITORY_METHOD_SUGGESTIONS.STORE_ENTITY,
            ],
            update: [
                REPOSITORY_METHOD_SUGGESTIONS.UPDATE,
                REPOSITORY_METHOD_SUGGESTIONS.MODIFY_ENTITY,
            ],
            upsert: [
                REPOSITORY_METHOD_SUGGESTIONS.SAVE,
                REPOSITORY_METHOD_SUGGESTIONS.STORE_ENTITY,
            ],
            remove: [
                REPOSITORY_METHOD_SUGGESTIONS.DELETE,
                REPOSITORY_METHOD_SUGGESTIONS.REMOVE_BY_PROPERTY,
            ],
            fetch: [
                REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
                REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
            ],
            retrieve: [
                REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
                REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
            ],
            load: [
                REPOSITORY_METHOD_SUGGESTIONS.FIND_BY_PROPERTY,
                REPOSITORY_METHOD_SUGGESTIONS.GET_ENTITY,
            ],
        }

        for (const [keyword, mappedSuggestions] of Object.entries(suggestionMap)) {
            if (lowerName.includes(keyword)) {
                suggestions.push(...mappedSuggestions)
            }
        }
    }
}
@@ -0,0 +1,68 @@
import { ORM_QUERY_METHODS } from "../constants/orm-methods"
import { REPOSITORY_PATTERN_MESSAGES } from "../../domain/constants/Messages"

/**
 * Matches and validates ORM-specific types and patterns
 *
 * Identifies ORM-specific types (Prisma, TypeORM, Mongoose, etc.)
 * that should not appear in domain layer repository interfaces.
 */
export class OrmTypeMatcher {
    private readonly ormTypePatterns = [
        /Prisma\./,
        /PrismaClient/,
        /TypeORM/,
        /@Entity/,
        /@Column/,
        /@PrimaryColumn/,
        /@PrimaryGeneratedColumn/,
        /@ManyToOne/,
        /@OneToMany/,
        /@ManyToMany/,
        /@JoinColumn/,
        /@JoinTable/,
        /Mongoose\./,
        /Schema/,
        /Model</,
        /Document/,
        /Sequelize\./,
        /DataTypes\./,
        /FindOptions/,
        /WhereOptions/,
        /IncludeOptions/,
        /QueryInterface/,
        /MikroORM/,
        /EntityManager/,
        /EntityRepository/,
        /Collection</,
    ]

    /**
     * Checks if a type name is an ORM-specific type
     */
    public isOrmType(typeName: string): boolean {
        return this.ormTypePatterns.some((pattern) => pattern.test(typeName))
    }

    /**
     * Extracts ORM type name from a code line
     */
    public extractOrmType(line: string): string {
        for (const pattern of this.ormTypePatterns) {
            const match = line.match(pattern)
            if (match) {
                const startIdx = match.index ?? 0
                const typeMatch = /[\w.]+/.exec(line.slice(startIdx))
                return typeMatch ? typeMatch[0] : REPOSITORY_PATTERN_MESSAGES.UNKNOWN_TYPE
            }
        }
        return REPOSITORY_PATTERN_MESSAGES.UNKNOWN_TYPE
    }

    /**
     * Checks if a method name is a technical ORM method
     */
    public isTechnicalMethod(methodName: string): boolean {
        return (ORM_QUERY_METHODS as readonly string[]).includes(methodName)
    }
}
@@ -0,0 +1,31 @@
import { LAYERS } from "../../shared/constants/rules"

/**
 * Analyzes files to determine their role in the repository pattern
 *
 * Identifies repository interfaces and use cases based on file paths
 * and architectural layer conventions.
 */
export class RepositoryFileAnalyzer {
    /**
     * Checks if a file is a repository interface
     */
    public isRepositoryInterface(filePath: string, layer: string | undefined): boolean {
        if (layer !== LAYERS.DOMAIN) {
            return false
        }

        return /I[A-Z]\w*Repository\.ts$/.test(filePath) && /repositories?\//.test(filePath)
    }

    /**
     * Checks if a file is a use case
     */
    public isUseCase(filePath: string, layer: string | undefined): boolean {
        if (layer !== LAYERS.APPLICATION) {
            return false
        }

        return /use-cases?\//.test(filePath) && /[A-Z][a-z]+[A-Z]\w*\.ts$/.test(filePath)
    }
}
@@ -0,0 +1,285 @@
import { RepositoryViolation } from "../../domain/value-objects/RepositoryViolation"
import { LAYERS, REPOSITORY_VIOLATION_TYPES } from "../../shared/constants/rules"
import { REPOSITORY_PATTERN_MESSAGES } from "../../domain/constants/Messages"
import { OrmTypeMatcher } from "./OrmTypeMatcher"
import { MethodNameValidator } from "./MethodNameValidator"

/**
 * Detects specific repository pattern violations
 *
 * Handles detection of ORM types, non-domain methods, concrete repositories,
 * and repository instantiation violations.
 */
export class RepositoryViolationDetector {
    constructor(
        private readonly ormMatcher: OrmTypeMatcher,
        private readonly methodValidator: MethodNameValidator,
    ) {}

    /**
     * Detects ORM types in repository interface
     */
    public detectOrmTypes(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): RepositoryViolation[] {
        const violations: RepositoryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            this.detectOrmInMethod(line, lineNumber, filePath, layer, violations)
            this.detectOrmInLine(line, lineNumber, filePath, layer, violations)
        }

        return violations
    }

    /**
     * Detects non-domain method names
     */
    public detectNonDomainMethods(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): RepositoryViolation[] {
        const violations: RepositoryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            const methodMatch = /^\s*(\w+)\s*\(/.exec(line)

            if (methodMatch) {
                const methodName = methodMatch[1]

                if (
                    !this.methodValidator.isDomainMethodName(methodName) &&
                    !line.trim().startsWith("//")
                ) {
                    const suggestion = this.methodValidator.suggestDomainMethodName(methodName)
                    violations.push(
                        RepositoryViolation.create(
                            REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                            filePath,
                            layer || LAYERS.DOMAIN,
                            lineNumber,
                            `Method '${methodName}' uses technical name instead of domain language. ${suggestion}`,
                            undefined,
                            undefined,
                            methodName,
                        ),
                    )
                }
            }
        }

        return violations
    }

    /**
     * Detects concrete repository usage
     */
    public detectConcreteRepositoryUsage(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): RepositoryViolation[] {
        const violations: RepositoryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            this.detectConcreteInConstructor(line, lineNumber, filePath, layer, violations)
            this.detectConcreteInField(line, lineNumber, filePath, layer, violations)
        }

        return violations
    }

    /**
     * Detects new Repository() instantiation
     */
    public detectNewInstantiation(
        code: string,
        filePath: string,
        layer: string | undefined,
    ): RepositoryViolation[] {
        const violations: RepositoryViolation[] = []
        const lines = code.split("\n")

        for (let i = 0; i < lines.length; i++) {
            const line = lines[i]
            const lineNumber = i + 1

            const newRepositoryMatch = /new\s+([A-Z]\w*Repository)\s*\(/.exec(line)

            if (newRepositoryMatch && !line.trim().startsWith("//")) {
                const repositoryName = newRepositoryMatch[1]
                violations.push(
                    RepositoryViolation.create(
                        REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
                        filePath,
                        layer || LAYERS.APPLICATION,
                        lineNumber,
                        `Use case creates repository with 'new ${repositoryName}()'`,
                        undefined,
                        repositoryName,
                    ),
                )
            }
        }

        return violations
    }

    /**
     * Detects ORM types in method signatures
     */
    private detectOrmInMethod(
        line: string,
        lineNumber: number,
        filePath: string,
        layer: string | undefined,
        violations: RepositoryViolation[],
    ): void {
        const methodMatch =
            /(\w+)\s*\([^)]*:\s*([^)]+)\)\s*:\s*.*?(?:Promise<([^>]+)>|([A-Z]\w+))/.exec(line)

        if (methodMatch) {
            const params = methodMatch[2]
            const returnType = methodMatch[3] || methodMatch[4]

            if (this.ormMatcher.isOrmType(params)) {
                const ormType = this.ormMatcher.extractOrmType(params)
                violations.push(
                    RepositoryViolation.create(
                        REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                        filePath,
                        layer || LAYERS.DOMAIN,
                        lineNumber,
                        `Method parameter uses ORM type: ${ormType}`,
                        ormType,
                    ),
                )
            }

            if (returnType && this.ormMatcher.isOrmType(returnType)) {
                const ormType = this.ormMatcher.extractOrmType(returnType)
                violations.push(
                    RepositoryViolation.create(
                        REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                        filePath,
                        layer || LAYERS.DOMAIN,
                        lineNumber,
                        `Method return type uses ORM type: ${ormType}`,
                        ormType,
                    ),
                )
            }
        }
    }

    /**
     * Detects ORM types in general code line
     */
    private detectOrmInLine(
        line: string,
        lineNumber: number,
        filePath: string,
        layer: string | undefined,
        violations: RepositoryViolation[],
    ): void {
        if (this.ormMatcher.isOrmType(line) && !line.trim().startsWith("//")) {
            const ormType = this.ormMatcher.extractOrmType(line)
            violations.push(
                RepositoryViolation.create(
                    REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                    filePath,
                    layer || LAYERS.DOMAIN,
                    lineNumber,
                    `Repository interface contains ORM-specific type: ${ormType}`,
                    ormType,
                ),
            )
        }
    }

    /**
     * Detects concrete repository in constructor
     */
    private detectConcreteInConstructor(
        line: string,
        lineNumber: number,
        filePath: string,
        layer: string | undefined,
        violations: RepositoryViolation[],
    ): void {
        const constructorParamMatch =
            /constructor\s*\([^)]*(?:private|public|protected)\s+(?:readonly\s+)?(\w+)\s*:\s*([A-Z]\w*Repository)/.exec(
                line,
            )

        if (constructorParamMatch) {
            const repositoryType = constructorParamMatch[2]

            if (!repositoryType.startsWith("I")) {
                violations.push(
                    RepositoryViolation.create(
                        REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                        filePath,
                        layer || LAYERS.APPLICATION,
                        lineNumber,
                        `Use case depends on concrete repository '${repositoryType}'`,
                        undefined,
                        repositoryType,
                    ),
                )
            }
        }
    }

    /**
     * Detects concrete repository in field
     */
    private detectConcreteInField(
        line: string,
        lineNumber: number,
        filePath: string,
        layer: string | undefined,
        violations: RepositoryViolation[],
    ): void {
        const fieldMatch =
            /(?:private|public|protected)\s+(?:readonly\s+)?(\w+)\s*:\s*([A-Z]\w*Repository)/.exec(
                line,
            )

        if (fieldMatch) {
            const repositoryType = fieldMatch[2]

            if (
                !repositoryType.startsWith("I") &&
                !line.includes(REPOSITORY_PATTERN_MESSAGES.CONSTRUCTOR)
            ) {
                violations.push(
                    RepositoryViolation.create(
                        REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                        filePath,
                        layer || LAYERS.APPLICATION,
                        lineNumber,
                        `Use case field uses concrete repository '${repositoryType}'`,
                        undefined,
                        repositoryType,
                    ),
                )
            }
        }
    }
}
@@ -0,0 +1,191 @@
|
||||
/**
|
||||
* Pattern matcher for detecting specific value types
|
||||
*
|
||||
* Provides pattern matching for emails, IPs, paths, dates, UUIDs, versions, and other common hardcoded values
|
||||
*/
|
||||
export class ValuePatternMatcher {
|
||||
private static readonly EMAIL_PATTERN = /^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/
|
||||
private static readonly IP_V4_PATTERN = /^(\d{1,3}\.){3}\d{1,3}$/
|
||||
private static readonly IP_V6_PATTERN =
|
||||
/^([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}$|^::([0-9a-fA-F]{1,4}:){0,6}[0-9a-fA-F]{1,4}$/
|
||||
private static readonly DATE_ISO_PATTERN = /^\d{4}-\d{2}-\d{2}$/
|
||||
private static readonly URL_PATTERN = /^https?:\/\/|^mongodb:\/\/|^postgresql:\/\//
|
||||
private static readonly UNIX_PATH_PATTERN = /^\/[a-zA-Z0-9/_-]+/
|
||||
private static readonly WINDOWS_PATH_PATTERN = /^[a-zA-Z]:\\[a-zA-Z0-9\\/_-]+/
|
||||
private static readonly API_KEY_PATTERN = /^(sk_|pk_|api_|key_)[a-zA-Z0-9_-]{20,}$/
|
||||
private static readonly UUID_PATTERN =
|
||||
/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i
|
||||
private static readonly SEMVER_PATTERN = /^\d+\.\d+\.\d+(-[\w.-]+)?(\+[\w.-]+)?$/
|
||||
private static readonly HEX_COLOR_PATTERN = /^#([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$/
|
||||
private static readonly MAC_ADDRESS_PATTERN = /^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$/
|
||||
private static readonly BASE64_PATTERN =
|
||||
/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/
|
||||
private static readonly JWT_PATTERN = /^eyJ[A-Za-z0-9-_]+\.eyJ[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+$/
|
||||
|
||||
/**
|
||||
* Checks if value is an email address
|
||||
*/
|
||||
public isEmail(value: string): boolean {
|
||||
return ValuePatternMatcher.EMAIL_PATTERN.test(value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is an IP address (v4 or v6)
|
||||
*/
|
||||
public isIpAddress(value: string): boolean {
|
||||
return (
|
||||
ValuePatternMatcher.IP_V4_PATTERN.test(value) ||
|
||||
ValuePatternMatcher.IP_V6_PATTERN.test(value)
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is a date in ISO format
|
||||
*/
|
||||
public isDate(value: string): boolean {
|
||||
return ValuePatternMatcher.DATE_ISO_PATTERN.test(value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is a URL
|
||||
*/
|
||||
public isUrl(value: string): boolean {
|
||||
return ValuePatternMatcher.URL_PATTERN.test(value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is a file path (Unix or Windows)
|
||||
*/
|
||||
public isFilePath(value: string): boolean {
|
||||
return (
|
||||
ValuePatternMatcher.UNIX_PATH_PATTERN.test(value) ||
|
||||
ValuePatternMatcher.WINDOWS_PATH_PATTERN.test(value)
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value looks like an API key
|
||||
*/
|
||||
public isApiKey(value: string): boolean {
|
||||
return ValuePatternMatcher.API_KEY_PATTERN.test(value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is a UUID
|
||||
*/
|
||||
public isUuid(value: string): boolean {
|
||||
return ValuePatternMatcher.UUID_PATTERN.test(value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is a semantic version
|
||||
*/
|
||||
public isSemver(value: string): boolean {
|
||||
return ValuePatternMatcher.SEMVER_PATTERN.test(value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if value is a hex color
|
||||
*/
|
||||
public isHexColor(value: string): boolean {
    return ValuePatternMatcher.HEX_COLOR_PATTERN.test(value)
}

/**
 * Checks if value is a MAC address
 */
public isMacAddress(value: string): boolean {
    return ValuePatternMatcher.MAC_ADDRESS_PATTERN.test(value)
}

/**
 * Checks if value is Base64 encoded (min length 20 to avoid false positives)
 */
public isBase64(value: string): boolean {
    return value.length >= 20 && ValuePatternMatcher.BASE64_PATTERN.test(value)
}

/**
 * Checks if value is a JWT token
 */
public isJwt(value: string): boolean {
    return ValuePatternMatcher.JWT_PATTERN.test(value)
}

/**
 * Detects the type of value
 */
public detectType(
    value: string,
):
    | "email"
    | "url"
    | "ip_address"
    | "file_path"
    | "date"
    | "api_key"
    | "uuid"
    | "version"
    | "color"
    | "mac_address"
    | "base64"
    | null {
    if (this.isEmail(value)) {
        return "email"
    }
    if (this.isJwt(value)) {
        return "api_key"
    }
    if (this.isApiKey(value)) {
        return "api_key"
    }
    if (this.isUrl(value)) {
        return "url"
    }
    if (this.isIpAddress(value)) {
        return "ip_address"
    }
    if (this.isFilePath(value)) {
        return "file_path"
    }
    if (this.isDate(value)) {
        return "date"
    }
    if (this.isUuid(value)) {
        return "uuid"
    }
    if (this.isSemver(value)) {
        return "version"
    }
    if (this.isHexColor(value)) {
        return "color"
    }
    if (this.isMacAddress(value)) {
        return "mac_address"
    }
    if (this.isBase64(value)) {
        return "base64"
    }
    return null
}

/**
 * Checks if value should be detected as hardcoded
 */
public shouldDetect(value: string): boolean {
    return (
        this.isEmail(value) ||
        this.isUrl(value) ||
        this.isIpAddress(value) ||
        this.isFilePath(value) ||
        this.isDate(value) ||
        this.isApiKey(value) ||
        this.isUuid(value) ||
        this.isSemver(value) ||
        this.isHexColor(value) ||
        this.isMacAddress(value) ||
        this.isBase64(value) ||
        this.isJwt(value)
    )
}
}
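The detection chain above reduces to ordered pattern checks: more specific patterns (JWT) are tried before broader ones (Base64) so they are not shadowed. A minimal, self-contained sketch of that ordering idea — the regexes and names here are simplified assumptions, not Guardian's actual patterns:

```typescript
// Patterns tried most-specific-first, mirroring how detectType checks
// isJwt before isBase64 so a JWT is reported as "api_key".
// These regexes are illustrative assumptions, not Guardian's own.
const PATTERNS: Array<[string, RegExp]> = [
    ["email", /^[^\s@]+@[^\s@]+\.[^\s@]+$/],
    ["api_key", /^eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+$/], // JWT shape
    ["uuid", /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i],
    ["color", /^#(?:[0-9a-f]{3}|[0-9a-f]{6})$/i],
    ["base64", /^[A-Za-z0-9+\/]{20,}={0,2}$/], // min length guards against false positives
]

function detectType(value: string): string | null {
    for (const [type, pattern] of PATTERNS) {
        if (pattern.test(value)) {
            return type
        }
    }
    return null
}

console.log(detectType("admin@example.com")) // "email"
console.log(detectType("#ff00aa")) // "color"
console.log(detectType("not a match")) // null
```

Because the list is ordered, reordering it changes results for ambiguous inputs; that is why the real `detectType` checks `isJwt` before `isApiKey` and `isBase64`.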
@@ -45,6 +45,25 @@ export const TYPE_NAMES = {
    OBJECT: "object",
} as const

/**
 * TypeScript class and method keywords
 */
export const CLASS_KEYWORDS = {
    CONSTRUCTOR: "constructor",
    PUBLIC: "public",
    PRIVATE: "private",
    PROTECTED: "protected",
} as const

/**
 * Example code constants for documentation
 */
export const EXAMPLE_CODE_CONSTANTS = {
    ORDER_STATUS_PENDING: "pending",
    ORDER_STATUS_APPROVED: "approved",
    CANNOT_APPROVE_ERROR: "Cannot approve",
} as const

/**
 * Common regex patterns
 */
@@ -86,12 +105,14 @@ export const SEVERITY_ORDER: Record<SeverityLevel, number> = {
 * Violation type to severity mapping
 */
export const VIOLATION_SEVERITY_MAP = {
    SECRET_EXPOSURE: SEVERITY_LEVELS.CRITICAL,
    CIRCULAR_DEPENDENCY: SEVERITY_LEVELS.CRITICAL,
    REPOSITORY_PATTERN: SEVERITY_LEVELS.CRITICAL,
    AGGREGATE_BOUNDARY: SEVERITY_LEVELS.CRITICAL,
    DEPENDENCY_DIRECTION: SEVERITY_LEVELS.HIGH,
    FRAMEWORK_LEAK: SEVERITY_LEVELS.HIGH,
    ENTITY_EXPOSURE: SEVERITY_LEVELS.HIGH,
    ANEMIC_MODEL: SEVERITY_LEVELS.MEDIUM,
    NAMING_CONVENTION: SEVERITY_LEVELS.MEDIUM,
    ARCHITECTURE: SEVERITY_LEVELS.MEDIUM,
    HARDCODE: SEVERITY_LEVELS.LOW,

@@ -11,6 +11,8 @@ export const RULES = {
    DEPENDENCY_DIRECTION: "dependency-direction",
    REPOSITORY_PATTERN: "repository-pattern",
    AGGREGATE_BOUNDARY: "aggregate-boundary",
    SECRET_EXPOSURE: "secret-exposure",
    ANEMIC_MODEL: "anemic-model",
} as const

/**
@@ -19,6 +21,7 @@ export const RULES = {
export const HARDCODE_TYPES = {
    MAGIC_NUMBER: "magic-number",
    MAGIC_STRING: "magic-string",
    MAGIC_BOOLEAN: "magic-boolean",
    MAGIC_CONFIG: "magic-config",
} as const

@@ -102,32 +105,35 @@ export const NAMING_PATTERNS = {
 * Common verbs for use cases
 */
export const USE_CASE_VERBS = [
    "Aggregate",
    "Analyze",
    "Create",
    "Update",
    "Delete",
    "Get",
    "Find",
    "List",
    "Search",
    "Validate",
    "Calculate",
    "Generate",
    "Send",
    "Fetch",
    "Process",
    "Execute",
    "Handle",
    "Register",
    "Approve",
    "Authenticate",
    "Authorize",
    "Import",
    "Export",
    "Place",
    "Calculate",
    "Cancel",
    "Approve",
    "Reject",
    "Collect",
    "Confirm",
    "Create",
    "Delete",
    "Execute",
    "Export",
    "Fetch",
    "Find",
    "Generate",
    "Get",
    "Handle",
    "Import",
    "List",
    "Parse",
    "Place",
    "Process",
    "Register",
    "Reject",
    "Search",
    "Send",
    "Update",
    "Validate",
] as const

/**
@@ -411,3 +417,83 @@ export const REPOSITORY_VIOLATION_TYPES = {
    NEW_REPOSITORY_IN_USE_CASE: "new-repository-in-use-case",
    NON_DOMAIN_METHOD_NAME: "non-domain-method-name",
} as const

/**
 * Detection patterns for sensitive keywords
 */
export const DETECTION_PATTERNS = {
    SENSITIVE_KEYWORDS: ["password", "secret", "token", "auth", "credential"],
    BUSINESS_KEYWORDS: ["price", "salary", "balance", "amount", "limit", "threshold", "quota"],
    TECHNICAL_KEYWORDS: [
        "timeout",
        "retry",
        "attempt",
        "maxretries",
        "database",
        "connection",
        "host",
        "port",
        "endpoint",
    ],
    MEDIUM_KEYWORDS: ["delay", "interval", "duration", "size", "count", "max", "min"],
    UI_KEYWORDS: [
        "padding",
        "margin",
        "width",
        "height",
        "color",
        "style",
        "label",
        "title",
        "placeholder",
        "icon",
        "text",
        "display",
    ],
} as const

/**
 * Configuration detection keywords
 */
export const CONFIG_KEYWORDS = {
    NETWORK: ["endpoint", "host", "domain", "path", "route"],
    DATABASE: ["connection", "database"],
    SECURITY: ["config", "secret", "token", "password", "credential"],
    MESSAGES: ["message", "error", "warning", "text"],
} as const

/**
 * Detection comparison values
 */
export const DETECTION_VALUES = {
    BOOLEAN_TRUE: "true",
    BOOLEAN_FALSE: "false",
    TYPE_CONFIG: "config",
    TYPE_GENERIC: "generic",
} as const

/**
 * Boolean constants for analyzers
 */
export const ANALYZER_DEFAULTS = {
    HAS_ONLY_GETTERS_SETTERS: false,
    HAS_PUBLIC_SETTERS: false,
    HAS_BUSINESS_LOGIC: false,
} as const

/**
 * Anemic model detection flags
 */
export const ANEMIC_MODEL_FLAGS = {
    HAS_ONLY_GETTERS_SETTERS_TRUE: true,
    HAS_ONLY_GETTERS_SETTERS_FALSE: false,
    HAS_PUBLIC_SETTERS_TRUE: true,
    HAS_PUBLIC_SETTERS_FALSE: false,
} as const

/**
 * External package constants
 */
export const EXTERNAL_PACKAGES = {
    SECRETLINT_PRESET: "@secretlint/secretlint-rule-preset-recommend",
} as const
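A severity map like `VIOLATION_SEVERITY_MAP` typically pairs with `SEVERITY_ORDER` to drive filtering, e.g. a `--min-severity` CLI option. A hedged sketch of that combination — the names and values below are assumptions chosen to mirror the constants above, not the package's exact exports:

```typescript
// Lower rank = more severe, matching the SEVERITY_ORDER idea above.
type Severity = "critical" | "high" | "medium" | "low"

const SEVERITY_ORDER: Record<Severity, number> = {
    critical: 0,
    high: 1,
    medium: 2,
    low: 3,
}

// Illustrative subset of the violation-to-severity mapping.
const VIOLATION_SEVERITY: Record<string, Severity> = {
    SECRET_EXPOSURE: "critical",
    FRAMEWORK_LEAK: "high",
    ANEMIC_MODEL: "medium",
    HARDCODE: "low",
}

// A violation passes the filter when it is at least as severe as `min`.
function meetsMinSeverity(violation: string, min: Severity): boolean {
    return SEVERITY_ORDER[VIOLATION_SEVERITY[violation]] <= SEVERITY_ORDER[min]
}

console.log(meetsMinSeverity("SECRET_EXPOSURE", "high")) // true
console.log(meetsMinSeverity("HARDCODE", "high")) // false
```

Keeping the ordering in one table means a new violation type only needs a single map entry to participate in filtering.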
372	packages/guardian/tests/AnemicModelDetector.test.ts	Normal file
@@ -0,0 +1,372 @@
import { describe, it, expect, beforeEach } from "vitest"
import { AnemicModelDetector } from "../src/infrastructure/analyzers/AnemicModelDetector"

describe("AnemicModelDetector", () => {
    let detector: AnemicModelDetector

    beforeEach(() => {
        detector = new AnemicModelDetector()
    })

    describe("detectAnemicModels", () => {
        it("should detect class with only getters and setters", () => {
            const code = `
                class Order {
                    private status: string
                    private total: number

                    getStatus(): string {
                        return this.status
                    }

                    setStatus(status: string): void {
                        this.status = status
                    }

                    getTotal(): number {
                        return this.total
                    }

                    setTotal(total: number): void {
                        this.total = total
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Order.ts",
                "domain",
            )

            expect(violations).toHaveLength(1)
            expect(violations[0].className).toBe("Order")
            expect(violations[0].methodCount).toBeGreaterThan(0)
            expect(violations[0].propertyCount).toBeGreaterThan(0)
            expect(violations[0].getMessage()).toContain("Order")
        })

        it("should detect class with public setters", () => {
            const code = `
                class User {
                    private email: string
                    private password: string

                    public setEmail(email: string): void {
                        this.email = email
                    }

                    public getEmail(): string {
                        return this.email
                    }

                    public setPassword(password: string): void {
                        this.password = password
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/User.ts",
                "domain",
            )

            expect(violations).toHaveLength(1)
            expect(violations[0].className).toBe("User")
            expect(violations[0].hasPublicSetters).toBe(true)
        })

        it("should not detect rich domain model with business logic", () => {
            const code = `
                class Order {
                    private readonly id: string
                    private status: OrderStatus
                    private items: OrderItem[]

                    public approve(): void {
                        if (!this.canBeApproved()) {
                            throw new Error("Cannot approve")
                        }
                        this.status = OrderStatus.APPROVED
                    }

                    public reject(reason: string): void {
                        if (!this.canBeRejected()) {
                            throw new Error("Cannot reject")
                        }
                        this.status = OrderStatus.REJECTED
                    }

                    public addItem(item: OrderItem): void {
                        if (this.isApproved()) {
                            throw new Error("Cannot modify approved order")
                        }
                        this.items.push(item)
                    }

                    public calculateTotal(): Money {
                        return this.items.reduce((sum, item) => sum.add(item.getPrice()), Money.zero())
                    }

                    public getStatus(): OrderStatus {
                        return this.status
                    }

                    private canBeApproved(): boolean {
                        return this.status === OrderStatus.PENDING
                    }

                    private canBeRejected(): boolean {
                        return this.status === OrderStatus.PENDING
                    }

                    private isApproved(): boolean {
                        return this.status === OrderStatus.APPROVED
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Order.ts",
                "domain",
            )

            expect(violations).toHaveLength(0)
        })

        it("should not analyze files outside domain layer", () => {
            const code = `
                class OrderDto {
                    getStatus(): string {
                        return this.status
                    }

                    setStatus(status: string): void {
                        this.status = status
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/application/dtos/OrderDto.ts",
                "application",
            )

            expect(violations).toHaveLength(0)
        })

        it("should not analyze DTO files", () => {
            const code = `
                class UserDto {
                    private email: string

                    getEmail(): string {
                        return this.email
                    }

                    setEmail(email: string): void {
                        this.email = email
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/dtos/UserDto.ts",
                "domain",
            )

            expect(violations).toHaveLength(0)
        })

        it("should not analyze test files", () => {
            const code = `
                class Order {
                    getStatus(): string {
                        return this.status
                    }

                    setStatus(status: string): void {
                        this.status = status
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Order.test.ts",
                "domain",
            )

            expect(violations).toHaveLength(0)
        })

        it("should detect anemic model in entities folder", () => {
            const code = `
                class Product {
                    private name: string
                    private price: number

                    getName(): string {
                        return this.name
                    }

                    setName(name: string): void {
                        this.name = name
                    }

                    getPrice(): number {
                        return this.price
                    }

                    setPrice(price: number): void {
                        this.price = price
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Product.ts",
                "domain",
            )

            expect(violations).toHaveLength(1)
            expect(violations[0].className).toBe("Product")
        })

        it("should detect anemic model in aggregates folder", () => {
            const code = `
                class Customer {
                    private email: string

                    getEmail(): string {
                        return this.email
                    }

                    setEmail(email: string): void {
                        this.email = email
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/aggregates/customer/Customer.ts",
                "domain",
            )

            expect(violations).toHaveLength(1)
            expect(violations[0].className).toBe("Customer")
        })

        it("should not detect class with good method-to-property ratio", () => {
            const code = `
                class Account {
                    private balance: number
                    private isActive: boolean

                    public deposit(amount: number): void {
                        if (amount <= 0) throw new Error("Invalid amount")
                        this.balance += amount
                    }

                    public withdraw(amount: number): void {
                        if (amount > this.balance) throw new Error("Insufficient funds")
                        this.balance -= amount
                    }

                    public activate(): void {
                        this.isActive = true
                    }

                    public deactivate(): void {
                        this.isActive = false
                    }

                    public getBalance(): number {
                        return this.balance
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Account.ts",
                "domain",
            )

            expect(violations).toHaveLength(0)
        })

        it("should handle class with no properties or methods", () => {
            const code = `
                class EmptyEntity {
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/EmptyEntity.ts",
                "domain",
            )

            expect(violations).toHaveLength(0)
        })

        it("should detect multiple anemic classes in one file", () => {
            const code = `
                class Order {
                    getStatus() { return this.status }
                    setStatus(status: string) { this.status = status }
                }

                class Item {
                    getPrice() { return this.price }
                    setPrice(price: number) { this.price = price }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Models.ts",
                "domain",
            )

            expect(violations).toHaveLength(2)
            expect(violations[0].className).toBe("Order")
            expect(violations[1].className).toBe("Item")
        })

        it("should provide correct violation details", () => {
            const code = `
                class Payment {
                    private amount: number
                    private currency: string

                    getAmount(): number {
                        return this.amount
                    }

                    setAmount(amount: number): void {
                        this.amount = amount
                    }

                    getCurrency(): string {
                        return this.currency
                    }

                    setCurrency(currency: string): void {
                        this.currency = currency
                    }
                }
            `
            const violations = detector.detectAnemicModels(
                code,
                "src/domain/entities/Payment.ts",
                "domain",
            )

            expect(violations).toHaveLength(1)
            const violation = violations[0]
            expect(violation.className).toBe("Payment")
            expect(violation.filePath).toBe("src/domain/entities/Payment.ts")
            expect(violation.layer).toBe("domain")
            expect(violation.line).toBeGreaterThan(0)
            expect(violation.getMessage()).toContain("Payment")
            expect(violation.getSuggestion()).toContain("business")
        })
    })
})
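The test cases above pin down the heuristic's core: a domain class whose public surface is nothing but getters and setters is flagged, while a class with real behavior (`approve`, `deposit`) passes. A minimal sketch of that accessor-ratio check — an assumption for illustration, not the detector's actual implementation:

```typescript
// Hypothetical sketch of the anemic-model heuristic the tests exercise:
// flag a class when every method is a get*/set* accessor.
function looksAnemic(methodNames: string[]): boolean {
    if (methodNames.length === 0) {
        return false // nothing to judge, like the EmptyEntity test case
    }
    const accessors = methodNames.filter((name) => /^(get|set)[A-Z]/.test(name))
    return accessors.length === methodNames.length
}

console.log(looksAnemic(["getStatus", "setStatus", "getTotal", "setTotal"])) // true
console.log(looksAnemic(["approve", "reject", "addItem", "getStatus"])) // false
```

The real detector also weighs properties, public setters, and path-based exclusions (DTOs, test files), but the accessor ratio is the signal the `toHaveLength(1)` assertions depend on.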
285	packages/guardian/tests/e2e/AnalyzeProject.e2e.test.ts	Normal file
@@ -0,0 +1,285 @@
import { describe, it, expect } from "vitest"
import { analyzeProject } from "../../src/api"
import path from "path"

describe("AnalyzeProject E2E", () => {
    const EXAMPLES_DIR = path.join(__dirname, "../../examples")

    describe("Full Pipeline", () => {
        it("should analyze project and return complete results", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result).toBeDefined()
            expect(result.metrics).toBeDefined()
            expect(result.metrics.totalFiles).toBeGreaterThan(0)
            expect(result.metrics.totalFunctions).toBeGreaterThanOrEqual(0)
            expect(result.metrics.totalImports).toBeGreaterThanOrEqual(0)
            expect(result.dependencyGraph).toBeDefined()

            expect(Array.isArray(result.hardcodeViolations)).toBe(true)
            expect(Array.isArray(result.violations)).toBe(true)
            expect(Array.isArray(result.circularDependencyViolations)).toBe(true)
            expect(Array.isArray(result.namingViolations)).toBe(true)
            expect(Array.isArray(result.frameworkLeakViolations)).toBe(true)
            expect(Array.isArray(result.entityExposureViolations)).toBe(true)
            expect(Array.isArray(result.dependencyDirectionViolations)).toBe(true)
            expect(Array.isArray(result.repositoryPatternViolations)).toBe(true)
            expect(Array.isArray(result.aggregateBoundaryViolations)).toBe(true)
            expect(Array.isArray(result.anemicModelViolations)).toBe(true)
        })

        it("should respect exclude patterns", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({
                rootDir,
                exclude: ["**/dtos/**", "**/mappers/**"],
            })

            expect(result.metrics.totalFiles).toBeGreaterThan(0)

            const allFiles = [
                ...result.hardcodeViolations.map((v) => v.file),
                ...result.violations.map((v) => v.file),
                ...result.namingViolations.map((v) => v.file),
            ]

            allFiles.forEach((file) => {
                expect(file).not.toContain("/dtos/")
                expect(file).not.toContain("/mappers/")
            })
        })

        it("should detect violations across all detectors", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const result = await analyzeProject({ rootDir })

            const totalViolations =
                result.hardcodeViolations.length +
                result.violations.length +
                result.circularDependencyViolations.length +
                result.namingViolations.length +
                result.frameworkLeakViolations.length +
                result.entityExposureViolations.length +
                result.dependencyDirectionViolations.length +
                result.repositoryPatternViolations.length +
                result.aggregateBoundaryViolations.length +
                result.anemicModelViolations.length

            expect(totalViolations).toBeGreaterThan(0)
        })
    })

    describe("Good Architecture Examples", () => {
        it("should find zero violations in good-architecture/", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result.violations.length).toBe(0)
            expect(result.frameworkLeakViolations.length).toBe(0)
            expect(result.entityExposureViolations.length).toBe(0)
            expect(result.dependencyDirectionViolations.length).toBe(0)
            expect(result.circularDependencyViolations.length).toBe(0)
            expect(result.anemicModelViolations.length).toBe(0)
        })

        it("should have no dependency direction violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture/dependency-direction")

            const result = await analyzeProject({ rootDir })

            const goodFiles = result.dependencyDirectionViolations.filter((v) =>
                v.file.includes("Good"),
            )

            expect(goodFiles.length).toBe(0)
        })

        it("should have no entity exposure in good controller", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture/entity-exposure")

            const result = await analyzeProject({ rootDir })

            expect(result.entityExposureViolations.length).toBe(0)
        })
    })

    describe("Bad Architecture Examples", () => {
        it("should detect hardcoded values in bad examples", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/hardcoded")

            const result = await analyzeProject({ rootDir })

            expect(result.hardcodeViolations.length).toBeGreaterThan(0)

            const magicNumbers = result.hardcodeViolations.filter((v) => v.type === "magic-number")
            expect(magicNumbers.length).toBeGreaterThan(0)
        })

        it("should detect circular dependencies", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/circular")

            const result = await analyzeProject({ rootDir })

            if (result.circularDependencyViolations.length > 0) {
                const violation = result.circularDependencyViolations[0]
                expect(violation.cycle).toBeDefined()
                expect(violation.cycle.length).toBeGreaterThanOrEqual(2)
                expect(violation.severity).toBe("critical")
            }
        })

        it("should detect framework leaks in domain", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/framework-leaks")

            const result = await analyzeProject({ rootDir })

            if (result.frameworkLeakViolations.length > 0) {
                const violation = result.frameworkLeakViolations[0]
                expect(violation.packageName).toBeDefined()
                expect(violation.severity).toBe("high")
            }
        })

        it("should detect naming convention violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/naming")

            const result = await analyzeProject({ rootDir })

            if (result.namingViolations.length > 0) {
                const violation = result.namingViolations[0]
                expect(violation.expected).toBeDefined()
                expect(violation.severity).toBe("medium")
            }
        })

        it("should detect entity exposure violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/entity-exposure")

            const result = await analyzeProject({ rootDir })

            if (result.entityExposureViolations.length > 0) {
                const violation = result.entityExposureViolations[0]
                expect(violation.entityName).toBeDefined()
                expect(violation.severity).toBe("high")
            }
        })

        it("should detect dependency direction violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/dependency-direction")

            const result = await analyzeProject({ rootDir })

            if (result.dependencyDirectionViolations.length > 0) {
                const violation = result.dependencyDirectionViolations[0]
                expect(violation.fromLayer).toBeDefined()
                expect(violation.toLayer).toBeDefined()
                expect(violation.severity).toBe("high")
            }
        })

        it("should detect repository pattern violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "repository-pattern")

            const result = await analyzeProject({ rootDir })

            const badViolations = result.repositoryPatternViolations.filter((v) =>
                v.file.includes("bad"),
            )

            if (badViolations.length > 0) {
                const violation = badViolations[0]
                expect(violation.violationType).toBeDefined()
                expect(violation.severity).toBe("critical")
            }
        })

        it("should detect aggregate boundary violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "aggregate-boundary/bad")

            const result = await analyzeProject({ rootDir })

            if (result.aggregateBoundaryViolations.length > 0) {
                const violation = result.aggregateBoundaryViolations[0]
                expect(violation.fromAggregate).toBeDefined()
                expect(violation.toAggregate).toBeDefined()
                expect(violation.severity).toBe("critical")
            }
        })
    })

    describe("Metrics", () => {
        it("should provide accurate file counts", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result.metrics.totalFiles).toBeGreaterThan(0)
            expect(result.metrics.totalFunctions).toBeGreaterThanOrEqual(0)
            expect(result.metrics.totalImports).toBeGreaterThanOrEqual(0)
        })

        it("should track layer distribution", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result.metrics.layerDistribution).toBeDefined()
            expect(typeof result.metrics.layerDistribution).toBe("object")
        })

        it("should calculate correct metrics for bad architecture", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result.metrics.totalFiles).toBeGreaterThan(0)
            expect(result.metrics.totalFunctions).toBeGreaterThanOrEqual(0)
            expect(result.metrics.totalImports).toBeGreaterThanOrEqual(0)
        })
    })

    describe("Dependency Graph", () => {
        it("should build dependency graph for analyzed files", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result.dependencyGraph).toBeDefined()
            expect(result.files).toBeDefined()
            expect(Array.isArray(result.files)).toBe(true)
        })

        it("should track file metadata", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            if (result.files.length > 0) {
                const file = result.files[0]
                expect(file).toHaveProperty("path")
            }
        })
    })

    describe("Error Handling", () => {
        it("should handle non-existent directory", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "non-existent-directory")

            await expect(analyzeProject({ rootDir })).rejects.toThrow()
        })

        it("should handle empty directory gracefully", async () => {
            const rootDir = path.join(__dirname, "../../dist")

            const result = await analyzeProject({ rootDir })

            expect(result).toBeDefined()
            expect(result.metrics.totalFiles).toBeGreaterThanOrEqual(0)
        })
    })
})
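The "violations across all detectors" test sums every detector's list into one total. A tiny self-contained sketch of that aggregation — the result shape here is an assumption mirroring a subset of the fields the test asserts on:

```typescript
// Illustrative subset of the analyzeProject result shape used above.
interface AnalysisResultSubset {
    hardcodeViolations: unknown[]
    circularDependencyViolations: unknown[]
    anemicModelViolations: unknown[]
}

// Total issue count is simply the sum of each detector's list length,
// exactly as the e2e test computes totalViolations.
function countViolations(result: AnalysisResultSubset): number {
    return (
        result.hardcodeViolations.length +
        result.circularDependencyViolations.length +
        result.anemicModelViolations.length
    )
}

const sample: AnalysisResultSubset = {
    hardcodeViolations: ["v1", "v2"],
    circularDependencyViolations: [],
    anemicModelViolations: ["v3"],
}
console.log(countViolations(sample)) // 3
```

Because each detector reports into its own array, a single empty array never masks findings from the others.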
278	packages/guardian/tests/e2e/CLI.e2e.test.ts	Normal file
@@ -0,0 +1,278 @@
import { describe, it, expect, beforeAll } from "vitest"
import { spawn } from "child_process"
import path from "path"
import { promisify } from "util"
import { exec } from "child_process"

const execAsync = promisify(exec)

describe("CLI E2E", () => {
    const CLI_PATH = path.join(__dirname, "../../bin/guardian.js")
    const EXAMPLES_DIR = path.join(__dirname, "../../examples")

    beforeAll(async () => {
        await execAsync("pnpm build", {
            cwd: path.join(__dirname, "../../"),
        })
    })

    const runCLI = async (
        args: string,
    ): Promise<{ stdout: string; stderr: string; exitCode: number }> => {
        try {
            const { stdout, stderr } = await execAsync(`node ${CLI_PATH} ${args}`)
            return { stdout, stderr, exitCode: 0 }
        } catch (error: unknown) {
            const err = error as { stdout?: string; stderr?: string; code?: number }
            return {
                stdout: err.stdout || "",
                stderr: err.stderr || "",
                exitCode: err.code || 1,
            }
        }
    }

    describe("Smoke Tests", () => {
        it("should display version", async () => {
            const { stdout } = await execAsync(`node ${CLI_PATH} --version`)

            expect(stdout).toMatch(/\d+\.\d+\.\d+/)
        })

        it("should display help", async () => {
            const { stdout } = await execAsync(`node ${CLI_PATH} --help`)

            expect(stdout).toContain("Usage:")
            expect(stdout).toContain("check")
            expect(stdout).toContain("Options:")
        })

        it("should run check command successfully", async () => {
            const goodArchDir = path.join(EXAMPLES_DIR, "good-architecture")

            const { stdout } = await runCLI(`check ${goodArchDir}`)

            expect(stdout).toContain("Analyzing")
        }, 30000)
    })

    describe("Output Format", () => {
        it("should display violation counts", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir}`)

            expect(stdout).toContain("Analyzing")
            const hasViolationCount = stdout.includes("Found") || stdout.includes("issue")
            expect(hasViolationCount).toBe(true)
        }, 30000)

        it("should display file paths with violations", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture/hardcoded")

            const { stdout } = await runCLI(`check ${badArchDir}`)

            expect(stdout).toMatch(/\.ts/)
        }, 30000)

        it("should display severity levels", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir}`)

            const hasSeverity =
                stdout.includes("🔴") ||
                stdout.includes("🟠") ||
                stdout.includes("🟡") ||
                stdout.includes("🟢") ||
                stdout.includes("CRITICAL") ||
                stdout.includes("HIGH") ||
                stdout.includes("MEDIUM") ||
                stdout.includes("LOW")

            expect(hasSeverity).toBe(true)
        }, 30000)
    })

    describe("CLI Options", () => {
        it("should respect --limit option", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir} --limit 5`)

            expect(stdout).toContain("Analyzing")
        }, 30000)

        it("should respect --only-critical option", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir} --only-critical`)

            expect(stdout).toContain("Analyzing")

            if (stdout.includes("🔴") || stdout.includes("CRITICAL")) {
                const hasNonCritical =
                    stdout.includes("🟠") ||
                    stdout.includes("🟡") ||
                    stdout.includes("🟢") ||
                    (stdout.includes("HIGH") && !stdout.includes("CRITICAL")) ||
                    stdout.includes("MEDIUM") ||
                    stdout.includes("LOW")

                expect(hasNonCritical).toBe(false)
            }
        }, 30000)

        it("should respect --min-severity option", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir} --min-severity high`)

            expect(stdout).toContain("Analyzing")
        }, 30000)

        it("should respect --exclude option", async () => {
            const goodArchDir = path.join(EXAMPLES_DIR, "good-architecture")

            const { stdout } = await runCLI(`check ${goodArchDir} --exclude "**/dtos/**"`)

            expect(stdout).not.toContain("/dtos/")
        }, 30000)

        it("should respect --no-hardcode option", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir} --no-hardcode`)

            expect(stdout).not.toContain("Magic Number")
            expect(stdout).not.toContain("Magic String")
        }, 30000)

        it("should respect --no-architecture option", async () => {
            const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const { stdout } = await runCLI(`check ${badArchDir} --no-architecture`)

            expect(stdout).not.toContain("Architecture Violation")
        }, 30000)
    })

    describe("Good Architecture Examples", () => {
        it("should show success message for clean code", async () => {
            const goodArchDir = path.join(EXAMPLES_DIR, "good-architecture")

            const { stdout } = await runCLI(`check ${goodArchDir}`)

            expect(stdout).toContain("Analyzing")
        }, 30000)
    })

    describe("Bad Architecture Examples", () => {
        it("should detect and report hardcoded values", async () => {
            const hardcodedDir = path.join(EXAMPLES_DIR, "bad-architecture/hardcoded")

            const { stdout } = await runCLI(`check ${hardcodedDir}`)

            expect(stdout).toContain("ServerWithMagicNumbers.ts")
        }, 30000)

        it("should detect and report circular dependencies", async () => {
            const circularDir = path.join(EXAMPLES_DIR, "bad-architecture/circular")

            const { stdout } = await runCLI(`check ${circularDir}`)

            expect(stdout).toContain("Analyzing")
        }, 30000)

        it("should detect and report framework leaks", async () => {
            const frameworkDir = path.join(EXAMPLES_DIR, "bad-architecture/framework-leaks")

            const { stdout } = await runCLI(`check ${frameworkDir}`)

            expect(stdout).toContain("Analyzing")
        }, 30000)

        it("should detect and report naming violations", async () => {
            const namingDir = path.join(EXAMPLES_DIR, "bad-architecture/naming")

            const { stdout } = await runCLI(`check ${namingDir}`)

            expect(stdout).toContain("Analyzing")
        }, 30000)
|
||||
})
|
||||
|
||||
describe("Error Handling", () => {
|
||||
it("should show error for non-existent path", async () => {
|
||||
const nonExistentPath = path.join(EXAMPLES_DIR, "non-existent-directory")
|
||||
|
||||
try {
|
||||
await execAsync(`node ${CLI_PATH} check ${nonExistentPath}`)
|
||||
expect.fail("Should have thrown an error")
|
||||
} catch (error: unknown) {
|
||||
const err = error as { stderr: string }
|
||||
expect(err.stderr).toBeTruthy()
|
||||
}
|
||||
}, 30000)
|
||||
})
|
||||
|
||||
describe("Exit Codes", () => {
|
||||
it("should run for clean code", async () => {
|
||||
const goodArchDir = path.join(EXAMPLES_DIR, "good-architecture")
|
||||
|
||||
const { stdout, exitCode } = await runCLI(`check ${goodArchDir}`)
|
||||
|
||||
expect(stdout).toContain("Analyzing")
|
||||
expect(exitCode).toBeGreaterThanOrEqual(0)
|
||||
}, 30000)
|
||||
|
||||
it("should handle violations gracefully", async () => {
|
||||
const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")
|
||||
|
||||
const { stdout, exitCode } = await runCLI(`check ${badArchDir}`)
|
||||
|
||||
expect(stdout).toContain("Analyzing")
|
||||
expect(exitCode).toBeGreaterThanOrEqual(0)
|
||||
}, 30000)
|
||||
})
|
||||
|
||||
describe("Spawn Process Tests", () => {
|
||||
it("should spawn CLI process and capture output", (done) => {
|
||||
const goodArchDir = path.join(EXAMPLES_DIR, "good-architecture")
|
||||
const child = spawn("node", [CLI_PATH, "check", goodArchDir])
|
||||
|
||||
let stdout = ""
|
||||
let stderr = ""
|
||||
|
||||
child.stdout.on("data", (data) => {
|
||||
stdout += data.toString()
|
||||
})
|
||||
|
||||
child.stderr.on("data", (data) => {
|
||||
stderr += data.toString()
|
||||
})
|
||||
|
||||
child.on("close", (code) => {
|
||||
expect(code).toBe(0)
|
||||
expect(stdout).toContain("Analyzing")
|
||||
done()
|
||||
})
|
||||
}, 30000)
|
||||
|
||||
it("should handle large output without buffering issues", (done) => {
|
||||
const badArchDir = path.join(EXAMPLES_DIR, "bad-architecture")
|
||||
const child = spawn("node", [CLI_PATH, "check", badArchDir])
|
||||
|
||||
let stdout = ""
|
||||
|
||||
child.stdout.on("data", (data) => {
|
||||
stdout += data.toString()
|
||||
})
|
||||
|
||||
child.on("close", (code) => {
|
||||
expect(code).toBe(0)
|
||||
expect(stdout.length).toBeGreaterThan(0)
|
||||
done()
|
||||
})
|
||||
}, 30000)
|
||||
})
|
||||
})
|
||||
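The tests above call helpers that are defined earlier in the suite and not shown in this excerpt (`runCLI`, `execAsync`, `CLI_PATH`, `EXAMPLES_DIR`). As a hedged sketch only, not the suite's actual implementation: a `runCLI`-style helper typically wraps Node's `child_process.exec`, capturing output and the exit code without throwing on non-zero exits, so assertions can inspect the result directly. The `runCommand` name and error-shape handling below are assumptions.

```typescript
import { exec } from "node:child_process"
import { promisify } from "node:util"

const execAsync = promisify(exec)

// Hypothetical helper mirroring what the tests call `runCLI`: run a command,
// capture stdout/stderr, and map a non-zero exit into a returned exitCode
// instead of a thrown error.
async function runCommand(
    command: string,
): Promise<{ stdout: string; stderr: string; exitCode: number }> {
    try {
        const { stdout, stderr } = await execAsync(command)
        return { stdout, stderr, exitCode: 0 }
    } catch (error) {
        // exec rejects on non-zero exit; the error carries the captured streams.
        const err = error as { stdout?: string; stderr?: string; code?: number }
        return { stdout: err.stdout ?? "", stderr: err.stderr ?? "", exitCode: err.code ?? 1 }
    }
}
```

In the real suite, `runCLI(args)` presumably delegates to something like `runCommand(`node ${CLI_PATH} ${args}`)`, which is why the tests can assert on `exitCode` even when the CLI reports violations.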
412
packages/guardian/tests/e2e/JSONOutput.e2e.test.ts
Normal file
@@ -0,0 +1,412 @@
import { describe, it, expect } from "vitest"
import { analyzeProject } from "../../src/api"
import path from "path"
import type {
    AnalyzeProjectResponse,
    HardcodeViolation,
    CircularDependencyViolation,
    NamingConventionViolation,
    FrameworkLeakViolation,
    EntityExposureViolation,
    DependencyDirectionViolation,
    RepositoryPatternViolation,
    AggregateBoundaryViolation,
} from "../../src/api"

describe("JSON Output Format E2E", () => {
    const EXAMPLES_DIR = path.join(__dirname, "../../examples")

    describe("Response Structure", () => {
        it("should return valid JSON structure", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(result).toBeDefined()
            expect(typeof result).toBe("object")

            const json = JSON.stringify(result)
            expect(() => JSON.parse(json)).not.toThrow()
        })

        it("should include all required top-level fields", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result: AnalyzeProjectResponse = await analyzeProject({ rootDir })

            expect(result).toHaveProperty("hardcodeViolations")
            expect(result).toHaveProperty("violations")
            expect(result).toHaveProperty("circularDependencyViolations")
            expect(result).toHaveProperty("namingViolations")
            expect(result).toHaveProperty("frameworkLeakViolations")
            expect(result).toHaveProperty("entityExposureViolations")
            expect(result).toHaveProperty("dependencyDirectionViolations")
            expect(result).toHaveProperty("repositoryPatternViolations")
            expect(result).toHaveProperty("aggregateBoundaryViolations")
            expect(result).toHaveProperty("metrics")
            expect(result).toHaveProperty("dependencyGraph")
        })

        it("should have correct types for all fields", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            expect(Array.isArray(result.hardcodeViolations)).toBe(true)
            expect(Array.isArray(result.violations)).toBe(true)
            expect(Array.isArray(result.circularDependencyViolations)).toBe(true)
            expect(Array.isArray(result.namingViolations)).toBe(true)
            expect(Array.isArray(result.frameworkLeakViolations)).toBe(true)
            expect(Array.isArray(result.entityExposureViolations)).toBe(true)
            expect(Array.isArray(result.dependencyDirectionViolations)).toBe(true)
            expect(Array.isArray(result.repositoryPatternViolations)).toBe(true)
            expect(Array.isArray(result.aggregateBoundaryViolations)).toBe(true)
            expect(typeof result.metrics).toBe("object")
            expect(typeof result.dependencyGraph).toBe("object")
        })
    })

    describe("Metrics Structure", () => {
        it("should include all metric fields", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })
            const { metrics } = result

            expect(metrics).toHaveProperty("totalFiles")
            expect(metrics).toHaveProperty("totalFunctions")
            expect(metrics).toHaveProperty("totalImports")
            expect(metrics).toHaveProperty("layerDistribution")

            expect(typeof metrics.totalFiles).toBe("number")
            expect(typeof metrics.totalFunctions).toBe("number")
            expect(typeof metrics.totalImports).toBe("number")
            expect(typeof metrics.layerDistribution).toBe("object")
        })

        it("should have non-negative metric values", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })
            const { metrics } = result

            expect(metrics.totalFiles).toBeGreaterThanOrEqual(0)
            expect(metrics.totalFunctions).toBeGreaterThanOrEqual(0)
            expect(metrics.totalImports).toBeGreaterThanOrEqual(0)
        })
    })

    describe("Hardcode Violation Structure", () => {
        it("should have correct structure for hardcode violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/hardcoded")

            const result = await analyzeProject({ rootDir })

            if (result.hardcodeViolations.length > 0) {
                const violation: HardcodeViolation = result.hardcodeViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("line")
                expect(violation).toHaveProperty("column")
                expect(violation).toHaveProperty("type")
                expect(violation).toHaveProperty("value")
                expect(violation).toHaveProperty("context")
                expect(violation).toHaveProperty("suggestion")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.line).toBe("number")
                expect(typeof violation.column).toBe("number")
                expect(typeof violation.type).toBe("string")
                expect(typeof violation.context).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Circular Dependency Violation Structure", () => {
        it("should have correct structure for circular dependency violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/circular")

            const result = await analyzeProject({ rootDir })

            if (result.circularDependencyViolations.length > 0) {
                const violation: CircularDependencyViolation =
                    result.circularDependencyViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("cycle")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(Array.isArray(violation.cycle)).toBe(true)
                expect(violation.cycle.length).toBeGreaterThanOrEqual(2)
                expect(typeof violation.severity).toBe("string")
                expect(violation.severity).toBe("critical")
            }
        })
    })

    describe("Naming Convention Violation Structure", () => {
        it("should have correct structure for naming violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/naming")

            const result = await analyzeProject({ rootDir })

            if (result.namingViolations.length > 0) {
                const violation: NamingConventionViolation = result.namingViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("fileName")
                expect(violation).toHaveProperty("expected")
                expect(violation).toHaveProperty("actual")
                expect(violation).toHaveProperty("layer")
                expect(violation).toHaveProperty("message")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.fileName).toBe("string")
                expect(typeof violation.expected).toBe("string")
                expect(typeof violation.actual).toBe("string")
                expect(typeof violation.layer).toBe("string")
                expect(typeof violation.message).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Framework Leak Violation Structure", () => {
        it("should have correct structure for framework leak violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/framework-leaks")

            const result = await analyzeProject({ rootDir })

            if (result.frameworkLeakViolations.length > 0) {
                const violation: FrameworkLeakViolation = result.frameworkLeakViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("packageName")
                expect(violation).toHaveProperty("category")
                expect(violation).toHaveProperty("categoryDescription")
                expect(violation).toHaveProperty("layer")
                expect(violation).toHaveProperty("message")
                expect(violation).toHaveProperty("suggestion")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.packageName).toBe("string")
                expect(typeof violation.category).toBe("string")
                expect(typeof violation.categoryDescription).toBe("string")
                expect(typeof violation.layer).toBe("string")
                expect(typeof violation.message).toBe("string")
                expect(typeof violation.suggestion).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Entity Exposure Violation Structure", () => {
        it("should have correct structure for entity exposure violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/entity-exposure")

            const result = await analyzeProject({ rootDir })

            if (result.entityExposureViolations.length > 0) {
                const violation: EntityExposureViolation = result.entityExposureViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("entityName")
                expect(violation).toHaveProperty("returnType")
                expect(violation).toHaveProperty("suggestion")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.entityName).toBe("string")
                expect(typeof violation.returnType).toBe("string")
                expect(typeof violation.suggestion).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Dependency Direction Violation Structure", () => {
        it("should have correct structure for dependency direction violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture/dependency-direction")

            const result = await analyzeProject({ rootDir })

            if (result.dependencyDirectionViolations.length > 0) {
                const violation: DependencyDirectionViolation =
                    result.dependencyDirectionViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("fromLayer")
                expect(violation).toHaveProperty("toLayer")
                expect(violation).toHaveProperty("importPath")
                expect(violation).toHaveProperty("suggestion")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.fromLayer).toBe("string")
                expect(typeof violation.toLayer).toBe("string")
                expect(typeof violation.importPath).toBe("string")
                expect(typeof violation.suggestion).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Repository Pattern Violation Structure", () => {
        it("should have correct structure for repository pattern violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "repository-pattern")

            const result = await analyzeProject({ rootDir })

            const badViolations = result.repositoryPatternViolations.filter((v) =>
                v.file.includes("bad"),
            )

            if (badViolations.length > 0) {
                const violation: RepositoryPatternViolation = badViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("line")
                expect(violation).toHaveProperty("violationType")
                expect(violation).toHaveProperty("details")
                expect(violation).toHaveProperty("suggestion")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.line).toBe("number")
                expect(typeof violation.violationType).toBe("string")
                expect(typeof violation.details).toBe("string")
                expect(typeof violation.suggestion).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Aggregate Boundary Violation Structure", () => {
        it("should have correct structure for aggregate boundary violations", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "aggregate-boundary/bad")

            const result = await analyzeProject({ rootDir })

            if (result.aggregateBoundaryViolations.length > 0) {
                const violation: AggregateBoundaryViolation = result.aggregateBoundaryViolations[0]

                expect(violation).toHaveProperty("file")
                expect(violation).toHaveProperty("fromAggregate")
                expect(violation).toHaveProperty("toAggregate")
                expect(violation).toHaveProperty("entityName")
                expect(violation).toHaveProperty("importPath")
                expect(violation).toHaveProperty("suggestion")
                expect(violation).toHaveProperty("severity")

                expect(typeof violation.file).toBe("string")
                expect(typeof violation.fromAggregate).toBe("string")
                expect(typeof violation.toAggregate).toBe("string")
                expect(typeof violation.entityName).toBe("string")
                expect(typeof violation.importPath).toBe("string")
                expect(typeof violation.suggestion).toBe("string")
                expect(typeof violation.severity).toBe("string")
            }
        })
    })

    describe("Dependency Graph Structure", () => {
        it("should have dependency graph object", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })
            const { dependencyGraph } = result

            expect(dependencyGraph).toBeDefined()
            expect(typeof dependencyGraph).toBe("object")
        })

        it("should have getAllNodes method on dependency graph", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })
            const { dependencyGraph } = result

            expect(typeof dependencyGraph.getAllNodes).toBe("function")
            const nodes = dependencyGraph.getAllNodes()
            expect(Array.isArray(nodes)).toBe(true)
        })
    })

    describe("JSON Serialization", () => {
        it("should serialize metrics without data loss", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            const json = JSON.stringify(result.metrics)
            const parsed = JSON.parse(json)

            expect(parsed.totalFiles).toBe(result.metrics.totalFiles)
            expect(parsed.totalFunctions).toBe(result.metrics.totalFunctions)
            expect(parsed.totalImports).toBe(result.metrics.totalImports)
        })

        it("should serialize violations without data loss", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "good-architecture")

            const result = await analyzeProject({ rootDir })

            const json = JSON.stringify({
                hardcodeViolations: result.hardcodeViolations,
                violations: result.violations,
            })
            const parsed = JSON.parse(json)

            expect(Array.isArray(parsed.violations)).toBe(true)
            expect(Array.isArray(parsed.hardcodeViolations)).toBe(true)
        })

        it("should serialize violation arrays for large results", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const result = await analyzeProject({ rootDir })

            const json = JSON.stringify({
                hardcodeViolations: result.hardcodeViolations,
                violations: result.violations,
                namingViolations: result.namingViolations,
            })

            expect(json.length).toBeGreaterThan(0)
            expect(() => JSON.parse(json)).not.toThrow()
        })
    })

    describe("Severity Levels", () => {
        it("should only contain valid severity levels", async () => {
            const rootDir = path.join(EXAMPLES_DIR, "bad-architecture")

            const result = await analyzeProject({ rootDir })

            const validSeverities = ["critical", "high", "medium", "low"]

            const allViolations = [
                ...result.hardcodeViolations,
                ...result.violations,
                ...result.circularDependencyViolations,
                ...result.namingViolations,
                ...result.frameworkLeakViolations,
                ...result.entityExposureViolations,
                ...result.dependencyDirectionViolations,
                ...result.repositoryPatternViolations,
                ...result.aggregateBoundaryViolations,
            ]

            allViolations.forEach((violation) => {
                if ("severity" in violation) {
                    expect(validSeverities).toContain(violation.severity)
                }
            })
        })
    })
})
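The "Severity Levels" test above checks that every violation's `severity` is one of `critical`, `high`, `medium`, or `low`. A CI consumer parsing Guardian's JSON output could apply the same check with a type guard. The sketch below is an illustration, not Guardian's API; the `isSeverity` and `countBySeverity` names are hypothetical.

```typescript
// Minimal sketch: validate and tally the `severity` field on violations
// parsed from JSON, using the four levels the test suite accepts.
type Severity = "critical" | "high" | "medium" | "low"

const VALID_SEVERITIES: readonly Severity[] = ["critical", "high", "medium", "low"]

function isSeverity(value: unknown): value is Severity {
    return typeof value === "string" && (VALID_SEVERITIES as readonly string[]).includes(value)
}

function countBySeverity(violations: { severity?: unknown }[]): Record<Severity, number> {
    const counts: Record<Severity, number> = { critical: 0, high: 0, medium: 0, low: 0 }
    for (const v of violations) {
        if (isSeverity(v.severity)) {
            counts[v.severity] += 1
        }
        // Unknown or missing severities are skipped, mirroring the test's
        // `if ("severity" in violation)` guard.
    }
    return counts
}
```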
358
packages/guardian/tests/unit/domain/EntityExposure.test.ts
Normal file
@@ -0,0 +1,358 @@
import { describe, it, expect } from "vitest"
import { EntityExposure } from "../../../src/domain/value-objects/EntityExposure"

describe("EntityExposure", () => {
    describe("create", () => {
        it("should create entity exposure with all properties", () => {
            const exposure = EntityExposure.create(
                "User",
                "User",
                "src/controllers/UserController.ts",
                "infrastructure",
                25,
                "getUser",
            )

            expect(exposure.entityName).toBe("User")
            expect(exposure.returnType).toBe("User")
            expect(exposure.filePath).toBe("src/controllers/UserController.ts")
            expect(exposure.layer).toBe("infrastructure")
            expect(exposure.line).toBe(25)
            expect(exposure.methodName).toBe("getUser")
        })

        it("should create entity exposure without optional properties", () => {
            const exposure = EntityExposure.create(
                "Order",
                "Order",
                "src/controllers/OrderController.ts",
                "infrastructure",
            )

            expect(exposure.entityName).toBe("Order")
            expect(exposure.line).toBeUndefined()
            expect(exposure.methodName).toBeUndefined()
        })

        it("should create entity exposure with line but without method name", () => {
            const exposure = EntityExposure.create(
                "Product",
                "Product",
                "src/api/ProductApi.ts",
                "infrastructure",
                15,
            )

            expect(exposure.line).toBe(15)
            expect(exposure.methodName).toBeUndefined()
        })
    })

    describe("getMessage", () => {
        it("should return message with method name", () => {
            const exposure = EntityExposure.create(
                "User",
                "User",
                "src/controllers/UserController.ts",
                "infrastructure",
                25,
                "getUser",
            )

            const message = exposure.getMessage()

            expect(message).toContain("Method 'getUser'")
            expect(message).toContain("returns domain entity 'User'")
            expect(message).toContain("instead of DTO")
        })

        it("should return message without method name", () => {
            const exposure = EntityExposure.create(
                "Order",
                "Order",
                "src/controllers/OrderController.ts",
                "infrastructure",
                30,
            )

            const message = exposure.getMessage()

            expect(message).toContain("returns domain entity 'Order'")
            expect(message).toContain("instead of DTO")
            expect(message).not.toContain("undefined")
        })

        it("should handle different entity names", () => {
            const exposures = [
                EntityExposure.create(
                    "Customer",
                    "Customer",
                    "file.ts",
                    "infrastructure",
                    1,
                    "getCustomer",
                ),
                EntityExposure.create(
                    "Invoice",
                    "Invoice",
                    "file.ts",
                    "infrastructure",
                    2,
                    "findInvoice",
                ),
                EntityExposure.create(
                    "Payment",
                    "Payment",
                    "file.ts",
                    "infrastructure",
                    3,
                    "processPayment",
                ),
            ]

            exposures.forEach((exposure) => {
                const message = exposure.getMessage()
                expect(message).toContain(exposure.entityName)
                expect(message).toContain("instead of DTO")
            })
        })
    })

    describe("getSuggestion", () => {
        it("should return multi-line suggestion", () => {
            const exposure = EntityExposure.create(
                "User",
                "User",
                "src/controllers/UserController.ts",
                "infrastructure",
                25,
                "getUser",
            )

            const suggestion = exposure.getSuggestion()

            expect(suggestion).toContain("Create a DTO class")
            expect(suggestion).toContain("UserResponseDto")
            expect(suggestion).toContain("Create a mapper")
            expect(suggestion).toContain("Update the method")
        })

        it("should suggest appropriate DTO name based on entity", () => {
            const exposure = EntityExposure.create(
                "Order",
                "Order",
                "src/controllers/OrderController.ts",
                "infrastructure",
            )

            const suggestion = exposure.getSuggestion()

            expect(suggestion).toContain("OrderResponseDto")
            expect(suggestion).toContain("convert Order to OrderResponseDto")
        })

        it("should provide step-by-step suggestions", () => {
            const exposure = EntityExposure.create(
                "Product",
                "Product",
                "src/api/ProductApi.ts",
                "infrastructure",
                10,
            )

            const suggestion = exposure.getSuggestion()
            const lines = suggestion.split("\n")

            expect(lines.length).toBeGreaterThan(1)
            expect(lines.some((line) => line.includes("Create a DTO"))).toBe(true)
            expect(lines.some((line) => line.includes("mapper"))).toBe(true)
            expect(lines.some((line) => line.includes("Update the method"))).toBe(true)
        })
    })

    describe("getExampleFix", () => {
        it("should return example with method name", () => {
            const exposure = EntityExposure.create(
                "User",
                "User",
                "src/controllers/UserController.ts",
                "infrastructure",
                25,
                "getUser",
            )

            const example = exposure.getExampleFix()

            expect(example).toContain("Bad: Exposing domain entity")
            expect(example).toContain("Good: Using DTO")
            expect(example).toContain("getUser()")
            expect(example).toContain("Promise<User>")
            expect(example).toContain("Promise<UserResponseDto>")
            expect(example).toContain("UserMapper.toDto")
        })

        it("should return example without method name", () => {
            const exposure = EntityExposure.create(
                "Order",
                "Order",
                "src/controllers/OrderController.ts",
                "infrastructure",
                30,
            )

            const example = exposure.getExampleFix()

            expect(example).toContain("Promise<Order>")
            expect(example).toContain("Promise<OrderResponseDto>")
            expect(example).toContain("OrderMapper.toDto")
            expect(example).not.toContain("undefined")
        })

        it("should show both bad and good examples", () => {
            const exposure = EntityExposure.create(
                "Product",
                "Product",
                "src/api/ProductApi.ts",
                "infrastructure",
                15,
                "findProduct",
            )

            const example = exposure.getExampleFix()

            expect(example).toContain("❌ Bad")
            expect(example).toContain("✅ Good")
        })

        it("should include async/await pattern", () => {
            const exposure = EntityExposure.create(
                "Customer",
                "Customer",
                "src/api/CustomerApi.ts",
                "infrastructure",
                20,
                "getCustomer",
            )

            const example = exposure.getExampleFix()

            expect(example).toContain("async")
            expect(example).toContain("await")
        })
    })

    describe("value object behavior", () => {
        it("should be equal to another instance with same values", () => {
            const exposure1 = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                10,
                "getUser",
            )
            const exposure2 = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                10,
                "getUser",
            )

            expect(exposure1.equals(exposure2)).toBe(true)
        })

        it("should not be equal to instance with different values", () => {
            const exposure1 = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                10,
                "getUser",
            )
            const exposure2 = EntityExposure.create(
                "Order",
                "Order",
                "file.ts",
                "infrastructure",
                10,
                "getUser",
            )

            expect(exposure1.equals(exposure2)).toBe(false)
        })

        it("should not be equal to instance with different method name", () => {
            const exposure1 = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                10,
                "getUser",
            )
            const exposure2 = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                10,
                "findUser",
            )

            expect(exposure1.equals(exposure2)).toBe(false)
        })
    })

    describe("edge cases", () => {
        it("should handle empty entity name", () => {
            const exposure = EntityExposure.create("", "", "file.ts", "infrastructure")

            expect(exposure.entityName).toBe("")
            expect(exposure.getMessage()).toBeTruthy()
        })

        it("should handle very long entity names", () => {
            const longName = "VeryLongEntityNameThatIsUnusuallyLong"
            const exposure = EntityExposure.create(longName, longName, "file.ts", "infrastructure")

            expect(exposure.entityName).toBe(longName)
            const suggestion = exposure.getSuggestion()
            expect(suggestion).toContain(`${longName}ResponseDto`)
        })

        it("should handle special characters in method name", () => {
            const exposure = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                10,
                "get$User",
            )

            const message = exposure.getMessage()
            expect(message).toContain("get$User")
        })

        it("should handle line number 0", () => {
            const exposure = EntityExposure.create("User", "User", "file.ts", "infrastructure", 0)

            expect(exposure.line).toBe(0)
        })

        it("should handle very large line numbers", () => {
            const exposure = EntityExposure.create(
                "User",
                "User",
                "file.ts",
                "infrastructure",
                999999,
            )

            expect(exposure.line).toBe(999999)
        })
    })
})
308
packages/guardian/tests/unit/domain/ProjectPath.test.ts
Normal file
@@ -0,0 +1,308 @@
import { describe, it, expect } from "vitest"
import { ProjectPath } from "../../../src/domain/value-objects/ProjectPath"

describe("ProjectPath", () => {
    describe("create", () => {
        it("should create a ProjectPath with absolute and relative paths", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.absolute).toBe(absolutePath)
            expect(projectPath.relative).toBe("src/domain/User.ts")
        })

        it("should handle paths with same directory", () => {
            const absolutePath = "/Users/dev/project/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.absolute).toBe(absolutePath)
            expect(projectPath.relative).toBe("User.ts")
        })

        it("should handle nested directory structures", () => {
            const absolutePath = "/Users/dev/project/src/domain/entities/user/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.absolute).toBe(absolutePath)
            expect(projectPath.relative).toBe("src/domain/entities/user/User.ts")
        })

        it("should handle Windows-style paths", () => {
            const absolutePath = "C:\\Users\\dev\\project\\src\\domain\\User.ts"
            const projectRoot = "C:\\Users\\dev\\project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.absolute).toBe(absolutePath)
        })
    })

    describe("absolute getter", () => {
        it("should return the absolute path", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.absolute).toBe(absolutePath)
        })
    })

    describe("relative getter", () => {
        it("should return the relative path", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.relative).toBe("src/domain/User.ts")
        })
    })

    describe("extension getter", () => {
        it("should return .ts for TypeScript files", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.extension).toBe(".ts")
        })

        it("should return .tsx for TypeScript JSX files", () => {
            const absolutePath = "/Users/dev/project/src/components/Button.tsx"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.extension).toBe(".tsx")
        })

        it("should return .js for JavaScript files", () => {
            const absolutePath = "/Users/dev/project/src/utils/helper.js"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.extension).toBe(".js")
        })

        it("should return .jsx for JavaScript JSX files", () => {
            const absolutePath = "/Users/dev/project/src/components/Button.jsx"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.extension).toBe(".jsx")
        })

        it("should return empty string for files without extension", () => {
            const absolutePath = "/Users/dev/project/README"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.extension).toBe("")
        })
    })

    describe("filename getter", () => {
        it("should return the filename with extension", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.filename).toBe("User.ts")
        })

        it("should handle filenames with multiple dots", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.test.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.filename).toBe("User.test.ts")
        })

        it("should handle filenames without extension", () => {
            const absolutePath = "/Users/dev/project/README"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.filename).toBe("README")
        })
    })

    describe("directory getter", () => {
        it("should return the directory path relative to project root", () => {
            const absolutePath = "/Users/dev/project/src/domain/entities/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.directory).toBe("src/domain/entities")
        })

        it("should return dot for files in project root", () => {
            const absolutePath = "/Users/dev/project/README.md"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.directory).toBe(".")
        })

        it("should handle single-level directories", () => {
            const absolutePath = "/Users/dev/project/src/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.directory).toBe("src")
        })
    })

    describe("isTypeScript", () => {
        it("should return true for .ts files", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isTypeScript()).toBe(true)
        })

        it("should return true for .tsx files", () => {
            const absolutePath = "/Users/dev/project/src/components/Button.tsx"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isTypeScript()).toBe(true)
        })

        it("should return false for .js files", () => {
            const absolutePath = "/Users/dev/project/src/utils/helper.js"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isTypeScript()).toBe(false)
        })

        it("should return false for .jsx files", () => {
            const absolutePath = "/Users/dev/project/src/components/Button.jsx"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isTypeScript()).toBe(false)
        })

        it("should return false for other file types", () => {
            const absolutePath = "/Users/dev/project/README.md"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isTypeScript()).toBe(false)
        })
    })

    describe("isJavaScript", () => {
        it("should return true for .js files", () => {
            const absolutePath = "/Users/dev/project/src/utils/helper.js"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isJavaScript()).toBe(true)
        })

        it("should return true for .jsx files", () => {
            const absolutePath = "/Users/dev/project/src/components/Button.jsx"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isJavaScript()).toBe(true)
        })

        it("should return false for .ts files", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isJavaScript()).toBe(false)
        })

        it("should return false for .tsx files", () => {
            const absolutePath = "/Users/dev/project/src/components/Button.tsx"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isJavaScript()).toBe(false)
        })

        it("should return false for other file types", () => {
            const absolutePath = "/Users/dev/project/README.md"
            const projectRoot = "/Users/dev/project"

            const projectPath = ProjectPath.create(absolutePath, projectRoot)

            expect(projectPath.isJavaScript()).toBe(false)
        })
    })

    describe("equals", () => {
        it("should return true for identical paths", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const path1 = ProjectPath.create(absolutePath, projectRoot)
            const path2 = ProjectPath.create(absolutePath, projectRoot)

            expect(path1.equals(path2)).toBe(true)
        })

        it("should return false for different absolute paths", () => {
            const projectRoot = "/Users/dev/project"
            const path1 = ProjectPath.create("/Users/dev/project/src/domain/User.ts", projectRoot)
            const path2 = ProjectPath.create("/Users/dev/project/src/domain/Order.ts", projectRoot)

            expect(path1.equals(path2)).toBe(false)
        })

        it("should return false for different relative paths", () => {
            const path1 = ProjectPath.create(
                "/Users/dev/project1/src/User.ts",
                "/Users/dev/project1",
            )
            const path2 = ProjectPath.create(
                "/Users/dev/project2/src/User.ts",
                "/Users/dev/project2",
            )

            expect(path1.equals(path2)).toBe(false)
        })

        it("should return false when comparing with undefined", () => {
            const absolutePath = "/Users/dev/project/src/domain/User.ts"
            const projectRoot = "/Users/dev/project"

            const path1 = ProjectPath.create(absolutePath, projectRoot)

            expect(path1.equals(undefined)).toBe(false)
        })
    })
})
521 packages/guardian/tests/unit/domain/RepositoryViolation.test.ts (Normal file)
@@ -0,0 +1,521 @@
import { describe, it, expect } from "vitest"
import { RepositoryViolation } from "../../../src/domain/value-objects/RepositoryViolation"
import { REPOSITORY_VIOLATION_TYPES } from "../../../src/shared/constants/rules"

describe("RepositoryViolation", () => {
    describe("create", () => {
        it("should create a repository violation for ORM type in interface", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Repository uses Prisma type",
                "Prisma.UserWhereInput",
            )

            expect(violation.violationType).toBe(REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE)
            expect(violation.filePath).toBe("src/domain/repositories/IUserRepository.ts")
            expect(violation.layer).toBe("domain")
            expect(violation.line).toBe(15)
            expect(violation.details).toBe("Repository uses Prisma type")
            expect(violation.ormType).toBe("Prisma.UserWhereInput")
        })

        it("should create a repository violation for concrete repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                10,
                "Use case depends on concrete repository",
                undefined,
                "UserRepository",
            )

            expect(violation.violationType).toBe(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
            )
            expect(violation.repositoryName).toBe("UserRepository")
        })

        it("should create a repository violation for new repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                12,
                "Use case creates repository with new",
                undefined,
                "UserRepository",
            )

            expect(violation.violationType).toBe(
                REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
            )
            expect(violation.repositoryName).toBe("UserRepository")
        })

        it("should create a repository violation for non-domain method name", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Method uses technical name. Consider: findById()",
                undefined,
                undefined,
                "findOne",
            )

            expect(violation.violationType).toBe(REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME)
            expect(violation.methodName).toBe("findOne")
        })

        it("should handle optional line parameter", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                undefined,
                "Repository uses Prisma type",
            )

            expect(violation.line).toBeUndefined()
        })
    })

    describe("getters", () => {
        it("should return violation type", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation.violationType).toBe(REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE)
        })

        it("should return file path", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation.filePath).toBe("src/domain/repositories/IUserRepository.ts")
        })

        it("should return layer", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation.layer).toBe("domain")
        })

        it("should return line number", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation.line).toBe(15)
        })

        it("should return details", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Repository uses Prisma type",
            )

            expect(violation.details).toBe("Repository uses Prisma type")
        })

        it("should return ORM type", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
                "Prisma.UserWhereInput",
            )

            expect(violation.ormType).toBe("Prisma.UserWhereInput")
        })

        it("should return repository name", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                10,
                "Test",
                undefined,
                "UserRepository",
            )

            expect(violation.repositoryName).toBe("UserRepository")
        })

        it("should return method name", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Test",
                undefined,
                undefined,
                "findOne",
            )

            expect(violation.methodName).toBe("findOne")
        })
    })

    describe("getMessage", () => {
        it("should return message for ORM type in interface", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
                "Prisma.UserWhereInput",
            )

            const message = violation.getMessage()

            expect(message).toContain("ORM-specific type")
            expect(message).toContain("Prisma.UserWhereInput")
        })

        it("should return message for concrete repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                10,
                "Test",
                undefined,
                "UserRepository",
            )

            const message = violation.getMessage()

            expect(message).toContain("depends on concrete repository")
            expect(message).toContain("UserRepository")
        })

        it("should return message for new repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                12,
                "Test",
                undefined,
                "UserRepository",
            )

            const message = violation.getMessage()

            expect(message).toContain("creates repository with 'new")
            expect(message).toContain("UserRepository")
        })

        it("should return message for non-domain method name", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Test",
                undefined,
                undefined,
                "findOne",
            )

            const message = violation.getMessage()

            expect(message).toContain("uses technical name")
            expect(message).toContain("findOne")
        })

        it("should handle unknown ORM type gracefully", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            const message = violation.getMessage()

            expect(message).toContain("unknown")
        })
    })

    describe("getSuggestion", () => {
        it("should return suggestion for ORM type in interface", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
                "Prisma.UserWhereInput",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toContain("Remove ORM-specific types")
            expect(suggestion).toContain("Use domain types")
        })

        it("should return suggestion for concrete repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                10,
                "Test",
                undefined,
                "UserRepository",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toContain("Depend on repository interface")
            expect(suggestion).toContain("IUserRepository")
        })

        it("should return suggestion for new repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                12,
                "Test",
                undefined,
                "UserRepository",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toContain("Remove 'new Repository()'")
            expect(suggestion).toContain("dependency injection")
        })

        it("should return suggestion for non-domain method name with smart suggestion", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Method uses technical name. Consider: findById()",
                undefined,
                undefined,
                "findOne",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toContain("findById()")
        })

        it("should return fallback suggestion for known technical method", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Method uses technical name",
                undefined,
                undefined,
                "insert",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toContain("save or create")
        })

        it("should return default suggestion for unknown method", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Method uses technical name",
                undefined,
                undefined,
                "unknownMethod",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toBeDefined()
            expect(suggestion.length).toBeGreaterThan(0)
        })
    })

    describe("getExampleFix", () => {
        it("should return example fix for ORM type in interface", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            const example = violation.getExampleFix()

            expect(example).toContain("BAD")
            expect(example).toContain("GOOD")
            expect(example).toContain("IUserRepository")
        })

        it("should return example fix for concrete repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                10,
                "Test",
            )

            const example = violation.getExampleFix()

            expect(example).toContain("BAD")
            expect(example).toContain("GOOD")
            expect(example).toContain("CreateUser")
        })

        it("should return example fix for new repository in use case", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE,
                "src/application/use-cases/CreateUser.ts",
                "application",
                12,
                "Test",
            )

            const example = violation.getExampleFix()

            expect(example).toContain("BAD")
            expect(example).toContain("GOOD")
            expect(example).toContain("new UserRepository")
        })

        it("should return example fix for non-domain method name", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                8,
                "Test",
            )

            const example = violation.getExampleFix()

            expect(example).toContain("BAD")
            expect(example).toContain("GOOD")
            expect(example).toContain("findOne")
        })
    })

    describe("equals", () => {
        it("should return true for violations with identical properties", () => {
            const violation1 = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
                "Prisma.UserWhereInput",
            )

            const violation2 = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
                "Prisma.UserWhereInput",
            )

            expect(violation1.equals(violation2)).toBe(true)
        })

        it("should return false for violations with different types", () => {
            const violation1 = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            const violation2 = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation1.equals(violation2)).toBe(false)
        })

        it("should return false for violations with different file paths", () => {
            const violation1 = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            const violation2 = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IOrderRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation1.equals(violation2)).toBe(false)
        })

        it("should return false when comparing with undefined", () => {
            const violation = RepositoryViolation.create(
                REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE,
                "src/domain/repositories/IUserRepository.ts",
                "domain",
                15,
                "Test",
            )

            expect(violation.equals(undefined)).toBe(false)
        })
    })
})
320 packages/guardian/tests/unit/domain/SecretViolation.test.ts (Normal file)
@@ -0,0 +1,320 @@
import { describe, it, expect } from "vitest"
import { SecretViolation } from "../../../src/domain/value-objects/SecretViolation"

describe("SecretViolation", () => {
    describe("create", () => {
        it("should create a secret violation with all properties", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "AKIA1234567890ABCDEF",
            )

            expect(violation.file).toBe("src/config/aws.ts")
            expect(violation.line).toBe(10)
            expect(violation.column).toBe(15)
            expect(violation.secretType).toBe("AWS Access Key")
            expect(violation.matchedPattern).toBe("AKIA1234567890ABCDEF")
        })

        it("should create a secret violation with GitHub token", () => {
            const violation = SecretViolation.create(
                "src/config/github.ts",
                5,
                20,
                "GitHub Personal Access Token",
                "ghp_1234567890abcdefghijklmnopqrstuv",
            )

            expect(violation.secretType).toBe("GitHub Personal Access Token")
            expect(violation.file).toBe("src/config/github.ts")
        })

        it("should create a secret violation with NPM token", () => {
            const violation = SecretViolation.create(".npmrc", 1, 1, "NPM Token", "npm_abc123xyz")

            expect(violation.secretType).toBe("NPM Token")
        })
    })

    describe("getters", () => {
        it("should return file path", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            expect(violation.file).toBe("src/config/aws.ts")
        })

        it("should return line number", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            expect(violation.line).toBe(10)
        })

        it("should return column number", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            expect(violation.column).toBe(15)
        })

        it("should return secret type", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            expect(violation.secretType).toBe("AWS Access Key")
        })

        it("should return matched pattern", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "AKIA1234567890ABCDEF",
            )

            expect(violation.matchedPattern).toBe("AKIA1234567890ABCDEF")
        })
    })

    describe("getMessage", () => {
        it("should return formatted message for AWS Access Key", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            expect(violation.getMessage()).toBe("Hardcoded AWS Access Key detected")
        })

        it("should return formatted message for GitHub token", () => {
            const violation = SecretViolation.create(
                "src/config/github.ts",
                5,
                20,
                "GitHub Token",
                "test",
            )

            expect(violation.getMessage()).toBe("Hardcoded GitHub Token detected")
        })

        it("should return formatted message for NPM token", () => {
            const violation = SecretViolation.create(".npmrc", 1, 1, "NPM Token", "test")

            expect(violation.getMessage()).toBe("Hardcoded NPM Token detected")
        })
    })

    describe("getSuggestion", () => {
        it("should return multi-line suggestion", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            const suggestion = violation.getSuggestion()

            expect(suggestion).toContain("1. Use environment variables")
            expect(suggestion).toContain("2. Use secret management services")
            expect(suggestion).toContain("3. Never commit secrets")
            expect(suggestion).toContain("4. If secret was committed, rotate it immediately")
            expect(suggestion).toContain("5. Add secret files to .gitignore")
        })

        it("should return the same suggestion for all secret types", () => {
            const awsViolation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            const githubViolation = SecretViolation.create(
                "src/config/github.ts",
                5,
                20,
                "GitHub Token",
                "test",
            )

            expect(awsViolation.getSuggestion()).toBe(githubViolation.getSuggestion())
        })
    })

    describe("getExampleFix", () => {
        it("should return AWS-specific example for AWS Access Key", () => {
            const violation = SecretViolation.create(
                "src/config/aws.ts",
                10,
                15,
                "AWS Access Key",
                "test",
            )

            const example = violation.getExampleFix()

            expect(example).toContain("AWS")
            expect(example).toContain("process.env.AWS_ACCESS_KEY_ID")
            expect(example).toContain("credentials provider")
        })

        it("should return GitHub-specific example for GitHub token", () => {
            const violation = SecretViolation.create(
                "src/config/github.ts",
                5,
                20,
                "GitHub Token",
                "test",
            )

            const example = violation.getExampleFix()

            expect(example).toContain("GitHub")
|
||||
expect(example).toContain("process.env.GITHUB_TOKEN")
|
||||
expect(example).toContain("GitHub Apps")
|
||||
})
|
||||
|
||||
it("should return NPM-specific example for NPM token", () => {
|
||||
const violation = SecretViolation.create(".npmrc", 1, 1, "NPM Token", "test")
|
||||
|
||||
const example = violation.getExampleFix()
|
||||
|
||||
expect(example).toContain("NPM")
|
||||
expect(example).toContain(".npmrc")
|
||||
expect(example).toContain("process.env.NPM_TOKEN")
|
||||
})
|
||||
|
||||
it("should return SSH-specific example for SSH Private Key", () => {
|
||||
const violation = SecretViolation.create(
|
||||
"src/config/ssh.ts",
|
||||
1,
|
||||
1,
|
||||
"SSH Private Key",
|
||||
"test",
|
||||
)
|
||||
|
||||
const example = violation.getExampleFix()
|
||||
|
||||
expect(example).toContain("SSH")
|
||||
expect(example).toContain("readFileSync")
|
||||
expect(example).toContain("SSH_KEY_PATH")
|
||||
})
|
||||
|
||||
it("should return SSH RSA-specific example for SSH RSA Private Key", () => {
|
||||
const violation = SecretViolation.create(
|
||||
"src/config/ssh.ts",
|
||||
1,
|
||||
1,
|
||||
"SSH RSA Private Key",
|
||||
"test",
|
||||
)
|
||||
|
||||
const example = violation.getExampleFix()
|
||||
|
||||
expect(example).toContain("SSH")
|
||||
expect(example).toContain("RSA PRIVATE KEY")
|
||||
})
|
||||
|
||||
it("should return Slack-specific example for Slack token", () => {
|
||||
const violation = SecretViolation.create(
|
||||
"src/config/slack.ts",
|
||||
1,
|
||||
1,
|
||||
"Slack Bot Token",
|
||||
"test",
|
||||
)
|
||||
|
||||
const example = violation.getExampleFix()
|
||||
|
||||
expect(example).toContain("Slack")
|
||||
expect(example).toContain("process.env.SLACK_BOT_TOKEN")
|
||||
})
|
||||
|
||||
it("should return API Key example for generic API key", () => {
|
||||
const violation = SecretViolation.create("src/config/api.ts", 1, 1, "API Key", "test")
|
||||
|
||||
const example = violation.getExampleFix()
|
||||
|
||||
expect(example).toContain("API")
|
||||
expect(example).toContain("process.env.API_KEY")
|
||||
expect(example).toContain("secret management service")
|
||||
})
|
||||
|
||||
it("should return generic example for unknown secret type", () => {
|
||||
const violation = SecretViolation.create(
|
||||
"src/config/unknown.ts",
|
||||
1,
|
||||
1,
|
||||
"Unknown Secret",
|
||||
"test",
|
||||
)
|
||||
|
||||
const example = violation.getExampleFix()
|
||||
|
||||
expect(example).toContain("process.env.SECRET_KEY")
|
||||
expect(example).toContain("secret management")
|
||||
})
|
||||
})
|
||||
|
||||
describe("getSeverity", () => {
|
||||
it("should always return critical severity", () => {
|
||||
const violation = SecretViolation.create(
|
||||
"src/config/aws.ts",
|
||||
10,
|
||||
15,
|
||||
"AWS Access Key",
|
||||
"test",
|
||||
)
|
||||
|
||||
expect(violation.getSeverity()).toBe("critical")
|
||||
})
|
||||
|
||||
it("should return critical severity for all secret types", () => {
|
||||
const types = [
|
||||
"AWS Access Key",
|
||||
"GitHub Token",
|
||||
"NPM Token",
|
||||
"SSH Private Key",
|
||||
"Slack Token",
|
||||
"API Key",
|
||||
]
|
||||
|
||||
types.forEach((type) => {
|
||||
const violation = SecretViolation.create("test.ts", 1, 1, type, "test")
|
||||
expect(violation.getSeverity()).toBe("critical")
|
||||
})
|
||||
})
|
||||
})
|
||||
})
|
||||
329
packages/guardian/tests/unit/domain/SourceFile.test.ts
Normal file
@@ -0,0 +1,329 @@
import { describe, it, expect } from "vitest"
import { SourceFile } from "../../../src/domain/entities/SourceFile"
import { ProjectPath } from "../../../src/domain/value-objects/ProjectPath"
import { LAYERS } from "../../../src/shared/constants/rules"

describe("SourceFile", () => {
    describe("constructor", () => {
        it("should create a SourceFile instance with all properties", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const content = "class User {}"
            const imports = ["./BaseEntity"]
            const exports = ["User"]
            const id = "test-id"

            const sourceFile = new SourceFile(path, content, imports, exports, id)

            expect(sourceFile.path).toBe(path)
            expect(sourceFile.content).toBe(content)
            expect(sourceFile.imports).toEqual(imports)
            expect(sourceFile.exports).toEqual(exports)
            expect(sourceFile.id).toBe(id)
        })

        it("should create a SourceFile with empty imports and exports by default", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const content = "class User {}"

            const sourceFile = new SourceFile(path, content)

            expect(sourceFile.imports).toEqual([])
            expect(sourceFile.exports).toEqual([])
        })

        it("should generate an id if not provided", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const content = "class User {}"

            const sourceFile = new SourceFile(path, content)

            expect(sourceFile.id).toBeDefined()
            expect(typeof sourceFile.id).toBe("string")
            expect(sourceFile.id.length).toBeGreaterThan(0)
        })
    })

    describe("layer detection", () => {
        it("should detect domain layer from path", () => {
            const path = ProjectPath.create("/project/src/domain/entities/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBe(LAYERS.DOMAIN)
        })

        it("should detect application layer from path", () => {
            const path = ProjectPath.create(
                "/project/src/application/use-cases/CreateUser.ts",
                "/project",
            )
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBe(LAYERS.APPLICATION)
        })

        it("should detect infrastructure layer from path", () => {
            const path = ProjectPath.create(
                "/project/src/infrastructure/database/UserRepository.ts",
                "/project",
            )
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBe(LAYERS.INFRASTRUCTURE)
        })

        it("should detect shared layer from path", () => {
            const path = ProjectPath.create("/project/src/shared/utils/helpers.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBe(LAYERS.SHARED)
        })

        it("should return undefined for unknown layer", () => {
            const path = ProjectPath.create("/project/src/unknown/Test.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBeUndefined()
        })

        it("should handle uppercase layer names in path", () => {
            const path = ProjectPath.create("/project/src/DOMAIN/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBe(LAYERS.DOMAIN)
        })

        it("should handle mixed case layer names in path", () => {
            const path = ProjectPath.create("/project/src/Application/UseCase.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.layer).toBe(LAYERS.APPLICATION)
        })
    })

    describe("path getter", () => {
        it("should return the project path", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.path).toBe(path)
        })
    })

    describe("content getter", () => {
        it("should return the file content", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const content = "class User { constructor(public name: string) {} }"
            const sourceFile = new SourceFile(path, content)

            expect(sourceFile.content).toBe(content)
        })
    })

    describe("imports getter", () => {
        it("should return a copy of imports array", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const imports = ["./BaseEntity", "./ValueObject"]
            const sourceFile = new SourceFile(path, "", imports)

            const returnedImports = sourceFile.imports

            expect(returnedImports).toEqual(imports)
            expect(returnedImports).not.toBe(imports)
        })

        it("should not allow mutations of internal imports array", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const imports = ["./BaseEntity"]
            const sourceFile = new SourceFile(path, "", imports)

            const returnedImports = sourceFile.imports
            returnedImports.push("./NewImport")

            expect(sourceFile.imports).toEqual(["./BaseEntity"])
        })
    })

    describe("exports getter", () => {
        it("should return a copy of exports array", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const exports = ["User", "UserProps"]
            const sourceFile = new SourceFile(path, "", [], exports)

            const returnedExports = sourceFile.exports

            expect(returnedExports).toEqual(exports)
            expect(returnedExports).not.toBe(exports)
        })

        it("should not allow mutations of internal exports array", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const exports = ["User"]
            const sourceFile = new SourceFile(path, "", [], exports)

            const returnedExports = sourceFile.exports
            returnedExports.push("NewExport")

            expect(sourceFile.exports).toEqual(["User"])
        })
    })

    describe("addImport", () => {
        it("should add a new import to the list", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            sourceFile.addImport("./BaseEntity")

            expect(sourceFile.imports).toEqual(["./BaseEntity"])
        })

        it("should not add duplicate imports", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "", ["./BaseEntity"])

            sourceFile.addImport("./BaseEntity")

            expect(sourceFile.imports).toEqual(["./BaseEntity"])
        })
        it("should update updatedAt timestamp when adding new import", async () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            const originalUpdatedAt = sourceFile.updatedAt

            // Await the delay so the assertion actually runs before the test completes;
            // assertions inside a bare setTimeout callback are never executed by the runner.
            await new Promise((resolve) => setTimeout(resolve, 10))
            sourceFile.addImport("./BaseEntity")

            expect(sourceFile.updatedAt.getTime()).toBeGreaterThanOrEqual(
                originalUpdatedAt.getTime(),
            )
        })

        it("should not update timestamp when adding duplicate import", async () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "", ["./BaseEntity"])

            const originalUpdatedAt = sourceFile.updatedAt

            await new Promise((resolve) => setTimeout(resolve, 10))
            sourceFile.addImport("./BaseEntity")

            expect(sourceFile.updatedAt).toBe(originalUpdatedAt)
        })
        it("should add multiple different imports", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            sourceFile.addImport("./BaseEntity")
            sourceFile.addImport("./ValueObject")
            sourceFile.addImport("./DomainEvent")

            expect(sourceFile.imports).toEqual(["./BaseEntity", "./ValueObject", "./DomainEvent"])
        })
    })

    describe("addExport", () => {
        it("should add a new export to the list", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            sourceFile.addExport("User")

            expect(sourceFile.exports).toEqual(["User"])
        })

        it("should not add duplicate exports", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "", [], ["User"])

            sourceFile.addExport("User")

            expect(sourceFile.exports).toEqual(["User"])
        })
        it("should update updatedAt timestamp when adding new export", async () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            const originalUpdatedAt = sourceFile.updatedAt

            // Await the delay so the assertion actually runs before the test completes;
            // assertions inside a bare setTimeout callback are never executed by the runner.
            await new Promise((resolve) => setTimeout(resolve, 10))
            sourceFile.addExport("User")

            expect(sourceFile.updatedAt.getTime()).toBeGreaterThanOrEqual(
                originalUpdatedAt.getTime(),
            )
        })

        it("should not update timestamp when adding duplicate export", async () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "", [], ["User"])

            const originalUpdatedAt = sourceFile.updatedAt

            await new Promise((resolve) => setTimeout(resolve, 10))
            sourceFile.addExport("User")

            expect(sourceFile.updatedAt).toBe(originalUpdatedAt)
        })
        it("should add multiple different exports", () => {
            const path = ProjectPath.create("/project/src/domain/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            sourceFile.addExport("User")
            sourceFile.addExport("UserProps")
            sourceFile.addExport("UserFactory")

            expect(sourceFile.exports).toEqual(["User", "UserProps", "UserFactory"])
        })
    })

    describe("importsFrom", () => {
        it("should return true if imports contain the specified layer", () => {
            const path = ProjectPath.create("/project/src/application/User.ts", "/project")
            const imports = ["../../domain/entities/User", "../use-cases/CreateUser"]
            const sourceFile = new SourceFile(path, "", imports)

            expect(sourceFile.importsFrom("domain")).toBe(true)
        })

        it("should return false if imports do not contain the specified layer", () => {
            const path = ProjectPath.create("/project/src/application/User.ts", "/project")
            const imports = ["../use-cases/CreateUser", "../dtos/UserDto"]
            const sourceFile = new SourceFile(path, "", imports)

            expect(sourceFile.importsFrom("domain")).toBe(false)
        })

        it("should be case-insensitive", () => {
            const path = ProjectPath.create("/project/src/application/User.ts", "/project")
            const imports = ["../../DOMAIN/entities/User"]
            const sourceFile = new SourceFile(path, "", imports)

            expect(sourceFile.importsFrom("domain")).toBe(true)
        })

        it("should return false for empty imports", () => {
            const path = ProjectPath.create("/project/src/application/User.ts", "/project")
            const sourceFile = new SourceFile(path, "")

            expect(sourceFile.importsFrom("domain")).toBe(false)
        })

        it("should handle partial matches in import paths", () => {
            const path = ProjectPath.create("/project/src/application/User.ts", "/project")
            const imports = ["../../infrastructure/database/UserRepository"]
            const sourceFile = new SourceFile(path, "", imports)

            expect(sourceFile.importsFrom("infrastructure")).toBe(true)
            expect(sourceFile.importsFrom("domain")).toBe(false)
        })
    })
})
199
packages/guardian/tests/unit/domain/ValueObject.test.ts
Normal file
@@ -0,0 +1,199 @@
import { describe, it, expect } from "vitest"
import { ValueObject } from "../../../src/domain/value-objects/ValueObject"

interface TestProps {
    readonly value: string
    readonly count: number
}

class TestValueObject extends ValueObject<TestProps> {
    constructor(value: string, count: number) {
        super({ value, count })
    }

    public get value(): string {
        return this.props.value
    }

    public get count(): number {
        return this.props.count
    }
}

interface ComplexProps {
    readonly name: string
    readonly items: string[]
    readonly metadata: { key: string; value: number }
}

class ComplexValueObject extends ValueObject<ComplexProps> {
    constructor(name: string, items: string[], metadata: { key: string; value: number }) {
        super({ name, items, metadata })
    }

    public get name(): string {
        return this.props.name
    }

    public get items(): string[] {
        return this.props.items
    }

    public get metadata(): { key: string; value: number } {
        return this.props.metadata
    }
}

describe("ValueObject", () => {
    describe("constructor", () => {
        it("should create a value object with provided properties", () => {
            const vo = new TestValueObject("test", 42)

            expect(vo.value).toBe("test")
            expect(vo.count).toBe(42)
        })

        it("should freeze the properties object", () => {
            const vo = new TestValueObject("test", 42)

            expect(Object.isFrozen(vo["props"])).toBe(true)
        })

        it("should prevent modification of properties", () => {
            const vo = new TestValueObject("test", 42)

            expect(() => {
                ;(vo["props"] as any).value = "modified"
            }).toThrow()
        })

        it("should handle complex nested properties", () => {
            const vo = new ComplexValueObject("test", ["item1", "item2"], {
                key: "key1",
                value: 100,
            })

            expect(vo.name).toBe("test")
            expect(vo.items).toEqual(["item1", "item2"])
            expect(vo.metadata).toEqual({ key: "key1", value: 100 })
        })
    })

    describe("equals", () => {
        it("should return true for value objects with identical properties", () => {
            const vo1 = new TestValueObject("test", 42)
            const vo2 = new TestValueObject("test", 42)

            expect(vo1.equals(vo2)).toBe(true)
        })

        it("should return false for value objects with different values", () => {
            const vo1 = new TestValueObject("test1", 42)
            const vo2 = new TestValueObject("test2", 42)

            expect(vo1.equals(vo2)).toBe(false)
        })

        it("should return false for value objects with different counts", () => {
            const vo1 = new TestValueObject("test", 42)
            const vo2 = new TestValueObject("test", 43)

            expect(vo1.equals(vo2)).toBe(false)
        })

        it("should return false when comparing with undefined", () => {
            const vo1 = new TestValueObject("test", 42)

            expect(vo1.equals(undefined)).toBe(false)
        })

        it("should return false when comparing with null", () => {
            const vo1 = new TestValueObject("test", 42)

            expect(vo1.equals(null as any)).toBe(false)
        })

        it("should handle complex nested property comparisons", () => {
            const vo1 = new ComplexValueObject("test", ["item1", "item2"], {
                key: "key1",
                value: 100,
            })
            const vo2 = new ComplexValueObject("test", ["item1", "item2"], {
                key: "key1",
                value: 100,
            })

            expect(vo1.equals(vo2)).toBe(true)
        })

        it("should detect differences in nested arrays", () => {
            const vo1 = new ComplexValueObject("test", ["item1", "item2"], {
                key: "key1",
                value: 100,
            })
            const vo2 = new ComplexValueObject("test", ["item1", "item3"], {
                key: "key1",
                value: 100,
            })

            expect(vo1.equals(vo2)).toBe(false)
        })

        it("should detect differences in nested objects", () => {
            const vo1 = new ComplexValueObject("test", ["item1", "item2"], {
                key: "key1",
                value: 100,
            })
            const vo2 = new ComplexValueObject("test", ["item1", "item2"], {
                key: "key2",
                value: 100,
            })

            expect(vo1.equals(vo2)).toBe(false)
        })

        it("should return true for same instance", () => {
            const vo1 = new TestValueObject("test", 42)

            expect(vo1.equals(vo1)).toBe(true)
        })

        it("should handle empty string values", () => {
            const vo1 = new TestValueObject("", 0)
            const vo2 = new TestValueObject("", 0)

            expect(vo1.equals(vo2)).toBe(true)
        })

        it("should distinguish between zero and undefined in comparisons", () => {
            const vo1 = new TestValueObject("test", 0)
            const vo2 = new TestValueObject("test", 0)

            expect(vo1.equals(vo2)).toBe(true)
        })
    })

    describe("immutability", () => {
        it("should freeze props object after creation", () => {
            const vo = new TestValueObject("original", 42)

            expect(Object.isFrozen(vo["props"])).toBe(true)
        })

        it("should not allow adding new properties", () => {
            const vo = new TestValueObject("test", 42)

            expect(() => {
                ;(vo["props"] as any).newProp = "new"
            }).toThrow()
        })

        it("should not allow deleting properties", () => {
            const vo = new TestValueObject("test", 42)

            expect(() => {
                delete (vo["props"] as any).value
            }).toThrow()
        })
    })
})
@@ -0,0 +1,465 @@
import { describe, it, expect, beforeEach } from "vitest"
import { DuplicateValueTracker } from "../../../src/infrastructure/analyzers/DuplicateValueTracker"
import { HardcodedValue } from "../../../src/domain/value-objects/HardcodedValue"

describe("DuplicateValueTracker", () => {
    let tracker: DuplicateValueTracker

    beforeEach(() => {
        tracker = new DuplicateValueTracker()
    })

    describe("track", () => {
        it("should track a single hardcoded value", () => {
            const value = HardcodedValue.create(
                "test-value",
                "magic-string",
                10,
                5,
                "const x = 'test-value'",
            )

            tracker.track(value, "file1.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates).toHaveLength(0)
        })

        it("should track multiple occurrences of the same value", () => {
            const value1 = HardcodedValue.create(
                "test-value",
                "magic-string",
                10,
                5,
                "const x = 'test-value'",
            )
            const value2 = HardcodedValue.create(
                "test-value",
                "magic-string",
                20,
                5,
                "const y = 'test-value'",
            )

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates).toHaveLength(1)
            expect(duplicates[0].value).toBe("test-value")
            expect(duplicates[0].count).toBe(2)
        })

        it("should track values with different types separately", () => {
            const stringValue = HardcodedValue.create(
                "100",
                "magic-string",
                10,
                5,
                "const x = '100'",
            )
            const numberValue = HardcodedValue.create(100, "magic-number", 20, 5, "const y = 100")

            tracker.track(stringValue, "file1.ts")
            tracker.track(numberValue, "file2.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates).toHaveLength(0)
        })

        it("should track boolean values", () => {
            const value1 = HardcodedValue.create(true, "MAGIC_BOOLEAN", 10, 5, "const x = true")
            const value2 = HardcodedValue.create(true, "MAGIC_BOOLEAN", 20, 5, "const y = true")

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates).toHaveLength(1)
            expect(duplicates[0].value).toBe("true")
        })
    })

    describe("getDuplicates", () => {
        it("should return empty array when no duplicates exist", () => {
            const value1 = HardcodedValue.create(
                "value1",
                "magic-string",
                10,
                5,
                "const x = 'value1'",
            )
            const value2 = HardcodedValue.create(
                "value2",
                "magic-string",
                20,
                5,
                "const y = 'value2'",
            )

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates).toHaveLength(0)
        })

        it("should return duplicates sorted by count in descending order", () => {
            const value1a = HardcodedValue.create(
                "value1",
                "magic-string",
                10,
                5,
                "const x = 'value1'",
            )
            const value1b = HardcodedValue.create(
                "value1",
                "magic-string",
                20,
                5,
                "const y = 'value1'",
            )
            const value2a = HardcodedValue.create(
                "value2",
                "magic-string",
                30,
                5,
                "const z = 'value2'",
            )
            const value2b = HardcodedValue.create(
                "value2",
                "magic-string",
                40,
                5,
                "const a = 'value2'",
            )
            const value2c = HardcodedValue.create(
                "value2",
                "magic-string",
                50,
                5,
                "const b = 'value2'",
            )

            tracker.track(value1a, "file1.ts")
            tracker.track(value1b, "file2.ts")
            tracker.track(value2a, "file3.ts")
            tracker.track(value2b, "file4.ts")
            tracker.track(value2c, "file5.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates).toHaveLength(2)
            expect(duplicates[0].value).toBe("value2")
            expect(duplicates[0].count).toBe(3)
            expect(duplicates[1].value).toBe("value1")
            expect(duplicates[1].count).toBe(2)
        })

        it("should include location information for duplicates", () => {
            const value1 = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")
            const value2 = HardcodedValue.create("test", "magic-string", 20, 10, "const y = 'test'")

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const duplicates = tracker.getDuplicates()
            expect(duplicates[0].locations).toHaveLength(2)
            expect(duplicates[0].locations[0]).toEqual({
                file: "file1.ts",
                line: 10,
                context: "const x = 'test'",
            })
            expect(duplicates[0].locations[1]).toEqual({
                file: "file2.ts",
                line: 20,
                context: "const y = 'test'",
            })
        })
    })

    describe("getDuplicateLocations", () => {
        it("should return null when value is not duplicated", () => {
            const value = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")

            tracker.track(value, "file1.ts")

            const locations = tracker.getDuplicateLocations("test", "magic-string")
            expect(locations).toBeNull()
        })

        it("should return locations when value is duplicated", () => {
            const value1 = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")
            const value2 = HardcodedValue.create("test", "magic-string", 20, 10, "const y = 'test'")

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const locations = tracker.getDuplicateLocations("test", "magic-string")
            expect(locations).toHaveLength(2)
            expect(locations).toEqual([
                { file: "file1.ts", line: 10, context: "const x = 'test'" },
                { file: "file2.ts", line: 20, context: "const y = 'test'" },
            ])
        })

        it("should return null for non-existent value", () => {
            const locations = tracker.getDuplicateLocations("non-existent", "magic-string")
            expect(locations).toBeNull()
        })

        it("should handle numeric values", () => {
            const value1 = HardcodedValue.create(100, "magic-number", 10, 5, "const x = 100")
            const value2 = HardcodedValue.create(100, "magic-number", 20, 5, "const y = 100")

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const locations = tracker.getDuplicateLocations(100, "magic-number")
            expect(locations).toHaveLength(2)
        })
    })

    describe("isDuplicate", () => {
        it("should return false for non-duplicated value", () => {
            const value = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")

            tracker.track(value, "file1.ts")

            expect(tracker.isDuplicate("test", "magic-string")).toBe(false)
        })

        it("should return true for duplicated value", () => {
            const value1 = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")
            const value2 = HardcodedValue.create("test", "magic-string", 20, 10, "const y = 'test'")

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            expect(tracker.isDuplicate("test", "magic-string")).toBe(true)
        })

        it("should return false for non-existent value", () => {
            expect(tracker.isDuplicate("non-existent", "magic-string")).toBe(false)
        })

        it("should handle boolean values", () => {
            const value1 = HardcodedValue.create(true, "MAGIC_BOOLEAN", 10, 5, "const x = true")
            const value2 = HardcodedValue.create(true, "MAGIC_BOOLEAN", 20, 5, "const y = true")

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            expect(tracker.isDuplicate(true, "MAGIC_BOOLEAN")).toBe(true)
        })
    })

    describe("getStats", () => {
        it("should return zero stats for empty tracker", () => {
            const stats = tracker.getStats()

            expect(stats.totalValues).toBe(0)
            expect(stats.duplicateValues).toBe(0)
            expect(stats.duplicatePercentage).toBe(0)
        })

        it("should calculate stats correctly with no duplicates", () => {
            const value1 = HardcodedValue.create(
                "value1",
                "magic-string",
                10,
                5,
                "const x = 'value1'",
            )
            const value2 = HardcodedValue.create(
                "value2",
                "magic-string",
                20,
                5,
                "const y = 'value2'",
            )

            tracker.track(value1, "file1.ts")
            tracker.track(value2, "file2.ts")

            const stats = tracker.getStats()
            expect(stats.totalValues).toBe(2)
            expect(stats.duplicateValues).toBe(0)
            expect(stats.duplicatePercentage).toBe(0)
        })

        it("should calculate stats correctly with duplicates", () => {
            const value1a = HardcodedValue.create(
                "value1",
                "magic-string",
                10,
                5,
                "const x = 'value1'",
            )
            const value1b = HardcodedValue.create(
                "value1",
                "magic-string",
                20,
                5,
                "const y = 'value1'",
            )
            const value2 = HardcodedValue.create(
                "value2",
                "magic-string",
                30,
                5,
||||
"const z = 'value2'",
|
||||
)
|
||||
|
||||
tracker.track(value1a, "file1.ts")
|
||||
tracker.track(value1b, "file2.ts")
|
||||
tracker.track(value2, "file3.ts")
|
||||
|
||||
const stats = tracker.getStats()
|
||||
expect(stats.totalValues).toBe(2)
|
||||
expect(stats.duplicateValues).toBe(1)
|
||||
expect(stats.duplicatePercentage).toBe(50)
|
||||
})
|
||||
|
||||
it("should handle multiple duplicates", () => {
|
||||
const value1a = HardcodedValue.create(
|
||||
"value1",
|
||||
"magic-string",
|
||||
10,
|
||||
5,
|
||||
"const x = 'value1'",
|
||||
)
|
||||
const value1b = HardcodedValue.create(
|
||||
"value1",
|
||||
"magic-string",
|
||||
20,
|
||||
5,
|
||||
"const y = 'value1'",
|
||||
)
|
||||
const value2a = HardcodedValue.create(
|
||||
"value2",
|
||||
"magic-string",
|
||||
30,
|
||||
5,
|
||||
"const z = 'value2'",
|
||||
)
|
||||
const value2b = HardcodedValue.create(
|
||||
"value2",
|
||||
"magic-string",
|
||||
40,
|
||||
5,
|
||||
"const a = 'value2'",
|
||||
)
|
||||
|
||||
tracker.track(value1a, "file1.ts")
|
||||
tracker.track(value1b, "file2.ts")
|
||||
tracker.track(value2a, "file3.ts")
|
||||
tracker.track(value2b, "file4.ts")
|
||||
|
||||
const stats = tracker.getStats()
|
||||
expect(stats.totalValues).toBe(2)
|
||||
expect(stats.duplicateValues).toBe(2)
|
||||
expect(stats.duplicatePercentage).toBe(100)
|
||||
})
|
||||
})
|
||||
|
||||
describe("clear", () => {
|
||||
it("should clear all tracked values", () => {
|
||||
const value1 = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")
|
||||
const value2 = HardcodedValue.create("test", "magic-string", 20, 10, "const y = 'test'")
|
||||
|
||||
tracker.track(value1, "file1.ts")
|
||||
tracker.track(value2, "file2.ts")
|
||||
|
||||
expect(tracker.getDuplicates()).toHaveLength(1)
|
||||
|
||||
tracker.clear()
|
||||
|
||||
expect(tracker.getDuplicates()).toHaveLength(0)
|
||||
expect(tracker.getStats().totalValues).toBe(0)
|
||||
})
|
||||
|
||||
it("should allow tracking new values after clear", () => {
|
||||
const value1 = HardcodedValue.create(
|
||||
"test1",
|
||||
"magic-string",
|
||||
10,
|
||||
5,
|
||||
"const x = 'test1'",
|
||||
)
|
||||
|
||||
tracker.track(value1, "file1.ts")
|
||||
tracker.clear()
|
||||
|
||||
const value2 = HardcodedValue.create(
|
||||
"test2",
|
||||
"magic-string",
|
||||
20,
|
||||
5,
|
||||
"const y = 'test2'",
|
||||
)
|
||||
tracker.track(value2, "file2.ts")
|
||||
|
||||
const stats = tracker.getStats()
|
||||
expect(stats.totalValues).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("edge cases", () => {
|
||||
it("should handle values with colons in them", () => {
|
||||
const value1 = HardcodedValue.create(
|
||||
"url:http://example.com",
|
||||
"magic-string",
|
||||
10,
|
||||
5,
|
||||
"const x = 'url:http://example.com'",
|
||||
)
|
||||
const value2 = HardcodedValue.create(
|
||||
"url:http://example.com",
|
||||
"magic-string",
|
||||
20,
|
||||
5,
|
||||
"const y = 'url:http://example.com'",
|
||||
)
|
||||
|
||||
tracker.track(value1, "file1.ts")
|
||||
tracker.track(value2, "file2.ts")
|
||||
|
||||
const duplicates = tracker.getDuplicates()
|
||||
expect(duplicates).toHaveLength(1)
|
||||
expect(duplicates[0].value).toBe("url:http://example.com")
|
||||
})
|
||||
|
||||
it("should handle empty string values", () => {
|
||||
const value1 = HardcodedValue.create("", "magic-string", 10, 5, "const x = ''")
|
||||
const value2 = HardcodedValue.create("", "magic-string", 20, 5, "const y = ''")
|
||||
|
||||
tracker.track(value1, "file1.ts")
|
||||
tracker.track(value2, "file2.ts")
|
||||
|
||||
expect(tracker.isDuplicate("", "magic-string")).toBe(true)
|
||||
})
|
||||
|
||||
it("should handle zero as a number", () => {
|
||||
const value1 = HardcodedValue.create(0, "magic-number", 10, 5, "const x = 0")
|
||||
const value2 = HardcodedValue.create(0, "magic-number", 20, 5, "const y = 0")
|
||||
|
||||
tracker.track(value1, "file1.ts")
|
||||
tracker.track(value2, "file2.ts")
|
||||
|
||||
expect(tracker.isDuplicate(0, "magic-number")).toBe(true)
|
||||
})
|
||||
|
||||
it("should track same file multiple times", () => {
|
||||
const value1 = HardcodedValue.create("test", "magic-string", 10, 5, "const x = 'test'")
|
||||
const value2 = HardcodedValue.create("test", "magic-string", 20, 5, "const y = 'test'")
|
||||
|
||||
tracker.track(value1, "file1.ts")
|
||||
tracker.track(value2, "file1.ts")
|
||||
|
||||
const locations = tracker.getDuplicateLocations("test", "magic-string")
|
||||
expect(locations).toHaveLength(2)
|
||||
expect(locations?.[0].file).toBe("file1.ts")
|
||||
expect(locations?.[1].file).toBe("file1.ts")
|
||||
})
|
||||
})
|
||||
})
|
||||
@@ -468,4 +468,102 @@ const b = 2
            expect(result[0].context).toContain("5000")
        })
    })

    describe("TypeScript type contexts (false positive reduction)", () => {
        it("should NOT detect strings in union types", () => {
            const code = `type Status = 'active' | 'inactive' | 'pending'`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in interface property types", () => {
            const code = `interface Config { mode: 'development' | 'production' }`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in type aliases", () => {
            const code = `type Theme = 'light' | 'dark'`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in type assertions", () => {
            const code = `const mode = getMode() as 'read' | 'write'`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in Symbol() calls", () => {
            const code = `const TOKEN = Symbol('MY_TOKEN')`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in multiple Symbol() calls", () => {
            const code = `
                export const LOGGER = Symbol('LOGGER')
                export const DATABASE = Symbol('DATABASE')
                export const CACHE = Symbol('CACHE')
            `
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in import() calls", () => {
            const code = `const module = import('../../path/to/module.js')`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in typeof checks", () => {
            const code = `if (typeof x === 'string') { }`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in reverse typeof checks", () => {
            const code = `if ('number' === typeof count) { }`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should skip tokens.ts files completely", () => {
            const code = `
                export const LOGGER = Symbol('LOGGER')
                export const DATABASE = Symbol('DATABASE')
                const url = "http://localhost:8080"
            `
            const result = detector.detectAll(code, "src/di/tokens.ts")

            expect(result).toHaveLength(0)
        })

        it("should skip tokens.js files completely", () => {
            const code = `const TOKEN = Symbol('TOKEN')`
            const result = detector.detectAll(code, "src/di/tokens.js")

            expect(result).toHaveLength(0)
        })

        it("should detect real magic strings even with type contexts nearby", () => {
            const code = `
                type Mode = 'read' | 'write'
                const apiKey = "secret-key-12345"
            `
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result.length).toBeGreaterThan(0)
            expect(result.some((r) => r.value === "secret-key-12345")).toBe(true)
        })
    })
})
@@ -0,0 +1,341 @@
import { describe, it, expect, beforeEach } from "vitest"
import { SecretDetector } from "../../../src/infrastructure/analyzers/SecretDetector"

describe("SecretDetector", () => {
    let detector: SecretDetector

    beforeEach(() => {
        detector = new SecretDetector()
    })

    describe("detectAll", () => {
        it("should return empty array for code without secrets", async () => {
            const code = `
                const greeting = "Hello World"
                const count = 42
                function test() {
                    return true
                }
            `

            const violations = await detector.detectAll(code, "test.ts")

            expect(violations).toHaveLength(0)
        })

        it("should return empty array for normal environment variable usage", async () => {
            const code = `
                const apiKey = process.env.API_KEY
                const dbUrl = process.env.DATABASE_URL
            `

            const violations = await detector.detectAll(code, "config.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle empty code", async () => {
            const violations = await detector.detectAll("", "empty.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle code with only comments", async () => {
            const code = `
                // This is a comment
                /* Multi-line
                   comment */
            `

            const violations = await detector.detectAll(code, "comments.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle multiline strings without secrets", async () => {
            const code = `
                const template = \`
                    Hello World
                    This is a test
                    No secrets here
                \`
            `

            const violations = await detector.detectAll(code, "template.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle code with URLs", async () => {
            const code = `
                const apiUrl = "https://api.example.com/v1"
                const websiteUrl = "http://localhost:3000"
            `

            const violations = await detector.detectAll(code, "urls.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle imports and requires", async () => {
            const code = `
                import { something } from "some-package"
                const fs = require('fs')
            `

            const violations = await detector.detectAll(code, "imports.ts")

            expect(violations).toHaveLength(0)
        })

        it("should return violations with correct file path", async () => {
            const code = `const secret = "test-secret-value"`
            const filePath = "src/config/secrets.ts"

            const violations = await detector.detectAll(code, filePath)

            violations.forEach((v) => {
                expect(v.file).toBe(filePath)
            })
        })

        it("should handle .js files", async () => {
            const code = `const test = "value"`

            const violations = await detector.detectAll(code, "test.js")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle .jsx files", async () => {
            const code = `const Component = () => <div>Test</div>`

            const violations = await detector.detectAll(code, "Component.jsx")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle .tsx files", async () => {
            const code = `const Component: React.FC = () => <div>Test</div>`

            const violations = await detector.detectAll(code, "Component.tsx")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle errors gracefully", async () => {
            const code = null as unknown as string

            const violations = await detector.detectAll(code, "test.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle malformed code gracefully", async () => {
            const code = "const = = ="

            const violations = await detector.detectAll(code, "malformed.ts")

            expect(violations).toBeInstanceOf(Array)
        })
    })

    describe("parseOutputToViolations", () => {
        it("should parse empty output", async () => {
            const code = ""

            const violations = await detector.detectAll(code, "test.ts")

            expect(violations).toHaveLength(0)
        })

        it("should handle whitespace-only output", async () => {
            const code = "   \n   \n   "

            const violations = await detector.detectAll(code, "test.ts")

            expect(violations).toHaveLength(0)
        })
    })

    describe("extractSecretType", () => {
        it("should handle various secret types correctly", async () => {
            const code = `const value = "test"`

            const violations = await detector.detectAll(code, "test.ts")

            violations.forEach((v) => {
                expect(v.secretType).toBeTruthy()
                expect(typeof v.secretType).toBe("string")
                expect(v.secretType.length).toBeGreaterThan(0)
            })
        })
    })

    describe("integration", () => {
        it("should work with TypeScript code", async () => {
            const code = `
                interface Config {
                    apiKey: string
                }

                const config: Config = {
                    apiKey: process.env.API_KEY || "default"
                }
            `

            const violations = await detector.detectAll(code, "config.ts")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should work with ES6+ syntax", async () => {
            const code = `
                const fetchData = async () => {
                    const response = await fetch(url)
                    return response.json()
                }

                const [data, setData] = useState(null)
            `

            const violations = await detector.detectAll(code, "hooks.ts")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should work with JSX/TSX", async () => {
            const code = `
                export const Button = ({ onClick }: Props) => {
                    return <button onClick={onClick}>Click me</button>
                }
            `

            const violations = await detector.detectAll(code, "Button.tsx")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle concurrent detections", async () => {
            const code1 = "const test1 = 'value1'"
            const code2 = "const test2 = 'value2'"
            const code3 = "const test3 = 'value3'"

            const [result1, result2, result3] = await Promise.all([
                detector.detectAll(code1, "file1.ts"),
                detector.detectAll(code2, "file2.ts"),
                detector.detectAll(code3, "file3.ts"),
            ])

            expect(result1).toBeInstanceOf(Array)
            expect(result2).toBeInstanceOf(Array)
            expect(result3).toBeInstanceOf(Array)
        })
    })

    describe("edge cases", () => {
        it("should handle very long code", async () => {
            const longCode = "const value = 'test'\n".repeat(1000)

            const violations = await detector.detectAll(longCode, "long.ts")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle special characters in code", async () => {
            const code = `
                const special = "!@#$%^&*()_+-=[]{}|;:',.<>?"
                const unicode = "日本語 🚀"
            `

            const violations = await detector.detectAll(code, "special.ts")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle code with regex patterns", async () => {
            const code = `
                const pattern = /^[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}$/i
                const matches = text.match(pattern)
            `

            const violations = await detector.detectAll(code, "regex.ts")

            expect(violations).toBeInstanceOf(Array)
        })

        it("should handle code with template literals", async () => {
            const code = `
                const message = \`Hello \${name}, your balance is \${balance}\`
            `

            const violations = await detector.detectAll(code, "template.ts")

            expect(violations).toBeInstanceOf(Array)
        })
    })

    describe("real secret detection", () => {
        it("should detect AWS access key pattern", async () => {
            const code = `const awsKey = "AKIAIOSFODNN7EXAMPLE"`

            const violations = await detector.detectAll(code, "aws.ts")

            if (violations.length > 0) {
                expect(violations[0].secretType).toContain("AWS")
            }
        })

        it("should detect basic auth credentials", async () => {
            const code = `const auth = "https://user:password@example.com"`

            const violations = await detector.detectAll(code, "auth.ts")

            if (violations.length > 0) {
                expect(violations[0].file).toBe("auth.ts")
                expect(violations[0].line).toBeGreaterThan(0)
                expect(violations[0].column).toBeGreaterThan(0)
            }
        })

        it("should detect private SSH key", async () => {
            const code = `
                const privateKey = \`-----BEGIN RSA PRIVATE KEY-----
                MIIBogIBAAJBALRiMLAA...
                -----END RSA PRIVATE KEY-----\`
            `

            const violations = await detector.detectAll(code, "ssh.ts")

            if (violations.length > 0) {
                expect(violations[0].secretType).toBeTruthy()
            }
        })

        it("should return violation objects with required properties", async () => {
            const code = `const key = "AKIAIOSFODNN7EXAMPLE"`

            const violations = await detector.detectAll(code, "test.ts")

            violations.forEach((v) => {
                expect(v).toHaveProperty("file")
                expect(v).toHaveProperty("line")
                expect(v).toHaveProperty("column")
                expect(v).toHaveProperty("secretType")
                expect(v.getMessage).toBeDefined()
                expect(v.getSuggestion).toBeDefined()
            })
        })

        it("should handle files with multiple secrets", async () => {
            const code = `
                const key1 = "AKIAIOSFODNN7EXAMPLE"
                const key2 = "AKIAIOSFODNN8EXAMPLE"
            `

            const violations = await detector.detectAll(code, "multiple.ts")

            expect(violations).toBeInstanceOf(Array)
        })
    })
})
pnpm-lock.yaml (generated, 315 lines changed)
@@ -80,6 +80,18 @@ importers:

  packages/guardian:
    dependencies:
      '@secretlint/core':
        specifier: ^11.2.5
        version: 11.2.5
      '@secretlint/node':
        specifier: ^11.2.5
        version: 11.2.5
      '@secretlint/secretlint-rule-preset-recommend':
        specifier: ^11.2.5
        version: 11.2.5
      '@secretlint/types':
        specifier: ^11.2.5
        version: 11.2.5
      commander:
        specifier: ^12.1.0
        version: 12.1.0

@@ -154,6 +166,12 @@ packages:
    resolution: {integrity: sha512-J4Jarr0SohdrHcb40gTL4wGPCQ952IMWF1G/MSAQfBAPvA9ZKApYhpxcY7PmehVePve+ujpus1dGsJ7dPxz8Kg==}
    engines: {node: ^18.19.1 || ^20.11.1 || >=22.0.0, npm: ^6.11.0 || ^7.5.6 || >=8.0.0, yarn: '>= 1.13.0'}

  '@azu/format-text@1.0.2':
    resolution: {integrity: sha512-Swi4N7Edy1Eqq82GxgEECXSSLyn6GOb5htRFPzBDdUkECGXtlf12ynO5oJSpWKPwCaUssOu7NfhDcCWpIC6Ywg==}

  '@azu/style-format@1.0.1':
    resolution: {integrity: sha512-AHcTojlNBdD/3/KxIKlg8sxIWHfOtQszLvOpagLTO+bjC3u7SAszu1lf//u7JJC50aUSH+BVWDD/KvaA6Gfn5g==}

  '@babel/code-frame@7.27.1':
    resolution: {integrity: sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg==}
    engines: {node: '>=6.9.0'}

@@ -1040,6 +1058,40 @@ packages:
      cpu: [x64]
      os: [win32]

  '@secretlint/config-loader@11.2.5':
    resolution: {integrity: sha512-pUiH5xc3x8RLEDq+0dCz65v4kohtfp68I7qmYPuymTwHodzjyJ089ZbNdN1ZX8SZV4xZLQsFIrRLn1lJ55QyyQ==}
    engines: {node: '>=20.0.0'}

  '@secretlint/core@11.2.5':
    resolution: {integrity: sha512-PZNpBd6+KVya2tA3o1oC2kTWYKju8lZG9phXyQY7geWKf+a+fInN4/HSYfCQS495oyTSjhc9qI0mNQEw83PY2Q==}
    engines: {node: '>=20.0.0'}

  '@secretlint/formatter@11.2.5':
    resolution: {integrity: sha512-9XBMeveo1eKXMC9zLjA6nd2lb5JjUgjj8NUpCo1Il8jO4YJ12k7qXZk3T/QJup+Kh0ThpHO03D9C1xLDIPIEPQ==}
    engines: {node: '>=20.0.0'}

  '@secretlint/node@11.2.5':
    resolution: {integrity: sha512-nPdtUsTzDzBJzFiKh80/H5+2ZRRogtDuHhnNiGtF7LSHp8YsQHU5piAVbESdV0AmUjbWijAjscIsWqvtU+2JUQ==}
    engines: {node: '>=20.0.0'}

  '@secretlint/profiler@11.2.5':
    resolution: {integrity: sha512-evQ2PeO3Ub0apWIPaXJy8lMDO1OFgvgQhZd+MhYLcLHgR559EtJ9V02Sh5c10wTLkLAtJ+czlJg2kmlt0nm8fw==}

  '@secretlint/resolver@11.2.5':
    resolution: {integrity: sha512-Zn9+Gj7cRNjEDX8d1NYZNjTG9/Wjlc8N+JvARFYYYu6JxfbtkabhFxzwxBLkRZ2ZCkPCCnuXJwepcgfVXSPsng==}

  '@secretlint/secretlint-rule-preset-recommend@11.2.5':
    resolution: {integrity: sha512-FAnp/dPdbvHEw50aF9JMPF/OwW58ULvVXEsk+mXTtBD09VJZhG0vFum8WzxMbB98Eo4xDddGzYtE3g27pBOaQA==}
    engines: {node: '>=20.0.0'}

  '@secretlint/source-creator@11.2.5':
    resolution: {integrity: sha512-+ApoNDS4uIaLb2PG9PPEP9Zu1HDBWpxSd/+Qlb3MzKTwp2BG9sbUhvpGgxuIHFn7pMWQU60DhzYJJUBpbXZEHQ==}
    engines: {node: '>=20.0.0'}

  '@secretlint/types@11.2.5':
    resolution: {integrity: sha512-iA7E+uXuiEydOwv8glEYM4tCHnl8C7wTgLxg+3upHhH/iSSnefWfoRqrJwVBhwxPg4MDoypVI7Oal7bX7/ne+w==}
    engines: {node: '>=20.0.0'}

  '@sinclair/typebox@0.34.41':
    resolution: {integrity: sha512-6gS8pZzSXdyRHTIqoqSVknxolr1kzfy4/CeDnrzsVz8TTIWUbOBr6gnzOmTYJ3eXQNh4IYHIGi5aIL7sOZ2G/g==}

@@ -1052,6 +1104,21 @@ packages:
  '@standard-schema/spec@1.0.0':
    resolution: {integrity: sha512-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA==}

  '@textlint/ast-node-types@15.4.0':
    resolution: {integrity: sha512-IqY8i7IOGuvy05wZxISB7Me1ZyrvhaQGgx6DavfQjH3cfwpPFdDbDYmMXMuSv2xLS1kDB1kYKBV7fL2Vi16lRA==}

  '@textlint/linter-formatter@15.4.0':
    resolution: {integrity: sha512-rfqOZmnI1Wwc/Pa4LK+vagvVPmvxf9oRsBRqIOB04DwhucingZyAIJI/TyG18DIDYbP2aFXBZ3oOvyAxHe/8PQ==}

  '@textlint/module-interop@15.4.0':
    resolution: {integrity: sha512-uGf+SFIfzOLCbZI0gp+2NLsrkSArsvEWulPP6lJuKp7yRHadmy7Xf/YHORe46qhNyyxc8PiAfiixHJSaHGUrGg==}

  '@textlint/resolver@15.4.0':
    resolution: {integrity: sha512-Vh/QceKZQHFJFG4GxxIsKM1Xhwv93mbtKHmFE5/ybal1mIKHdqF03Z9Guaqt6Sx/AeNUshq0hkMOEhEyEWnehQ==}

  '@textlint/types@15.4.0':
    resolution: {integrity: sha512-ZMwJgw/xjxJufOD+IB7I2Enl9Si4Hxo04B76RwUZ5cKBKzOPcmd6WvGe2F7jqdgmTdGnfMU+Bo/joQrjPNIWqg==}

  '@tokenizer/inflate@0.3.1':
    resolution: {integrity: sha512-4oeoZEBQdLdt5WmP/hx1KZ6D3/Oid/0cUb2nk4F0pTDAWy+KCH3/EnAkZF/bvckWo8I33EqBm01lIPgmgc8rCA==}
    engines: {node: '>=18'}

@@ -1488,6 +1555,10 @@ packages:
    resolution: {integrity: sha512-gKXj5ALrKWQLsYG9jlTRmR/xKluxHV+Z9QEwNIgCfM1/uwPMCuzVVnh5mwTd+OuBZcwSIMbqssNWRm1lE51QaQ==}
    engines: {node: '>=8'}

  ansi-escapes@7.2.0:
    resolution: {integrity: sha512-g6LhBsl+GBPRWGWsBtutpzBYuIIdBkLEvad5C/va/74Db018+5TZiyA26cZJAr3Rft5lprVqOIPxf5Vid6tqAw==}
    engines: {node: '>=18'}

  ansi-regex@5.0.1:
    resolution: {integrity: sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==}
    engines: {node: '>=8'}

@@ -1538,6 +1609,10 @@ packages:
  ast-v8-to-istanbul@0.3.8:
    resolution: {integrity: sha512-szgSZqUxI5T8mLKvS7WTjF9is+MVbOeLADU73IseOcrqhxr/VAvy6wfoVE39KnKzA7JRhjF5eUagNlHwvZPlKQ==}

  astral-regex@2.0.0:
    resolution: {integrity: sha512-Z7tMw1ytTXt5jqMcOP+OQteU1VuNK9Y02uuJtKQ1Sv69jXQKKg5cibLwGJow8yzZP+eAc18EmLGPal0bp36rvQ==}
    engines: {node: '>=8'}

  asynckit@0.4.0:
    resolution: {integrity: sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==}

@@ -1576,9 +1651,16 @@ packages:
    resolution: {integrity: sha512-a28v2eWrrRWPpJSzxc+mKwm0ZtVx/G8SepdQZDArnXYU/XS+IF6mp8aB/4E+hH1tyGCoDo3KlUCdlSxGDsRkAw==}
    hasBin: true

  binaryextensions@6.11.0:
    resolution: {integrity: sha512-sXnYK/Ij80TO3lcqZVV2YgfKN5QjUWIRk/XSm2J/4bd/lPko3lvk0O4ZppH6m+6hB2/GTu+ptNwVFe1xh+QLQw==}
    engines: {node: '>=4'}

  bl@4.1.0:
    resolution: {integrity: sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==}

  boundary@2.0.0:
    resolution: {integrity: sha512-rJKn5ooC9u8q13IMCrW0RSp31pxBCHE3y9V/tp3TdWSLf8Em3p6Di4NBpfzbJge9YjjFEsD0RtFEjtvHL5VyEA==}

  brace-expansion@1.1.12:
    resolution: {integrity: sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==}

@@ -1638,6 +1720,10 @@ packages:
    resolution: {integrity: sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==}
    engines: {node: '>=10'}

  chalk@5.6.2:
    resolution: {integrity: sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==}
    engines: {node: ^12.17.0 || ^14.13 || >=16.0.0}

  char-regex@1.0.2:
    resolution: {integrity: sha512-kWWXztvZ5SBQV+eRgKFeh8q5sLuZY2+8WUIzlxWVTg+oGwY14qylx1KbKzHd8P6ZYkAg0xyIDU9JMHhyJMZ1jw==}
    engines: {node: '>=10'}

@@ -1801,6 +1887,10 @@ packages:
  eastasianwidth@0.2.0:
    resolution: {integrity: sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==}

  editions@6.22.0:
    resolution: {integrity: sha512-UgGlf8IW75je7HZjNDpJdCv4cGJWIi6yumFdZ0R7A8/CIhQiWUjyGLCxdHpd8bmyD1gnkfUNK0oeOXqUS2cpfQ==}
    engines: {ecmascript: '>= es5', node: '>=4'}

  electron-to-chromium@1.5.259:
    resolution: {integrity: sha512-I+oLXgpEJzD6Cwuwt1gYjxsDmu/S/Kd41mmLA3O+/uH2pFRO/DvOjUyGozL8j3KeLV6WyZ7ssPwELMsXCcsJAQ==}

@@ -1818,6 +1908,10 @@ packages:
    resolution: {integrity: sha512-d4lC8xfavMeBjzGr2vECC3fsGXziXZQyJxD868h2M/mBI3PwAuODxAkLkq5HYuvrPYcUtiLzsTo8U3PgX3Ocww==}
    engines: {node: '>=10.13.0'}

  environment@1.1.0:
    resolution: {integrity: sha512-xUtoPkMggbz0MPyPiIWr1Kp4aeWJjDZ6SMvURhimjdZgsRuDplF5/s9hcgGhyXMhs+6vpnuoiZ2kFiu3FMnS8Q==}
    engines: {node: '>=18'}

  error-ex@1.3.4:
    resolution: {integrity: sha512-sqQamAnR14VgCr1A618A3sGrygcpK+HEbenA/HiEAkkUwcZIIB/tgWqHFxWgOyDh4nB4JCRimh79dR5Ywc9MDQ==}

@@ -2249,6 +2343,10 @@ packages:
    resolution: {integrity: sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA==}
    engines: {node: '>=8'}

  istextorbinary@9.5.0:
    resolution: {integrity: sha512-5mbUj3SiZXCuRf9fT3ibzbSSEWiy63gFfksmGfdOzujPjW3k+z8WvIBxcJHBoQNlaZaiyB25deviif2+osLmLw==}
    engines: {node: '>=4'}

  iterare@1.2.1:
    resolution: {integrity: sha512-RKYVTCjAnRthyJes037NX/IiqeidgN1xc3j1RjFfECFp28A1GVwK9nA+i0rJPaHqSZwygLzRnFlzUuHFoWWy+Q==}
    engines: {node: '>=6'}

@@ -2473,6 +2571,9 @@ packages:
  lodash.merge@4.6.2:
    resolution: {integrity: sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==}

  lodash.truncate@4.4.2:
    resolution: {integrity: sha512-jttmRe7bRse52OsWIMDLaXxWqRAmtIUccAQ3garviCqJjafXOfNMO0yMfNpdD6zbGaTU0P5Nz7e7gAT6cKmJRw==}

  lodash@4.17.21:
    resolution: {integrity: sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==}

@@ -2657,6 +2758,10 @@ packages:
    resolution: {integrity: sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==}
    engines: {node: '>=10'}

  p-map@7.0.4:
    resolution: {integrity: sha512-tkAQEw8ysMzmkhgw8k+1U/iPhWNhykKnSk4Rd5zLoPJCuJaGRPo6YposrZgaxHKzDHdDWWZvE/Sk7hsL2X/CpQ==}
    engines: {node: '>=18'}

  p-try@2.2.0:
    resolution: {integrity: sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==}
    engines: {node: '>=6'}

@@ -2725,6 +2830,9 @@ packages:
    resolution: {integrity: sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==}
    engines: {node: '>=8'}

  pluralize@2.0.0:
    resolution: {integrity: sha512-TqNZzQCD4S42De9IfnnBvILN7HAW7riLqsCyp8lgjXeysyPlX5HhqKAcJHHHb9XskE4/a+7VGC9zzx8Ls0jOAw==}

  pluralize@8.0.0:
    resolution: {integrity: sha512-Nc3IT5yHzflTfbjgqWcCPpo7DaKy4FnpB0l/zCAW0Tc7jxAiuqSxHasntB3D7887LSrA93kDJ9IXovxJYxyLCA==}
    engines: {node: '>=4'}

@@ -2767,6 +2875,9 @@ packages:
  randombytes@2.1.0:
    resolution: {integrity: sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==}

  rc-config-loader@4.1.3:
    resolution: {integrity: sha512-kD7FqML7l800i6pS6pvLyIE2ncbk9Du8Q0gp/4hMPhJU6ZxApkoLcGD8ZeqgiAlfwZ6BlETq6qqe+12DUL207w==}

  react-is@18.3.1:
    resolution: {integrity: sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg==}

@@ -2894,6 +3005,10 @@ packages:
    resolution: {integrity: sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==}
    engines: {node: '>=8'}

  slice-ansi@4.0.0:
    resolution: {integrity: sha512-qMCMfhY040cVHT43K9BFygqYbUPFZKHOg7K73mtTWJRb8pyP3fzf4Ixd5SzdEJQ6MRUg/WBnOLxghZtKKurENQ==}
    engines: {node: '>=10'}

  source-map-js@1.2.1:
    resolution: {integrity: sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==}
    engines: {node: '>=0.10.0'}

@@ -2972,6 +3087,9 @@ packages:
    resolution: {integrity: sha512-KIy5nylvC5le1OdaaoCJ07L+8iQzJHGH6pWDuzS+d07Cu7n1MZ2x26P8ZKIWfbK02+XIL8Mp4RkWeqdUCrDMfg==}
    engines: {node: '>=18'}

  structured-source@4.0.0:
    resolution: {integrity: sha512-qGzRFNJDjFieQkl/sVOI2dUjHKRyL9dAJi2gCPGJLbJHBIkyOHxjuocpIEfbLioX+qSJpvbYdT49/YCdMznKxA==}

  superagent@10.2.3:
    resolution: {integrity: sha512-y/hkYGeXAj7wUMjxRbB21g/l6aAEituGXM9Rwl4o20+SX3e8YOSV6BxFXl+dL3Uk0mjSL3kCbNkwURm8/gEDig==}
    engines: {node: '>=14.18.0'}

@@ -2988,6 +3106,10 @@ packages:
    resolution: {integrity: sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==}
    engines: {node: '>=10'}

  supports-hyperlinks@3.2.0:
    resolution: {integrity: sha512-zFObLMyZeEwzAoKCyu1B91U79K2t7ApXuQfo8OuxwXLDgcKxuwM+YvcbIhm6QWqz7mHUH1TVytR1PwVVjEuMig==}
    engines: {node: '>=14.18'}

  symbol-observable@4.0.0:
    resolution: {integrity: sha512-b19dMThMV4HVFynSAM1++gBHAbk2Tc/osgLIBZMKsyqh34jb2e8Os7T6ZW/Bt3pJFdBTd2JwAnAAEQV7rSNvcQ==}
    engines: {node: '>=0.10'}

@@ -2996,10 +3118,18 @@ packages:
    resolution: {integrity: sha512-MeQTA1r0litLUf0Rp/iisCaL8761lKAZHaimlbGK4j0HysC4PLfqygQj9srcs0m2RdtDYnF8UuYyKpbjHYp7Jw==}
    engines: {node: ^14.18.0 || >=16.0.0}

  table@6.9.0:
||||
resolution: {integrity: sha512-9kY+CygyYM6j02t5YFHbNz2FN5QmYGv9zAjVp4lCDjlCw7amdckXlEt/bjMhUIfj4ThGRE4gCUH5+yGnNuPo5A==}
|
||||
engines: {node: '>=10.0.0'}
|
||||
|
||||
tapable@2.3.0:
|
||||
resolution: {integrity: sha512-g9ljZiwki/LfxmQADO3dEY1CbpmXT5Hm2fJ+QaGKwSXUylMybePR7/67YW7jOrrvjEgL1Fmz5kzyAjWVWLlucg==}
|
||||
engines: {node: '>=6'}
|
||||
|
||||
terminal-link@4.0.0:
|
||||
resolution: {integrity: sha512-lk+vH+MccxNqgVqSnkMVKx4VLJfnLjDBGzH16JVZjKE2DoxP57s6/vt6JmXV5I3jBcfGrxNrYtC+mPtU7WJztA==}
|
||||
engines: {node: '>=18'}
|
||||
|
||||
terser-webpack-plugin@5.3.14:
|
||||
resolution: {integrity: sha512-vkZjpUjb6OMS7dhV+tILUW6BhpDR7P2L/aQSAv+Uwk+m8KATX9EccViHTJR2qDtACKPIYndLGCyl3FMo+r2LMw==}
|
||||
engines: {node: '>= 10.13.0'}
|
||||
@@ -3025,6 +3155,13 @@ packages:
|
||||
resolution: {integrity: sha512-cAGWPIyOHU6zlmg88jwm7VRyXnMN7iV68OGAbYDk/Mh/xC/pzVPlQtY6ngoIH/5/tciuhGfvESU8GrHrcxD56w==}
|
||||
engines: {node: '>=8'}
|
||||
|
||||
text-table@0.2.0:
|
||||
resolution: {integrity: sha512-N+8UisAXDGk8PFXP4HAzVR9nbfmVJ3zYLAWiTIoqC5v5isinhr+r5uaO8+7r3BMfuNIufIsA7RdpVgacC2cSpw==}
|
||||
|
||||
textextensions@6.11.0:
|
||||
resolution: {integrity: sha512-tXJwSr9355kFJI3lbCkPpUH5cP8/M0GGy2xLO34aZCjMXBaK3SoPnZwr/oWmo1FdCnELcs4npdCIOFtq9W3ruQ==}
|
||||
engines: {node: '>=4'}
|
||||
|
||||
tinybench@2.9.0:
|
||||
resolution: {integrity: sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg==}
|
||||
|
||||
@@ -3217,6 +3354,10 @@ packages:
|
||||
resolution: {integrity: sha512-kiGUalWN+rgBJ/1OHZsBtU4rXZOfj/7rKQxULKlIzwzQSvMJUUNgPwJEEh7gU6xEVxC0ahoOBvN2YI8GH6FNgA==}
|
||||
engines: {node: '>=10.12.0'}
|
||||
|
||||
version-range@4.15.0:
|
||||
resolution: {integrity: sha512-Ck0EJbAGxHwprkzFO966t4/5QkRuzh+/I1RxhLgUKKwEn+Cd8NwM60mE3AqBZg5gYODoXW0EFsQvbZjRlvdqbg==}
|
||||
engines: {node: '>=4'}
|
||||
|
||||
vite@7.2.4:
|
||||
resolution: {integrity: sha512-NL8jTlbo0Tn4dUEXEsUg8KeyG/Lkmc4Fnzb8JXN/Ykm9G4HNImjtABMJgkQoVjOBN/j2WAwDTRytdqJbZsah7w==}
|
||||
engines: {node: ^20.19.0 || >=22.12.0}
|
||||
@@ -3441,6 +3582,12 @@ snapshots:
|
||||
transitivePeerDependencies:
|
||||
- chokidar
|
||||
|
||||
'@azu/format-text@1.0.2': {}
|
||||
|
||||
'@azu/style-format@1.0.1':
|
||||
dependencies:
|
||||
'@azu/format-text': 1.0.2
|
||||
|
||||
'@babel/code-frame@7.27.1':
|
||||
dependencies:
|
||||
'@babel/helper-validator-identifier': 7.28.5
|
||||
@@ -4344,6 +4491,68 @@ snapshots:
|
||||
'@rollup/rollup-win32-x64-msvc@4.53.3':
|
||||
optional: true
|
||||
|
||||
'@secretlint/config-loader@11.2.5':
|
||||
dependencies:
|
||||
'@secretlint/profiler': 11.2.5
|
||||
'@secretlint/resolver': 11.2.5
|
||||
'@secretlint/types': 11.2.5
|
||||
ajv: 8.17.1
|
||||
debug: 4.4.3
|
||||
rc-config-loader: 4.1.3
|
||||
transitivePeerDependencies:
|
||||
- supports-color
|
||||
|
||||
'@secretlint/core@11.2.5':
|
||||
dependencies:
|
||||
'@secretlint/profiler': 11.2.5
|
||||
'@secretlint/types': 11.2.5
|
||||
debug: 4.4.3
|
||||
structured-source: 4.0.0
|
||||
transitivePeerDependencies:
|
||||
- supports-color
|
||||
|
||||
'@secretlint/formatter@11.2.5':
|
||||
dependencies:
|
||||
'@secretlint/resolver': 11.2.5
|
||||
'@secretlint/types': 11.2.5
|
||||
'@textlint/linter-formatter': 15.4.0
|
||||
'@textlint/module-interop': 15.4.0
|
||||
'@textlint/types': 15.4.0
|
||||
chalk: 5.6.2
|
||||
debug: 4.4.3
|
||||
pluralize: 8.0.0
|
||||
strip-ansi: 7.1.2
|
||||
table: 6.9.0
|
||||
terminal-link: 4.0.0
|
||||
transitivePeerDependencies:
|
||||
- supports-color
|
||||
|
||||
'@secretlint/node@11.2.5':
|
||||
dependencies:
|
||||
'@secretlint/config-loader': 11.2.5
|
||||
'@secretlint/core': 11.2.5
|
||||
'@secretlint/formatter': 11.2.5
|
||||
'@secretlint/profiler': 11.2.5
|
||||
'@secretlint/source-creator': 11.2.5
|
||||
'@secretlint/types': 11.2.5
|
||||
debug: 4.4.3
|
||||
p-map: 7.0.4
|
||||
transitivePeerDependencies:
|
||||
- supports-color
|
||||
|
||||
'@secretlint/profiler@11.2.5': {}
|
||||
|
||||
'@secretlint/resolver@11.2.5': {}
|
||||
|
||||
'@secretlint/secretlint-rule-preset-recommend@11.2.5': {}
|
||||
|
||||
'@secretlint/source-creator@11.2.5':
|
||||
dependencies:
|
||||
'@secretlint/types': 11.2.5
|
||||
istextorbinary: 9.5.0
|
||||
|
||||
'@secretlint/types@11.2.5': {}
|
||||
|
||||
'@sinclair/typebox@0.34.41': {}
|
||||
|
||||
'@sinonjs/commons@3.0.1':
|
||||
@@ -4356,6 +4565,35 @@ snapshots:
|
||||
|
||||
'@standard-schema/spec@1.0.0': {}
|
||||
|
||||
'@textlint/ast-node-types@15.4.0': {}
|
||||
|
||||
'@textlint/linter-formatter@15.4.0':
|
||||
dependencies:
|
||||
'@azu/format-text': 1.0.2
|
||||
'@azu/style-format': 1.0.1
|
||||
'@textlint/module-interop': 15.4.0
|
||||
'@textlint/resolver': 15.4.0
|
||||
'@textlint/types': 15.4.0
|
||||
chalk: 4.1.2
|
||||
debug: 4.4.3
|
||||
js-yaml: 3.14.2
|
||||
lodash: 4.17.21
|
||||
pluralize: 2.0.0
|
||||
string-width: 4.2.3
|
||||
strip-ansi: 6.0.1
|
||||
table: 6.9.0
|
||||
text-table: 0.2.0
|
||||
transitivePeerDependencies:
|
||||
- supports-color
|
||||
|
||||
'@textlint/module-interop@15.4.0': {}
|
||||
|
||||
'@textlint/resolver@15.4.0': {}
|
||||
|
||||
'@textlint/types@15.4.0':
|
||||
dependencies:
|
||||
'@textlint/ast-node-types': 15.4.0
|
||||
|
||||
'@tokenizer/inflate@0.3.1':
|
||||
dependencies:
|
||||
debug: 4.4.3
|
||||
@@ -4865,6 +5103,10 @@ snapshots:
|
||||
dependencies:
|
||||
type-fest: 0.21.3
|
||||
|
||||
ansi-escapes@7.2.0:
|
||||
dependencies:
|
||||
environment: 1.1.0
|
||||
|
||||
ansi-regex@5.0.1: {}
|
||||
|
||||
ansi-regex@6.2.2: {}
|
||||
@@ -4904,6 +5146,8 @@ snapshots:
|
||||
estree-walker: 3.0.3
|
||||
js-tokens: 9.0.1
|
||||
|
||||
astral-regex@2.0.0: {}
|
||||
|
||||
asynckit@0.4.0: {}
|
||||
|
||||
babel-jest@30.2.0(@babel/core@7.28.5):
|
||||
@@ -4964,12 +5208,18 @@ snapshots:
|
||||
|
||||
baseline-browser-mapping@2.8.31: {}
|
||||
|
||||
binaryextensions@6.11.0:
|
||||
dependencies:
|
||||
editions: 6.22.0
|
||||
|
||||
bl@4.1.0:
|
||||
dependencies:
|
||||
buffer: 5.7.1
|
||||
inherits: 2.0.4
|
||||
readable-stream: 3.6.2
|
||||
|
||||
boundary@2.0.0: {}
|
||||
|
||||
brace-expansion@1.1.12:
|
||||
dependencies:
|
||||
balanced-match: 1.0.2
|
||||
@@ -5031,6 +5281,8 @@ snapshots:
|
||||
ansi-styles: 4.3.0
|
||||
supports-color: 7.2.0
|
||||
|
||||
chalk@5.6.2: {}
|
||||
|
||||
char-regex@1.0.2: {}
|
||||
|
||||
chardet@2.1.1: {}
|
||||
@@ -5155,6 +5407,10 @@ snapshots:
|
||||
|
||||
eastasianwidth@0.2.0: {}
|
||||
|
||||
editions@6.22.0:
|
||||
dependencies:
|
||||
version-range: 4.15.0
|
||||
|
||||
electron-to-chromium@1.5.259: {}
|
||||
|
||||
emittery@0.13.1: {}
|
||||
@@ -5168,6 +5424,8 @@ snapshots:
|
||||
graceful-fs: 4.2.11
|
||||
tapable: 2.3.0
|
||||
|
||||
environment@1.1.0: {}
|
||||
|
||||
error-ex@1.3.4:
|
||||
dependencies:
|
||||
is-arrayish: 0.2.1
|
||||
@@ -5647,6 +5905,12 @@ snapshots:
|
||||
html-escaper: 2.0.2
|
||||
istanbul-lib-report: 3.0.1
|
||||
|
||||
istextorbinary@9.5.0:
|
||||
dependencies:
|
||||
binaryextensions: 6.11.0
|
||||
editions: 6.22.0
|
||||
textextensions: 6.11.0
|
||||
|
||||
iterare@1.2.1: {}
|
||||
|
||||
jackspeak@3.4.3:
|
||||
@@ -6041,6 +6305,8 @@ snapshots:
|
||||
|
||||
lodash.merge@4.6.2: {}
|
||||
|
||||
lodash.truncate@4.4.2: {}
|
||||
|
||||
lodash@4.17.21: {}
|
||||
|
||||
log-symbols@4.1.0:
|
||||
@@ -6204,6 +6470,8 @@ snapshots:
|
||||
dependencies:
|
||||
p-limit: 3.1.0
|
||||
|
||||
p-map@7.0.4: {}
|
||||
|
||||
p-try@2.2.0: {}
|
||||
|
||||
package-json-from-dist@1.0.1: {}
|
||||
@@ -6255,6 +6523,8 @@ snapshots:
|
||||
dependencies:
|
||||
find-up: 4.1.0
|
||||
|
||||
pluralize@2.0.0: {}
|
||||
|
||||
pluralize@8.0.0: {}
|
||||
|
||||
postcss@8.5.6:
|
||||
@@ -6291,6 +6561,15 @@ snapshots:
|
||||
dependencies:
|
||||
safe-buffer: 5.2.1
|
||||
|
||||
rc-config-loader@4.1.3:
|
||||
dependencies:
|
||||
debug: 4.4.3
|
||||
js-yaml: 4.1.1
|
||||
json5: 2.2.3
|
||||
require-from-string: 2.0.2
|
||||
transitivePeerDependencies:
|
||||
- supports-color
|
||||
|
||||
react-is@18.3.1: {}
|
||||
|
||||
readable-stream@3.6.2:
|
||||
@@ -6441,6 +6720,12 @@ snapshots:
|
||||
|
||||
slash@3.0.0: {}
|
||||
|
||||
slice-ansi@4.0.0:
|
||||
dependencies:
|
||||
ansi-styles: 4.3.0
|
||||
astral-regex: 2.0.0
|
||||
is-fullwidth-code-point: 3.0.0
|
||||
|
||||
source-map-js@1.2.1: {}
|
||||
|
||||
source-map-support@0.5.13:
|
||||
@@ -6510,6 +6795,10 @@ snapshots:
|
||||
dependencies:
|
||||
'@tokenizer/token': 0.3.0
|
||||
|
||||
structured-source@4.0.0:
|
||||
dependencies:
|
||||
boundary: 2.0.0
|
||||
|
||||
superagent@10.2.3:
|
||||
dependencies:
|
||||
component-emitter: 1.3.1
|
||||
@@ -6539,14 +6828,32 @@ snapshots:
|
||||
dependencies:
|
||||
has-flag: 4.0.0
|
||||
|
||||
supports-hyperlinks@3.2.0:
|
||||
dependencies:
|
||||
has-flag: 4.0.0
|
||||
supports-color: 7.2.0
|
||||
|
||||
symbol-observable@4.0.0: {}
|
||||
|
||||
synckit@0.11.11:
|
||||
dependencies:
|
||||
'@pkgr/core': 0.2.9
|
||||
|
||||
table@6.9.0:
|
||||
dependencies:
|
||||
ajv: 8.17.1
|
||||
lodash.truncate: 4.4.2
|
||||
slice-ansi: 4.0.0
|
||||
string-width: 4.2.3
|
||||
strip-ansi: 6.0.1
|
||||
|
||||
tapable@2.3.0: {}
|
||||
|
||||
terminal-link@4.0.0:
|
||||
dependencies:
|
||||
ansi-escapes: 7.2.0
|
||||
supports-hyperlinks: 3.2.0
|
||||
|
||||
terser-webpack-plugin@5.3.14(webpack@5.100.2):
|
||||
dependencies:
|
||||
'@jridgewell/trace-mapping': 0.3.31
|
||||
@@ -6569,6 +6876,12 @@ snapshots:
|
||||
glob: 7.2.3
|
||||
minimatch: 3.1.2
|
||||
|
||||
text-table@0.2.0: {}
|
||||
|
||||
textextensions@6.11.0:
|
||||
dependencies:
|
||||
editions: 6.22.0
|
||||
|
||||
tinybench@2.9.0: {}
|
||||
|
||||
tinyexec@0.3.2: {}
|
||||
@@ -6770,6 +7083,8 @@ snapshots:
|
||||
'@types/istanbul-lib-coverage': 2.0.6
|
||||
convert-source-map: 2.0.0
|
||||
|
||||
version-range@4.15.0: {}
|
||||
|
||||
vite@7.2.4(@types/node@22.19.1)(terser@5.44.1)(tsx@4.20.6):
|
||||
dependencies:
|
||||
esbuild: 0.25.12
|
||||
|
||||