mirror of
https://github.com/samiyev/puaros.git
synced 2025-12-27 23:06:54 +05:00
Compare commits

10 commits:

- b5f54fc3f8
- 8a2c6fdc0e
- 2479bde9a8
- f6bb65f2f1
- 8916ce9eab
- 24f54d4b57
- d038f90bd2
- e79874e420
- 1663d191ee
- 7b4cb60f13
@@ -5,6 +5,47 @@ All notable changes to @samiyev/guardian will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.7.5] - 2025-11-25

### Changed

- ♻️ **Refactored AnalyzeProject use-case** - improved maintainability and testability:
    - Split 615-line God Use-Case into focused pipeline components
    - Created `FileCollectionStep.ts` for file scanning and basic parsing (66 lines)
    - Created `ParsingStep.ts` for AST parsing and dependency graph construction (51 lines)
    - Created `DetectionPipeline.ts` for running all 7 detectors (371 lines)
    - Created `ResultAggregator.ts` for building the response DTO (81 lines)
    - Reduced `AnalyzeProject.ts` from 615 to 245 lines (60% reduction)
    - All 345 tests pass, no breaking changes
    - Improved separation of concerns and single responsibility
    - Easier to test and modify individual pipeline steps

### Added

- 🤖 **AI Agent Instructions in CLI help** - dedicated section for AI coding assistants:
    - Step-by-step workflow: scan → fix → verify → expand scope
    - Recommended commands for each step (`--only-critical --limit 5`)
    - Output format description for easy parsing
    - Priority order guidance (CRITICAL → HIGH → MEDIUM → LOW)
    - Helps Claude, Copilot, Cursor, and other AI agents immediately take action

Run `guardian --help` to see the new "AI AGENT INSTRUCTIONS" section.
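For reference, here is one violation rendered in the documented output format; the template comes from the new help text, while the concrete values below are hypothetical:

```
1. src/application/use-cases/CreateUser.ts:42:15
   Severity: 🔴 CRITICAL
   Type: hardcoded-value
   Value: "secret-key-12345"
   Context: const apiKey = "secret-key-12345"
   💡 Suggestion: Extract to a named constant in shared/constants
```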

## [0.7.4] - 2025-11-25

### Fixed

- 🐛 **TypeScript-aware hardcode detection** - dramatically reduces false positives by 35.7%:
    - Ignore strings in TypeScript union types (`type Status = 'active' | 'inactive'`)
    - Ignore strings in interface property types (`interface { mode: 'development' | 'production' }`)
    - Ignore strings in type assertions (`as 'read' | 'write'`)
    - Ignore strings in typeof checks (`typeof x === 'string'`)
    - Ignore strings in Symbol() calls for DI tokens (`Symbol('LOGGER')`)
    - Ignore strings in dynamic import() calls (`import('../../module.js')`)
    - Exclude tokens.ts/tokens.js files completely (DI container files)
    - Tested on a real-world TypeScript project: 985 → 633 issues (352 false positives eliminated)
- ✅ **Added 13 new tests** for TypeScript type context filtering
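A minimal sketch of the new filtering, built from examples that appear in this commit's test suite:

```typescript
// Ignored as of 0.7.4: TypeScript type contexts and DI tokens
type Status = 'active' | 'inactive'                      // union type members
interface Config { mode: 'development' | 'production' }  // interface property types
const LOGGER = Symbol('LOGGER')                          // DI token
const isName = (x: unknown) => typeof x === 'string'     // typeof comparison

// Still flagged: a genuine magic string
const apiKey = "secret-key-12345"
```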

## [0.7.3] - 2025-11-25

### Fixed

@@ -2,9 +2,9 @@

This document outlines the current features and future plans for @puaros/guardian.

## Current Version: 0.6.0 ✅ RELEASED
## Current Version: 0.7.5 ✅ RELEASED

**Released:** 2025-11-24
**Released:** 2025-11-25

### Features Included in 0.1.0

@@ -301,7 +301,221 @@ class Order {

---

### Version 0.8.0 - Anemic Domain Model Detection 🩺
### Version 0.7.5 - Refactor AnalyzeProject Use-Case 🔧 ✅ RELEASED

**Released:** 2025-11-25
**Priority:** HIGH
**Scope:** Single session (~128K tokens)

Split `AnalyzeProject.ts` (615 lines) into focused pipeline components.

**Problem:**
- God Use-Case with 615 lines
- Mixing: file scanning, parsing, detection, aggregation
- Hard to test and modify individual steps

**Solution:**
```
application/use-cases/
├── AnalyzeProject.ts           # Orchestrator (245 lines)
├── pipeline/
│   ├── FileCollectionStep.ts   # File scanning (66 lines)
│   ├── ParsingStep.ts          # AST + dependency graph (51 lines)
│   ├── DetectionPipeline.ts    # All 7 detectors (371 lines)
│   └── ResultAggregator.ts     # Build response DTO (81 lines)
```

**Deliverables:**
- ✅ Extract 4 pipeline components
- ✅ Reduce `AnalyzeProject.ts` from 615 to 245 lines (60% reduction)
- ✅ All 345 tests pass, no breaking changes
- ✅ Publish to npm

---

### Version 0.7.6 - Refactor CLI Module 🔧

**Priority:** MEDIUM
**Scope:** Single session (~128K tokens)

Split `cli/index.ts` (470 lines) into focused formatters.

**Problem:**
- CLI file has 470 lines
- Mixing: command setup, formatting, grouping, statistics

**Solution:**
```
cli/
├── index.ts                    # Commands only (~100 lines)
├── formatters/
│   ├── OutputFormatter.ts      # Violation formatting
│   └── StatisticsFormatter.ts
├── groupers/
│   └── ViolationGrouper.ts     # Sorting & grouping
```

**Deliverables:**
- [ ] Extract formatters and groupers (a possible shape is sketched below)
- [ ] Reduce `cli/index.ts` to ~100-150 lines
- [ ] CLI output identical to before
- [ ] Publish to npm
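A possible shape for the extracted grouper; this is a sketch only, since `ViolationGrouper.ts` is planned but not yet written, and the types below are simplified stand-ins for Guardian's real violation DTOs:

```typescript
// Hypothetical sketch of the planned ViolationGrouper (not part of this commit).
type SeverityLevel = "critical" | "high" | "medium" | "low"

interface Violation {
    file: string
    severity: SeverityLevel
}

// Mirrors the SEVERITY_ORDER constant used elsewhere in this diff.
const SEVERITY_ORDER: Record<SeverityLevel, number> = { critical: 0, high: 1, medium: 2, low: 3 }

export class ViolationGrouper {
    /** Sorts by severity (critical first), then buckets violations by file for display. */
    public group<T extends Violation>(violations: T[]): Map<string, T[]> {
        const sorted = [...violations].sort(
            (a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity],
        )
        const byFile = new Map<string, T[]>()
        for (const violation of sorted) {
            const bucket = byFile.get(violation.file) ?? []
            bucket.push(violation)
            byFile.set(violation.file, bucket)
        }
        return byFile
    }
}
```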

---

### Version 0.7.7 - Improve Test Coverage 🧪

**Priority:** MEDIUM
**Scope:** Single session (~128K tokens)

Increase coverage for under-tested domain files.

**Current State:**

| File | Coverage |
|------|----------|
| SourceFile.ts | 46% |
| ProjectPath.ts | 50% |
| ValueObject.ts | 25% |
| RepositoryViolation.ts | 58% |

**Deliverables:**
- [ ] SourceFile.ts → 80%+
- [ ] ProjectPath.ts → 80%+ (an illustrative test sketch follows this list)
- [ ] ValueObject.ts → 80%+
- [ ] RepositoryViolation.ts → 80%+
- [ ] Publish to npm
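As an illustration of the kind of test this work calls for, a hedged sketch for `ProjectPath`; the runner and the expected values are assumptions, while the API itself (`create`, `relative`, `filename`, `isTypeScript`) is inferred from its usage elsewhere in this diff:

```typescript
import { describe, expect, it } from "vitest" // assumed test runner
import { ProjectPath } from "../src/domain/value-objects/ProjectPath" // assumed import path

describe("ProjectPath", () => {
    it("derives a relative path and recognizes TypeScript files", () => {
        const path = ProjectPath.create("/repo/src/domain/entities/User.ts", "/repo")
        expect(path.relative).toBe("src/domain/entities/User.ts") // assumed format
        expect(path.filename).toBe("User.ts") // assumed value
        expect(path.isTypeScript()).toBe(true)
    })
})
```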

---

### Version 0.7.8 - Add E2E Tests 🧪

**Priority:** MEDIUM
**Scope:** Single session (~128K tokens)

Add integration tests for the full pipeline and CLI.

**Deliverables:**
- [ ] E2E test: `AnalyzeProject` full pipeline
- [ ] CLI smoke test (spawn process, check output; sketched below)
- [ ] Test `examples/good-architecture/` → 0 violations
- [ ] Test `examples/bad/` → specific violations
- [ ] Test JSON output format
- [ ] Publish to npm
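A hedged sketch of the planned smoke test; the build output path, runner, and output assertion are all assumptions:

```typescript
import { execFileSync } from "node:child_process"
import { describe, expect, it } from "vitest" // assumed test runner

describe("guardian CLI (smoke)", () => {
    it("exits cleanly on the good-architecture example", () => {
        // "dist/cli/index.js" is an assumed build artifact path.
        const stdout = execFileSync(
            "node",
            ["dist/cli/index.js", "check", "examples/good-architecture"],
            { encoding: "utf8" },
        )
        expect(stdout).not.toContain("CRITICAL") // assumed output marker
    })
})
```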

---

### Version 0.7.9 - Refactor Large Detectors 🔧 (Optional)

**Priority:** LOW
**Scope:** Single session (~128K tokens)

Refactor the largest detectors to reduce complexity.

**Targets:**

| Detector | Lines | Complexity |
|----------|-------|------------|
| RepositoryPatternDetector | 479 | 35 |
| HardcodeDetector | 459 | 41 |
| AggregateBoundaryDetector | 381 | 47 |

**Deliverables:**
- [ ] Extract regex patterns into strategies (one possible shape is sketched below)
- [ ] Reduce cyclomatic complexity to < 25
- [ ] Publish to npm
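One possible direction for the strategy extraction, reusing regex patterns that already appear in `HardcodeDetector` later in this diff; the interface and names are hypothetical:

```typescript
// Hypothetical strategy interface for extracted detector patterns.
interface LineFilterStrategy {
    readonly name: string
    /** Returns true when the detector should skip this line. */
    matches(line: string): boolean
}

// Both regexes are taken verbatim from HardcodeDetector's TYPE_CONTEXT_PATTERNS below.
const typeContextFilter: LineFilterStrategy = {
    name: "type-context",
    matches: (line) => /^\s*type\s+\w+\s*=/i.test(line) || /^\s*interface\s+\w+/i.test(line),
}

const typeofFilter: LineFilterStrategy = {
    name: "typeof-check",
    matches: (line) => /typeof\s+\w+\s*===\s*['"`]/.test(line),
}

// The detector body then becomes a loop over strategies instead of nested branches.
const shouldSkip = (line: string, filters: LineFilterStrategy[]): boolean =>
    filters.some((filter) => filter.matches(line))
```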

---

### Version 0.8.0 - Secret Detection 🔐
**Target:** Q1 2026
**Priority:** CRITICAL

Detect hardcoded secrets (API keys, tokens, credentials) using the industry-standard Secretlint library.

**🎯 SecretDetector - NEW standalone detector:**

```typescript
// ❌ CRITICAL: Hardcoded AWS credentials
const AWS_KEY = "AKIA1234567890ABCDEF" // VIOLATION!
const AWS_SECRET = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" // VIOLATION!

// ❌ CRITICAL: Hardcoded GitHub token
const GITHUB_TOKEN = "ghp_1234567890abcdefghijklmnopqrstuv" // VIOLATION!

// ❌ CRITICAL: SSH private key in code
const privateKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...` // VIOLATION!

// ❌ CRITICAL: NPM token
//registry.npmjs.org/:_authToken=npm_abc123xyz // VIOLATION!

// ✅ GOOD: Use environment variables
const AWS_KEY = process.env.AWS_ACCESS_KEY_ID
const AWS_SECRET = process.env.AWS_SECRET_ACCESS_KEY
const GITHUB_TOKEN = process.env.GITHUB_TOKEN
```

**Planned Features:**
- ✅ **SecretDetector** - Standalone detector (separate from HardcodeDetector)
- ✅ **Secretlint Integration** - Industry-standard library (@secretlint/node)
- ✅ **350+ Secret Patterns** - AWS, GitHub, NPM, SSH, GCP, Slack, Basic Auth, etc.
- ✅ **CRITICAL Severity** - All secret violations marked as critical
- ✅ **Smart Suggestions** - Context-aware remediation per secret type
- ✅ **Clean Architecture** - New ISecretDetector interface, SecretViolation value object
- ✅ **CLI Integration** - New "🔐 Secrets" section in output
- ✅ **Parallel Execution** - Runs alongside existing detectors

**Secret Types Detected:**
- AWS Access Keys & Secret Keys
- GitHub Tokens (ghp_, github_pat_, gho_, etc.)
- NPM tokens in .npmrc and code
- SSH Private Keys
- GCP Service Account credentials
- Slack tokens (xoxb-, xoxp-, etc.)
- Basic Auth credentials
- JWT tokens
- Private encryption keys

**Architecture:**
```typescript
// New domain layer
interface ISecretDetector {
    detectAll(code: string, filePath: string): Promise<SecretViolation[]>
}

class SecretViolation {
    file: string
    line: number
    secretType: string // AWS, GitHub, NPM, etc.
    message: string
    severity: "critical"
    suggestion: string // Context-aware guidance
}

// New infrastructure implementation
class SecretDetector implements ISecretDetector {
    // Uses @secretlint/node internally
}
```
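Given that interface, a hedged sketch of how a caller might consume the planned detector; the wiring is illustrative only, since `SecretDetector` does not exist yet:

```typescript
// Illustrative use of the planned ISecretDetector interface above.
async function reportSecrets(
    detector: ISecretDetector,
    code: string,
    filePath: string,
): Promise<boolean> {
    const violations = await detector.detectAll(code, filePath)
    for (const violation of violations) {
        console.error(`${violation.file}:${violation.line} [${violation.secretType}] ${violation.message}`)
        console.error(`  💡 ${violation.suggestion}`)
    }
    return violations.length === 0 // true when the file is clean
}
```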

**Why Secretlint?**
- ✅ Actively maintained (updates weekly)
- ✅ TypeScript native
- ✅ Pluggable architecture
- ✅ Low false positives
- ✅ Industry standard

**Why NOT a custom implementation?**
- ❌ No good npm library exists for magic numbers/strings
- ❌ Our HardcodeDetector is better than existing solutions
- ✅ Secretlint is perfect for secrets (don't reinvent the wheel)
- ✅ Two focused detectors are better than one bloated detector

**Impact:**
Guardian will now catch critical security issues BEFORE they reach production, complementing the existing magic number/string detection.

---

### Version 0.9.0 - Anemic Domain Model Detection 🩺
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -342,7 +556,7 @@ class Order {

---

### Version 0.8.0 - Domain Event Usage Validation 📢
### Version 0.10.0 - Domain Event Usage Validation 📢
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -381,7 +595,7 @@ class Order {

---

### Version 0.9.0 - Value Object Immutability Check 🔐
### Version 0.11.0 - Value Object Immutability Check 🔐
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -424,7 +638,7 @@ class Email {

---

### Version 0.10.0 - Use Case Single Responsibility 🎯
### Version 0.12.0 - Use Case Single Responsibility 🎯
**Target:** Q2 2026
**Priority:** LOW

@@ -461,7 +675,7 @@ class SendWelcomeEmail {

---

### Version 0.11.0 - Interface Segregation Validation 🔌
### Version 0.13.0 - Interface Segregation Validation 🔌
**Target:** Q2 2026
**Priority:** LOW

@@ -506,7 +720,7 @@ interface IUserExporter {

---

### Version 0.12.0 - Port-Adapter Pattern Validation 🔌
### Version 0.14.0 - Port-Adapter Pattern Validation 🔌
**Target:** Q2 2026
**Priority:** MEDIUM

@@ -545,7 +759,7 @@ class TwilioAdapter implements INotificationPort {

---

### Version 0.13.0 - Configuration File Support ⚙️
### Version 0.15.0 - Configuration File Support ⚙️
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -596,7 +810,7 @@ export default {

---

### Version 0.14.0 - Command Query Separation (CQS/CQRS) 📝
### Version 0.16.0 - Command Query Separation (CQS/CQRS) 📝
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -657,7 +871,7 @@ class GetUser { // Query

---

### Version 0.15.0 - Factory Pattern Validation 🏭
### Version 0.17.0 - Factory Pattern Validation 🏭
**Target:** Q3 2026
**Priority:** LOW

@@ -740,7 +954,7 @@ class Order {

---

### Version 0.16.0 - Specification Pattern Detection 🔍
### Version 0.18.0 - Specification Pattern Detection 🔍
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -812,7 +1026,7 @@ class ApproveOrder {

---

### Version 0.17.0 - Layered Service Anti-pattern Detection ⚠️
### Version 0.19.0 - Layered Service Anti-pattern Detection ⚠️
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -889,7 +1103,7 @@ class OrderService {

---

### Version 0.18.0 - Bounded Context Leak Detection 🚧
### Version 0.20.0 - Bounded Context Leak Detection 🚧
**Target:** Q3 2026
**Priority:** LOW

@@ -954,7 +1168,7 @@ class ProductPriceChangedHandler {

---

### Version 0.19.0 - Transaction Script vs Domain Model Detection 📜
### Version 0.21.0 - Transaction Script vs Domain Model Detection 📜
**Target:** Q3 2026
**Priority:** LOW

@@ -1021,7 +1235,7 @@ class Order {

---

### Version 0.20.0 - Persistence Ignorance Validation 💾
### Version 0.22.0 - Persistence Ignorance Validation 💾
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -1107,7 +1321,7 @@ class UserEntityMapper {

---

### Version 0.21.0 - Null Object Pattern Detection 🎭
### Version 0.23.0 - Null Object Pattern Detection 🎭
**Target:** Q3 2026
**Priority:** LOW

@@ -1189,7 +1403,7 @@ class ProcessOrder {

---

### Version 0.22.0 - Primitive Obsession in Methods 🔢
### Version 0.24.0 - Primitive Obsession in Methods 🔢
**Target:** Q3 2026
**Priority:** MEDIUM

@@ -1256,7 +1470,7 @@ class Order {

---

### Version 0.23.0 - Service Locator Anti-pattern 🔍
### Version 0.25.0 - Service Locator Anti-pattern 🔍
**Target:** Q4 2026
**Priority:** MEDIUM

@@ -1316,7 +1530,7 @@ class CreateUser {

---

### Version 0.24.0 - Double Dispatch Pattern Validation 🎯
### Version 0.26.0 - Double Dispatch Pattern Validation 🎯
**Target:** Q4 2026
**Priority:** LOW

@@ -1393,7 +1607,7 @@ class ShippingCostCalculator implements IOrderItemVisitor {

---

### Version 0.25.0 - Entity Identity Validation 🆔
### Version 0.27.0 - Entity Identity Validation 🆔
**Target:** Q4 2026
**Priority:** MEDIUM

@@ -1486,7 +1700,7 @@ class UserId {

---

### Version 0.26.0 - Saga Pattern Detection 🔄
### Version 0.28.0 - Saga Pattern Detection 🔄
**Target:** Q4 2026
**Priority:** LOW

@@ -1584,7 +1798,7 @@ abstract class SagaStep {

---

### Version 0.27.0 - Anti-Corruption Layer Detection 🛡️
### Version 0.29.0 - Anti-Corruption Layer Detection 🛡️
**Target:** Q4 2026
**Priority:** MEDIUM

@@ -1670,7 +1884,7 @@ interface IOrderSyncPort {

---

### Version 0.28.0 - Ubiquitous Language Validation 📖
### Version 0.30.0 - Ubiquitous Language Validation 📖
**Target:** Q4 2026
**Priority:** HIGH

@@ -1857,5 +2071,5 @@ Until we reach 1.0.0, minor version bumps (0.x.0) may include breaking changes a

---

**Last Updated:** 2025-11-24
**Current Version:** 0.6.0
**Last Updated:** 2025-11-25
**Current Version:** 0.7.4

@@ -1,6 +1,6 @@
{
    "name": "@samiyev/guardian",
    "version": "0.7.3",
    "version": "0.7.5",
    "description": "Research-backed code quality guardian for AI-assisted development. Detects hardcodes, circular deps, framework leaks, entity exposure, and 8 architecture violations. Enforces Clean Architecture/DDD principles. Works with GitHub Copilot, Cursor, Windsurf, Claude, ChatGPT, Cline, and any AI coding tool.",
    "keywords": [
        "puaros",
@@ -11,18 +11,17 @@ import { IRepositoryPatternDetector } from "../../domain/services/RepositoryPatternDetectorService"
import { IAggregateBoundaryDetector } from "../../domain/services/IAggregateBoundaryDetector"
import { SourceFile } from "../../domain/entities/SourceFile"
import { DependencyGraph } from "../../domain/entities/DependencyGraph"
import { ProjectPath } from "../../domain/value-objects/ProjectPath"
import { FileCollectionStep } from "./pipeline/FileCollectionStep"
import { ParsingStep } from "./pipeline/ParsingStep"
import { DetectionPipeline } from "./pipeline/DetectionPipeline"
import { ResultAggregator } from "./pipeline/ResultAggregator"
import {
    ERROR_MESSAGES,
    HARDCODE_TYPES,
    LAYERS,
    NAMING_VIOLATION_TYPES,
    REGEX_PATTERNS,
    REPOSITORY_VIOLATION_TYPES,
    RULES,
    SEVERITY_ORDER,
    type SeverityLevel,
    VIOLATION_SEVERITY_MAP,
} from "../../shared/constants"

export interface AnalyzeProjectRequest {

@@ -173,442 +172,74 @@ export interface ProjectMetrics {

/**
 * Main use case for analyzing a project's codebase
 * Orchestrates the analysis pipeline through focused components
 */
export class AnalyzeProject extends UseCase<
    AnalyzeProjectRequest,
    ResponseDto<AnalyzeProjectResponse>
> {
    private readonly fileCollectionStep: FileCollectionStep
    private readonly parsingStep: ParsingStep
    private readonly detectionPipeline: DetectionPipeline
    private readonly resultAggregator: ResultAggregator

    constructor(
        private readonly fileScanner: IFileScanner,
        private readonly codeParser: ICodeParser,
        private readonly hardcodeDetector: IHardcodeDetector,
        private readonly namingConventionDetector: INamingConventionDetector,
        private readonly frameworkLeakDetector: IFrameworkLeakDetector,
        private readonly entityExposureDetector: IEntityExposureDetector,
        private readonly dependencyDirectionDetector: IDependencyDirectionDetector,
        private readonly repositoryPatternDetector: IRepositoryPatternDetector,
        private readonly aggregateBoundaryDetector: IAggregateBoundaryDetector,
        fileScanner: IFileScanner,
        codeParser: ICodeParser,
        hardcodeDetector: IHardcodeDetector,
        namingConventionDetector: INamingConventionDetector,
        frameworkLeakDetector: IFrameworkLeakDetector,
        entityExposureDetector: IEntityExposureDetector,
        dependencyDirectionDetector: IDependencyDirectionDetector,
        repositoryPatternDetector: IRepositoryPatternDetector,
        aggregateBoundaryDetector: IAggregateBoundaryDetector,
    ) {
        super()
        this.fileCollectionStep = new FileCollectionStep(fileScanner)
        this.parsingStep = new ParsingStep(codeParser)
        this.detectionPipeline = new DetectionPipeline(
            hardcodeDetector,
            namingConventionDetector,
            frameworkLeakDetector,
            entityExposureDetector,
            dependencyDirectionDetector,
            repositoryPatternDetector,
            aggregateBoundaryDetector,
        )
        this.resultAggregator = new ResultAggregator()
    }

    public async execute(
        request: AnalyzeProjectRequest,
    ): Promise<ResponseDto<AnalyzeProjectResponse>> {
        try {
            const filePaths = await this.fileScanner.scan({
            const { sourceFiles } = await this.fileCollectionStep.execute({
                rootDir: request.rootDir,
                include: request.include,
                exclude: request.exclude,
            })

            const sourceFiles: SourceFile[] = []
            const dependencyGraph = new DependencyGraph()
            let totalFunctions = 0

            for (const filePath of filePaths) {
                const content = await this.fileScanner.readFile(filePath)
                const projectPath = ProjectPath.create(filePath, request.rootDir)

                const imports = this.extractImports(content)
                const exports = this.extractExports(content)

                const sourceFile = new SourceFile(projectPath, content, imports, exports)

                sourceFiles.push(sourceFile)
                dependencyGraph.addFile(sourceFile)

                if (projectPath.isTypeScript()) {
                    const tree = this.codeParser.parseTypeScript(content)
                    const functions = this.codeParser.extractFunctions(tree)
                    totalFunctions += functions.length
                }

                for (const imp of imports) {
                    dependencyGraph.addDependency(
                        projectPath.relative,
                        this.resolveImportPath(imp, filePath, request.rootDir),
                    )
                }
            }

            const violations = this.sortBySeverity(this.detectViolations(sourceFiles))
            const hardcodeViolations = this.sortBySeverity(this.detectHardcode(sourceFiles))
            const circularDependencyViolations = this.sortBySeverity(
                this.detectCircularDependencies(dependencyGraph),
            )
            const namingViolations = this.sortBySeverity(this.detectNamingConventions(sourceFiles))
            const frameworkLeakViolations = this.sortBySeverity(
                this.detectFrameworkLeaks(sourceFiles),
            )
            const entityExposureViolations = this.sortBySeverity(
                this.detectEntityExposures(sourceFiles),
            )
            const dependencyDirectionViolations = this.sortBySeverity(
                this.detectDependencyDirections(sourceFiles),
            )
            const repositoryPatternViolations = this.sortBySeverity(
                this.detectRepositoryPatternViolations(sourceFiles),
            )
            const aggregateBoundaryViolations = this.sortBySeverity(
                this.detectAggregateBoundaryViolations(sourceFiles),
            )
            const metrics = this.calculateMetrics(sourceFiles, totalFunctions, dependencyGraph)

            return ResponseDto.ok({
                files: sourceFiles,
                dependencyGraph,
                violations,
                hardcodeViolations,
                circularDependencyViolations,
                namingViolations,
                frameworkLeakViolations,
                entityExposureViolations,
                dependencyDirectionViolations,
                repositoryPatternViolations,
                aggregateBoundaryViolations,
                metrics,
            const { dependencyGraph, totalFunctions } = this.parsingStep.execute({
                sourceFiles,
                rootDir: request.rootDir,
            })

            const detectionResult = this.detectionPipeline.execute({
                sourceFiles,
                dependencyGraph,
            })

            const response = this.resultAggregator.execute({
                sourceFiles,
                dependencyGraph,
                totalFunctions,
                ...detectionResult,
            })

            return ResponseDto.ok(response)
        } catch (error) {
            const errorMessage = `${ERROR_MESSAGES.FAILED_TO_ANALYZE}: ${error instanceof Error ? error.message : String(error)}`
            return ResponseDto.fail(errorMessage)
        }
    }

    private extractImports(content: string): string[] {
        const imports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.IMPORT_STATEMENT.exec(content)) !== null) {
            imports.push(match[1])
        }

        return imports
    }

    private extractExports(content: string): string[] {
        const exports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.EXPORT_STATEMENT.exec(content)) !== null) {
            exports.push(match[1])
        }

        return exports
    }

    private resolveImportPath(importPath: string, _currentFile: string, _rootDir: string): string {
        if (importPath.startsWith(".")) {
            return importPath
        }
        return importPath
    }

    private detectViolations(sourceFiles: SourceFile[]): ArchitectureViolation[] {
        const violations: ArchitectureViolation[] = []

        const layerRules: Record<string, string[]> = {
            [LAYERS.DOMAIN]: [LAYERS.SHARED],
            [LAYERS.APPLICATION]: [LAYERS.DOMAIN, LAYERS.SHARED],
            [LAYERS.INFRASTRUCTURE]: [LAYERS.DOMAIN, LAYERS.APPLICATION, LAYERS.SHARED],
            [LAYERS.SHARED]: [],
        }

        for (const file of sourceFiles) {
            if (!file.layer) {
                continue
            }

            const allowedLayers = layerRules[file.layer]

            for (const imp of file.imports) {
                const importedLayer = this.detectLayerFromImport(imp)

                if (
                    importedLayer &&
                    importedLayer !== file.layer &&
                    !allowedLayers.includes(importedLayer)
                ) {
                    violations.push({
                        rule: RULES.CLEAN_ARCHITECTURE,
                        message: `Layer "${file.layer}" cannot import from "${importedLayer}"`,
                        file: file.path.relative,
                        severity: VIOLATION_SEVERITY_MAP.ARCHITECTURE,
                    })
                }
            }
        }

        return violations
    }

    private detectLayerFromImport(importPath: string): string | undefined {
        const layers = Object.values(LAYERS)

        for (const layer of layers) {
            if (importPath.toLowerCase().includes(layer)) {
                return layer
            }
        }

        return undefined
    }

    private detectHardcode(sourceFiles: SourceFile[]): HardcodeViolation[] {
        const violations: HardcodeViolation[] = []

        for (const file of sourceFiles) {
            const hardcodedValues = this.hardcodeDetector.detectAll(
                file.content,
                file.path.relative,
            )

            for (const hardcoded of hardcodedValues) {
                violations.push({
                    rule: RULES.HARDCODED_VALUE,
                    type: hardcoded.type,
                    value: hardcoded.value,
                    file: file.path.relative,
                    line: hardcoded.line,
                    column: hardcoded.column,
                    context: hardcoded.context,
                    suggestion: {
                        constantName: hardcoded.suggestConstantName(),
                        location: hardcoded.suggestLocation(file.layer),
                    },
                    severity: VIOLATION_SEVERITY_MAP.HARDCODE,
                })
            }
        }

        return violations
    }

    private detectCircularDependencies(
        dependencyGraph: DependencyGraph,
    ): CircularDependencyViolation[] {
        const violations: CircularDependencyViolation[] = []
        const cycles = dependencyGraph.findCycles()

        for (const cycle of cycles) {
            const cycleChain = [...cycle, cycle[0]].join(" → ")
            violations.push({
                rule: RULES.CIRCULAR_DEPENDENCY,
                message: `Circular dependency detected: ${cycleChain}`,
                cycle,
                severity: VIOLATION_SEVERITY_MAP.CIRCULAR_DEPENDENCY,
            })
        }

        return violations
    }

    private detectNamingConventions(sourceFiles: SourceFile[]): NamingConventionViolation[] {
        const violations: NamingConventionViolation[] = []

        for (const file of sourceFiles) {
            const namingViolations = this.namingConventionDetector.detectViolations(
                file.path.filename,
                file.layer,
                file.path.relative,
            )

            for (const violation of namingViolations) {
                violations.push({
                    rule: RULES.NAMING_CONVENTION,
                    type: violation.violationType,
                    fileName: violation.fileName,
                    layer: violation.layer,
                    file: violation.filePath,
                    expected: violation.expected,
                    actual: violation.actual,
                    message: violation.getMessage(),
                    suggestion: violation.suggestion,
                    severity: VIOLATION_SEVERITY_MAP.NAMING_CONVENTION,
                })
            }
        }

        return violations
    }

    private detectFrameworkLeaks(sourceFiles: SourceFile[]): FrameworkLeakViolation[] {
        const violations: FrameworkLeakViolation[] = []

        for (const file of sourceFiles) {
            const leaks = this.frameworkLeakDetector.detectLeaks(
                file.imports,
                file.path.relative,
                file.layer,
            )

            for (const leak of leaks) {
                violations.push({
                    rule: RULES.FRAMEWORK_LEAK,
                    packageName: leak.packageName,
                    category: leak.category,
                    categoryDescription: leak.getCategoryDescription(),
                    file: file.path.relative,
                    layer: leak.layer,
                    line: leak.line,
                    message: leak.getMessage(),
                    suggestion: leak.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.FRAMEWORK_LEAK,
                })
            }
        }

        return violations
    }

    private detectEntityExposures(sourceFiles: SourceFile[]): EntityExposureViolation[] {
        const violations: EntityExposureViolation[] = []

        for (const file of sourceFiles) {
            const exposures = this.entityExposureDetector.detectExposures(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const exposure of exposures) {
                violations.push({
                    rule: RULES.ENTITY_EXPOSURE,
                    entityName: exposure.entityName,
                    returnType: exposure.returnType,
                    file: file.path.relative,
                    layer: exposure.layer,
                    line: exposure.line,
                    methodName: exposure.methodName,
                    message: exposure.getMessage(),
                    suggestion: exposure.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.ENTITY_EXPOSURE,
                })
            }
        }

        return violations
    }

    private detectDependencyDirections(sourceFiles: SourceFile[]): DependencyDirectionViolation[] {
        const violations: DependencyDirectionViolation[] = []

        for (const file of sourceFiles) {
            const directionViolations = this.dependencyDirectionDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of directionViolations) {
                violations.push({
                    rule: RULES.DEPENDENCY_DIRECTION,
                    fromLayer: violation.fromLayer,
                    toLayer: violation.toLayer,
                    importPath: violation.importPath,
                    file: file.path.relative,
                    line: violation.line,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.DEPENDENCY_DIRECTION,
                })
            }
        }

        return violations
    }

    private detectRepositoryPatternViolations(
        sourceFiles: SourceFile[],
    ): RepositoryPatternViolation[] {
        const violations: RepositoryPatternViolation[] = []

        for (const file of sourceFiles) {
            const patternViolations = this.repositoryPatternDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of patternViolations) {
                violations.push({
                    rule: RULES.REPOSITORY_PATTERN,
                    violationType: violation.violationType as
                        | typeof REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE
                        | typeof REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE
                        | typeof REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE
                        | typeof REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                    file: file.path.relative,
                    layer: violation.layer,
                    line: violation.line,
                    details: violation.details,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.REPOSITORY_PATTERN,
                })
            }
        }

        return violations
    }

    private detectAggregateBoundaryViolations(
        sourceFiles: SourceFile[],
    ): AggregateBoundaryViolation[] {
        const violations: AggregateBoundaryViolation[] = []

        for (const file of sourceFiles) {
            const boundaryViolations = this.aggregateBoundaryDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of boundaryViolations) {
                violations.push({
                    rule: RULES.AGGREGATE_BOUNDARY,
                    fromAggregate: violation.fromAggregate,
                    toAggregate: violation.toAggregate,
                    entityName: violation.entityName,
                    importPath: violation.importPath,
                    file: file.path.relative,
                    line: violation.line,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.AGGREGATE_BOUNDARY,
                })
            }
        }

        return violations
    }

    private calculateMetrics(
        sourceFiles: SourceFile[],
        totalFunctions: number,
        _dependencyGraph: DependencyGraph,
    ): ProjectMetrics {
        const layerDistribution: Record<string, number> = {}
        let totalImports = 0

        for (const file of sourceFiles) {
            if (file.layer) {
                layerDistribution[file.layer] = (layerDistribution[file.layer] || 0) + 1
            }
            totalImports += file.imports.length
        }

        return {
            totalFiles: sourceFiles.length,
            totalFunctions,
            totalImports,
            layerDistribution,
        }
    }

    private sortBySeverity<T extends { severity: SeverityLevel }>(violations: T[]): T[] {
        return violations.sort((a, b) => {
            return SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]
        })
    }
}
@@ -0,0 +1,373 @@
import { IHardcodeDetector } from "../../../domain/services/IHardcodeDetector"
import { INamingConventionDetector } from "../../../domain/services/INamingConventionDetector"
import { IFrameworkLeakDetector } from "../../../domain/services/IFrameworkLeakDetector"
import { IEntityExposureDetector } from "../../../domain/services/IEntityExposureDetector"
import { IDependencyDirectionDetector } from "../../../domain/services/IDependencyDirectionDetector"
import { IRepositoryPatternDetector } from "../../../domain/services/RepositoryPatternDetectorService"
import { IAggregateBoundaryDetector } from "../../../domain/services/IAggregateBoundaryDetector"
import { SourceFile } from "../../../domain/entities/SourceFile"
import { DependencyGraph } from "../../../domain/entities/DependencyGraph"
import {
    LAYERS,
    REPOSITORY_VIOLATION_TYPES,
    RULES,
    SEVERITY_ORDER,
    type SeverityLevel,
    VIOLATION_SEVERITY_MAP,
} from "../../../shared/constants"
import type {
    AggregateBoundaryViolation,
    ArchitectureViolation,
    CircularDependencyViolation,
    DependencyDirectionViolation,
    EntityExposureViolation,
    FrameworkLeakViolation,
    HardcodeViolation,
    NamingConventionViolation,
    RepositoryPatternViolation,
} from "../AnalyzeProject"

export interface DetectionRequest {
    sourceFiles: SourceFile[]
    dependencyGraph: DependencyGraph
}

export interface DetectionResult {
    violations: ArchitectureViolation[]
    hardcodeViolations: HardcodeViolation[]
    circularDependencyViolations: CircularDependencyViolation[]
    namingViolations: NamingConventionViolation[]
    frameworkLeakViolations: FrameworkLeakViolation[]
    entityExposureViolations: EntityExposureViolation[]
    dependencyDirectionViolations: DependencyDirectionViolation[]
    repositoryPatternViolations: RepositoryPatternViolation[]
    aggregateBoundaryViolations: AggregateBoundaryViolation[]
}

/**
 * Pipeline step responsible for running all detectors
 */
export class DetectionPipeline {
    constructor(
        private readonly hardcodeDetector: IHardcodeDetector,
        private readonly namingConventionDetector: INamingConventionDetector,
        private readonly frameworkLeakDetector: IFrameworkLeakDetector,
        private readonly entityExposureDetector: IEntityExposureDetector,
        private readonly dependencyDirectionDetector: IDependencyDirectionDetector,
        private readonly repositoryPatternDetector: IRepositoryPatternDetector,
        private readonly aggregateBoundaryDetector: IAggregateBoundaryDetector,
    ) {}

    public execute(request: DetectionRequest): DetectionResult {
        return {
            violations: this.sortBySeverity(this.detectViolations(request.sourceFiles)),
            hardcodeViolations: this.sortBySeverity(this.detectHardcode(request.sourceFiles)),
            circularDependencyViolations: this.sortBySeverity(
                this.detectCircularDependencies(request.dependencyGraph),
            ),
            namingViolations: this.sortBySeverity(
                this.detectNamingConventions(request.sourceFiles),
            ),
            frameworkLeakViolations: this.sortBySeverity(
                this.detectFrameworkLeaks(request.sourceFiles),
            ),
            entityExposureViolations: this.sortBySeverity(
                this.detectEntityExposures(request.sourceFiles),
            ),
            dependencyDirectionViolations: this.sortBySeverity(
                this.detectDependencyDirections(request.sourceFiles),
            ),
            repositoryPatternViolations: this.sortBySeverity(
                this.detectRepositoryPatternViolations(request.sourceFiles),
            ),
            aggregateBoundaryViolations: this.sortBySeverity(
                this.detectAggregateBoundaryViolations(request.sourceFiles),
            ),
        }
    }

    private detectViolations(sourceFiles: SourceFile[]): ArchitectureViolation[] {
        const violations: ArchitectureViolation[] = []

        const layerRules: Record<string, string[]> = {
            [LAYERS.DOMAIN]: [LAYERS.SHARED],
            [LAYERS.APPLICATION]: [LAYERS.DOMAIN, LAYERS.SHARED],
            [LAYERS.INFRASTRUCTURE]: [LAYERS.DOMAIN, LAYERS.APPLICATION, LAYERS.SHARED],
            [LAYERS.SHARED]: [],
        }

        for (const file of sourceFiles) {
            if (!file.layer) {
                continue
            }

            const allowedLayers = layerRules[file.layer]

            for (const imp of file.imports) {
                const importedLayer = this.detectLayerFromImport(imp)

                if (
                    importedLayer &&
                    importedLayer !== file.layer &&
                    !allowedLayers.includes(importedLayer)
                ) {
                    violations.push({
                        rule: RULES.CLEAN_ARCHITECTURE,
                        message: `Layer "${file.layer}" cannot import from "${importedLayer}"`,
                        file: file.path.relative,
                        severity: VIOLATION_SEVERITY_MAP.ARCHITECTURE,
                    })
                }
            }
        }

        return violations
    }

    private detectLayerFromImport(importPath: string): string | undefined {
        const layers = Object.values(LAYERS)

        for (const layer of layers) {
            if (importPath.toLowerCase().includes(layer)) {
                return layer
            }
        }

        return undefined
    }

    private detectHardcode(sourceFiles: SourceFile[]): HardcodeViolation[] {
        const violations: HardcodeViolation[] = []

        for (const file of sourceFiles) {
            const hardcodedValues = this.hardcodeDetector.detectAll(
                file.content,
                file.path.relative,
            )

            for (const hardcoded of hardcodedValues) {
                violations.push({
                    rule: RULES.HARDCODED_VALUE,
                    type: hardcoded.type,
                    value: hardcoded.value,
                    file: file.path.relative,
                    line: hardcoded.line,
                    column: hardcoded.column,
                    context: hardcoded.context,
                    suggestion: {
                        constantName: hardcoded.suggestConstantName(),
                        location: hardcoded.suggestLocation(file.layer),
                    },
                    severity: VIOLATION_SEVERITY_MAP.HARDCODE,
                })
            }
        }

        return violations
    }

    private detectCircularDependencies(
        dependencyGraph: DependencyGraph,
    ): CircularDependencyViolation[] {
        const violations: CircularDependencyViolation[] = []
        const cycles = dependencyGraph.findCycles()

        for (const cycle of cycles) {
            const cycleChain = [...cycle, cycle[0]].join(" → ")
            violations.push({
                rule: RULES.CIRCULAR_DEPENDENCY,
                message: `Circular dependency detected: ${cycleChain}`,
                cycle,
                severity: VIOLATION_SEVERITY_MAP.CIRCULAR_DEPENDENCY,
            })
        }

        return violations
    }

    private detectNamingConventions(sourceFiles: SourceFile[]): NamingConventionViolation[] {
        const violations: NamingConventionViolation[] = []

        for (const file of sourceFiles) {
            const namingViolations = this.namingConventionDetector.detectViolations(
                file.path.filename,
                file.layer,
                file.path.relative,
            )

            for (const violation of namingViolations) {
                violations.push({
                    rule: RULES.NAMING_CONVENTION,
                    type: violation.violationType,
                    fileName: violation.fileName,
                    layer: violation.layer,
                    file: violation.filePath,
                    expected: violation.expected,
                    actual: violation.actual,
                    message: violation.getMessage(),
                    suggestion: violation.suggestion,
                    severity: VIOLATION_SEVERITY_MAP.NAMING_CONVENTION,
                })
            }
        }

        return violations
    }

    private detectFrameworkLeaks(sourceFiles: SourceFile[]): FrameworkLeakViolation[] {
        const violations: FrameworkLeakViolation[] = []

        for (const file of sourceFiles) {
            const leaks = this.frameworkLeakDetector.detectLeaks(
                file.imports,
                file.path.relative,
                file.layer,
            )

            for (const leak of leaks) {
                violations.push({
                    rule: RULES.FRAMEWORK_LEAK,
                    packageName: leak.packageName,
                    category: leak.category,
                    categoryDescription: leak.getCategoryDescription(),
                    file: file.path.relative,
                    layer: leak.layer,
                    line: leak.line,
                    message: leak.getMessage(),
                    suggestion: leak.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.FRAMEWORK_LEAK,
                })
            }
        }

        return violations
    }

    private detectEntityExposures(sourceFiles: SourceFile[]): EntityExposureViolation[] {
        const violations: EntityExposureViolation[] = []

        for (const file of sourceFiles) {
            const exposures = this.entityExposureDetector.detectExposures(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const exposure of exposures) {
                violations.push({
                    rule: RULES.ENTITY_EXPOSURE,
                    entityName: exposure.entityName,
                    returnType: exposure.returnType,
                    file: file.path.relative,
                    layer: exposure.layer,
                    line: exposure.line,
                    methodName: exposure.methodName,
                    message: exposure.getMessage(),
                    suggestion: exposure.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.ENTITY_EXPOSURE,
                })
            }
        }

        return violations
    }

    private detectDependencyDirections(sourceFiles: SourceFile[]): DependencyDirectionViolation[] {
        const violations: DependencyDirectionViolation[] = []

        for (const file of sourceFiles) {
            const directionViolations = this.dependencyDirectionDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of directionViolations) {
                violations.push({
                    rule: RULES.DEPENDENCY_DIRECTION,
                    fromLayer: violation.fromLayer,
                    toLayer: violation.toLayer,
                    importPath: violation.importPath,
                    file: file.path.relative,
                    line: violation.line,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.DEPENDENCY_DIRECTION,
                })
            }
        }

        return violations
    }

    private detectRepositoryPatternViolations(
        sourceFiles: SourceFile[],
    ): RepositoryPatternViolation[] {
        const violations: RepositoryPatternViolation[] = []

        for (const file of sourceFiles) {
            const patternViolations = this.repositoryPatternDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of patternViolations) {
                violations.push({
                    rule: RULES.REPOSITORY_PATTERN,
                    violationType: violation.violationType as
                        | typeof REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE
                        | typeof REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE
                        | typeof REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE
                        | typeof REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME,
                    file: file.path.relative,
                    layer: violation.layer,
                    line: violation.line,
                    details: violation.details,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.REPOSITORY_PATTERN,
                })
            }
        }

        return violations
    }

    private detectAggregateBoundaryViolations(
        sourceFiles: SourceFile[],
    ): AggregateBoundaryViolation[] {
        const violations: AggregateBoundaryViolation[] = []

        for (const file of sourceFiles) {
            const boundaryViolations = this.aggregateBoundaryDetector.detectViolations(
                file.content,
                file.path.relative,
                file.layer,
            )

            for (const violation of boundaryViolations) {
                violations.push({
                    rule: RULES.AGGREGATE_BOUNDARY,
                    fromAggregate: violation.fromAggregate,
                    toAggregate: violation.toAggregate,
                    entityName: violation.entityName,
                    importPath: violation.importPath,
                    file: file.path.relative,
                    line: violation.line,
                    message: violation.getMessage(),
                    suggestion: violation.getSuggestion(),
                    severity: VIOLATION_SEVERITY_MAP.AGGREGATE_BOUNDARY,
                })
            }
        }

        return violations
    }

    private sortBySeverity<T extends { severity: SeverityLevel }>(violations: T[]): T[] {
        return violations.sort((a, b) => {
            return SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]
        })
    }
}
@@ -0,0 +1,66 @@
import { IFileScanner } from "../../../domain/services/IFileScanner"
import { SourceFile } from "../../../domain/entities/SourceFile"
import { ProjectPath } from "../../../domain/value-objects/ProjectPath"
import { REGEX_PATTERNS } from "../../../shared/constants"

export interface FileCollectionRequest {
    rootDir: string
    include?: string[]
    exclude?: string[]
}

export interface FileCollectionResult {
    sourceFiles: SourceFile[]
}

/**
 * Pipeline step responsible for file collection and basic parsing
 */
export class FileCollectionStep {
    constructor(private readonly fileScanner: IFileScanner) {}

    public async execute(request: FileCollectionRequest): Promise<FileCollectionResult> {
        const filePaths = await this.fileScanner.scan({
            rootDir: request.rootDir,
            include: request.include,
            exclude: request.exclude,
        })

        const sourceFiles: SourceFile[] = []

        for (const filePath of filePaths) {
            const content = await this.fileScanner.readFile(filePath)
            const projectPath = ProjectPath.create(filePath, request.rootDir)

            const imports = this.extractImports(content)
            const exports = this.extractExports(content)

            const sourceFile = new SourceFile(projectPath, content, imports, exports)
            sourceFiles.push(sourceFile)
        }

        return { sourceFiles }
    }

    private extractImports(content: string): string[] {
        const imports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.IMPORT_STATEMENT.exec(content)) !== null) {
            imports.push(match[1])
        }

        return imports
    }

    private extractExports(content: string): string[] {
        const exports: string[] = []
        let match

        while ((match = REGEX_PATTERNS.EXPORT_STATEMENT.exec(content)) !== null) {
            exports.push(match[1])
        }

        return exports
    }
}
@@ -0,0 +1,51 @@
import { ICodeParser } from "../../../domain/services/ICodeParser"
import { SourceFile } from "../../../domain/entities/SourceFile"
import { DependencyGraph } from "../../../domain/entities/DependencyGraph"

export interface ParsingRequest {
    sourceFiles: SourceFile[]
    rootDir: string
}

export interface ParsingResult {
    dependencyGraph: DependencyGraph
    totalFunctions: number
}

/**
 * Pipeline step responsible for AST parsing and dependency graph construction
 */
export class ParsingStep {
    constructor(private readonly codeParser: ICodeParser) {}

    public execute(request: ParsingRequest): ParsingResult {
        const dependencyGraph = new DependencyGraph()
        let totalFunctions = 0

        for (const sourceFile of request.sourceFiles) {
            dependencyGraph.addFile(sourceFile)

            if (sourceFile.path.isTypeScript()) {
                const tree = this.codeParser.parseTypeScript(sourceFile.content)
                const functions = this.codeParser.extractFunctions(tree)
                totalFunctions += functions.length
            }

            for (const imp of sourceFile.imports) {
                dependencyGraph.addDependency(
                    sourceFile.path.relative,
                    this.resolveImportPath(imp, sourceFile.path.relative, request.rootDir),
                )
            }
        }

        return { dependencyGraph, totalFunctions }
    }

    private resolveImportPath(importPath: string, _currentFile: string, _rootDir: string): string {
        if (importPath.startsWith(".")) {
            return importPath
        }
        return importPath
    }
}
@@ -0,0 +1,81 @@
import { SourceFile } from "../../../domain/entities/SourceFile"
import { DependencyGraph } from "../../../domain/entities/DependencyGraph"
import type {
    AggregateBoundaryViolation,
    AnalyzeProjectResponse,
    ArchitectureViolation,
    CircularDependencyViolation,
    DependencyDirectionViolation,
    EntityExposureViolation,
    FrameworkLeakViolation,
    HardcodeViolation,
    NamingConventionViolation,
    ProjectMetrics,
    RepositoryPatternViolation,
} from "../AnalyzeProject"

export interface AggregationRequest {
    sourceFiles: SourceFile[]
    dependencyGraph: DependencyGraph
    totalFunctions: number
    violations: ArchitectureViolation[]
    hardcodeViolations: HardcodeViolation[]
    circularDependencyViolations: CircularDependencyViolation[]
    namingViolations: NamingConventionViolation[]
    frameworkLeakViolations: FrameworkLeakViolation[]
    entityExposureViolations: EntityExposureViolation[]
    dependencyDirectionViolations: DependencyDirectionViolation[]
    repositoryPatternViolations: RepositoryPatternViolation[]
    aggregateBoundaryViolations: AggregateBoundaryViolation[]
}

/**
 * Pipeline step responsible for building the final response DTO
 */
export class ResultAggregator {
    public execute(request: AggregationRequest): AnalyzeProjectResponse {
        const metrics = this.calculateMetrics(
            request.sourceFiles,
            request.totalFunctions,
            request.dependencyGraph,
        )

        return {
            files: request.sourceFiles,
            dependencyGraph: request.dependencyGraph,
            violations: request.violations,
            hardcodeViolations: request.hardcodeViolations,
            circularDependencyViolations: request.circularDependencyViolations,
            namingViolations: request.namingViolations,
            frameworkLeakViolations: request.frameworkLeakViolations,
            entityExposureViolations: request.entityExposureViolations,
            dependencyDirectionViolations: request.dependencyDirectionViolations,
            repositoryPatternViolations: request.repositoryPatternViolations,
            aggregateBoundaryViolations: request.aggregateBoundaryViolations,
            metrics,
        }
    }

    private calculateMetrics(
        sourceFiles: SourceFile[],
        totalFunctions: number,
        _dependencyGraph: DependencyGraph,
    ): ProjectMetrics {
        const layerDistribution: Record<string, number> = {}
        let totalImports = 0

        for (const file of sourceFiles) {
            if (file.layer) {
                layerDistribution[file.layer] = (layerDistribution[file.layer] || 0) + 1
            }
            totalImports += file.imports.length
        }

        return {
            totalFiles: sourceFiles.length,
            totalFunctions,
            totalImports,
            layerDistribution,
        }
    }
}
@@ -150,4 +150,30 @@ export const CLI_HELP_TEXT = {
    FIX_REPOSITORY:
        " Repository pattern → Create IUserRepository in domain, implement in infra\n\n",
    FOOTER: "Each violation includes a 💡 Suggestion with specific fix instructions.\n",
    AI_AGENT_HEADER: "AI AGENT INSTRUCTIONS:\n",
    AI_AGENT_INTRO:
        " When an AI coding assistant (Claude, Copilot, Cursor, etc.) uses Guardian:\n\n",
    AI_AGENT_STEP1: " STEP 1: Run initial scan\n",
    AI_AGENT_STEP1_CMD: " $ guardian check ./src --only-critical --limit 5\n\n",
    AI_AGENT_STEP2: " STEP 2: For each violation in output:\n",
    AI_AGENT_STEP2_DETAIL:
        " - Read the file at reported location (file:line:column)\n" +
        " - Apply the 💡 Suggestion provided\n" +
        " - The suggestion contains exact fix instructions\n\n",
    AI_AGENT_STEP3: " STEP 3: After fixing, verify:\n",
    AI_AGENT_STEP3_CMD: " $ guardian check ./src --only-critical\n\n",
    AI_AGENT_STEP4: " STEP 4: Expand scope progressively:\n",
    AI_AGENT_STEP4_CMDS:
        " $ guardian check ./src --min-severity high # Fix HIGH issues\n" +
        " $ guardian check ./src --min-severity medium # Fix MEDIUM issues\n" +
        " $ guardian check ./src # Full scan\n\n",
    AI_AGENT_OUTPUT: " OUTPUT FORMAT (parse this):\n",
    AI_AGENT_OUTPUT_DETAIL:
        " <index>. <file>:<line>:<column>\n" +
        " Severity: <emoji> <LEVEL>\n" +
        " Type: <violation-type>\n" +
        " Value: <problematic-value>\n" +
        " Context: <code-snippet>\n" +
        " 💡 Suggestion: <exact-fix-instruction>\n\n",
    AI_AGENT_PRIORITY: " PRIORITY ORDER: CRITICAL → HIGH → MEDIUM → LOW\n\n",
} as const
@@ -122,7 +122,20 @@ program
        CLI_HELP_TEXT.FIX_ENTITY +
        CLI_HELP_TEXT.FIX_DEPENDENCY +
        CLI_HELP_TEXT.FIX_REPOSITORY +
        CLI_HELP_TEXT.FOOTER,
        CLI_HELP_TEXT.FOOTER +
        CLI_HELP_TEXT.AI_AGENT_HEADER +
        CLI_HELP_TEXT.AI_AGENT_INTRO +
        CLI_HELP_TEXT.AI_AGENT_STEP1 +
        CLI_HELP_TEXT.AI_AGENT_STEP1_CMD +
        CLI_HELP_TEXT.AI_AGENT_STEP2 +
        CLI_HELP_TEXT.AI_AGENT_STEP2_DETAIL +
        CLI_HELP_TEXT.AI_AGENT_STEP3 +
        CLI_HELP_TEXT.AI_AGENT_STEP3_CMD +
        CLI_HELP_TEXT.AI_AGENT_STEP4 +
        CLI_HELP_TEXT.AI_AGENT_STEP4_CMDS +
        CLI_HELP_TEXT.AI_AGENT_OUTPUT +
        CLI_HELP_TEXT.AI_AGENT_OUTPUT_DETAIL +
        CLI_HELP_TEXT.AI_AGENT_PRIORITY,
    )

program
@@ -26,6 +26,19 @@ export class HardcodeDetector implements IHardcodeDetector {

    private readonly ALLOWED_STRING_PATTERNS = [/^[a-z]$/i, /^\/$/, /^\\$/, /^\s+$/, /^,$/, /^\.$/]

    /**
     * Patterns to detect TypeScript type contexts where strings should be ignored
     */
    private readonly TYPE_CONTEXT_PATTERNS = [
        /^\s*type\s+\w+\s*=/i, // type Foo = ...
        /^\s*interface\s+\w+/i, // interface Foo { ... }
        /^\s*\w+\s*:\s*['"`]/, // property: 'value' (in type or interface)
        /\s+as\s+['"`]/, // ... as 'type'
        /Record<.*,\s*import\(/, // Record with import type
        /typeof\s+\w+\s*===\s*['"`]/, // typeof x === 'string'
        /['"`]\s*===\s*typeof\s+\w+/, // 'string' === typeof x
    ]

    /**
     * Detects all hardcoded values (both numbers and strings) in the given code
     *

@@ -43,14 +56,15 @@
    }

    /**
     * Check if a file is a constants definition file
     * Check if a file is a constants definition file or DI tokens file
     */
    private isConstantsFile(filePath: string): boolean {
        const _fileName = filePath.split("/").pop() ?? ""
        const constantsPatterns = [
            /^constants?\.(ts|js)$/i,
            /constants?\/.*\.(ts|js)$/i,
            /\/(constants|config|settings|defaults)\.ts$/i,
            /\/(constants|config|settings|defaults|tokens)\.ts$/i,
            /\/di\/tokens\.(ts|js)$/i,
        ]
        return constantsPatterns.some((pattern) => pattern.test(filePath))
    }

@@ -341,6 +355,18 @@ export class HardcodeDetector implements IHardcodeDetector {
            return false
        }

        if (this.isInTypeContext(line)) {
            return false
        }

        if (this.isInSymbolCall(line, value)) {
            return false
        }

        if (this.isInImportCall(line, value)) {
            return false
        }

        if (value.includes(DETECTION_KEYWORDS.HTTP) || value.includes(DETECTION_KEYWORDS.API)) {
            return true
        }

@@ -388,4 +414,46 @@
        const end = Math.min(line.length, index + 30)
        return line.substring(start, end)
    }

    /**
     * Check if a line is in a TypeScript type definition context
     * Examples:
     * - type Foo = 'a' | 'b'
     * - interface Bar { prop: 'value' }
     * - Record<X, import('path')>
     * - ... as 'type'
     */
    private isInTypeContext(line: string): boolean {
        const trimmedLine = line.trim()

        if (this.TYPE_CONTEXT_PATTERNS.some((pattern) => pattern.test(trimmedLine))) {
            return true
        }

        if (trimmedLine.includes("|") && /['"`][^'"`]+['"`]\s*\|/.test(trimmedLine)) {
            return true
        }

        return false
    }

    /**
     * Check if a string is inside a Symbol() call
     * Example: Symbol('TOKEN_NAME')
     */
    private isInSymbolCall(line: string, stringValue: string): boolean {
        const symbolPattern = new RegExp(
            `Symbol\\s*\\(\\s*['"\`]${stringValue.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")}['"\`]\\s*\\)`,
        )
        return symbolPattern.test(line)
    }

    /**
     * Check if a string is inside an import() call
     * Example: import('../../path/to/module.js')
     */
    private isInImportCall(line: string, stringValue: string): boolean {
        const importPattern = /import\s*\(\s*['"`][^'"`]+['"`]\s*\)/
        return importPattern.test(line) && line.includes(stringValue)
    }
}
@@ -468,4 +468,102 @@ const b = 2`
            expect(result[0].context).toContain("5000")
        })
    })

    describe("TypeScript type contexts (false positive reduction)", () => {
        it("should NOT detect strings in union types", () => {
            const code = `type Status = 'active' | 'inactive' | 'pending'`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in interface property types", () => {
            const code = `interface Config { mode: 'development' | 'production' }`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in type aliases", () => {
            const code = `type Theme = 'light' | 'dark'`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in type assertions", () => {
            const code = `const mode = getMode() as 'read' | 'write'`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in Symbol() calls", () => {
            const code = `const TOKEN = Symbol('MY_TOKEN')`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in multiple Symbol() calls", () => {
            const code = `
                export const LOGGER = Symbol('LOGGER')
                export const DATABASE = Symbol('DATABASE')
                export const CACHE = Symbol('CACHE')
            `
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in import() calls", () => {
            const code = `const module = import('../../path/to/module.js')`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in typeof checks", () => {
            const code = `if (typeof x === 'string') { }`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should NOT detect strings in reverse typeof checks", () => {
            const code = `if ('number' === typeof count) { }`
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result).toHaveLength(0)
        })

        it("should skip tokens.ts files completely", () => {
            const code = `
                export const LOGGER = Symbol('LOGGER')
                export const DATABASE = Symbol('DATABASE')
                const url = "http://localhost:8080"
            `
            const result = detector.detectAll(code, "src/di/tokens.ts")

            expect(result).toHaveLength(0)
        })

        it("should skip tokens.js files completely", () => {
            const code = `const TOKEN = Symbol('TOKEN')`
            const result = detector.detectAll(code, "src/di/tokens.js")

            expect(result).toHaveLength(0)
        })

        it("should detect real magic strings even with type contexts nearby", () => {
            const code = `
                type Mode = 'read' | 'write'
                const apiKey = "secret-key-12345"
            `
            const result = detector.detectMagicStrings(code, "test.ts")

            expect(result.length).toBeGreaterThan(0)
            expect(result.some((r) => r.value === "secret-key-12345")).toBe(true)
        })
    })
})