Mirror of https://github.com/samiyev/puaros.git (synced 2025-12-27 23:06:54 +05:00)

Compare commits (13 commits): ipuaro-v0. ... main

| SHA1 |
|---|
| 3e7762ec4e |
| c82006bbda |
| 2e84472e49 |
| 17d75dbd54 |
| fac5966678 |
| 92ba3fd9ba |
| e9aaa708fe |
| d6d15dd271 |
| d63d85d850 |
| 41cfc21f20 |
| eeaa223436 |
| 36768c06d1 |
| 5a22cd5c9b |
@@ -5,6 +5,300 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.30.2] - 2025-12-05 - JSON Tool Call Parsing & Improved Prompts

### Added

- **JSON Tool Call Fallback in ResponseParser**
  - LLM responses with JSON format `{"name": "tool", "arguments": {...}}` are now parsed
  - Falls back to JSON when no XML-format tool call is found
  - Works with models like qwen2.5-coder that prefer JSON over XML

- **Tool Name Aliases**
  - `get_functions`, `read_file`, `read_lines` → `get_lines`
  - `list_files`, `get_files` → `get_structure`
  - `find_todos` → `get_todos`
  - And more common LLM typos/variations (see the sketch below)
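
The fallback and alias handling can be pictured with a small standalone sketch. The alias table below is only a subset of the real one, and the regex and `ToolCall` shape are simplified assumptions rather than the exact `ResponseParser` internals:

```typescript
// Simplified sketch: normalize an LLM-proposed tool name, then fall back to
// JSON parsing when no XML <tool_call> blocks were found in the response.
interface ToolCall {
    name: string
    args: Record<string, unknown>
}

const TOOL_ALIASES: Record<string, string> = {
    get_functions: "get_lines",
    read_file: "get_lines",
    read_lines: "get_lines",
    list_files: "get_structure",
    get_files: "get_structure",
    find_todos: "get_todos",
}

function normalizeToolName(name: string): string {
    return TOOL_ALIASES[name.toLowerCase()] ?? name
}

// Matches {"name": "...", "arguments": {...}} objects embedded in free text.
const JSON_TOOL_CALL = /\{\s*"name"\s*:\s*"([^"]+)"\s*,\s*"arguments"\s*:\s*(\{[^{}]*\})\s*\}/g

function parseJsonToolCalls(response: string): ToolCall[] {
    const calls: ToolCall[] = []
    for (const [, rawName, argsJson] of response.matchAll(JSON_TOOL_CALL)) {
        try {
            calls.push({
                name: normalizeToolName(rawName),
                args: JSON.parse(argsJson) as Record<string, unknown>,
            })
        } catch {
            // Malformed JSON arguments: skip this candidate.
        }
    }
    return calls
}

// Example: a qwen2.5-coder style response with no XML tool call.
console.log(parseJsonToolCalls('{"name": "read_file", "arguments": {"path": "src/index.ts"}}'))
// → [{ name: "get_lines", args: { path: "src/index.ts" } }]
```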

### Changed

- **Improved System Prompt**
  - Added clear "When to Use Tools" / "Do NOT use tools" sections
  - More concise and directive instructions
  - Better examples for tool usage

### Technical Details

- Total tests: 1848 passed (+8 new tests for JSON parsing)
- 0 ESLint errors, 3 warnings (pre-existing complexity)

---

## [0.30.1] - 2025-12-05 - Display Transitive Counts in Context

### Changed

- **High Impact Files table now includes transitive counts**
  - Table header changed from `| File | Impact | Dependents |` to `| File | Impact | Direct | Transitive |`
  - Shows both direct dependent count and transitive dependent count
  - Sorting changed: now sorts by transitive count first, then by impact score (see the sketch below)
  - Example: `| utils/validation | 67% | 12 | 24 |`
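
A minimal sketch of the new ordering; the row shape is assumed for illustration, only the sort keys (transitive count, then impact score) come from this release:

```typescript
// Sketch: sort High Impact Files rows by transitive dependents first,
// then by impact score (both descending). Row shape is assumed.
interface HighImpactRow {
    file: string
    impactScore: number
    directDependents: number
    transitiveDependents: number
}

function sortHighImpactRows(rows: HighImpactRow[]): HighImpactRow[] {
    return [...rows].sort(
        (a, b) =>
            b.transitiveDependents - a.transitiveDependents ||
            b.impactScore - a.impactScore,
    )
}

const rows: HighImpactRow[] = [
    { file: "types/user", impactScore: 45, directDependents: 8, transitiveDependents: 15 },
    { file: "utils/validation", impactScore: 67, directDependents: 12, transitiveDependents: 24 },
]

for (const r of sortHighImpactRows(rows)) {
    console.log(`| ${r.file} | ${String(r.impactScore)}% | ${String(r.directDependents)} | ${String(r.transitiveDependents)} |`)
}
// | utils/validation | 67% | 12 | 24 |
// | types/user | 45% | 8 | 15 |
```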

### Technical Details

- Total tests: 1839 passed
- 0 ESLint errors, 3 warnings (pre-existing complexity)

---

## [0.30.0] - 2025-12-05 - Transitive Dependencies Count

### Added

- **Transitive Dependency Counts in FileMeta (v0.30.0)**
  - New `transitiveDepCount: number` field - count of files that depend on this file transitively
  - New `transitiveDepByCount: number` field - count of files this file depends on transitively
  - Includes both direct and indirect dependencies/dependents
  - Excludes the file itself from counts (handles circular dependencies)

- **Transitive Dependency Computation in MetaAnalyzer**
  - New `computeTransitiveCounts()` method - computes transitive counts for all files
  - New `getTransitiveDependents()` method - DFS with cycle detection for dependents
  - New `getTransitiveDependencies()` method - DFS with cycle detection for dependencies
  - Top-level caching for efficiency (avoids re-computing for each file)
  - Graceful handling of circular dependencies (see the sketch below)
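
A condensed sketch of the approach; it uses a plain dependents map instead of the real `FileMeta` records and collapses the per-call caching into a single pass, so treat it as an illustration of the DFS and self-exclusion rather than the actual `MetaAnalyzer` code:

```typescript
// Sketch: count transitive dependents per file with DFS + cycle detection.
// `dependentsOf` maps a file to the files that directly import it.
function countTransitiveDependents(dependentsOf: Map<string, string[]>): Map<string, number> {
    const counts = new Map<string, number>()

    const collect = (file: string, visited: Set<string>): Set<string> => {
        if (visited.has(file)) {
            return new Set() // cycle: stop descending
        }
        visited.add(file)

        const result = new Set<string>()
        for (const dependent of dependentsOf.get(file) ?? []) {
            result.add(dependent)
            for (const t of collect(dependent, visited)) {
                result.add(t)
            }
        }
        return result
    }

    for (const file of dependentsOf.keys()) {
        const all = collect(file, new Set())
        all.delete(file) // exclude the file itself (possible in cycles)
        counts.set(file, all.size)
    }
    return counts
}

// Three files in an import cycle: b imports a, c imports b, a imports c.
const graph = new Map<string, string[]>([
    ["a", ["b"]],
    ["b", ["c"]],
    ["c", ["a"]],
])
console.log(countTransitiveDependents(graph))
// Map { "a" → 2, "b" → 2, "c" → 2 } — each file is reached by the other two, never by itself
```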

### Technical Details

- Total tests: 1840 passed (was 1826, +14 new tests)
  - 9 new tests for computeTransitiveCounts()
  - 2 new tests for getTransitiveDependents()
  - 2 new tests for getTransitiveDependencies()
  - 1 new test for analyzeAll with transitive counts
- Coverage: 97.58% lines, 91.5% branches, 98.64% functions
- 0 ESLint errors, 3 warnings (pre-existing complexity)
- Build successful

### Notes

This completes v0.30.0 - the final feature milestone before v1.0.0:
- ✅ 0.27.0 - Inline Dependency Graph
- ✅ 0.28.0 - Circular Dependencies in Context
- ✅ 0.29.0 - Impact Score
- ✅ 0.30.0 - Transitive Dependencies Count

Next milestone: v1.0.0 - Production Ready

---

## [0.29.0] - 2025-12-05 - Impact Score

### Added

- **High Impact Files in Initial Context (v0.29.0)**
  - New `## High Impact Files` section in initial context
  - Shows files with highest impact scores (percentage of codebase depending on them)
  - Table format with File, Impact %, and Dependents count
  - Files sorted by impact score descending
  - Default: shows top 10 files with impact score >= 5%

- **Impact Score Computation**
  - New `impactScore: number` field in `FileMeta` (0-100)
  - Formula: `(dependents.length / (totalFiles - 1)) * 100`
  - Computed in `MetaAnalyzer.analyzeAll()` after all files analyzed
  - New `calculateImpactScore()` helper function in FileMeta.ts (worked example below)
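
The formula restated as a standalone function with worked numbers (it mirrors `calculateImpactScore()` in FileMeta.ts):

```typescript
// Impact score: share of the rest of the codebase that depends on this file.
function calculateImpactScore(dependentCount: number, totalFiles: number): number {
    if (totalFiles <= 1) {
        return 0
    }
    const maxPossibleDependents = totalFiles - 1 // exclude the file itself
    return Math.round(Math.min(100, (dependentCount / maxPossibleDependents) * 100))
}

// Worked example: 12 of the other 18 files import utils/validation in a 19-file project.
console.log(calculateImpactScore(12, 19)) // 67
console.log(calculateImpactScore(0, 19)) // 0
console.log(calculateImpactScore(5, 1)) // 0 — degenerate single-file project
```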

- **Configuration Option**
  - `includeHighImpactFiles: boolean` in ContextConfigSchema (default: `true`)
  - `includeHighImpactFiles` option in `BuildContextOptions`
  - Users can disable to save tokens: `context.includeHighImpactFiles: false` (see the options sketch below)
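
A sketch of how the context flags might be resolved with defaults before building the context; the `ContextFlags` shape and `resolveContextFlags()` helper are illustrative assumptions, only the flag names come from `BuildContextOptions`:

```typescript
// Sketch: ContextConfigSchema-style defaulting for the context flags.
// Shapes are assumed for illustration; flag names match BuildContextOptions.
interface ContextFlags {
    includeSignatures: boolean
    includeDepsGraph: boolean
    includeCircularDeps: boolean
    includeHighImpactFiles: boolean
}

function resolveContextFlags(userConfig: Partial<ContextFlags> = {}): ContextFlags {
    return {
        includeSignatures: true,
        includeDepsGraph: true, // v0.27.0
        includeCircularDeps: true, // v0.28.0
        includeHighImpactFiles: true, // v0.29.0, default true
        ...userConfig,
    }
}

// Disable just the High Impact Files table to save tokens:
console.log(resolveContextFlags({ includeHighImpactFiles: false }))
```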

- **New Helper Function in prompts.ts**
  - `formatHighImpactFiles()` - formats high impact files table for display

### New Context Format

```
## High Impact Files

| File | Impact | Dependents |
|------|--------|------------|
| utils/validation | 67% | 12 files |
| types/user | 45% | 8 files |
| services/user | 34% | 6 files |
```
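
A sketch of a formatter that produces the table above; the real `formatHighImpactFiles()` in prompts.ts may differ in details, and the `FileMetaLike` shape is trimmed down for the example:

```typescript
// Sketch: filter by minimum score, sort descending, cap at the top N rows.
interface FileMetaLike {
    path: string
    impactScore: number
    dependents: string[]
}

function formatHighImpactFiles(metas: FileMetaLike[], minScore = 5, limit = 10): string {
    const rows = metas
        .filter((m) => m.impactScore >= minScore)
        .sort((a, b) => b.impactScore - a.impactScore)
        .slice(0, limit)
        .map((m) => `| ${m.path} | ${String(m.impactScore)}% | ${String(m.dependents.length)} files |`)

    if (rows.length === 0) {
        return ""
    }
    return ["## High Impact Files", "", "| File | Impact | Dependents |", "|------|--------|------------|", ...rows].join("\n")
}
```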

### Technical Details

- Total tests: 1826 passed (was 1798, +28 new tests)
  - 9 new tests for calculateImpactScore()
  - 14 new tests for formatHighImpactFiles() and buildInitialContext
  - 5 new tests for includeHighImpactFiles config option
- Coverage: 97.52% lines, 91.3% branches, 98.63% functions
- 0 ESLint errors, 3 warnings (pre-existing complexity)
- Build successful

### Notes

This completes v0.29.0 of the Graph Metrics milestone:
- ✅ 0.27.0 - Inline Dependency Graph
- ✅ 0.28.0 - Circular Dependencies in Context
- ✅ 0.29.0 - Impact Score

Next milestone: v0.30.0 - Transitive Dependencies Count

---

## [0.28.0] - 2025-12-05 - Circular Dependencies in Context

### Added

- **Circular Dependencies in Initial Context (v0.28.0)**
  - New `## ⚠️ Circular Dependencies` section in initial context
  - Shows cycle chains immediately without requiring tool calls
  - Format: `- services/user → services/auth → services/user`
  - Uses same path shortening as dependency graph (removes `src/`, extensions, `/index`)

- **Configuration Option**
  - `includeCircularDeps: boolean` in ContextConfigSchema (default: `true`)
  - `includeCircularDeps` option in `BuildContextOptions`
  - `circularDeps: string[][]` parameter to pass pre-computed cycles
  - Users can disable to save tokens: `context.includeCircularDeps: false`

- **New Helper Function in prompts.ts**
  - `formatCircularDeps()` - formats circular dependency cycles for display

### New Context Format

```
## ⚠️ Circular Dependencies

- services/user → services/auth → services/user
- utils/a → utils/b → utils/c → utils/a
```
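
A sketch of the cycle formatting; it assumes the paths are already shortened (the real helper reuses the dependency-graph shortening described in 0.27.0 below):

```typescript
// Sketch: render pre-computed cycles as the section shown above.
function formatCircularDeps(cycles: string[][]): string {
    if (cycles.length === 0) {
        return ""
    }
    const lines = cycles.map((cycle) => `- ${cycle.join(" → ")}`)
    return ["## ⚠️ Circular Dependencies", "", ...lines].join("\n")
}

console.log(formatCircularDeps([["services/user", "services/auth", "services/user"]]))
// ## ⚠️ Circular Dependencies
//
// - services/user → services/auth → services/user
```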

### Technical Details

- Total tests: 1798 passed (was 1775, +23 new tests)
  - 12 new tests for formatCircularDeps()
  - 6 new tests for buildInitialContext with includeCircularDeps
  - 5 new tests for includeCircularDeps config option
- Coverage: 97.48% lines, 91.13% branches, 98.63% functions
- 0 ESLint errors, 3 warnings (pre-existing complexity in ASTParser and prompts)
- Build successful

---

## [0.27.0] - 2025-12-05 - Inline Dependency Graph

### Added

- **Dependency Graph in Initial Context (v0.27.0)**
  - New `## Dependency Graph` section in initial context
  - Shows file relationships without requiring tool calls
  - Format: `services/user: → types/user, utils/validation ← controllers/user`
  - `→` indicates files this file imports (dependencies)
  - `←` indicates files that import this file (dependents)
  - Hub files (>5 dependents) shown first
  - Files sorted by total connections (descending)

- **Configuration Option**
  - `includeDepsGraph: boolean` in ContextConfigSchema (default: `true`)
  - `includeDepsGraph` option in `BuildContextOptions`
  - Users can disable to save tokens: `context.includeDepsGraph: false`

- **New Helper Functions in prompts.ts**
  - `formatDependencyGraph()` - formats entire dependency graph from metas
  - `formatDepsEntry()` - formats single file's dependencies/dependents
  - `shortenPath()` - shortens paths (removes `src/`, extensions, `/index`)

### New Context Format

```
## Dependency Graph

utils/validation: ← services/user, services/auth, controllers/api
services/user: → types/user, utils/validation ← controllers/user, api/routes
services/auth: → services/user, utils/jwt ← controllers/auth
types/user: ← services/user, services/auth
```
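
A sketch of the path shortening and per-file entry formatting; the rewrite rules follow the description above, but the real `shortenPath()`/`formatDepsEntry()` in prompts.ts may handle more edge cases:

```typescript
// Sketch: shorten paths (drop src/, extension, /index) and build one graph line.
function shortenPath(p: string): string {
    return p
        .replace(/^src\//, "")
        .replace(/\.(ts|tsx|js|jsx)$/, "")
        .replace(/\/index$/, "")
}

function formatDepsEntry(path: string, dependencies: string[], dependents: string[]): string {
    const parts = [`${shortenPath(path)}:`]
    if (dependencies.length > 0) {
        parts.push(`→ ${dependencies.map(shortenPath).join(", ")}`)
    }
    if (dependents.length > 0) {
        parts.push(`← ${dependents.map(shortenPath).join(", ")}`)
    }
    return parts.join(" ")
}

console.log(
    formatDepsEntry(
        "src/services/user.ts",
        ["src/types/user.ts", "src/utils/validation.ts"],
        ["src/controllers/user.ts", "src/api/routes/index.ts"],
    ),
)
// services/user: → types/user, utils/validation ← controllers/user, api/routes
```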

### Technical Details

- Total tests: 1775 passed (was 1754, +21 new tests)
  - 16 new tests for formatDependencyGraph()
  - 5 new tests for includeDepsGraph config option
- Coverage: 97.48% lines, 91.07% branches, 98.62% functions
- 0 ESLint errors, 2 warnings (pre-existing complexity in ASTParser and prompts)
- Build successful

### Notes

This completes v0.27.0 of the Graph Metrics milestone:
- ✅ 0.27.0 - Inline Dependency Graph

Next milestone: v0.28.0 - Circular Dependencies in Context

---

## [0.26.0] - 2025-12-05 - Rich Initial Context: Decorator Extraction

### Added

- **Decorator Extraction (0.24.4)**
  - Functions now show their decorators in initial context
  - Classes now show their decorators in initial context
  - Methods show decorators per-method
  - New format: `@Controller('users') class UserController`
  - Function format: `@Get(':id') async getUser(id: string): Promise<User>`
  - Supports NestJS decorators: `@Controller`, `@Get`, `@Post`, `@Injectable`, `@UseGuards`, etc.
  - Supports Angular decorators: `@Component`, `@Injectable`, `@Input`, `@Output`, etc.

- **FileAST.ts Enhancements**
  - `decorators?: string[]` field on `FunctionInfo`
  - `decorators?: string[]` field on `MethodInfo`
  - `decorators?: string[]` field on `ClassInfo`

- **ASTParser.ts Enhancements**
  - `formatDecorator()` - formats decorator node to string (e.g., `@Get(':id')`)
  - `extractNodeDecorators()` - extracts decorators that are direct children of a node
  - `extractDecoratorsFromSiblings()` - extracts decorators before the declaration in export statements
  - Decorators are extracted for classes, methods, and exported functions

- **prompts.ts Enhancements**
  - `formatDecoratorsPrefix()` - formats decorators as a prefix string for display
  - Used in `formatFunctionSignature()` for function decorators
  - Used in `formatFileSummary()` for class decorators

### New Context Format

```
### src/controllers/user.controller.ts

- @Controller('users') class UserController extends BaseController
- @Get(':id') @Auth() async getUser(id: string): Promise<User>
- @Post() @ValidateBody() async createUser(data: UserDTO): Promise<User>
```
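
A sketch of how decorators end up prefixing signatures in the context; `formatDecoratorsPrefix()` is named in this release, but the body below and the trimmed `FunctionInfoLike` shape are assumptions for illustration:

```typescript
// Sketch: join decorators into a prefix, then build the signature line.
function formatDecoratorsPrefix(decorators?: string[]): string {
    return decorators && decorators.length > 0 ? `${decorators.join(" ")} ` : ""
}

interface FunctionInfoLike {
    name: string
    params: string
    returnType?: string
    isAsync: boolean
    decorators?: string[]
}

function formatFunctionSignature(fn: FunctionInfoLike): string {
    const asyncPrefix = fn.isAsync ? "async " : ""
    const ret = fn.returnType ? `: ${fn.returnType}` : ""
    return `${formatDecoratorsPrefix(fn.decorators)}${asyncPrefix}${fn.name}(${fn.params})${ret}`
}

console.log(
    formatFunctionSignature({
        name: "getUser",
        params: "id: string",
        returnType: "Promise<User>",
        isAsync: true,
        decorators: ["@Get(':id')", "@Auth()"],
    }),
)
// @Get(':id') @Auth() async getUser(id: string): Promise<User>
```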

### Technical Details

- Total tests: 1754 passed (was 1720, +34 new tests)
  - 14 new tests for ASTParser decorator extraction
  - 6 new tests for prompts decorator formatting
  - +14 other tests from internal improvements
- Coverage: 97.49% lines, 91.14% branches, 98.61% functions
- 0 ESLint errors, 2 warnings (pre-existing complexity in ASTParser and prompts)
- Build successful

### Notes

This completes the v0.24.0 Rich Initial Context milestone:
- ✅ 0.24.1 - Function Signatures with Types
- ✅ 0.24.2 - Interface/Type Field Definitions
- ✅ 0.24.3 - Enum Value Definitions
- ✅ 0.24.4 - Decorator Extraction

Next milestone: v0.25.0 - Graph Metrics in Context

---

## [0.25.0] - 2025-12-04 - Rich Initial Context: Interface Fields & Type Definitions

### Added

@@ -1779,10 +1779,10 @@ export interface ScanResult {

---

-## Version 0.24.0 - Rich Initial Context 📋
+## Version 0.24.0 - Rich Initial Context 📋 ✅

**Priority:** HIGH
-**Status:** In Progress (2/4 complete)
+**Status:** Complete (v0.24.0 released)

Enhance initial context for LLM: add function signatures, interface field types, and enum values. This allows LLM to answer questions about types and parameters without tool calls.

@@ -1836,7 +1836,7 @@ Enhance initial context for LLM: add function signatures, interface field types,

**Why:** LLM knows data structure, won't invent fields.

-### 0.24.3 - Enum Value Definitions
+### 0.24.3 - Enum Value Definitions ⭐ ✅

**Problem:** LLM only sees `type: Status`
**Solution:** Show values: `Status { Active=1, Inactive=0, Pending=2 }`
@@ -1852,13 +1852,13 @@ Enhance initial context for LLM: add function signatures, interface field types,
```

**Changes:**
-- [ ] Add `EnumInfo` to FileAST with members and values
-- [ ] Update `ASTParser.ts` to extract enum members
-- [ ] Update `formatFileSummary()` to output enum values
+- [x] Add `EnumInfo` to FileAST with members and values
+- [x] Update `ASTParser.ts` to extract enum members
+- [x] Update `formatFileSummary()` to output enum values

**Why:** LLM knows valid enum values.
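
A sketch of rendering an enum in the file summary; the member shape follows `EnumMemberInfo` from FileAST.ts, while the formatter itself is illustrative:

```typescript
// Sketch: render an EnumInfo-like record as the one-line summary described above.
interface EnumMemberInfo {
    name: string
    value?: string | number
}

interface EnumInfoLike {
    name: string
    members: EnumMemberInfo[]
}

function formatEnumSummary(e: EnumInfoLike): string {
    const members = e.members
        .map((m) => (m.value === undefined ? m.name : `${m.name}=${String(m.value)}`))
        .join(", ")
    return `${e.name} { ${members} }`
}

console.log(
    formatEnumSummary({
        name: "Status",
        members: [
            { name: "Active", value: 1 },
            { name: "Inactive", value: 0 },
            { name: "Pending", value: 2 },
        ],
    }),
)
// Status { Active=1, Inactive=0, Pending=2 }
```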

-### 0.24.4 - Decorator Extraction
+### 0.24.4 - Decorator Extraction ⭐ ✅

**Problem:** LLM doesn't see decorators (important for NestJS, Angular)
**Solution:** Show decorators in context
@@ -1872,27 +1872,24 @@ Enhance initial context for LLM: add function signatures, interface field types,
```

**Changes:**
-- [ ] Add `decorators: string[]` to FunctionInfo and ClassInfo
-- [ ] Update `ASTParser.ts` to extract decorators
-- [ ] Update context to display decorators
+- [x] Add `decorators: string[]` to FunctionInfo, MethodInfo, and ClassInfo
+- [x] Update `ASTParser.ts` to extract decorators via `extractNodeDecorators()` and `extractDecoratorsFromSiblings()`
+- [x] Update `prompts.ts` to display decorators via `formatDecoratorsPrefix()`

**Why:** LLM understands routing, DI, guards in NestJS/Angular.

**Tests:**
-- [ ] Unit tests for enhanced ASTParser
-- [ ] Unit tests for new context format
-- [ ] Integration tests for full flow
+- [x] Unit tests for ASTParser decorator extraction (14 tests)
+- [x] Unit tests for prompts decorator formatting (6 tests)

---

-## Version 0.25.0 - Graph Metrics in Context 📊
+## Version 0.27.0 - Inline Dependency Graph 📊 ✅

**Priority:** MEDIUM
-**Status:** Planned
+**Status:** Complete (v0.27.0 released)

Add graph metrics to initial context: dependency graph, circular dependencies, impact score.

-### 0.25.1 - Inline Dependency Graph
+### Description

**Problem:** LLM doesn't see file relationships without tool calls
**Solution:** Show dependency graph in context
@@ -1907,14 +1904,25 @@ Add graph metrics to initial context: dependency graph, circular dependencies, i
```

**Changes:**
-- [ ] Add `formatDependencyGraph()` to prompts.ts
-- [ ] Use data from `FileMeta.dependencies` and `FileMeta.dependents`
-- [ ] Group by hub files (many connections)
-- [ ] Add `includeDepsGraph: boolean` option to config
+- [x] Add `formatDependencyGraph()` to prompts.ts
+- [x] Use data from `FileMeta.dependencies` and `FileMeta.dependents`
+- [x] Group by hub files (many connections)
+- [x] Add `includeDepsGraph: boolean` option to config

+**Tests:**
+- [x] Unit tests for formatDependencyGraph() (16 tests)
+- [x] Unit tests for includeDepsGraph config option (5 tests)

**Why:** LLM sees architecture without tool call.
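
A sketch of the ordering this item describes (hub files first, then by total connections); the `FileMetaLike` shape and comparator are illustrative, not the exact `formatDependencyGraph()` code:

```typescript
// Sketch: order graph entries with hubs (>5 dependents) first,
// then by total connections, descending.
interface FileMetaLike {
    path: string
    dependencies: string[]
    dependents: string[]
}

function orderForGraph(metas: FileMetaLike[]): FileMetaLike[] {
    const connections = (m: FileMetaLike): number => m.dependencies.length + m.dependents.length
    const isHub = (m: FileMetaLike): boolean => m.dependents.length > 5
    return [...metas].sort((a, b) => {
        if (isHub(a) !== isHub(b)) {
            return isHub(a) ? -1 : 1
        }
        return connections(b) - connections(a)
    })
}
```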

-### 0.25.2 - Circular Dependencies in Context
+---

+## Version 0.28.0 - Circular Dependencies in Context 🔄 ✅

+**Priority:** MEDIUM
+**Status:** Complete (v0.28.0 released)

+### Description

**Problem:** Circular deps are computed but not shown in context
**Solution:** Show cycles immediately
@@ -1928,13 +1936,26 @@ Add graph metrics to initial context: dependency graph, circular dependencies, i
```

**Changes:**
-- [ ] Add `formatCircularDeps()` to prompts.ts
-- [ ] Get circular deps from IndexBuilder
-- [ ] Store in Redis as separate key or in meta
+- [x] Add `formatCircularDeps()` to prompts.ts
+- [x] Add `includeCircularDeps: boolean` config option (default: true)
+- [x] Add `circularDeps: string[][]` parameter to `BuildContextOptions`
+- [x] Integrate into `buildInitialContext()`

+**Tests:**
+- [x] Unit tests for formatCircularDeps() (12 tests)
+- [x] Unit tests for buildInitialContext with includeCircularDeps (6 tests)
+- [x] Unit tests for includeCircularDeps config option (5 tests)

**Why:** LLM immediately sees architecture problems.

-### 0.25.3 - Impact Score
+---

+## Version 0.29.0 - Impact Score 📈 ✅

+**Priority:** MEDIUM
+**Status:** Complete (v0.29.0 released)

+### Description

**Problem:** LLM doesn't know which files are critical
**Solution:** Show impact score (% of codebase that depends on file)
@@ -1951,14 +1972,27 @@ Add graph metrics to initial context: dependency graph, circular dependencies, i
```

**Changes:**
-- [ ] Add `impactScore: number` to FileMeta (0-100)
-- [ ] Compute in MetaAnalyzer: (transitiveDepByCount / totalFiles) * 100
-- [ ] Add `formatHighImpactFiles()` to prompts.ts
-- [ ] Show top-10 high impact files
+- [x] Add `impactScore: number` to FileMeta (0-100)
+- [x] Compute in MetaAnalyzer: (dependents.length / (totalFiles - 1)) * 100
+- [x] Add `formatHighImpactFiles()` to prompts.ts
+- [x] Show top-10 high impact files
+- [x] Add `includeHighImpactFiles` config option (default: true)

+**Tests:**
+- [x] Unit tests for calculateImpactScore (9 tests)
+- [x] Unit tests for formatHighImpactFiles (14 tests)
+- [x] Unit tests for includeHighImpactFiles config (5 tests)

**Why:** LLM understands which files are critical for changes.

-### 0.25.4 - Transitive Dependencies Count
+---

+## Version 0.30.0 - Transitive Dependencies Count 🔢 ✅

+**Priority:** MEDIUM
+**Status:** Complete (v0.30.0 released)

+### Description

**Problem:** Currently only counting direct dependencies
**Solution:** Add transitive dependencies to meta
@@ -1973,14 +2007,19 @@ interface FileMeta {
```

**Changes:**
-- [ ] Add `computeTransitiveDeps()` to MetaAnalyzer
-- [ ] Use DFS with memoization for efficiency
-- [ ] Store in FileMeta
+- [x] Add `transitiveDepCount` and `transitiveDepByCount` to FileMeta
+- [x] Add `computeTransitiveCounts()` to MetaAnalyzer
+- [x] Add `getTransitiveDependents()` with DFS and cycle detection
+- [x] Add `getTransitiveDependencies()` with DFS and cycle detection
+- [x] Use top-level caching for efficiency
+- [x] Handle circular dependencies gracefully (exclude self from count)

**Tests:**
-- [ ] Unit tests for graph metrics computation
-- [ ] Unit tests for new context sections
-- [ ] Performance tests for large codebases
+- [x] Unit tests for transitive dependencies computation (14 tests)
+- [x] Tests for circular dependencies
+- [x] Tests for diamond dependency patterns
+- [x] Tests for deep dependency chains
+- [x] Cache behavior tests

---

@@ -1995,12 +2034,12 @@ interface FileMeta {
- [x] Error handling complete ✅ (v0.16.0)
- [ ] Performance optimized
- [x] Documentation complete ✅ (v0.17.0)
-- [x] Test coverage ≥91% branches, ≥95% lines/functions/statements ✅ (91.21% branches, 97.5% lines, 98.58% functions, 97.5% statements - 1687 tests)
+- [x] Test coverage ≥91% branches, ≥95% lines/functions/statements ✅ (91.5% branches, 97.58% lines, 98.64% functions, 97.58% statements - 1840 tests)
- [x] 0 ESLint errors ✅
- [x] Examples working ✅ (v0.18.0)
- [x] CHANGELOG.md up to date ✅
-- [ ] Rich initial context (v0.24.0) — function signatures, interface fields, enum values
-- [ ] Graph metrics in context (v0.25.0) — dependency graph, circular deps, impact score
+- [x] Rich initial context (v0.24.0-v0.26.0) — function signatures, interface fields, enum values, decorators ✅
+- [x] Graph metrics in context (v0.27.0-v0.30.0) — dependency graph ✅, circular deps ✅, impact score ✅, transitive deps ✅

---

@@ -2077,9 +2116,9 @@ sessions:list # List<session_id>

---

-**Last Updated:** 2025-12-04
+**Last Updated:** 2025-12-05
**Target Version:** 1.0.0
-**Current Version:** 0.25.0
-**Next Milestones:** v0.24.0 (Rich Context - 2/4 complete), v0.25.0 (Graph Metrics)
+**Current Version:** 0.30.0
+**Next Milestones:** v1.0.0 (Production Ready)

-> **Note:** v0.24.0 and v0.25.0 are required for 1.0.0 release. They enable LLM to answer questions about types, signatures, and architecture without tool calls.
+> **Note:** Rich Initial Context complete ✅ (v0.24.0-v0.26.0). Graph Metrics complete ✅ (v0.27.0-v0.30.0). All feature milestones done, ready for v1.0.0 stabilization.
@@ -1,6 +1,6 @@
{
    "name": "@samiyev/ipuaro",
-   "version": "0.25.0",
+   "version": "0.30.1",
    "description": "Local AI agent for codebase operations with infinite context feeling",
    "author": "Fozilbek Samiyev <fozilbek.samiyev@gmail.com>",
    "license": "MIT",

@@ -18,6 +18,7 @@ import {
|
||||
buildInitialContext,
|
||||
type ProjectStructure,
|
||||
SYSTEM_PROMPT,
|
||||
TOOL_REMINDER,
|
||||
} from "../../infrastructure/llm/prompts.js"
|
||||
import { parseToolCalls } from "../../infrastructure/llm/ResponseParser.js"
|
||||
import type { IToolRegistry } from "../interfaces/IToolRegistry.js"
|
||||
@@ -277,6 +278,12 @@ export class HandleMessage {
|
||||
|
||||
messages.push(...session.history)
|
||||
|
||||
// Add tool reminder if last message is from user (first LLM call for this query)
|
||||
const lastMessage = session.history[session.history.length - 1]
|
||||
if (lastMessage?.role === "user") {
|
||||
messages.push(createSystemMessage(TOOL_REMINDER))
|
||||
}
|
||||
|
||||
return messages
|
||||
}
|
||||
|
||||
|
||||
@@ -52,6 +52,8 @@ export interface FunctionInfo {
|
||||
isExported: boolean
|
||||
/** Return type (if available) */
|
||||
returnType?: string
|
||||
/** Decorators applied to the function (e.g., ["@Get(':id')", "@Auth()"]) */
|
||||
decorators?: string[]
|
||||
}
|
||||
|
||||
export interface MethodInfo {
|
||||
@@ -69,6 +71,8 @@ export interface MethodInfo {
|
||||
visibility: "public" | "private" | "protected"
|
||||
/** Whether it's static */
|
||||
isStatic: boolean
|
||||
/** Decorators applied to the method (e.g., ["@Get(':id')", "@UseGuards(AuthGuard)"]) */
|
||||
decorators?: string[]
|
||||
}
|
||||
|
||||
export interface PropertyInfo {
|
||||
@@ -105,6 +109,8 @@ export interface ClassInfo {
|
||||
isExported: boolean
|
||||
/** Whether class is abstract */
|
||||
isAbstract: boolean
|
||||
/** Decorators applied to the class (e.g., ["@Controller('users')", "@Injectable()"]) */
|
||||
decorators?: string[]
|
||||
}
|
||||
|
||||
export interface InterfaceInfo {
|
||||
@@ -133,6 +139,28 @@ export interface TypeAliasInfo {
|
||||
definition?: string
|
||||
}
|
||||
|
||||
export interface EnumMemberInfo {
|
||||
/** Member name */
|
||||
name: string
|
||||
/** Member value (string or number, if specified) */
|
||||
value?: string | number
|
||||
}
|
||||
|
||||
export interface EnumInfo {
|
||||
/** Enum name */
|
||||
name: string
|
||||
/** Start line number */
|
||||
lineStart: number
|
||||
/** End line number */
|
||||
lineEnd: number
|
||||
/** Enum members with values */
|
||||
members: EnumMemberInfo[]
|
||||
/** Whether it's exported */
|
||||
isExported: boolean
|
||||
/** Whether it's a const enum */
|
||||
isConst: boolean
|
||||
}
|
||||
|
||||
export interface FileAST {
|
||||
/** Import statements */
|
||||
imports: ImportInfo[]
|
||||
@@ -146,6 +174,8 @@ export interface FileAST {
|
||||
interfaces: InterfaceInfo[]
|
||||
/** Type alias declarations */
|
||||
typeAliases: TypeAliasInfo[]
|
||||
/** Enum declarations */
|
||||
enums: EnumInfo[]
|
||||
/** Whether parsing encountered errors */
|
||||
parseError: boolean
|
||||
/** Parse error message if any */
|
||||
@@ -160,6 +190,7 @@ export function createEmptyFileAST(): FileAST {
|
||||
classes: [],
|
||||
interfaces: [],
|
||||
typeAliases: [],
|
||||
enums: [],
|
||||
parseError: false,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -26,6 +26,12 @@ export interface FileMeta {
|
||||
isEntryPoint: boolean
|
||||
/** File type classification */
|
||||
fileType: "source" | "test" | "config" | "types" | "unknown"
|
||||
/** Impact score (0-100): percentage of codebase that depends on this file */
|
||||
impactScore: number
|
||||
/** Count of files that depend on this file transitively (including indirect dependents) */
|
||||
transitiveDepCount: number
|
||||
/** Count of files this file depends on transitively (including indirect dependencies) */
|
||||
transitiveDepByCount: number
|
||||
}
|
||||
|
||||
export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
|
||||
@@ -41,6 +47,9 @@ export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
|
||||
isHub: false,
|
||||
isEntryPoint: false,
|
||||
fileType: "unknown",
|
||||
impactScore: 0,
|
||||
transitiveDepCount: 0,
|
||||
transitiveDepByCount: 0,
|
||||
...partial,
|
||||
}
|
||||
}
|
||||
@@ -48,3 +57,20 @@ export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
|
||||
export function isHubFile(dependentCount: number): boolean {
|
||||
return dependentCount > 5
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate impact score based on number of dependents and total files.
|
||||
* Impact score represents what percentage of the codebase depends on this file.
|
||||
* @param dependentCount - Number of files that depend on this file
|
||||
* @param totalFiles - Total number of files in the project
|
||||
* @returns Impact score from 0 to 100
|
||||
*/
|
||||
export function calculateImpactScore(dependentCount: number, totalFiles: number): number {
|
||||
if (totalFiles <= 1) {
|
||||
return 0
|
||||
}
|
||||
// Exclude the file itself from the total
|
||||
const maxPossibleDependents = totalFiles - 1
|
||||
const score = (dependentCount / maxPossibleDependents) * 100
|
||||
return Math.round(Math.min(100, score))
|
||||
}
|
||||
|
||||
@@ -6,6 +6,7 @@ import JSON from "tree-sitter-json"
|
||||
import * as yamlParser from "yaml"
|
||||
import {
|
||||
createEmptyFileAST,
|
||||
type EnumMemberInfo,
|
||||
type ExportInfo,
|
||||
type FileAST,
|
||||
type ImportInfo,
|
||||
@@ -192,6 +193,11 @@ export class ASTParser {
|
||||
this.extractTypeAlias(node, ast, false)
|
||||
}
|
||||
break
|
||||
case NodeType.ENUM_DECLARATION:
|
||||
if (isTypeScript) {
|
||||
this.extractEnum(node, ast, false)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
@@ -258,13 +264,15 @@ export class ASTParser {
|
||||
const declaration = node.childForFieldName(FieldName.DECLARATION)
|
||||
|
||||
if (declaration) {
|
||||
const decorators = this.extractDecoratorsFromSiblings(declaration)
|
||||
|
||||
switch (declaration.type) {
|
||||
case NodeType.FUNCTION_DECLARATION:
|
||||
this.extractFunction(declaration, ast, true)
|
||||
this.extractFunction(declaration, ast, true, decorators)
|
||||
this.addExportInfo(ast, declaration, "function", isDefault)
|
||||
break
|
||||
case NodeType.CLASS_DECLARATION:
|
||||
this.extractClass(declaration, ast, true)
|
||||
this.extractClass(declaration, ast, true, decorators)
|
||||
this.addExportInfo(ast, declaration, "class", isDefault)
|
||||
break
|
||||
case NodeType.INTERFACE_DECLARATION:
|
||||
@@ -275,6 +283,10 @@ export class ASTParser {
|
||||
this.extractTypeAlias(declaration, ast, true)
|
||||
this.addExportInfo(ast, declaration, "type", isDefault)
|
||||
break
|
||||
case NodeType.ENUM_DECLARATION:
|
||||
this.extractEnum(declaration, ast, true)
|
||||
this.addExportInfo(ast, declaration, "type", isDefault)
|
||||
break
|
||||
case NodeType.LEXICAL_DECLARATION:
|
||||
this.extractLexicalDeclaration(declaration, ast, true)
|
||||
break
|
||||
@@ -299,7 +311,12 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
private extractFunction(node: SyntaxNode, ast: FileAST, isExported: boolean): void {
|
||||
private extractFunction(
|
||||
node: SyntaxNode,
|
||||
ast: FileAST,
|
||||
isExported: boolean,
|
||||
externalDecorators: string[] = [],
|
||||
): void {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
if (!nameNode) {
|
||||
return
|
||||
@@ -309,6 +326,9 @@ export class ASTParser {
|
||||
const isAsync = node.children.some((c) => c.type === NodeType.ASYNC)
|
||||
const returnTypeNode = node.childForFieldName(FieldName.RETURN_TYPE)
|
||||
|
||||
const nodeDecorators = this.extractNodeDecorators(node)
|
||||
const decorators = [...externalDecorators, ...nodeDecorators]
|
||||
|
||||
ast.functions.push({
|
||||
name: nameNode.text,
|
||||
lineStart: node.startPosition.row + 1,
|
||||
@@ -317,6 +337,7 @@ export class ASTParser {
|
||||
isAsync,
|
||||
isExported,
|
||||
returnType: returnTypeNode?.text?.replace(/^:\s*/, ""),
|
||||
decorators,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -342,6 +363,7 @@ export class ASTParser {
|
||||
isAsync,
|
||||
isExported,
|
||||
returnType: returnTypeNode?.text?.replace(/^:\s*/, ""),
|
||||
decorators: [],
|
||||
})
|
||||
|
||||
if (isExported) {
|
||||
@@ -364,7 +386,12 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
private extractClass(node: SyntaxNode, ast: FileAST, isExported: boolean): void {
|
||||
private extractClass(
|
||||
node: SyntaxNode,
|
||||
ast: FileAST,
|
||||
isExported: boolean,
|
||||
externalDecorators: string[] = [],
|
||||
): void {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
if (!nameNode) {
|
||||
return
|
||||
@@ -375,14 +402,19 @@ export class ASTParser {
|
||||
const properties: PropertyInfo[] = []
|
||||
|
||||
if (body) {
|
||||
let pendingDecorators: string[] = []
|
||||
for (const member of body.children) {
|
||||
if (member.type === NodeType.METHOD_DEFINITION) {
|
||||
methods.push(this.extractMethod(member))
|
||||
if (member.type === NodeType.DECORATOR) {
|
||||
pendingDecorators.push(this.formatDecorator(member))
|
||||
} else if (member.type === NodeType.METHOD_DEFINITION) {
|
||||
methods.push(this.extractMethod(member, pendingDecorators))
|
||||
pendingDecorators = []
|
||||
} else if (
|
||||
member.type === NodeType.PUBLIC_FIELD_DEFINITION ||
|
||||
member.type === NodeType.FIELD_DEFINITION
|
||||
) {
|
||||
properties.push(this.extractProperty(member))
|
||||
pendingDecorators = []
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -390,6 +422,9 @@ export class ASTParser {
|
||||
const { extendsName, implementsList } = this.extractClassHeritage(node)
|
||||
const isAbstract = node.children.some((c) => c.type === NodeType.ABSTRACT)
|
||||
|
||||
const nodeDecorators = this.extractNodeDecorators(node)
|
||||
const decorators = [...externalDecorators, ...nodeDecorators]
|
||||
|
||||
ast.classes.push({
|
||||
name: nameNode.text,
|
||||
lineStart: node.startPosition.row + 1,
|
||||
@@ -400,6 +435,7 @@ export class ASTParser {
|
||||
implements: implementsList,
|
||||
isExported,
|
||||
isAbstract,
|
||||
decorators,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -453,7 +489,7 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
private extractMethod(node: SyntaxNode): MethodInfo {
|
||||
private extractMethod(node: SyntaxNode, decorators: string[] = []): MethodInfo {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
const params = this.extractParameters(node)
|
||||
const isAsync = node.children.some((c) => c.type === NodeType.ASYNC)
|
||||
@@ -475,6 +511,7 @@ export class ASTParser {
|
||||
isAsync,
|
||||
visibility,
|
||||
isStatic,
|
||||
decorators,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -565,6 +602,75 @@ export class ASTParser {
|
||||
})
|
||||
}
|
||||
|
||||
private extractEnum(node: SyntaxNode, ast: FileAST, isExported: boolean): void {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
if (!nameNode) {
|
||||
return
|
||||
}
|
||||
|
||||
const body = node.childForFieldName(FieldName.BODY)
|
||||
const members: EnumMemberInfo[] = []
|
||||
|
||||
if (body) {
|
||||
for (const child of body.children) {
|
||||
if (child.type === NodeType.ENUM_ASSIGNMENT) {
|
||||
const memberName = child.childForFieldName(FieldName.NAME)
|
||||
const memberValue = child.childForFieldName(FieldName.VALUE)
|
||||
if (memberName) {
|
||||
members.push({
|
||||
name: memberName.text,
|
||||
value: this.parseEnumValue(memberValue),
|
||||
})
|
||||
}
|
||||
} else if (
|
||||
child.type === NodeType.IDENTIFIER ||
|
||||
child.type === NodeType.PROPERTY_IDENTIFIER
|
||||
) {
|
||||
members.push({
|
||||
name: child.text,
|
||||
value: undefined,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const isConst = node.children.some((c) => c.text === "const")
|
||||
|
||||
ast.enums.push({
|
||||
name: nameNode.text,
|
||||
lineStart: node.startPosition.row + 1,
|
||||
lineEnd: node.endPosition.row + 1,
|
||||
members,
|
||||
isExported,
|
||||
isConst,
|
||||
})
|
||||
}
|
||||
|
||||
private parseEnumValue(valueNode: SyntaxNode | null): string | number | undefined {
|
||||
if (!valueNode) {
|
||||
return undefined
|
||||
}
|
||||
|
||||
const text = valueNode.text
|
||||
|
||||
if (valueNode.type === "number") {
|
||||
return Number(text)
|
||||
}
|
||||
|
||||
if (valueNode.type === "string") {
|
||||
return this.getStringValue(valueNode)
|
||||
}
|
||||
|
||||
if (valueNode.type === "unary_expression" && text.startsWith("-")) {
|
||||
const num = Number(text)
|
||||
if (!isNaN(num)) {
|
||||
return num
|
||||
}
|
||||
}
|
||||
|
||||
return text
|
||||
}
|
||||
|
||||
private extractParameters(node: SyntaxNode): ParameterInfo[] {
|
||||
const params: ParameterInfo[] = []
|
||||
const paramsNode = node.childForFieldName(FieldName.PARAMETERS)
|
||||
@@ -613,6 +719,49 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a decorator node to a string like "@Get(':id')" or "@Injectable()".
|
||||
*/
|
||||
private formatDecorator(node: SyntaxNode): string {
|
||||
return node.text.replace(/\s+/g, " ").trim()
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract decorators that are direct children of a node.
|
||||
* In tree-sitter, decorators are children of the class/function declaration.
|
||||
*/
|
||||
private extractNodeDecorators(node: SyntaxNode): string[] {
|
||||
const decorators: string[] = []
|
||||
for (const child of node.children) {
|
||||
if (child.type === NodeType.DECORATOR) {
|
||||
decorators.push(this.formatDecorator(child))
|
||||
}
|
||||
}
|
||||
return decorators
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract decorators from sibling nodes before the current node.
|
||||
* Decorators appear as children before the declaration in export statements.
|
||||
*/
|
||||
private extractDecoratorsFromSiblings(node: SyntaxNode): string[] {
|
||||
const decorators: string[] = []
|
||||
const parent = node.parent
|
||||
if (!parent) {
|
||||
return decorators
|
||||
}
|
||||
|
||||
for (const sibling of parent.children) {
|
||||
if (sibling.type === NodeType.DECORATOR) {
|
||||
decorators.push(this.formatDecorator(sibling))
|
||||
} else if (sibling === node) {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return decorators
|
||||
}
|
||||
|
||||
private classifyImport(from: string): ImportInfo["type"] {
|
||||
if (from.startsWith(".") || from.startsWith("/")) {
|
||||
return "internal"
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import * as path from "node:path"
|
||||
import {
|
||||
calculateImpactScore,
|
||||
type ComplexityMetrics,
|
||||
createFileMeta,
|
||||
type FileMeta,
|
||||
@@ -430,6 +431,7 @@ export class MetaAnalyzer {
|
||||
|
||||
/**
|
||||
* Batch analyze multiple files.
|
||||
* Computes impact scores and transitive dependencies after all files are analyzed.
|
||||
*/
|
||||
analyzeAll(files: Map<string, { ast: FileAST; content: string }>): Map<string, FileMeta> {
|
||||
const allASTs = new Map<string, FileAST>()
|
||||
@@ -443,6 +445,171 @@ export class MetaAnalyzer {
|
||||
results.set(filePath, meta)
|
||||
}
|
||||
|
||||
// Compute impact scores now that we know total file count
|
||||
const totalFiles = results.size
|
||||
for (const [, meta] of results) {
|
||||
meta.impactScore = calculateImpactScore(meta.dependents.length, totalFiles)
|
||||
}
|
||||
|
||||
// Compute transitive dependency counts
|
||||
this.computeTransitiveCounts(results)
|
||||
|
||||
return results
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute transitive dependency counts for all files.
|
||||
* Uses DFS with memoization for efficiency.
|
||||
*/
|
||||
computeTransitiveCounts(metas: Map<string, FileMeta>): void {
|
||||
// Memoization caches
|
||||
const transitiveDepCache = new Map<string, Set<string>>()
|
||||
const transitiveDepByCache = new Map<string, Set<string>>()
|
||||
|
||||
// Compute transitive dependents (files that depend on this file, directly or transitively)
|
||||
for (const [filePath, meta] of metas) {
|
||||
const transitiveDeps = this.getTransitiveDependents(filePath, metas, transitiveDepCache)
|
||||
// Exclude the file itself from count (can happen in cycles)
|
||||
meta.transitiveDepCount = transitiveDeps.has(filePath)
|
||||
? transitiveDeps.size - 1
|
||||
: transitiveDeps.size
|
||||
}
|
||||
|
||||
// Compute transitive dependencies (files this file depends on, directly or transitively)
|
||||
for (const [filePath, meta] of metas) {
|
||||
const transitiveDepsBy = this.getTransitiveDependencies(
|
||||
filePath,
|
||||
metas,
|
||||
transitiveDepByCache,
|
||||
)
|
||||
// Exclude the file itself from count (can happen in cycles)
|
||||
meta.transitiveDepByCount = transitiveDepsBy.has(filePath)
|
||||
? transitiveDepsBy.size - 1
|
||||
: transitiveDepsBy.size
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all files that depend on the given file transitively.
|
||||
* Uses DFS with cycle detection. Caching only at the top level.
|
||||
*/
|
||||
getTransitiveDependents(
|
||||
filePath: string,
|
||||
metas: Map<string, FileMeta>,
|
||||
cache: Map<string, Set<string>>,
|
||||
visited?: Set<string>,
|
||||
): Set<string> {
|
||||
// Return cached result if available (only valid for top-level calls)
|
||||
if (!visited) {
|
||||
const cached = cache.get(filePath)
|
||||
if (cached) {
|
||||
return cached
|
||||
}
|
||||
}
|
||||
|
||||
const isTopLevel = !visited
|
||||
if (!visited) {
|
||||
visited = new Set()
|
||||
}
|
||||
|
||||
// Detect cycles
|
||||
if (visited.has(filePath)) {
|
||||
return new Set()
|
||||
}
|
||||
|
||||
visited.add(filePath)
|
||||
const result = new Set<string>()
|
||||
|
||||
const meta = metas.get(filePath)
|
||||
if (!meta) {
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// Add direct dependents
|
||||
for (const dependent of meta.dependents) {
|
||||
result.add(dependent)
|
||||
|
||||
// Recursively add transitive dependents
|
||||
const transitive = this.getTransitiveDependents(
|
||||
dependent,
|
||||
metas,
|
||||
cache,
|
||||
new Set(visited),
|
||||
)
|
||||
for (const t of transitive) {
|
||||
result.add(t)
|
||||
}
|
||||
}
|
||||
|
||||
// Only cache top-level results (not intermediate results during recursion)
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all files that the given file depends on transitively.
|
||||
* Uses DFS with cycle detection. Caching only at the top level.
|
||||
*/
|
||||
getTransitiveDependencies(
|
||||
filePath: string,
|
||||
metas: Map<string, FileMeta>,
|
||||
cache: Map<string, Set<string>>,
|
||||
visited?: Set<string>,
|
||||
): Set<string> {
|
||||
// Return cached result if available (only valid for top-level calls)
|
||||
if (!visited) {
|
||||
const cached = cache.get(filePath)
|
||||
if (cached) {
|
||||
return cached
|
||||
}
|
||||
}
|
||||
|
||||
const isTopLevel = !visited
|
||||
if (!visited) {
|
||||
visited = new Set()
|
||||
}
|
||||
|
||||
// Detect cycles
|
||||
if (visited.has(filePath)) {
|
||||
return new Set()
|
||||
}
|
||||
|
||||
visited.add(filePath)
|
||||
const result = new Set<string>()
|
||||
|
||||
const meta = metas.get(filePath)
|
||||
if (!meta) {
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// Add direct dependencies
|
||||
for (const dependency of meta.dependencies) {
|
||||
result.add(dependency)
|
||||
|
||||
// Recursively add transitive dependencies
|
||||
const transitive = this.getTransitiveDependencies(
|
||||
dependency,
|
||||
metas,
|
||||
cache,
|
||||
new Set(visited),
|
||||
)
|
||||
for (const t of transitive) {
|
||||
result.add(t)
|
||||
}
|
||||
}
|
||||
|
||||
// Only cache top-level results (not intermediate results during recursion)
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
}
|
||||
|
||||
@@ -16,6 +16,7 @@ export const NodeType = {
|
||||
CLASS_DECLARATION: "class_declaration",
|
||||
INTERFACE_DECLARATION: "interface_declaration",
|
||||
TYPE_ALIAS_DECLARATION: "type_alias_declaration",
|
||||
ENUM_DECLARATION: "enum_declaration",
|
||||
|
||||
// Clauses
|
||||
IMPORT_CLAUSE: "import_clause",
|
||||
@@ -37,6 +38,11 @@ export const NodeType = {
|
||||
FIELD_DEFINITION: "field_definition",
|
||||
PROPERTY_SIGNATURE: "property_signature",
|
||||
|
||||
// Enum members
|
||||
ENUM_BODY: "enum_body",
|
||||
ENUM_ASSIGNMENT: "enum_assignment",
|
||||
PROPERTY_IDENTIFIER: "property_identifier",
|
||||
|
||||
// Parameters
|
||||
REQUIRED_PARAMETER: "required_parameter",
|
||||
OPTIONAL_PARAMETER: "optional_parameter",
|
||||
@@ -57,6 +63,9 @@ export const NodeType = {
|
||||
DEFAULT: "default",
|
||||
ACCESSIBILITY_MODIFIER: "accessibility_modifier",
|
||||
READONLY: "readonly",
|
||||
|
||||
// Decorators
|
||||
DECORATOR: "decorator",
|
||||
} as const
|
||||
|
||||
export type NodeTypeValue = (typeof NodeType)[keyof typeof NodeType]
|
||||
|
||||
@@ -1,14 +1,17 @@
|
||||
import { type Message, Ollama } from "ollama"
|
||||
import { type Message, Ollama, type Tool } from "ollama"
|
||||
import type { ILLMClient, LLMResponse } from "../../domain/services/ILLMClient.js"
|
||||
import type { ChatMessage } from "../../domain/value-objects/ChatMessage.js"
|
||||
import { createToolCall, type ToolCall } from "../../domain/value-objects/ToolCall.js"
|
||||
import type { LLMConfig } from "../../shared/constants/config.js"
|
||||
import { IpuaroError } from "../../shared/errors/IpuaroError.js"
|
||||
import { estimateTokens } from "../../shared/utils/tokens.js"
|
||||
import { parseToolCalls } from "./ResponseParser.js"
|
||||
import { getOllamaNativeTools } from "./toolDefs.js"
|
||||
|
||||
/**
|
||||
* Ollama LLM client implementation.
|
||||
* Wraps the Ollama SDK for chat completions with tool support.
|
||||
* Supports both XML-based and native Ollama tool calling.
|
||||
*/
|
||||
export class OllamaClient implements ILLMClient {
|
||||
private readonly client: Ollama
|
||||
@@ -17,6 +20,7 @@ export class OllamaClient implements ILLMClient {
|
||||
private readonly contextWindow: number
|
||||
private readonly temperature: number
|
||||
private readonly timeout: number
|
||||
private readonly useNativeTools: boolean
|
||||
private abortController: AbortController | null = null
|
||||
|
||||
constructor(config: LLMConfig) {
|
||||
@@ -26,11 +30,12 @@ export class OllamaClient implements ILLMClient {
|
||||
this.contextWindow = config.contextWindow
|
||||
this.temperature = config.temperature
|
||||
this.timeout = config.timeout
|
||||
this.useNativeTools = config.useNativeTools ?? false
|
||||
}
|
||||
|
||||
/**
|
||||
* Send messages to LLM and get response.
|
||||
* Tool definitions should be included in the system prompt as XML format.
|
||||
* Supports both XML-based tool calling and native Ollama tools.
|
||||
*/
|
||||
async chat(messages: ChatMessage[]): Promise<LLMResponse> {
|
||||
const startTime = Date.now()
|
||||
@@ -39,6 +44,28 @@ export class OllamaClient implements ILLMClient {
|
||||
try {
|
||||
const ollamaMessages = this.convertMessages(messages)
|
||||
|
||||
if (this.useNativeTools) {
|
||||
return await this.chatWithNativeTools(ollamaMessages, startTime)
|
||||
}
|
||||
|
||||
return await this.chatWithXMLTools(ollamaMessages, startTime)
|
||||
} catch (error) {
|
||||
if (error instanceof Error && error.name === "AbortError") {
|
||||
throw IpuaroError.llm("Request was aborted")
|
||||
}
|
||||
throw this.handleError(error)
|
||||
} finally {
|
||||
this.abortController = null
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Chat using XML-based tool calling (legacy mode).
|
||||
*/
|
||||
private async chatWithXMLTools(
|
||||
ollamaMessages: Message[],
|
||||
startTime: number,
|
||||
): Promise<LLMResponse> {
|
||||
const response = await this.client.chat({
|
||||
model: this.model,
|
||||
messages: ollamaMessages,
|
||||
@@ -59,14 +86,102 @@ export class OllamaClient implements ILLMClient {
|
||||
truncated: false,
|
||||
stopReason: this.determineStopReason(response, parsed.toolCalls),
|
||||
}
|
||||
} catch (error) {
|
||||
if (error instanceof Error && error.name === "AbortError") {
|
||||
throw IpuaroError.llm("Request was aborted")
|
||||
}
|
||||
throw this.handleError(error)
|
||||
} finally {
|
||||
this.abortController = null
|
||||
|
||||
/**
|
||||
* Chat using native Ollama tool calling.
|
||||
*/
|
||||
private async chatWithNativeTools(
|
||||
ollamaMessages: Message[],
|
||||
startTime: number,
|
||||
): Promise<LLMResponse> {
|
||||
const nativeTools = getOllamaNativeTools() as Tool[]
|
||||
|
||||
const response = await this.client.chat({
|
||||
model: this.model,
|
||||
messages: ollamaMessages,
|
||||
tools: nativeTools,
|
||||
options: {
|
||||
temperature: this.temperature,
|
||||
},
|
||||
stream: false,
|
||||
})
|
||||
|
||||
const timeMs = Date.now() - startTime
|
||||
let toolCalls = this.parseNativeToolCalls(response.message.tool_calls)
|
||||
|
||||
// Fallback: some models return tool calls as JSON in content
|
||||
if (toolCalls.length === 0 && response.message.content) {
|
||||
toolCalls = this.parseToolCallsFromContent(response.message.content)
|
||||
}
|
||||
|
||||
const content = toolCalls.length > 0 ? "" : response.message.content || ""
|
||||
|
||||
return {
|
||||
content,
|
||||
toolCalls,
|
||||
tokens: response.eval_count ?? estimateTokens(response.message.content || ""),
|
||||
timeMs,
|
||||
truncated: false,
|
||||
stopReason: toolCalls.length > 0 ? "tool_use" : "end",
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse native Ollama tool calls into ToolCall format.
|
||||
*/
|
||||
private parseNativeToolCalls(
|
||||
nativeToolCalls?: { function: { name: string; arguments: Record<string, unknown> } }[],
|
||||
): ToolCall[] {
|
||||
if (!nativeToolCalls || nativeToolCalls.length === 0) {
|
||||
return []
|
||||
}
|
||||
|
||||
return nativeToolCalls.map((tc, index) =>
|
||||
createToolCall(
|
||||
`native_${String(Date.now())}_${String(index)}`,
|
||||
tc.function.name,
|
||||
tc.function.arguments,
|
||||
),
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse tool calls from content (fallback for models that return JSON in content).
|
||||
* Supports format: {"name": "tool_name", "arguments": {...}}
|
||||
*/
|
||||
private parseToolCallsFromContent(content: string): ToolCall[] {
|
||||
const toolCalls: ToolCall[] = []
|
||||
|
||||
// Try to parse JSON objects from content
|
||||
const jsonRegex = /\{[\s\S]*?"name"[\s\S]*?"arguments"[\s\S]*?\}/g
|
||||
const matches = content.match(jsonRegex)
|
||||
|
||||
if (!matches) {
|
||||
return toolCalls
|
||||
}
|
||||
|
||||
for (const match of matches) {
|
||||
try {
|
||||
const parsed = JSON.parse(match) as {
|
||||
name?: string
|
||||
arguments?: Record<string, unknown>
|
||||
}
|
||||
if (parsed.name && typeof parsed.name === "string") {
|
||||
toolCalls.push(
|
||||
createToolCall(
|
||||
`json_${String(Date.now())}_${String(toolCalls.length)}`,
|
||||
parsed.name,
|
||||
parsed.arguments ?? {},
|
||||
),
|
||||
)
|
||||
}
|
||||
} catch {
|
||||
// Invalid JSON, skip
|
||||
}
|
||||
}
|
||||
|
||||
return toolCalls
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -58,9 +58,50 @@ const VALID_TOOL_NAMES = new Set([
|
||||
"run_tests",
|
||||
])
|
||||
|
||||
/**
|
||||
* Tool name aliases for common LLM typos/variations.
|
||||
* Maps incorrect names to correct tool names.
|
||||
*/
|
||||
const TOOL_ALIASES: Record<string, string> = {
|
||||
// get_lines aliases
|
||||
get_functions: "get_lines",
|
||||
read_file: "get_lines",
|
||||
read_lines: "get_lines",
|
||||
get_file: "get_lines",
|
||||
read: "get_lines",
|
||||
// get_function aliases
|
||||
getfunction: "get_function",
|
||||
// get_structure aliases
|
||||
list_files: "get_structure",
|
||||
get_files: "get_structure",
|
||||
list_structure: "get_structure",
|
||||
get_project_structure: "get_structure",
|
||||
// get_todos aliases
|
||||
find_todos: "get_todos",
|
||||
list_todos: "get_todos",
|
||||
// find_references aliases
|
||||
get_references: "find_references",
|
||||
// find_definition aliases
|
||||
get_definition: "find_definition",
|
||||
// edit_lines aliases
|
||||
edit_file: "edit_lines",
|
||||
modify_file: "edit_lines",
|
||||
update_file: "edit_lines",
|
||||
}
|
||||
|
||||
/**
|
||||
* Normalize tool name using aliases.
|
||||
*/
|
||||
function normalizeToolName(name: string): string {
|
||||
const lowerName = name.toLowerCase()
|
||||
return TOOL_ALIASES[lowerName] ?? name
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse tool calls from LLM response text.
|
||||
* Supports XML format: <tool_call name="get_lines"><path>src/index.ts</path></tool_call>
|
||||
* Supports both XML and JSON formats:
|
||||
* - XML: <tool_call name="get_lines"><path>src/index.ts</path></tool_call>
|
||||
* - JSON: {"name": "get_lines", "arguments": {"path": "src/index.ts"}}
|
||||
* Validates tool names and provides helpful error messages.
|
||||
*/
|
||||
export function parseToolCalls(response: string): ParsedResponse {
|
||||
@@ -68,14 +109,18 @@ export function parseToolCalls(response: string): ParsedResponse {
|
||||
const parseErrors: string[] = []
|
||||
let content = response
|
||||
|
||||
const matches = [...response.matchAll(TOOL_CALL_REGEX)]
|
||||
// First, try XML format
|
||||
const xmlMatches = [...response.matchAll(TOOL_CALL_REGEX)]
|
||||
|
||||
for (const match of matches) {
|
||||
const [fullMatch, toolName, paramsXml] = match
|
||||
for (const match of xmlMatches) {
|
||||
const [fullMatch, rawToolName, paramsXml] = match
|
||||
|
||||
// Normalize tool name (handle common LLM typos/variations)
|
||||
const toolName = normalizeToolName(rawToolName)
|
||||
|
||||
if (!VALID_TOOL_NAMES.has(toolName)) {
|
||||
parseErrors.push(
|
||||
`Unknown tool "${toolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
|
||||
`Unknown tool "${rawToolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
|
||||
)
|
||||
continue
|
||||
}
|
||||
@@ -91,7 +136,19 @@ export function parseToolCalls(response: string): ParsedResponse {
|
||||
content = content.replace(fullMatch, "")
|
||||
} catch (error) {
|
||||
const errorMsg = error instanceof Error ? error.message : String(error)
|
||||
parseErrors.push(`Failed to parse tool call "${toolName}": ${errorMsg}`)
|
||||
parseErrors.push(`Failed to parse tool call "${rawToolName}": ${errorMsg}`)
|
||||
}
|
||||
}
|
||||
|
||||
// If no XML tool calls found, try JSON format as fallback
|
||||
if (toolCalls.length === 0) {
|
||||
const jsonResult = parseJsonToolCalls(response)
|
||||
toolCalls.push(...jsonResult.toolCalls)
|
||||
parseErrors.push(...jsonResult.parseErrors)
|
||||
|
||||
// Remove JSON tool calls from content
|
||||
for (const jsonMatch of jsonResult.matchedStrings) {
|
||||
content = content.replace(jsonMatch, "")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -105,6 +162,59 @@ export function parseToolCalls(response: string): ParsedResponse {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* JSON tool call format pattern.
|
||||
* Matches: {"name": "tool_name", "arguments": {...}}
|
||||
*/
|
||||
const JSON_TOOL_CALL_REGEX =
|
||||
/\{\s*"name"\s*:\s*"([^"]+)"\s*,\s*"arguments"\s*:\s*(\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\})\s*\}/g
|
||||
|
||||
/**
|
||||
* Parse tool calls from JSON format in response.
|
||||
* This is a fallback for LLMs that prefer JSON over XML.
|
||||
*/
|
||||
function parseJsonToolCalls(response: string): {
|
||||
toolCalls: ToolCall[]
|
||||
parseErrors: string[]
|
||||
matchedStrings: string[]
|
||||
} {
|
||||
const toolCalls: ToolCall[] = []
|
||||
const parseErrors: string[] = []
|
||||
const matchedStrings: string[] = []
|
||||
|
||||
const matches = [...response.matchAll(JSON_TOOL_CALL_REGEX)]
|
||||
|
||||
for (const match of matches) {
|
||||
const [fullMatch, rawToolName, argsJson] = match
|
||||
matchedStrings.push(fullMatch)
|
||||
|
||||
// Normalize tool name
|
||||
const toolName = normalizeToolName(rawToolName)
|
||||
|
||||
if (!VALID_TOOL_NAMES.has(toolName)) {
|
||||
parseErrors.push(
|
||||
`Unknown tool "${rawToolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
|
||||
)
|
||||
continue
|
||||
}
|
||||
|
||||
try {
|
||||
const args = JSON.parse(argsJson) as Record<string, unknown>
|
||||
const toolCall = createToolCall(
|
||||
`json_${String(Date.now())}_${String(toolCalls.length)}`,
|
||||
toolName,
|
||||
args,
|
||||
)
|
||||
toolCalls.push(toolCall)
|
||||
} catch (error) {
|
||||
const errorMsg = error instanceof Error ? error.message : String(error)
|
||||
parseErrors.push(`Failed to parse JSON tool call "${rawToolName}": ${errorMsg}`)
|
||||
}
|
||||
}
|
||||
|
||||
return { toolCalls, parseErrors, matchedStrings }
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse parameters from XML content.
|
||||
*/
|
||||
|
||||
@@ -16,104 +16,124 @@ export interface ProjectStructure {
|
||||
*/
|
||||
export interface BuildContextOptions {
|
||||
includeSignatures?: boolean
|
||||
includeDepsGraph?: boolean
|
||||
includeCircularDeps?: boolean
|
||||
includeHighImpactFiles?: boolean
|
||||
circularDeps?: string[][]
|
||||
}
|
||||
|
||||
/**
|
||||
* System prompt for the ipuaro AI agent.
|
||||
*/
|
||||
export const SYSTEM_PROMPT = `You are ipuaro, a local AI code assistant specialized in helping developers understand and modify their codebase. You operate within a single project directory and have access to powerful tools for reading, searching, analyzing, and editing code.
|
||||
export const SYSTEM_PROMPT = `You are ipuaro, a local AI code assistant with tools for reading, searching, analyzing, and editing code.
|
||||
|
||||
## Core Principles
|
||||
## When to Use Tools
|
||||
|
||||
1. **Lazy Loading**: You don't have the full code in context. Use tools to fetch exactly what you need.
|
||||
2. **Precision**: Always verify file paths and line numbers before making changes.
|
||||
3. **Safety**: Confirm destructive operations. Never execute dangerous commands.
|
||||
4. **Efficiency**: Minimize context usage. Request only necessary code sections.
|
||||
**Use tools** when the user asks about:
|
||||
- Code content (files, functions, classes)
|
||||
- Project structure
|
||||
- TODOs, complexity, dependencies
|
||||
- Git status, diffs, commits
|
||||
- Running commands or tests
|
||||
|
||||
## Tool Calling Format
|
||||
**Do NOT use tools** for:
|
||||
- Greetings ("Hello", "Hi", "Thanks")
|
||||
- General questions not about this codebase
|
||||
- Clarifying questions back to the user
|
||||
|
||||
When you need to use a tool, format your call as XML:
|
||||
## MANDATORY: Tools for Code Questions
|
||||
|
||||
<tool_call name="tool_name">
|
||||
<param_name>value</param_name>
|
||||
<another_param>value</another_param>
|
||||
</tool_call>
|
||||
**CRITICAL:** You have ZERO code in your context. To answer ANY question about code, you MUST first call a tool.
|
||||
|
||||
You can call multiple tools in one response. Always wait for tool results before making conclusions.
|
||||
|
||||
**Examples:**
|
||||
**WRONG:**
|
||||
User: "What's in src/index.ts?"
|
||||
Assistant: "The file likely contains..." ← WRONG! Call a tool!
|
||||
|
||||
**CORRECT:**
|
||||
User: "What's in src/index.ts?"
|
||||
<tool_call name="get_lines">
|
||||
<path>src/index.ts</path>
|
||||
<start>1</start>
|
||||
<end>50</end>
|
||||
<path>src/index.ts</path>
|
||||
</tool_call>
|
||||
|
||||
<tool_call name="edit_lines">
|
||||
<path>src/utils.ts</path>
|
||||
<start>10</start>
|
||||
<end>15</end>
|
||||
<content>const newCode = "hello";</content>
|
||||
## Tool Call Format
|
||||
|
||||
Output this XML format. Do NOT explain before calling - just output the XML:
|
||||
|
||||
<tool_call name="TOOL_NAME">
|
||||
<param1>value1</param1>
|
||||
<param2>value2</param2>
|
||||
</tool_call>
|
||||
|
||||
<tool_call name="find_references">
|
||||
<symbol>getUserById</symbol>
|
||||
## Example Interactions
|
||||
|
||||
**Example 1 - Reading a file:**
|
||||
User: "Show me the main function in src/app.ts"
|
||||
<tool_call name="get_function">
|
||||
<path>src/app.ts</path>
|
||||
<name>main</name>
|
||||
</tool_call>
|
||||
|
||||
**Example 2 - Finding TODOs:**
|
||||
User: "Are there any TODO comments?"
|
||||
<tool_call name="get_todos">
|
||||
</tool_call>
|
||||
|
||||
**Example 3 - Project structure:**
|
||||
User: "What files are in this project?"
|
||||
<tool_call name="get_structure">
|
||||
<path>.</path>
|
||||
</tool_call>
|
||||
|
||||
## Available Tools
|
||||
|
||||
### Reading Tools
|
||||
- \`get_lines(path, start?, end?)\`: Get specific lines from a file
|
||||
- \`get_function(path, name)\`: Get a function by name
|
||||
- \`get_class(path, name)\`: Get a class by name
|
||||
- \`get_structure(path?, depth?)\`: Get project directory structure
|
||||
### Reading
|
||||
- get_lines(path, start?, end?) - Read file lines
|
||||
- get_function(path, name) - Get function by name
|
||||
- get_class(path, name) - Get class by name
|
||||
- get_structure(path?, depth?) - List project files
|
||||
|
||||
### Editing Tools (require confirmation)
|
||||
- \`edit_lines(path, start, end, content)\`: Replace specific lines in a file
|
||||
- \`create_file(path, content)\`: Create a new file
|
||||
- \`delete_file(path)\`: Delete a file
|
||||
### Analysis
|
||||
- get_todos(path?, type?) - Find TODO/FIXME comments
|
||||
- get_dependencies(path) - What this file imports
|
||||
- get_dependents(path) - What imports this file
|
||||
- get_complexity(path?) - Code complexity metrics
|
||||
- find_references(symbol) - Find all usages of a symbol
|
||||
- find_definition(symbol) - Find where symbol is defined
|
||||
|
||||
### Search Tools
|
||||
- \`find_references(symbol, path?)\`: Find all usages of a symbol
|
||||
- \`find_definition(symbol)\`: Find where a symbol is defined
|
||||
### Editing (requires confirmation)
|
||||
- edit_lines(path, start, end, content) - Modify file lines
|
||||
- create_file(path, content) - Create new file
|
||||
- delete_file(path) - Delete a file
|
||||
|
||||
### Analysis Tools
|
||||
- \`get_dependencies(path)\`: Get files this file imports
|
||||
- \`get_dependents(path)\`: Get files that import this file
|
||||
- \`get_complexity(path?, limit?)\`: Get complexity metrics
|
||||
- \`get_todos(path?, type?)\`: Find TODO/FIXME comments
|
||||
### Git
|
||||
- git_status() - Repository status
|
||||
- git_diff(path?, staged?) - Show changes
|
||||
- git_commit(message, files?) - Create commit
|
||||
|
||||
### Git Tools
|
||||
- \`git_status()\`: Get repository status
|
||||
- \`git_diff(path?, staged?)\`: Get uncommitted changes
|
||||
- \`git_commit(message, files?)\`: Create a commit (requires confirmation)
|
||||
### Commands
|
||||
- run_command(command, timeout?) - Execute shell command
|
||||
- run_tests(path?, filter?) - Run test suite
|
||||
|
||||
### Run Tools
|
||||
- \`run_command(command, timeout?)\`: Execute a shell command (security checked)
|
||||
- \`run_tests(path?, filter?, watch?)\`: Run the test suite
|
||||
## Rules
|
||||
|
||||
## Response Guidelines
|
||||
1. **ALWAYS call a tool first** when asked about code - you cannot see any files
|
||||
2. **Output XML directly** - don't say "I will use..." just output the tool call
|
||||
3. **Wait for results** before making conclusions
|
||||
4. **Be concise** in your responses
|
||||
5. **Verify before editing** - always read code before modifying it
|
||||
6. **Stay safe** - never execute destructive commands without user confirmation`
|
||||
|
||||
1. **Be concise**: Don't repeat information already in context.
|
||||
2. **Show your work**: Explain what tools you're using and why.
|
||||
3. **Verify before editing**: Always read the target code before modifying it.
|
||||
4. **Handle errors gracefully**: If a tool fails, explain what went wrong and suggest alternatives.
|
||||
/**
|
||||
* Tool usage reminder - appended to messages to reinforce tool usage.
|
||||
* This is added as the last system message before LLM call.
|
||||
*/
|
||||
export const TOOL_REMINDER = `⚠️ REMINDER: To answer this question, you MUST use a tool first.
|
||||
Output the <tool_call> XML directly. Do NOT describe what you will do - just call the tool.
|
||||
|
||||
## Code Editing Rules
|
||||
|
||||
1. Always use \`get_lines\` or \`get_function\` before \`edit_lines\`.
|
||||
2. Provide exact line numbers for edits.
|
||||
3. For large changes, break into multiple small edits.
|
||||
4. After editing, suggest running tests if available.
|
||||
|
||||
## Safety Rules
|
||||
|
||||
1. Never execute commands that could harm the system.
|
||||
2. Never expose sensitive data (API keys, passwords).
|
||||
3. Always confirm file deletions and destructive git operations.
|
||||
4. Stay within the project directory.
|
||||
|
||||
When you need to perform an action, use the appropriate tool. Think step by step about what information you need and which tools will provide it most efficiently.`
|
||||
Example - if asked about a file, output:
|
||||
<tool_call name="get_lines">
|
||||
<path>the/file/path.ts</path>
|
||||
</tool_call>`
|
||||
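// A minimal sketch of how the reminder might be wired in (assumption: the agent
// appends it as the final system message right before calling the LLM; the
// `messages` array and `chat` call below are hypothetical names, the actual
// wiring is not part of this diff):
//
//     messages.push({ role: "system", content: TOOL_REMINDER })
//     const response = await llm.chat(messages)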
|
||||
/**
|
||||
* Build initial context from project structure and AST metadata.
|
||||
@@ -127,11 +147,35 @@ export function buildInitialContext(
|
||||
): string {
|
||||
const sections: string[] = []
|
||||
const includeSignatures = options?.includeSignatures ?? true
|
||||
const includeDepsGraph = options?.includeDepsGraph ?? true
|
||||
const includeCircularDeps = options?.includeCircularDeps ?? true
|
||||
const includeHighImpactFiles = options?.includeHighImpactFiles ?? true
|
||||
|
||||
sections.push(formatProjectHeader(structure))
|
||||
sections.push(formatDirectoryTree(structure))
|
||||
sections.push(formatFileOverview(asts, metas, includeSignatures))
|
||||
|
||||
if (includeDepsGraph && metas && metas.size > 0) {
|
||||
const depsGraph = formatDependencyGraph(metas)
|
||||
if (depsGraph) {
|
||||
sections.push(depsGraph)
|
||||
}
|
||||
}
|
||||
|
||||
if (includeHighImpactFiles && metas && metas.size > 0) {
|
||||
const highImpactSection = formatHighImpactFiles(metas)
|
||||
if (highImpactSection) {
|
||||
sections.push(highImpactSection)
|
||||
}
|
||||
}
|
||||
|
||||
if (includeCircularDeps && options?.circularDeps && options.circularDeps.length > 0) {
|
||||
const circularDepsSection = formatCircularDeps(options.circularDeps)
|
||||
if (circularDepsSection) {
|
||||
sections.push(circularDepsSection)
|
||||
}
|
||||
}
|
||||
|
||||
return sections.join("\n\n")
|
||||
}
|
||||
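// A hedged usage sketch, assuming the parameters are (structure, asts, metas, options)
// as the body above suggests; all option flags default to true, and `structure`,
// `asts`, `metas` and `cycles` are assumed inputs not defined in this diff:
//
//     const fullContext = buildInitialContext(structure, asts, metas, { circularDeps: cycles })
//     const leanContext = buildInitialContext(structure, asts, metas, {
//         includeDepsGraph: false,
//         includeHighImpactFiles: false,
//         includeCircularDeps: false,
//     })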
|
||||
@@ -187,10 +231,22 @@ function formatFileOverview(
|
||||
return lines.join("\n")
|
||||
}
|
||||
|
||||
/**
|
||||
* Format decorators as a prefix string.
|
||||
* Example: "@Get(':id') @Auth() "
|
||||
*/
|
||||
function formatDecoratorsPrefix(decorators: string[] | undefined): string {
|
||||
if (!decorators || decorators.length === 0) {
|
||||
return ""
|
||||
}
|
||||
return `${decorators.join(" ")} `
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a function signature.
|
||||
*/
|
||||
function formatFunctionSignature(fn: FileAST["functions"][0]): string {
|
||||
const decoratorsPrefix = formatDecoratorsPrefix(fn.decorators)
|
||||
const asyncPrefix = fn.isAsync ? "async " : ""
|
||||
const params = fn.params
|
||||
.map((p) => {
|
||||
@@ -200,7 +256,7 @@ function formatFunctionSignature(fn: FileAST["functions"][0]): string {
|
||||
})
|
||||
.join(", ")
|
||||
const returnType = fn.returnType ? `: ${fn.returnType}` : ""
|
||||
return `${asyncPrefix}${fn.name}(${params})${returnType}`
|
||||
return `${decoratorsPrefix}${asyncPrefix}${fn.name}(${params})${returnType}`
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -240,6 +296,37 @@ function formatTypeAliasSignature(type: FileAST["typeAliases"][0]): string {
|
||||
return `type ${type.name} = ${definition}`
|
||||
}
|
||||
|
||||
/**
|
||||
* Format an enum signature with members and values.
|
||||
* Example: "enum Status { Active=1, Inactive=0, Pending=2 }"
|
||||
* Example: "const enum Role { Admin="admin", User="user" }"
|
||||
*/
|
||||
function formatEnumSignature(enumInfo: FileAST["enums"][0]): string {
|
||||
const constPrefix = enumInfo.isConst ? "const " : ""
|
||||
|
||||
if (enumInfo.members.length === 0) {
|
||||
return `${constPrefix}enum ${enumInfo.name}`
|
||||
}
|
||||
|
||||
const membersStr = enumInfo.members
|
||||
.map((m) => {
|
||||
if (m.value === undefined) {
|
||||
return m.name
|
||||
}
|
||||
const valueStr = typeof m.value === "string" ? `"${m.value}"` : String(m.value)
|
||||
return `${m.name}=${valueStr}`
|
||||
})
|
||||
.join(", ")
|
||||
|
||||
const result = `${constPrefix}enum ${enumInfo.name} { ${membersStr} }`
|
||||
|
||||
if (result.length > 100) {
|
||||
return truncateDefinition(result, 100)
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
/**
|
||||
* Truncate long type definitions for display.
|
||||
*/
|
||||
@@ -279,9 +366,10 @@ function formatFileSummary(
|
||||
|
||||
if (ast.classes.length > 0) {
|
||||
for (const cls of ast.classes) {
|
||||
const decoratorsPrefix = formatDecoratorsPrefix(cls.decorators)
|
||||
const ext = cls.extends ? ` extends ${cls.extends}` : ""
|
||||
const impl = cls.implements.length > 0 ? ` implements ${cls.implements.join(", ")}` : ""
|
||||
lines.push(`- class ${cls.name}${ext}${impl}`)
|
||||
lines.push(`- ${decoratorsPrefix}class ${cls.name}${ext}${impl}`)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -297,6 +385,12 @@ function formatFileSummary(
|
||||
}
|
||||
}
|
||||
|
||||
if (ast.enums && ast.enums.length > 0) {
|
||||
for (const enumInfo of ast.enums) {
|
||||
lines.push(`- ${formatEnumSignature(enumInfo)}`)
|
||||
}
|
||||
}
|
||||
|
||||
if (lines.length === 1) {
|
||||
return `- ${path}${flags}`
|
||||
}
|
||||
@@ -330,6 +424,11 @@ function formatFileSummaryCompact(path: string, ast: FileAST, flags: string): st
|
||||
parts.push(`type: ${names}`)
|
||||
}
|
||||
|
||||
if (ast.enums && ast.enums.length > 0) {
|
||||
const names = ast.enums.map((e) => e.name).join(", ")
|
||||
parts.push(`enum: ${names}`)
|
||||
}
|
||||
|
||||
const summary = parts.length > 0 ? ` [${parts.join(" | ")}]` : ""
|
||||
return `- ${path}${summary}${flags}`
|
||||
}
|
||||
@@ -359,6 +458,220 @@ function formatFileFlags(meta?: FileMeta): string {
|
||||
return flags.length > 0 ? ` (${flags.join(", ")})` : ""
|
||||
}
|
||||
|
||||
/**
|
||||
* Shorten a file path for display in dependency graph.
|
||||
* Removes common prefixes like "src/" and file extensions.
|
||||
*/
|
||||
function shortenPath(path: string): string {
|
||||
let short = path
|
||||
if (short.startsWith("src/")) {
|
||||
short = short.slice(4)
|
||||
}
|
||||
// Remove common extensions
|
||||
short = short.replace(/\.(ts|tsx|js|jsx)$/, "")
|
||||
// Remove /index suffix
|
||||
short = short.replace(/\/index$/, "")
|
||||
return short
|
||||
}
|
||||
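// Examples (follow directly from the replacements above):
//   shortenPath("src/services/user/index.ts") === "services/user"
//   shortenPath("src/utils/validation.ts")    === "utils/validation"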
|
||||
/**
|
||||
* Format a single dependency graph entry.
|
||||
* Format: "path: → dep1, dep2 ← dependent1, dependent2"
|
||||
*/
|
||||
function formatDepsEntry(path: string, dependencies: string[], dependents: string[]): string {
|
||||
const parts: string[] = []
|
||||
const shortPath = shortenPath(path)
|
||||
|
||||
if (dependencies.length > 0) {
|
||||
const deps = dependencies.map(shortenPath).join(", ")
|
||||
parts.push(`→ ${deps}`)
|
||||
}
|
||||
|
||||
if (dependents.length > 0) {
|
||||
const deps = dependents.map(shortenPath).join(", ")
|
||||
parts.push(`← ${deps}`)
|
||||
}
|
||||
|
||||
if (parts.length === 0) {
|
||||
return ""
|
||||
}
|
||||
|
||||
return `${shortPath}: ${parts.join(" ")}`
|
||||
}
|
||||
|
||||
/**
|
||||
* Format dependency graph for all files.
|
||||
* Shows hub files first, then files with dependencies/dependents.
|
||||
*
|
||||
* Format:
|
||||
* ## Dependency Graph
|
||||
* services/user: → types/user, utils/validation ← controllers/user
|
||||
* services/auth: → services/user, utils/jwt ← controllers/auth
|
||||
*/
|
||||
export function formatDependencyGraph(metas: Map<string, FileMeta>): string | null {
|
||||
if (metas.size === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
const entries: { path: string; deps: string[]; dependents: string[]; isHub: boolean }[] = []
|
||||
|
||||
for (const [path, meta] of metas) {
|
||||
// Only include files that have connections
|
||||
if (meta.dependencies.length > 0 || meta.dependents.length > 0) {
|
||||
entries.push({
|
||||
path,
|
||||
deps: meta.dependencies,
|
||||
dependents: meta.dependents,
|
||||
isHub: meta.isHub,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
if (entries.length === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
// Sort: hubs first, then by total connections (desc), then by path
|
||||
entries.sort((a, b) => {
|
||||
if (a.isHub !== b.isHub) {
|
||||
return a.isHub ? -1 : 1
|
||||
}
|
||||
const aTotal = a.deps.length + a.dependents.length
|
||||
const bTotal = b.deps.length + b.dependents.length
|
||||
if (aTotal !== bTotal) {
|
||||
return bTotal - aTotal
|
||||
}
|
||||
return a.path.localeCompare(b.path)
|
||||
})
|
||||
|
||||
const lines: string[] = ["## Dependency Graph", ""]
|
||||
|
||||
for (const entry of entries) {
|
||||
const line = formatDepsEntry(entry.path, entry.deps, entry.dependents)
|
||||
if (line) {
|
||||
lines.push(line)
|
||||
}
|
||||
}
|
||||
|
||||
// Return null if only header (no actual entries)
|
||||
if (lines.length <= 2) {
|
||||
return null
|
||||
}
|
||||
|
||||
return lines.join("\n")
|
||||
}
|
||||
|
||||
/**
|
||||
* Format circular dependencies for display in context.
|
||||
* Shows warning section with cycle chains.
|
||||
*
|
||||
* Format:
|
||||
* ## ⚠️ Circular Dependencies
|
||||
* - services/user → services/auth → services/user
|
||||
* - utils/a → utils/b → utils/c → utils/a
|
||||
*/
|
||||
export function formatCircularDeps(cycles: string[][]): string | null {
|
||||
if (!cycles || cycles.length === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
const lines: string[] = ["## ⚠️ Circular Dependencies", ""]
|
||||
|
||||
for (const cycle of cycles) {
|
||||
if (cycle.length === 0) {
|
||||
continue
|
||||
}
|
||||
const formattedCycle = cycle.map(shortenPath).join(" → ")
|
||||
lines.push(`- ${formattedCycle}`)
|
||||
}
|
||||
|
||||
// Return null if only header (no actual cycles)
|
||||
if (lines.length <= 2) {
|
||||
return null
|
||||
}
|
||||
|
||||
return lines.join("\n")
|
||||
}
|
||||
|
||||
/**
|
||||
* Format high impact files table for display in context.
|
||||
* Shows files with highest impact scores (most dependents).
|
||||
* Includes both direct and transitive dependent counts.
|
||||
*
|
||||
* Format:
|
||||
* ## High Impact Files
|
||||
* | File | Impact | Direct | Transitive |
|
||||
* |------|--------|--------|------------|
|
||||
* | src/utils/validation.ts | 67% | 12 | 24 |
|
||||
*
|
||||
* @param metas - Map of file paths to their metadata
|
||||
* @param limit - Maximum number of files to show (default: 10)
|
||||
* @param minImpact - Minimum impact score to include (default: 5)
|
||||
*/
|
||||
export function formatHighImpactFiles(
|
||||
metas: Map<string, FileMeta>,
|
||||
limit = 10,
|
||||
minImpact = 5,
|
||||
): string | null {
|
||||
if (metas.size === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
// Collect files with impact score >= minImpact
|
||||
const impactFiles: {
|
||||
path: string
|
||||
impact: number
|
||||
dependents: number
|
||||
transitive: number
|
||||
}[] = []
|
||||
|
||||
for (const [path, meta] of metas) {
|
||||
if (meta.impactScore >= minImpact) {
|
||||
impactFiles.push({
|
||||
path,
|
||||
impact: meta.impactScore,
|
||||
dependents: meta.dependents.length,
|
||||
transitive: meta.transitiveDepCount,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
if (impactFiles.length === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
// Sort by transitive count descending, then by impact, then by path
|
||||
impactFiles.sort((a, b) => {
|
||||
if (a.transitive !== b.transitive) {
|
||||
return b.transitive - a.transitive
|
||||
}
|
||||
if (a.impact !== b.impact) {
|
||||
return b.impact - a.impact
|
||||
}
|
||||
return a.path.localeCompare(b.path)
|
||||
})
|
||||
|
||||
// Take top N files
|
||||
const topFiles = impactFiles.slice(0, limit)
|
||||
|
||||
const lines: string[] = [
|
||||
"## High Impact Files",
|
||||
"",
|
||||
"| File | Impact | Direct | Transitive |",
|
||||
"|------|--------|--------|------------|",
|
||||
]
|
||||
|
||||
for (const file of topFiles) {
|
||||
const shortPath = shortenPath(file.path)
|
||||
const impact = `${String(file.impact)}%`
|
||||
const direct = String(file.dependents)
|
||||
const transitive = String(file.transitive)
|
||||
lines.push(`| ${shortPath} | ${impact} | ${direct} | ${transitive} |`)
|
||||
}
|
||||
|
||||
return lines.join("\n")
|
||||
}
|
||||
|
||||
/**
|
||||
* Format line range for display.
|
||||
*/
|
||||
|
||||
@@ -509,3 +509,87 @@ export function getToolsByCategory(category: string): ToolDef[] {
|
||||
return []
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* =============================================================================
|
||||
* Native Ollama Tools Format
|
||||
* =============================================================================
|
||||
*/
|
||||
|
||||
/**
|
||||
* Ollama native tool definition format.
|
||||
*/
|
||||
export interface OllamaTool {
|
||||
type: "function"
|
||||
function: {
|
||||
name: string
|
||||
description: string
|
||||
parameters: {
|
||||
type: "object"
|
||||
properties: Record<string, OllamaToolProperty>
|
||||
required: string[]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
interface OllamaToolProperty {
|
||||
type: string
|
||||
description: string
|
||||
enum?: string[]
|
||||
items?: { type: string }
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert ToolDef to Ollama native format.
|
||||
*/
|
||||
function convertToOllamaTool(tool: ToolDef): OllamaTool {
|
||||
const properties: Record<string, OllamaToolProperty> = {}
|
||||
const required: string[] = []
|
||||
|
||||
for (const param of tool.parameters) {
|
||||
const prop: OllamaToolProperty = {
|
||||
type: param.type === "array" ? "array" : param.type,
|
||||
description: param.description,
|
||||
}
|
||||
|
||||
if (param.enum) {
|
||||
prop.enum = param.enum
|
||||
}
|
||||
|
||||
if (param.type === "array") {
|
||||
prop.items = { type: "string" }
|
||||
}
|
||||
|
||||
properties[param.name] = prop
|
||||
|
||||
if (param.required) {
|
||||
required.push(param.name)
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
type: "function",
|
||||
function: {
|
||||
name: tool.name,
|
||||
description: tool.description,
|
||||
parameters: {
|
||||
type: "object",
|
||||
properties,
|
||||
required,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* All tools in Ollama native format.
|
||||
* Used when useNativeTools is enabled.
|
||||
*/
|
||||
export const OLLAMA_NATIVE_TOOLS: OllamaTool[] = ALL_TOOLS.map(convertToOllamaTool)
|
||||
|
||||
/**
|
||||
* Get native tool definitions for Ollama.
|
||||
*/
|
||||
export function getOllamaNativeTools(): OllamaTool[] {
|
||||
return OLLAMA_NATIVE_TOOLS
|
||||
}
|
||||
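// Illustration only: a hypothetical tool with one required string parameter and one
// optional array parameter converts to roughly the shape below. The tool name and
// descriptions are made-up examples, not real tool definitions.
const EXAMPLE_NATIVE_TOOL: OllamaTool = {
    type: "function",
    function: {
        name: "example_tool",
        description: "Example of the converted shape",
        parameters: {
            type: "object",
            properties: {
                path: { type: "string", description: "File path relative to the project root" },
                tags: { type: "array", description: "Optional tags", items: { type: "string" } },
            },
            required: ["path"],
        },
    },
}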
|
||||
@@ -20,6 +20,7 @@ export const LLMConfigSchema = z.object({
|
||||
temperature: z.number().min(0).max(2).default(0.1),
|
||||
host: z.string().default("http://localhost:11434"),
|
||||
timeout: z.number().int().positive().default(120_000),
|
||||
useNativeTools: z.boolean().default(false),
|
||||
})
|
||||
|
||||
/**
|
||||
@@ -115,6 +116,9 @@ export const ContextConfigSchema = z.object({
|
||||
autoCompressAt: z.number().min(0).max(1).default(0.8),
|
||||
compressionMethod: z.enum(["llm-summary", "truncate"]).default("llm-summary"),
|
||||
includeSignatures: z.boolean().default(true),
|
||||
includeDepsGraph: z.boolean().default(true),
|
||||
includeCircularDeps: z.boolean().default(true),
|
||||
includeHighImpactFiles: z.boolean().default(true),
|
||||
})
|
||||
|
||||
/**
|
||||
|
||||
1506 packages/ipuaro/tests/e2e/full-workflow.test.ts (new file; diff suppressed because it is too large)
351 packages/ipuaro/tests/e2e/test-helpers.ts (new file)
@@ -0,0 +1,351 @@
|
||||
/**
|
||||
* E2E Test Helpers
|
||||
* Provides dependencies for testing the full flow with REAL LLM.
|
||||
*/
|
||||
|
||||
import { vi } from "vitest"
|
||||
import * as fs from "node:fs/promises"
|
||||
import * as path from "node:path"
|
||||
import * as os from "node:os"
|
||||
import type { IStorage, SymbolIndex, DepsGraph } from "../../src/domain/services/IStorage.js"
|
||||
import type { ISessionStorage, SessionListItem } from "../../src/domain/services/ISessionStorage.js"
|
||||
import type { FileData } from "../../src/domain/value-objects/FileData.js"
|
||||
import type { FileAST } from "../../src/domain/value-objects/FileAST.js"
|
||||
import type { FileMeta } from "../../src/domain/value-objects/FileMeta.js"
|
||||
import type { UndoEntry } from "../../src/domain/value-objects/UndoEntry.js"
|
||||
import { Session } from "../../src/domain/entities/Session.js"
|
||||
import { ToolRegistry } from "../../src/infrastructure/tools/registry.js"
|
||||
import { OllamaClient } from "../../src/infrastructure/llm/OllamaClient.js"
|
||||
import { registerAllTools } from "../../src/cli/commands/tools-setup.js"
|
||||
import type { LLMConfig } from "../../src/shared/constants/config.js"
|
||||
|
||||
/**
|
||||
* Default LLM config for tests.
|
||||
*/
|
||||
export const DEFAULT_TEST_LLM_CONFIG: LLMConfig = {
|
||||
model: "qwen2.5-coder:14b-instruct-q4_K_M",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 180_000,
|
||||
useNativeTools: true,
|
||||
}
|
||||
|
||||
/**
|
||||
* In-memory storage implementation for testing.
|
||||
* Stores all data in Maps, no Redis required.
|
||||
*/
|
||||
export function createInMemoryStorage(): IStorage {
|
||||
const files = new Map<string, FileData>()
|
||||
const asts = new Map<string, FileAST>()
|
||||
const metas = new Map<string, FileMeta>()
|
||||
let symbolIndex: SymbolIndex = new Map()
|
||||
let depsGraph: DepsGraph = { imports: new Map(), importedBy: new Map() }
|
||||
const projectConfig = new Map<string, unknown>()
|
||||
let connected = false
|
||||
|
||||
return {
|
||||
getFile: vi.fn(async (filePath: string) => files.get(filePath) ?? null),
|
||||
setFile: vi.fn(async (filePath: string, data: FileData) => {
|
||||
files.set(filePath, data)
|
||||
}),
|
||||
deleteFile: vi.fn(async (filePath: string) => {
|
||||
files.delete(filePath)
|
||||
}),
|
||||
getAllFiles: vi.fn(async () => new Map(files)),
|
||||
getFileCount: vi.fn(async () => files.size),
|
||||
|
||||
getAST: vi.fn(async (filePath: string) => asts.get(filePath) ?? null),
|
||||
setAST: vi.fn(async (filePath: string, ast: FileAST) => {
|
||||
asts.set(filePath, ast)
|
||||
}),
|
||||
deleteAST: vi.fn(async (filePath: string) => {
|
||||
asts.delete(filePath)
|
||||
}),
|
||||
getAllASTs: vi.fn(async () => new Map(asts)),
|
||||
|
||||
getMeta: vi.fn(async (filePath: string) => metas.get(filePath) ?? null),
|
||||
setMeta: vi.fn(async (filePath: string, meta: FileMeta) => {
|
||||
metas.set(filePath, meta)
|
||||
}),
|
||||
deleteMeta: vi.fn(async (filePath: string) => {
|
||||
metas.delete(filePath)
|
||||
}),
|
||||
getAllMetas: vi.fn(async () => new Map(metas)),
|
||||
|
||||
getSymbolIndex: vi.fn(async () => symbolIndex),
|
||||
setSymbolIndex: vi.fn(async (index: SymbolIndex) => {
|
||||
symbolIndex = index
|
||||
}),
|
||||
getDepsGraph: vi.fn(async () => depsGraph),
|
||||
setDepsGraph: vi.fn(async (graph: DepsGraph) => {
|
||||
depsGraph = graph
|
||||
}),
|
||||
|
||||
getProjectConfig: vi.fn(async (key: string) => projectConfig.get(key) ?? null),
|
||||
setProjectConfig: vi.fn(async (key: string, value: unknown) => {
|
||||
projectConfig.set(key, value)
|
||||
}),
|
||||
|
||||
connect: vi.fn(async () => {
|
||||
connected = true
|
||||
}),
|
||||
disconnect: vi.fn(async () => {
|
||||
connected = false
|
||||
}),
|
||||
isConnected: vi.fn(() => connected),
|
||||
clear: vi.fn(async () => {
|
||||
files.clear()
|
||||
asts.clear()
|
||||
metas.clear()
|
||||
symbolIndex = new Map()
|
||||
depsGraph = { imports: new Map(), importedBy: new Map() }
|
||||
projectConfig.clear()
|
||||
}),
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* In-memory session storage for testing.
|
||||
*/
|
||||
export function createInMemorySessionStorage(): ISessionStorage {
|
||||
const sessions = new Map<string, Session>()
|
||||
const undoStacks = new Map<string, UndoEntry[]>()
|
||||
|
||||
return {
|
||||
saveSession: vi.fn(async (session: Session) => {
|
||||
sessions.set(session.id, session)
|
||||
}),
|
||||
loadSession: vi.fn(async (sessionId: string) => sessions.get(sessionId) ?? null),
|
||||
deleteSession: vi.fn(async (sessionId: string) => {
|
||||
sessions.delete(sessionId)
|
||||
undoStacks.delete(sessionId)
|
||||
}),
|
||||
listSessions: vi.fn(async (projectName?: string): Promise<SessionListItem[]> => {
|
||||
const items: SessionListItem[] = []
|
||||
for (const session of sessions.values()) {
|
||||
if (!projectName || session.projectName === projectName) {
|
||||
items.push({
|
||||
id: session.id,
|
||||
projectName: session.projectName,
|
||||
createdAt: session.createdAt,
|
||||
lastActivityAt: session.lastActivityAt,
|
||||
messageCount: session.history.length,
|
||||
})
|
||||
}
|
||||
}
|
||||
return items
|
||||
}),
|
||||
getLatestSession: vi.fn(async (projectName: string) => {
|
||||
let latest: Session | null = null
|
||||
for (const session of sessions.values()) {
|
||||
if (session.projectName === projectName) {
|
||||
if (!latest || session.lastActivityAt > latest.lastActivityAt) {
|
||||
latest = session
|
||||
}
|
||||
}
|
||||
}
|
||||
return latest
|
||||
}),
|
||||
sessionExists: vi.fn(async (sessionId: string) => sessions.has(sessionId)),
|
||||
pushUndoEntry: vi.fn(async (sessionId: string, entry: UndoEntry) => {
|
||||
const stack = undoStacks.get(sessionId) ?? []
|
||||
stack.push(entry)
|
||||
undoStacks.set(sessionId, stack)
|
||||
}),
|
||||
popUndoEntry: vi.fn(async (sessionId: string) => {
|
||||
const stack = undoStacks.get(sessionId) ?? []
|
||||
return stack.pop() ?? null
|
||||
}),
|
||||
getUndoStack: vi.fn(async (sessionId: string) => undoStacks.get(sessionId) ?? []),
|
||||
touchSession: vi.fn(async (sessionId: string) => {
|
||||
const session = sessions.get(sessionId)
|
||||
if (session) {
|
||||
session.lastActivityAt = Date.now()
|
||||
}
|
||||
}),
|
||||
clearAllSessions: vi.fn(async () => {
|
||||
sessions.clear()
|
||||
undoStacks.clear()
|
||||
}),
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create REAL Ollama client for E2E tests.
|
||||
*/
|
||||
export function createRealOllamaClient(config?: Partial<LLMConfig>): OllamaClient {
|
||||
return new OllamaClient({
|
||||
...DEFAULT_TEST_LLM_CONFIG,
|
||||
...config,
|
||||
})
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a tool registry with all 18 tools registered.
|
||||
*/
|
||||
export function createRealToolRegistry(): ToolRegistry {
|
||||
const registry = new ToolRegistry()
|
||||
registerAllTools(registry)
|
||||
return registry
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new test session.
|
||||
*/
|
||||
export function createTestSession(projectName = "test-project"): Session {
|
||||
return new Session(`test-${Date.now()}`, projectName)
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a temporary test project directory with sample files.
|
||||
*/
|
||||
export async function createTestProject(): Promise<string> {
|
||||
const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "ipuaro-e2e-"))
|
||||
|
||||
await fs.mkdir(path.join(tempDir, "src"), { recursive: true })
|
||||
|
||||
await fs.writeFile(
|
||||
path.join(tempDir, "src", "index.ts"),
|
||||
`/**
|
||||
* Main entry point
|
||||
*/
|
||||
export function main(): void {
|
||||
console.log("Hello, world!")
|
||||
}
|
||||
|
||||
export function add(a: number, b: number): number {
|
||||
return a + b
|
||||
}
|
||||
|
||||
export function multiply(a: number, b: number): number {
|
||||
return a * b
|
||||
}
|
||||
|
||||
// TODO: Add more math functions
|
||||
main()
|
||||
`,
|
||||
)
|
||||
|
||||
await fs.writeFile(
|
||||
path.join(tempDir, "src", "utils.ts"),
|
||||
`/**
|
||||
* Utility functions
|
||||
*/
|
||||
import { add } from "./index.js"
|
||||
|
||||
export function sum(numbers: number[]): number {
|
||||
return numbers.reduce((acc, n) => add(acc, n), 0)
|
||||
}
|
||||
|
||||
export class Calculator {
|
||||
private result: number = 0
|
||||
|
||||
add(n: number): this {
|
||||
this.result += n
|
||||
return this
|
||||
}
|
||||
|
||||
subtract(n: number): this {
|
||||
this.result -= n
|
||||
return this
|
||||
}
|
||||
|
||||
getResult(): number {
|
||||
return this.result
|
||||
}
|
||||
|
||||
reset(): void {
|
||||
this.result = 0
|
||||
}
|
||||
}
|
||||
|
||||
// FIXME: Handle edge cases for negative numbers
|
||||
`,
|
||||
)
|
||||
|
||||
await fs.writeFile(
|
||||
path.join(tempDir, "package.json"),
|
||||
JSON.stringify(
|
||||
{
|
||||
name: "test-project",
|
||||
version: "1.0.0",
|
||||
type: "module",
|
||||
scripts: {
|
||||
test: "echo 'Tests passed!'",
|
||||
},
|
||||
},
|
||||
null,
|
||||
4,
|
||||
),
|
||||
)
|
||||
|
||||
await fs.writeFile(
|
||||
path.join(tempDir, "README.md"),
|
||||
`# Test Project
|
||||
|
||||
A sample project for E2E testing.
|
||||
|
||||
## Features
|
||||
- Basic math functions
|
||||
- Calculator class
|
||||
`,
|
||||
)
|
||||
|
||||
return tempDir
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up test project directory.
|
||||
*/
|
||||
export async function cleanupTestProject(projectDir: string): Promise<void> {
|
||||
await fs.rm(projectDir, { recursive: true, force: true })
|
||||
}
|
||||
|
||||
/**
|
||||
* All test dependencies bundled together.
|
||||
*/
|
||||
export interface E2ETestDependencies {
|
||||
storage: IStorage
|
||||
sessionStorage: ISessionStorage
|
||||
llm: OllamaClient
|
||||
tools: ToolRegistry
|
||||
session: Session
|
||||
projectRoot: string
|
||||
}
|
||||
|
||||
/**
|
||||
* Create all dependencies for E2E testing with REAL Ollama.
|
||||
*/
|
||||
export async function createE2ETestDependencies(
|
||||
llmConfig?: Partial<LLMConfig>,
|
||||
): Promise<E2ETestDependencies> {
|
||||
const projectRoot = await createTestProject()
|
||||
|
||||
return {
|
||||
storage: createInMemoryStorage(),
|
||||
sessionStorage: createInMemorySessionStorage(),
|
||||
llm: createRealOllamaClient(llmConfig),
|
||||
tools: createRealToolRegistry(),
|
||||
session: createTestSession(),
|
||||
projectRoot,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if Ollama is available.
|
||||
*/
|
||||
export async function isOllamaAvailable(): Promise<boolean> {
|
||||
const client = createRealOllamaClient()
|
||||
return client.isAvailable()
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if required model is available.
|
||||
*/
|
||||
export async function isModelAvailable(
|
||||
model = "qwen2.5-coder:14b-instruct-q4_K_M",
|
||||
): Promise<boolean> {
|
||||
const client = createRealOllamaClient()
|
||||
return client.hasModel(model)
|
||||
}
|
||||
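// A hedged sketch of how an E2E spec might consume these helpers, skipping when no
// local Ollama is reachable. The agent-loop wiring inside the test is an assumption
// and is not part of this diff; it also assumes the helper exports above are
// imported from "./test-helpers.js".
import { describe, it, beforeAll, afterAll } from "vitest"

const ollamaUp = await isOllamaAvailable()

describe.skipIf(!ollamaUp)("agent E2E (sketch)", () => {
    let deps: E2ETestDependencies

    beforeAll(async () => {
        deps = await createE2ETestDependencies()
    })

    afterAll(async () => {
        await cleanupTestProject(deps.projectRoot)
    })

    it("answers a code question via a tool call", async () => {
        // Drive the agent here with deps.llm, deps.tools, deps.session and deps.storage.
    })
})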
@@ -1,5 +1,9 @@
|
||||
import { describe, it, expect } from "vitest"
|
||||
import { createFileMeta, isHubFile } from "../../../../src/domain/value-objects/FileMeta.js"
|
||||
import {
|
||||
calculateImpactScore,
|
||||
createFileMeta,
|
||||
isHubFile,
|
||||
} from "../../../../src/domain/value-objects/FileMeta.js"
|
||||
|
||||
describe("FileMeta", () => {
|
||||
describe("createFileMeta", () => {
|
||||
@@ -15,6 +19,7 @@ describe("FileMeta", () => {
|
||||
expect(meta.isHub).toBe(false)
|
||||
expect(meta.isEntryPoint).toBe(false)
|
||||
expect(meta.fileType).toBe("unknown")
|
||||
expect(meta.impactScore).toBe(0)
|
||||
})
|
||||
|
||||
it("should merge partial values", () => {
|
||||
@@ -42,4 +47,51 @@ describe("FileMeta", () => {
|
||||
expect(isHubFile(0)).toBe(false)
|
||||
})
|
||||
})
|
||||
|
||||
describe("calculateImpactScore", () => {
|
||||
it("should return 0 for file with 0 dependents", () => {
|
||||
expect(calculateImpactScore(0, 10)).toBe(0)
|
||||
})
|
||||
|
||||
it("should return 0 when totalFiles is 0", () => {
|
||||
expect(calculateImpactScore(5, 0)).toBe(0)
|
||||
})
|
||||
|
||||
it("should return 0 when totalFiles is 1", () => {
|
||||
expect(calculateImpactScore(0, 1)).toBe(0)
|
||||
})
|
||||
|
||||
it("should calculate correct percentage", () => {
|
||||
// 5 dependents out of 10 files (excluding itself = 9 possible)
|
||||
// 5/9 * 100 = 55.56 → rounded to 56
|
||||
expect(calculateImpactScore(5, 10)).toBe(56)
|
||||
})
|
||||
|
||||
it("should return 100 when all other files depend on it", () => {
|
||||
// 9 dependents out of 10 files (9 possible dependents)
|
||||
expect(calculateImpactScore(9, 10)).toBe(100)
|
||||
})
|
||||
|
||||
it("should cap at 100", () => {
|
||||
// Edge case: more dependents than possible (shouldn't happen normally)
|
||||
expect(calculateImpactScore(20, 10)).toBe(100)
|
||||
})
|
||||
|
||||
it("should round the percentage", () => {
|
||||
// 1 dependent out of 3 files (2 possible)
|
||||
// 1/2 * 100 = 50
|
||||
expect(calculateImpactScore(1, 3)).toBe(50)
|
||||
})
|
||||
|
||||
it("should calculate impact for small projects", () => {
|
||||
// 1 dependent out of 2 files (1 possible)
|
||||
expect(calculateImpactScore(1, 2)).toBe(100)
|
||||
})
|
||||
|
||||
it("should calculate impact for larger projects", () => {
|
||||
// 50 dependents out of 100 files (99 possible)
|
||||
// 50/99 * 100 = 50.51 → rounded to 51
|
||||
expect(calculateImpactScore(50, 100)).toBe(51)
|
||||
})
|
||||
})
|
||||
})
|
||||
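// The calculateImpactScore cases above are consistent with a formula along these
// lines. This is inferred from the tests, not the verbatim implementation.
function calculateImpactScoreSketch(dependents: number, totalFiles: number): number {
    if (totalFiles <= 1) {
        return 0
    }
    const possibleDependents = totalFiles - 1 // every file except the file itself
    return Math.min(100, Math.round((dependents / possibleDependents) * 100))
}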
|
||||
@@ -562,4 +562,274 @@ third: value3`
|
||||
expect(ast.exports[2].line).toBe(3)
|
||||
})
|
||||
})
|
||||
|
||||
describe("enums (0.24.3)", () => {
|
||||
it("should extract enum with numeric values", () => {
|
||||
const code = `enum Status {
|
||||
Active = 1,
|
||||
Inactive = 0,
|
||||
Pending = 2
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0]).toMatchObject({
|
||||
name: "Status",
|
||||
isExported: false,
|
||||
isConst: false,
|
||||
})
|
||||
expect(ast.enums[0].members).toHaveLength(3)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Active", value: 1 })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "Inactive", value: 0 })
|
||||
expect(ast.enums[0].members[2]).toMatchObject({ name: "Pending", value: 2 })
|
||||
})
|
||||
|
||||
it("should extract enum with string values", () => {
|
||||
const code = `enum Role {
|
||||
Admin = "admin",
|
||||
User = "user",
|
||||
Guest = "guest"
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].members).toHaveLength(3)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Admin", value: "admin" })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "User", value: "user" })
|
||||
expect(ast.enums[0].members[2]).toMatchObject({ name: "Guest", value: "guest" })
|
||||
})
|
||||
|
||||
it("should extract enum without explicit values", () => {
|
||||
const code = `enum Direction {
|
||||
Up,
|
||||
Down,
|
||||
Left,
|
||||
Right
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].members).toHaveLength(4)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Up", value: undefined })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "Down", value: undefined })
|
||||
})
|
||||
|
||||
it("should extract exported enum", () => {
|
||||
const code = `export enum Color {
|
||||
Red = "#FF0000",
|
||||
Green = "#00FF00",
|
||||
Blue = "#0000FF"
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].isExported).toBe(true)
|
||||
expect(ast.exports).toHaveLength(1)
|
||||
expect(ast.exports[0].kind).toBe("type")
|
||||
})
|
||||
|
||||
it("should extract const enum", () => {
|
||||
const code = `const enum HttpStatus {
|
||||
OK = 200,
|
||||
NotFound = 404,
|
||||
InternalError = 500
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].isConst).toBe(true)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "OK", value: 200 })
|
||||
})
|
||||
|
||||
it("should extract exported const enum", () => {
|
||||
const code = `export const enum LogLevel {
|
||||
Debug = 0,
|
||||
Info = 1,
|
||||
Warn = 2,
|
||||
Error = 3
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].isExported).toBe(true)
|
||||
expect(ast.enums[0].isConst).toBe(true)
|
||||
})
|
||||
|
||||
it("should extract line range for enum", () => {
|
||||
const code = `enum Test {
|
||||
A = 1,
|
||||
B = 2
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums[0].lineStart).toBe(1)
|
||||
expect(ast.enums[0].lineEnd).toBe(4)
|
||||
})
|
||||
|
||||
it("should handle enum with negative values", () => {
|
||||
const code = `enum Temperature {
|
||||
Cold = -10,
|
||||
Freezing = -20,
|
||||
Hot = 40
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Cold", value: -10 })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "Freezing", value: -20 })
|
||||
expect(ast.enums[0].members[2]).toMatchObject({ name: "Hot", value: 40 })
|
||||
})
|
||||
|
||||
it("should handle empty enum", () => {
|
||||
const code = `enum Empty {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].name).toBe("Empty")
|
||||
expect(ast.enums[0].members).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should not extract enum from JavaScript", () => {
|
||||
const code = `enum Status { Active = 1 }`
|
||||
const ast = parser.parse(code, "js")
|
||||
|
||||
expect(ast.enums).toHaveLength(0)
|
||||
})
|
||||
})
|
||||
|
||||
describe("decorators (0.24.4)", () => {
|
||||
it("should extract class decorator", () => {
|
||||
const code = `@Controller('users')
|
||||
class UserController {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators[0]).toBe("@Controller('users')")
|
||||
})
|
||||
|
||||
it("should extract multiple class decorators", () => {
|
||||
const code = `@Controller('api')
|
||||
@Injectable()
|
||||
@UseGuards(AuthGuard)
|
||||
class ApiController {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(3)
|
||||
expect(ast.classes[0].decorators[0]).toBe("@Controller('api')")
|
||||
expect(ast.classes[0].decorators[1]).toBe("@Injectable()")
|
||||
expect(ast.classes[0].decorators[2]).toBe("@UseGuards(AuthGuard)")
|
||||
})
|
||||
|
||||
it("should extract method decorators", () => {
|
||||
const code = `class UserController {
|
||||
@Get(':id')
|
||||
@Auth()
|
||||
async getUser() {}
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].methods).toHaveLength(1)
|
||||
expect(ast.classes[0].methods[0].decorators).toHaveLength(2)
|
||||
expect(ast.classes[0].methods[0].decorators[0]).toBe("@Get(':id')")
|
||||
expect(ast.classes[0].methods[0].decorators[1]).toBe("@Auth()")
|
||||
})
|
||||
|
||||
it("should extract exported decorated class", () => {
|
||||
const code = `@Injectable()
|
||||
export class UserService {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].isExported).toBe(true)
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators[0]).toBe("@Injectable()")
|
||||
})
|
||||
|
||||
it("should extract decorator with complex arguments", () => {
|
||||
const code = `@Module({
|
||||
imports: [UserModule],
|
||||
controllers: [AppController],
|
||||
providers: [AppService]
|
||||
})
|
||||
class AppModule {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators[0]).toContain("@Module")
|
||||
expect(ast.classes[0].decorators[0]).toContain("imports")
|
||||
})
|
||||
|
||||
it("should extract decorated class with extends", () => {
|
||||
const code = `@Entity()
|
||||
class User extends BaseEntity {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].extends).toBe("BaseEntity")
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators![0]).toBe("@Entity()")
|
||||
})
|
||||
|
||||
it("should handle class without decorators", () => {
|
||||
const code = `class SimpleClass {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle method without decorators", () => {
|
||||
const code = `class SimpleClass {
|
||||
simpleMethod() {}
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].methods).toHaveLength(1)
|
||||
expect(ast.classes[0].methods[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle function without decorators", () => {
|
||||
const code = `function simpleFunc() {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.functions).toHaveLength(1)
|
||||
expect(ast.functions[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle arrow function without decorators", () => {
|
||||
const code = `const arrowFn = () => {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.functions).toHaveLength(1)
|
||||
expect(ast.functions[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should extract NestJS controller pattern", () => {
|
||||
const code = `@Controller('users')
|
||||
export class UserController {
|
||||
@Get()
|
||||
findAll() {}
|
||||
|
||||
@Get(':id')
|
||||
findOne() {}
|
||||
|
||||
@Post()
|
||||
@Body()
|
||||
create() {}
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toContain("@Controller('users')")
|
||||
expect(ast.classes[0].methods).toHaveLength(3)
|
||||
expect(ast.classes[0].methods[0].decorators).toContain("@Get()")
|
||||
expect(ast.classes[0].methods[1].decorators).toContain("@Get(':id')")
|
||||
expect(ast.classes[0].methods[2].decorators).toContain("@Post()")
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
@@ -3,6 +3,7 @@ import { MetaAnalyzer } from "../../../../src/infrastructure/indexer/MetaAnalyze
|
||||
import { ASTParser } from "../../../../src/infrastructure/indexer/ASTParser.js"
|
||||
import type { FileAST } from "../../../../src/domain/value-objects/FileAST.js"
|
||||
import { createEmptyFileAST } from "../../../../src/domain/value-objects/FileAST.js"
|
||||
import { createFileMeta, type FileMeta } from "../../../../src/domain/value-objects/FileMeta.js"
|
||||
|
||||
describe("MetaAnalyzer", () => {
|
||||
let analyzer: MetaAnalyzer
|
||||
@@ -737,4 +738,368 @@ export function createComponent(): MyComponent {
|
||||
expect(meta.fileType).toBe("source")
|
||||
})
|
||||
})
|
||||
|
||||
describe("computeTransitiveCounts", () => {
|
||||
it("should compute transitive dependents for a simple chain", () => {
|
||||
// A -> B -> C (A depends on B, B depends on C)
|
||||
// So C has transitive dependents: B, A
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2) // B and A
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepCount).toBe(1) // A
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0) // none
|
||||
})
|
||||
|
||||
it("should compute transitive dependencies for a simple chain", () => {
|
||||
// A -> B -> C (A depends on B, B depends on C)
|
||||
// So A has transitive dependencies: B, C
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(2) // B and C
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepByCount).toBe(1) // C
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepByCount).toBe(0) // none
|
||||
})
|
||||
|
||||
it("should handle diamond dependency pattern", () => {
|
||||
// A
|
||||
// / \
|
||||
// B C
|
||||
// \ /
|
||||
// D
|
||||
// A depends on B and C, both depend on D
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts", "/project/c.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/d.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/d.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/d.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts", "/project/c.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
// D is depended on by B, C, and transitively by A
|
||||
expect(metas.get("/project/d.ts")!.transitiveDepCount).toBe(3)
|
||||
// A depends on B, C, and transitively on D
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(3)
|
||||
})
|
||||
|
||||
it("should handle circular dependencies gracefully", () => {
|
||||
// A -> B -> C -> A (circular)
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: ["/project/c.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/a.ts"],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
// Should not throw, should handle cycles
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
// Each file has the other 2 as transitive dependents
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(2)
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepCount).toBe(2)
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2)
|
||||
})
|
||||
|
||||
it("should return 0 for files with no dependencies", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0)
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(0)
|
||||
})
|
||||
|
||||
it("should handle empty metas map", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
// Should not throw
|
||||
expect(() => analyzer.computeTransitiveCounts(metas)).not.toThrow()
|
||||
})
|
||||
|
||||
it("should handle single file", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0)
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(0)
|
||||
})
|
||||
|
||||
it("should handle multiple roots depending on same leaf", () => {
|
||||
// A -> C, B -> C
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/a.ts", "/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2) // A and B
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(1) // C
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepByCount).toBe(1) // C
|
||||
})
|
||||
|
||||
it("should handle deep dependency chains", () => {
|
||||
// A -> B -> C -> D -> E
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/d.ts"],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/d.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/e.ts"],
|
||||
dependents: ["/project/c.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/e.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/d.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
// E has transitive dependents: D, C, B, A
|
||||
expect(metas.get("/project/e.ts")!.transitiveDepCount).toBe(4)
|
||||
// A has transitive dependencies: B, C, D, E
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(4)
|
||||
})
|
||||
})
|
||||
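// The behavior exercised above matches a DFS with cycle protection, roughly as
// sketched below. This is an inferred outline, not the actual MetaAnalyzer code;
// transitiveDepCount would then be the size of the returned set. FileMeta comes
// from the import at the top of this file.
function collectTransitiveDependents(
    path: string,
    metas: Map<string, FileMeta>,
    visited = new Set<string>(),
): Set<string> {
    const meta = metas.get(path)
    if (!meta) {
        return new Set()
    }
    const result = new Set<string>()
    for (const dependent of meta.dependents) {
        if (visited.has(dependent)) {
            continue
        }
        visited.add(dependent)
        result.add(dependent)
        for (const transitive of collectTransitiveDependents(dependent, metas, visited)) {
            result.add(transitive)
        }
    }
    result.delete(path) // a file never counts itself, even inside a cycle
    return result
}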
|
||||
describe("getTransitiveDependents", () => {
|
||||
it("should return empty set for file not in metas", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
const cache = new Map<string, Set<string>>()
|
||||
|
||||
const result = analyzer.getTransitiveDependents("/project/unknown.ts", metas, cache)
|
||||
|
||||
expect(result.size).toBe(0)
|
||||
})
|
||||
|
||||
it("should use cache for repeated calls", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/a.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
|
||||
const cache = new Map<string, Set<string>>()
|
||||
const result1 = analyzer.getTransitiveDependents("/project/a.ts", metas, cache)
|
||||
const result2 = analyzer.getTransitiveDependents("/project/a.ts", metas, cache)
|
||||
|
||||
// Should return same instance from cache
|
||||
expect(result1).toBe(result2)
|
||||
expect(result1.size).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("getTransitiveDependencies", () => {
|
||||
it("should return empty set for file not in metas", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
const cache = new Map<string, Set<string>>()
|
||||
|
||||
const result = analyzer.getTransitiveDependencies("/project/unknown.ts", metas, cache)
|
||||
|
||||
expect(result.size).toBe(0)
|
||||
})
|
||||
|
||||
it("should use cache for repeated calls", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
const cache = new Map<string, Set<string>>()
|
||||
const result1 = analyzer.getTransitiveDependencies("/project/a.ts", metas, cache)
|
||||
const result2 = analyzer.getTransitiveDependencies("/project/a.ts", metas, cache)
|
||||
|
||||
// Should return same instance from cache
|
||||
expect(result1).toBe(result2)
|
||||
expect(result1.size).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("analyzeAll with transitive counts", () => {
|
||||
it("should compute transitive counts in analyzeAll", () => {
|
||||
const files = new Map<string, { ast: FileAST; content: string }>()
|
||||
|
||||
// A imports B, B imports C
|
||||
const aContent = `import { b } from "./b"`
|
||||
const aAST = parser.parse(aContent, "ts")
|
||||
files.set("/project/src/a.ts", { ast: aAST, content: aContent })
|
||||
|
||||
const bContent = `import { c } from "./c"\nexport const b = () => c()`
|
||||
const bAST = parser.parse(bContent, "ts")
|
||||
files.set("/project/src/b.ts", { ast: bAST, content: bContent })
|
||||
|
||||
const cContent = `export const c = () => 42`
|
||||
const cAST = parser.parse(cContent, "ts")
|
||||
files.set("/project/src/c.ts", { ast: cAST, content: cContent })
|
||||
|
||||
const results = analyzer.analyzeAll(files)
|
||||
|
||||
// C has transitive dependents: B and A
|
||||
expect(results.get("/project/src/c.ts")!.transitiveDepCount).toBe(2)
|
||||
// A has transitive dependencies: B and C
|
||||
expect(results.get("/project/src/a.ts")!.transitiveDepByCount).toBe(2)
|
||||
})
|
||||
})
|
||||
})
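For orientation, the behaviour these tests pin down (transitive walk, cycle safety, a shared memo cache that returns the same `Set` instance on repeated calls, and an empty set for unknown files) can be sketched roughly as below. The `FileMeta` shape is simplified and the body is an illustrative assumption, not the package's actual `MetaAnalyzer` implementation:

```typescript
// Illustrative sketch only - the real MetaAnalyzer method may differ.
interface FileMeta {
    dependencies: string[]
    dependents: string[]
    transitiveDepCount: number
    transitiveDepByCount: number
}

function getTransitiveDependents(
    path: string,
    metas: Map<string, FileMeta>,
    cache: Map<string, Set<string>>,
): Set<string> {
    const cached = cache.get(path)
    if (cached) return cached // repeated calls return the same instance

    const result = new Set<string>()
    cache.set(path, result)

    const meta = metas.get(path)
    if (!meta) return result // unknown file -> empty set

    const stack = [...meta.dependents]
    while (stack.length > 0) {
        const current = stack.pop()!
        if (current === path || result.has(current)) continue // skip self and cycles
        result.add(current)
        const currentMeta = metas.get(current)
        if (currentMeta) stack.push(...currentMeta.dependents)
    }
    return result
}
```

`getTransitiveDependencies` would be the mirror image of this walk over `dependencies`, and `computeTransitiveCounts` would set `transitiveDepCount` / `transitiveDepByCount` from the sizes of the two sets.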
@@ -135,6 +135,108 @@ describe("ResponseParser", () => {
        expect(result.parseErrors[0]).toContain("unknown_tool")
    })

    it("should normalize tool name aliases", () => {
        // get_functions -> get_lines (common LLM typo)
        const response1 = `<tool_call name="get_functions"><path>src/index.ts</path></tool_call>`
        const result1 = parseToolCalls(response1)
        expect(result1.toolCalls).toHaveLength(1)
        expect(result1.toolCalls[0].name).toBe("get_lines")
        expect(result1.hasParseErrors).toBe(false)

        // read_file -> get_lines
        const response2 = `<tool_call name="read_file"><path>test.ts</path></tool_call>`
        const result2 = parseToolCalls(response2)
        expect(result2.toolCalls).toHaveLength(1)
        expect(result2.toolCalls[0].name).toBe("get_lines")

        // find_todos -> get_todos
        const response3 = `<tool_call name="find_todos"></tool_call>`
        const result3 = parseToolCalls(response3)
        expect(result3.toolCalls).toHaveLength(1)
        expect(result3.toolCalls[0].name).toBe("get_todos")

        // list_files -> get_structure
        const response4 = `<tool_call name="list_files"><path>.</path></tool_call>`
        const result4 = parseToolCalls(response4)
        expect(result4.toolCalls).toHaveLength(1)
        expect(result4.toolCalls[0].name).toBe("get_structure")
    })

    // JSON format tests
    it("should parse JSON format tool calls as fallback", () => {
        const response = `{"name": "get_lines", "arguments": {"path": "src/index.ts"}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
        expect(result.toolCalls[0].params).toEqual({ path: "src/index.ts" })
        expect(result.hasParseErrors).toBe(false)
    })

    it("should parse JSON format with numeric arguments", () => {
        const response = `{"name": "get_lines", "arguments": {"path": "src/index.ts", "start": 1, "end": 50}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].params).toEqual({
            path: "src/index.ts",
            start: 1,
            end: 50,
        })
    })

    it("should parse JSON format with surrounding text", () => {
        const response = `I'll read the file for you:
{"name": "get_lines", "arguments": {"path": "src/index.ts"}}
Let me know if you need more.`

        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
        expect(result.content).toContain("I'll read the file for you:")
        expect(result.content).toContain("Let me know if you need more.")
    })

    it("should normalize tool name aliases in JSON format", () => {
        // read_file -> get_lines
        const response = `{"name": "read_file", "arguments": {"path": "test.ts"}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
    })

    it("should reject unknown tool names in JSON format", () => {
        const response = `{"name": "unknown_tool", "arguments": {"path": "test.ts"}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(0)
        expect(result.hasParseErrors).toBe(true)
        expect(result.parseErrors[0]).toContain("unknown_tool")
    })

    it("should prefer XML over JSON when both present", () => {
        const response = `<tool_call name="get_lines"><path>xml.ts</path></tool_call>
{"name": "get_function", "arguments": {"path": "json.ts", "name": "foo"}}`

        const result = parseToolCalls(response)

        // Should only parse XML since it was found first
        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
        expect(result.toolCalls[0].params.path).toBe("xml.ts")
    })

    it("should parse JSON with empty arguments", () => {
        const response = `{"name": "git_status", "arguments": {}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("git_status")
        expect(result.toolCalls[0].params).toEqual({})
    })

    it("should support CDATA for multiline content", () => {
        const response = `<tool_call name="edit_lines">
<path>src/index.ts</path>
File diff suppressed because it is too large
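The rest of this file's diff is not shown, but the tests above describe the JSON fallback fairly completely: extract a `{"name": ..., "arguments": {...}}` object only when no XML `<tool_call>` is present, normalize alias names, and reject unknown tools. A minimal sketch of that behaviour follows; `TOOL_ALIASES`, the tool whitelist, and the extraction regex are illustrative assumptions, not the actual ResponseParser internals:

```typescript
// Illustrative sketch of the JSON fallback the tests describe - not the real parser.
const TOOL_ALIASES: Record<string, string> = {
    get_functions: "get_lines",
    read_file: "get_lines",
    read_lines: "get_lines",
    list_files: "get_structure",
    get_files: "get_structure",
    find_todos: "get_todos",
}

// Illustrative subset of the real tool whitelist.
const KNOWN_TOOLS = new Set(["get_lines", "get_structure", "get_todos", "git_status"])

interface ParsedToolCall {
    name: string
    params: Record<string, unknown>
}

function parseJsonToolCall(response: string): { toolCall?: ParsedToolCall; error?: string } {
    // Only used as a fallback when no <tool_call> XML block was found.
    const match = response.match(
        /\{[^{}]*"name"\s*:\s*"[^"]+"[^{}]*"arguments"\s*:\s*\{[\s\S]*?\}\s*\}/,
    )
    if (!match) return {}

    try {
        const parsed = JSON.parse(match[0]) as { name: string; arguments?: Record<string, unknown> }
        const name = TOOL_ALIASES[parsed.name] ?? parsed.name
        if (!KNOWN_TOOLS.has(name)) {
            return { error: `Unknown tool: ${parsed.name}` }
        }
        return { toolCall: { name, params: parsed.arguments ?? {} } }
    } catch {
        return {} // malformed JSON: treat as plain content
    }
}
```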
@@ -16,6 +16,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.8,
                compressionMethod: "llm-summary",
                includeSignatures: true,
                includeDepsGraph: true,
                includeCircularDeps: true,
                includeHighImpactFiles: true,
            })
        })

@@ -28,6 +31,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.8,
                compressionMethod: "llm-summary",
                includeSignatures: true,
                includeDepsGraph: true,
                includeCircularDeps: true,
                includeHighImpactFiles: true,
            })
        })
    })
@@ -165,6 +171,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.8,
                compressionMethod: "llm-summary",
                includeSignatures: true,
                includeDepsGraph: true,
                includeCircularDeps: true,
                includeHighImpactFiles: true,
            })
        })

@@ -179,6 +188,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.9,
                compressionMethod: "llm-summary",
                includeSignatures: true,
                includeDepsGraph: true,
                includeCircularDeps: true,
                includeHighImpactFiles: true,
            })
        })

@@ -194,6 +206,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.8,
                compressionMethod: "truncate",
                includeSignatures: true,
                includeDepsGraph: true,
                includeCircularDeps: true,
                includeHighImpactFiles: true,
            })
        })
    })
@@ -206,6 +221,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.85,
                compressionMethod: "truncate" as const,
                includeSignatures: false,
                includeDepsGraph: false,
                includeCircularDeps: false,
                includeHighImpactFiles: false,
            }

            const result = ContextConfigSchema.parse(config)
@@ -219,6 +237,9 @@ describe("ContextConfigSchema", () => {
                autoCompressAt: 0.8,
                compressionMethod: "llm-summary" as const,
                includeSignatures: true,
                includeDepsGraph: true,
                includeCircularDeps: true,
                includeHighImpactFiles: true,
            }

            const result = ContextConfigSchema.parse(config)
@@ -250,4 +271,79 @@ describe("ContextConfigSchema", () => {
            expect(() => ContextConfigSchema.parse({ includeSignatures: 1 })).toThrow()
        })
    })

    describe("includeDepsGraph", () => {
        it("should accept true", () => {
            const result = ContextConfigSchema.parse({ includeDepsGraph: true })
            expect(result.includeDepsGraph).toBe(true)
        })

        it("should accept false", () => {
            const result = ContextConfigSchema.parse({ includeDepsGraph: false })
            expect(result.includeDepsGraph).toBe(false)
        })

        it("should default to true", () => {
            const result = ContextConfigSchema.parse({})
            expect(result.includeDepsGraph).toBe(true)
        })

        it("should reject non-boolean", () => {
            expect(() => ContextConfigSchema.parse({ includeDepsGraph: "true" })).toThrow()
        })

        it("should reject number", () => {
            expect(() => ContextConfigSchema.parse({ includeDepsGraph: 1 })).toThrow()
        })
    })

    describe("includeCircularDeps", () => {
        it("should accept true", () => {
            const result = ContextConfigSchema.parse({ includeCircularDeps: true })
            expect(result.includeCircularDeps).toBe(true)
        })

        it("should accept false", () => {
            const result = ContextConfigSchema.parse({ includeCircularDeps: false })
            expect(result.includeCircularDeps).toBe(false)
        })

        it("should default to true", () => {
            const result = ContextConfigSchema.parse({})
            expect(result.includeCircularDeps).toBe(true)
        })

        it("should reject non-boolean", () => {
            expect(() => ContextConfigSchema.parse({ includeCircularDeps: "true" })).toThrow()
        })

        it("should reject number", () => {
            expect(() => ContextConfigSchema.parse({ includeCircularDeps: 1 })).toThrow()
        })
    })

    describe("includeHighImpactFiles", () => {
        it("should accept true", () => {
            const result = ContextConfigSchema.parse({ includeHighImpactFiles: true })
            expect(result.includeHighImpactFiles).toBe(true)
        })

        it("should accept false", () => {
            const result = ContextConfigSchema.parse({ includeHighImpactFiles: false })
            expect(result.includeHighImpactFiles).toBe(false)
        })

        it("should default to true", () => {
            const result = ContextConfigSchema.parse({})
            expect(result.includeHighImpactFiles).toBe(true)
        })

        it("should reject non-boolean", () => {
            expect(() => ContextConfigSchema.parse({ includeHighImpactFiles: "true" })).toThrow()
        })

        it("should reject number", () => {
            expect(() => ContextConfigSchema.parse({ includeHighImpactFiles: 1 })).toThrow()
        })
    })
})
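The diff only shows the schema's tests, but they imply a shape along the following lines, assuming ContextConfigSchema is a Zod schema (which the `.parse(...)` / `toThrow()` pattern suggests). Field names come from the tests; the `zod` import, the exact defaults, and the sketch name are assumptions:

```typescript
import { z } from "zod"

// Rough sketch inferred from the tests above - not the package's actual schema.
const ContextConfigSchemaSketch = z.object({
    autoCompressAt: z.number().default(0.8),
    compressionMethod: z.enum(["llm-summary", "truncate"]).default("llm-summary"),
    includeSignatures: z.boolean().default(true),
    includeDepsGraph: z.boolean().default(true),
    includeCircularDeps: z.boolean().default(true),
    includeHighImpactFiles: z.boolean().default(true),
})

// Defaults apply when a field is omitted; non-boolean values throw, as in the tests.
ContextConfigSchemaSketch.parse({})                      // all include* flags default to true
ContextConfigSchemaSketch.parse({ includeDepsGraph: 1 }) // throws a validation error
```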