mirror of https://github.com/samiyev/puaros.git
synced 2025-12-27 23:06:54 +05:00

Compare commits: ipuaro-v0. ... ipuaro-v0. (7 commits)

| Author | SHA1 | Date |
|---|---|---|
| | 3e7762ec4e | |
| | c82006bbda | |
| | 2e84472e49 | |
| | 17d75dbd54 | |
| | fac5966678 | |
| | 92ba3fd9ba | |
| | e9aaa708fe | |
@@ -5,6 +5,150 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.30.2] - 2025-12-05 - JSON Tool Call Parsing & Improved Prompts

### Added

- **JSON Tool Call Fallback in ResponseParser**
    - LLM responses with JSON format `{"name": "tool", "arguments": {...}}` are now parsed
    - Falls back to JSON when the XML format is not found
    - Works with models like qwen2.5-coder that prefer JSON over XML
- **Tool Name Aliases**
    - `get_functions`, `read_file`, `read_lines` → `get_lines`
    - `list_files`, `get_files` → `get_structure`
    - `find_todos` → `get_todos`
    - And more common LLM typos/variations

### Changed

- **Improved System Prompt**
    - Added clear "When to Use Tools" / "Do NOT use tools" sections
    - More concise and directive instructions
    - Better examples for tool usage

### Technical Details

- Total tests: 1848 passed (+8 new tests for JSON parsing)
- 0 ESLint errors, 3 warnings (pre-existing complexity)

---

## [0.30.1] - 2025-12-05 - Display Transitive Counts in Context

### Changed

- **High Impact Files table now includes transitive counts**
    - Table header changed from `| File | Impact | Dependents |` to `| File | Impact | Direct | Transitive |`
    - Shows both the direct dependent count and the transitive dependent count
    - Sorting changed: now sorts by transitive count first, then by impact score
    - Example: `| utils/validation | 67% | 12 | 24 |`

### Technical Details

- Total tests: 1839 passed
- 0 ESLint errors, 3 warnings (pre-existing complexity)

---

## [0.30.0] - 2025-12-05 - Transitive Dependencies Count

### Added

- **Transitive Dependency Counts in FileMeta (v0.30.0)**
    - New `transitiveDepCount: number` field - count of files that depend on this file transitively
    - New `transitiveDepByCount: number` field - count of files this file depends on transitively
    - Includes both direct and indirect dependencies/dependents
    - Excludes the file itself from counts (handles circular dependencies)
- **Transitive Dependency Computation in MetaAnalyzer**
    - New `computeTransitiveCounts()` method - computes transitive counts for all files
    - New `getTransitiveDependents()` method - DFS with cycle detection for dependents
    - New `getTransitiveDependencies()` method - DFS with cycle detection for dependencies
    - Top-level caching for efficiency (avoids re-computing for each file)
    - Graceful handling of circular dependencies

### Technical Details

- Total tests: 1840 passed (was 1826, +14 new tests)
    - 9 new tests for computeTransitiveCounts()
    - 2 new tests for getTransitiveDependents()
    - 2 new tests for getTransitiveDependencies()
    - 1 new test for analyzeAll with transitive counts
- Coverage: 97.58% lines, 91.5% branches, 98.64% functions
- 0 ESLint errors, 3 warnings (pre-existing complexity)
- Build successful

### Notes

This completes v0.30.0 - the final feature milestone before v1.0.0:

- ✅ 0.27.0 - Inline Dependency Graph
- ✅ 0.28.0 - Circular Dependencies in Context
- ✅ 0.29.0 - Impact Score
- ✅ 0.30.0 - Transitive Dependencies Count

Next milestone: v1.0.0 - Production Ready

---

## [0.29.0] - 2025-12-05 - Impact Score

### Added

- **High Impact Files in Initial Context (v0.29.0)**
    - New `## High Impact Files` section in the initial context
    - Shows files with the highest impact scores (percentage of the codebase depending on them)
    - Table format with File, Impact %, and Dependents count
    - Files sorted by impact score, descending
    - Default: shows the top 10 files with an impact score >= 5%
- **Impact Score Computation**
    - New `impactScore: number` field in `FileMeta` (0-100)
    - Formula: `(dependents.length / (totalFiles - 1)) * 100`
    - Computed in `MetaAnalyzer.analyzeAll()` after all files are analyzed
    - New `calculateImpactScore()` helper function in FileMeta.ts
- **Configuration Option**
    - `includeHighImpactFiles: boolean` in ContextConfigSchema (default: `true`)
    - `includeHighImpactFiles` option in `BuildContextOptions`
    - Users can disable it to save tokens: `context.includeHighImpactFiles: false`
- **New Helper Function in prompts.ts**
    - `formatHighImpactFiles()` - formats the high impact files table for display

### New Context Format

```
## High Impact Files

| File | Impact | Dependents |
|------|--------|------------|
| utils/validation | 67% | 12 files |
| types/user | 45% | 8 files |
| services/user | 34% | 6 files |
```

### Technical Details

- Total tests: 1826 passed (was 1798, +28 new tests)
    - 9 new tests for calculateImpactScore()
    - 14 new tests for formatHighImpactFiles() and buildInitialContext
    - 5 new tests for includeHighImpactFiles config option
- Coverage: 97.52% lines, 91.3% branches, 98.63% functions
- 0 ESLint errors, 3 warnings (pre-existing complexity)
- Build successful

### Notes

This completes v0.29.0 of the Graph Metrics milestone:

- ✅ 0.27.0 - Inline Dependency Graph
- ✅ 0.28.0 - Circular Dependencies in Context
- ✅ 0.29.0 - Impact Score

Next milestone: v0.30.0 - Transitive Dependencies Count

---

## [0.28.0] - 2025-12-05 - Circular Dependencies in Context

### Added
````diff
@@ -1950,10 +1950,10 @@ Enhance initial context for LLM: add function signatures, interface field types,

 ---

-## Version 0.29.0 - Impact Score 📈
+## Version 0.29.0 - Impact Score 📈 ✅

 **Priority:** MEDIUM
-**Status:** Planned
+**Status:** Complete (v0.29.0 released)

 ### Description

@@ -1972,19 +1972,25 @@ Enhance initial context for LLM: add function signatures, interface field types,
 ```

 **Changes:**
-- [ ] Add `impactScore: number` to FileMeta (0-100)
+- [x] Add `impactScore: number` to FileMeta (0-100)
-- [ ] Compute in MetaAnalyzer: (transitiveDepByCount / totalFiles) * 100
+- [x] Compute in MetaAnalyzer: (dependents.length / (totalFiles - 1)) * 100
-- [ ] Add `formatHighImpactFiles()` to prompts.ts
+- [x] Add `formatHighImpactFiles()` to prompts.ts
-- [ ] Show top-10 high impact files
+- [x] Show top-10 high impact files
+- [x] Add `includeHighImpactFiles` config option (default: true)
+
+**Tests:**
+- [x] Unit tests for calculateImpactScore (9 tests)
+- [x] Unit tests for formatHighImpactFiles (14 tests)
+- [x] Unit tests for includeHighImpactFiles config (5 tests)

 **Why:** LLM understands which files are critical for changes.

 ---

-## Version 0.30.0 - Transitive Dependencies Count 🔢
+## Version 0.30.0 - Transitive Dependencies Count 🔢 ✅

 **Priority:** MEDIUM
-**Status:** Planned
+**Status:** Complete (v0.30.0 released)

 ### Description

@@ -2001,13 +2007,19 @@ interface FileMeta {
 ```

 **Changes:**
-- [ ] Add `computeTransitiveDeps()` to MetaAnalyzer
+- [x] Add `transitiveDepCount` and `transitiveDepByCount` to FileMeta
-- [ ] Use DFS with memoization for efficiency
+- [x] Add `computeTransitiveCounts()` to MetaAnalyzer
-- [ ] Store in FileMeta
+- [x] Add `getTransitiveDependents()` with DFS and cycle detection
+- [x] Add `getTransitiveDependencies()` with DFS and cycle detection
+- [x] Use top-level caching for efficiency
+- [x] Handle circular dependencies gracefully (exclude self from count)

 **Tests:**
-- [ ] Unit tests for transitive dependencies computation
+- [x] Unit tests for transitive dependencies computation (14 tests)
-- [ ] Performance tests for large codebases
+- [x] Tests for circular dependencies
+- [x] Tests for diamond dependency patterns
+- [x] Tests for deep dependency chains
+- [x] Cache behavior tests

 ---

@@ -2022,12 +2034,12 @@ interface FileMeta {
 - [x] Error handling complete ✅ (v0.16.0)
 - [ ] Performance optimized
 - [x] Documentation complete ✅ (v0.17.0)
-- [x] Test coverage ≥91% branches, ≥95% lines/functions/statements ✅ (91.13% branches, 97.48% lines, 98.63% functions, 97.48% statements - 1798 tests)
+- [x] Test coverage ≥91% branches, ≥95% lines/functions/statements ✅ (91.5% branches, 97.58% lines, 98.64% functions, 97.58% statements - 1840 tests)
 - [x] 0 ESLint errors ✅
 - [x] Examples working ✅ (v0.18.0)
 - [x] CHANGELOG.md up to date ✅
 - [x] Rich initial context (v0.24.0-v0.26.0) — function signatures, interface fields, enum values, decorators ✅
-- [ ] Graph metrics in context (v0.27.0-v0.30.0) — dependency graph ✅, circular deps ✅, impact score, transitive deps
+- [x] Graph metrics in context (v0.27.0-v0.30.0) — dependency graph ✅, circular deps ✅, impact score ✅, transitive deps ✅

 ---

@@ -2106,7 +2118,7 @@ sessions:list # List<session_id>

 **Last Updated:** 2025-12-05
 **Target Version:** 1.0.0
-**Current Version:** 0.28.0
+**Current Version:** 0.30.0
-**Next Milestones:** v0.29.0 (Impact Score), v0.30.0 (Transitive Deps)
+**Next Milestones:** v1.0.0 (Production Ready)

-> **Note:** Rich Initial Context complete ✅ (v0.24.0-v0.26.0). Graph Metrics in progress (v0.27.0 ✅, v0.28.0 ✅, v0.29.0-v0.30.0 pending) for 1.0.0 release.
+> **Note:** Rich Initial Context complete ✅ (v0.24.0-v0.26.0). Graph Metrics complete ✅ (v0.27.0-v0.30.0). All feature milestones done, ready for v1.0.0 stabilization.
````
```diff
@@ -1,6 +1,6 @@
 {
     "name": "@samiyev/ipuaro",
-    "version": "0.28.0",
+    "version": "0.30.1",
     "description": "Local AI agent for codebase operations with infinite context feeling",
     "author": "Fozilbek Samiyev <fozilbek.samiyev@gmail.com>",
     "license": "MIT",
```
```diff
@@ -18,6 +18,7 @@ import {
     buildInitialContext,
     type ProjectStructure,
     SYSTEM_PROMPT,
+    TOOL_REMINDER,
 } from "../../infrastructure/llm/prompts.js"
 import { parseToolCalls } from "../../infrastructure/llm/ResponseParser.js"
 import type { IToolRegistry } from "../interfaces/IToolRegistry.js"
@@ -277,6 +278,12 @@ export class HandleMessage {

         messages.push(...session.history)

+        // Add tool reminder if last message is from user (first LLM call for this query)
+        const lastMessage = session.history[session.history.length - 1]
+        if (lastMessage?.role === "user") {
+            messages.push(createSystemMessage(TOOL_REMINDER))
+        }
+
         return messages
     }
```
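The injection rule above is small enough to check in isolation. A minimal sketch: `ChatMessage`, `TOOL_REMINDER`, and `createSystemMessage` here are simplified stand-ins for the real ipuaro types, and the reminder text is invented for illustration.

```typescript
// Simplified stand-in for ipuaro's ChatMessage value object.
interface ChatMessage {
    role: "system" | "user" | "assistant"
    content: string
}

// Hypothetical reminder text; the real TOOL_REMINDER lives in prompts.ts.
const TOOL_REMINDER = "Remember to respond with a tool call when you need to inspect code."

function createSystemMessage(content: string): ChatMessage {
    return { role: "system", content }
}

// Mirrors the hunk above: append the reminder only when the last history
// entry is from the user, i.e. on the first LLM call for this query.
function buildMessages(history: ChatMessage[]): ChatMessage[] {
    const messages: ChatMessage[] = [...history]
    const lastMessage = history[history.length - 1]
    if (lastMessage?.role === "user") {
        messages.push(createSystemMessage(TOOL_REMINDER))
    }
    return messages
}
```

This keeps the reminder out of follow-up calls (where the last message is a tool result or assistant turn), so the prompt is not padded on every iteration.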
```diff
@@ -26,6 +26,12 @@ export interface FileMeta {
     isEntryPoint: boolean
     /** File type classification */
     fileType: "source" | "test" | "config" | "types" | "unknown"
+    /** Impact score (0-100): percentage of codebase that depends on this file */
+    impactScore: number
+    /** Count of files that depend on this file transitively (including indirect dependents) */
+    transitiveDepCount: number
+    /** Count of files this file depends on transitively (including indirect dependencies) */
+    transitiveDepByCount: number
 }

 export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
@@ -41,6 +47,9 @@ export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
         isHub: false,
         isEntryPoint: false,
         fileType: "unknown",
+        impactScore: 0,
+        transitiveDepCount: 0,
+        transitiveDepByCount: 0,
         ...partial,
     }
 }
```
```diff
@@ -48,3 +57,20 @@ export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
 export function isHubFile(dependentCount: number): boolean {
     return dependentCount > 5
 }
+
+/**
+ * Calculate impact score based on number of dependents and total files.
+ * Impact score represents what percentage of the codebase depends on this file.
+ * @param dependentCount - Number of files that depend on this file
+ * @param totalFiles - Total number of files in the project
+ * @returns Impact score from 0 to 100
+ */
+export function calculateImpactScore(dependentCount: number, totalFiles: number): number {
+    if (totalFiles <= 1) {
+        return 0
+    }
+    // Exclude the file itself from the total
+    const maxPossibleDependents = totalFiles - 1
+    const score = (dependentCount / maxPossibleDependents) * 100
+    return Math.round(Math.min(100, score))
+}
```
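The scoring rule is easy to sanity-check in isolation. A standalone copy of the helper with a few spot checks; note that the changelog's `utils/validation | 67% | 12` example corresponds to a project of 19 files (12 / 18 ≈ 67%), an assumed project size for illustration.

```typescript
// Standalone copy of the new calculateImpactScore helper for quick checks.
function calculateImpactScore(dependentCount: number, totalFiles: number): number {
    if (totalFiles <= 1) {
        return 0 // a single-file project has nothing left to depend on it
    }
    // Exclude the file itself from the total
    const maxPossibleDependents = totalFiles - 1
    const score = (dependentCount / maxPossibleDependents) * 100
    return Math.round(Math.min(100, score))
}

console.log(calculateImpactScore(12, 19)) // 12 / 18 * 100 ≈ 66.7 → 67
console.log(calculateImpactScore(0, 1)) // degenerate single-file project → 0
console.log(calculateImpactScore(10, 11)) // every other file depends on it → 100
```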
```diff
@@ -1,5 +1,6 @@
 import * as path from "node:path"
 import {
+    calculateImpactScore,
     type ComplexityMetrics,
     createFileMeta,
     type FileMeta,
@@ -430,6 +431,7 @@ export class MetaAnalyzer {

     /**
      * Batch analyze multiple files.
+     * Computes impact scores and transitive dependencies after all files are analyzed.
      */
     analyzeAll(files: Map<string, { ast: FileAST; content: string }>): Map<string, FileMeta> {
         const allASTs = new Map<string, FileAST>()
```
```diff
@@ -443,6 +445,171 @@ export class MetaAnalyzer {
             results.set(filePath, meta)
         }

+        // Compute impact scores now that we know total file count
+        const totalFiles = results.size
+        for (const [, meta] of results) {
+            meta.impactScore = calculateImpactScore(meta.dependents.length, totalFiles)
+        }
+
+        // Compute transitive dependency counts
+        this.computeTransitiveCounts(results)
+
         return results
     }
+
+    /**
+     * Compute transitive dependency counts for all files.
+     * Uses DFS with memoization for efficiency.
+     */
+    computeTransitiveCounts(metas: Map<string, FileMeta>): void {
+        // Memoization caches
+        const transitiveDepCache = new Map<string, Set<string>>()
+        const transitiveDepByCache = new Map<string, Set<string>>()
+
+        // Compute transitive dependents (files that depend on this file, directly or transitively)
+        for (const [filePath, meta] of metas) {
+            const transitiveDeps = this.getTransitiveDependents(filePath, metas, transitiveDepCache)
+            // Exclude the file itself from count (can happen in cycles)
+            meta.transitiveDepCount = transitiveDeps.has(filePath)
+                ? transitiveDeps.size - 1
+                : transitiveDeps.size
+        }
+
+        // Compute transitive dependencies (files this file depends on, directly or transitively)
+        for (const [filePath, meta] of metas) {
+            const transitiveDepsBy = this.getTransitiveDependencies(
+                filePath,
+                metas,
+                transitiveDepByCache,
+            )
+            // Exclude the file itself from count (can happen in cycles)
+            meta.transitiveDepByCount = transitiveDepsBy.has(filePath)
+                ? transitiveDepsBy.size - 1
+                : transitiveDepsBy.size
+        }
+    }
+
+    /**
+     * Get all files that depend on the given file transitively.
+     * Uses DFS with cycle detection. Caching only at the top level.
+     */
+    getTransitiveDependents(
+        filePath: string,
+        metas: Map<string, FileMeta>,
+        cache: Map<string, Set<string>>,
+        visited?: Set<string>,
+    ): Set<string> {
+        // Return cached result if available (only valid for top-level calls)
+        if (!visited) {
+            const cached = cache.get(filePath)
+            if (cached) {
+                return cached
+            }
+        }
+
+        const isTopLevel = !visited
+        if (!visited) {
+            visited = new Set()
+        }
+
+        // Detect cycles
+        if (visited.has(filePath)) {
+            return new Set()
+        }
+
+        visited.add(filePath)
+        const result = new Set<string>()
+
+        const meta = metas.get(filePath)
+        if (!meta) {
+            if (isTopLevel) {
+                cache.set(filePath, result)
+            }
+            return result
+        }
+
+        // Add direct dependents
+        for (const dependent of meta.dependents) {
+            result.add(dependent)
+
+            // Recursively add transitive dependents
+            const transitive = this.getTransitiveDependents(
+                dependent,
+                metas,
+                cache,
+                new Set(visited),
+            )
+            for (const t of transitive) {
+                result.add(t)
+            }
+        }
+
+        // Only cache top-level results (not intermediate results during recursion)
+        if (isTopLevel) {
+            cache.set(filePath, result)
+        }
+        return result
+    }
+
+    /**
+     * Get all files that the given file depends on transitively.
+     * Uses DFS with cycle detection. Caching only at the top level.
+     */
+    getTransitiveDependencies(
+        filePath: string,
+        metas: Map<string, FileMeta>,
+        cache: Map<string, Set<string>>,
+        visited?: Set<string>,
+    ): Set<string> {
+        // Return cached result if available (only valid for top-level calls)
+        if (!visited) {
+            const cached = cache.get(filePath)
+            if (cached) {
+                return cached
+            }
+        }
+
+        const isTopLevel = !visited
+        if (!visited) {
+            visited = new Set()
+        }
+
+        // Detect cycles
+        if (visited.has(filePath)) {
+            return new Set()
+        }
+
+        visited.add(filePath)
+        const result = new Set<string>()
+
+        const meta = metas.get(filePath)
+        if (!meta) {
+            if (isTopLevel) {
+                cache.set(filePath, result)
+            }
+            return result
+        }
+
+        // Add direct dependencies
+        for (const dependency of meta.dependencies) {
+            result.add(dependency)
+
+            // Recursively add transitive dependencies
+            const transitive = this.getTransitiveDependencies(
+                dependency,
+                metas,
+                cache,
+                new Set(visited),
+            )
+            for (const t of transitive) {
+                result.add(t)
+            }
+        }
+
+        // Only cache top-level results (not intermediate results during recursion)
+        if (isTopLevel) {
+            cache.set(filePath, result)
+        }
+        return result
+    }
 }
```
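The DFS-with-cycle-detection strategy above can be sketched without the surrounding class. Here `dependents` maps a file to its direct dependents; the function name and map shape are simplified stand-ins, and the caching layer is omitted to keep the traversal itself visible.

```typescript
// Standalone sketch of the traversal behind getTransitiveDependents().
function transitiveDependents(
    file: string,
    dependents: Map<string, string[]>,
    visited: Set<string> = new Set(),
): Set<string> {
    // Cycle guard: a file already on the current DFS path adds nothing new.
    if (visited.has(file)) {
        return new Set()
    }
    visited.add(file)
    const result = new Set<string>()
    for (const dep of dependents.get(file) ?? []) {
        result.add(dep)
        for (const t of transitiveDependents(dep, dependents, new Set(visited))) {
            result.add(t)
        }
    }
    return result
}

// Diamond: b and c depend on a; d depends on both b and c.
const diamond = new Map<string, string[]>([
    ["a", ["b", "c"]],
    ["b", ["d"]],
    ["c", ["d"]],
    ["d", []],
])
console.log(transitiveDependents("a", diamond).size) // 3 — b, c, d counted once

// Cycle: a and b depend on each other; the cycle makes "a" reachable from itself.
const cycle = new Map<string, string[]>([
    ["a", ["b"]],
    ["b", ["a"]],
])
const reachable = transitiveDependents("a", cycle)
console.log(reachable.size) // 2 (a and b); transitiveDepCount would be size - 1 = 1
```

This illustrates why `computeTransitiveCounts()` subtracts the file itself from the set size: in a cycle, the starting file shows up in its own reachable set.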
```diff
@@ -1,14 +1,17 @@
-import { type Message, Ollama } from "ollama"
+import { type Message, Ollama, type Tool } from "ollama"
 import type { ILLMClient, LLMResponse } from "../../domain/services/ILLMClient.js"
 import type { ChatMessage } from "../../domain/value-objects/ChatMessage.js"
+import { createToolCall, type ToolCall } from "../../domain/value-objects/ToolCall.js"
 import type { LLMConfig } from "../../shared/constants/config.js"
 import { IpuaroError } from "../../shared/errors/IpuaroError.js"
 import { estimateTokens } from "../../shared/utils/tokens.js"
 import { parseToolCalls } from "./ResponseParser.js"
+import { getOllamaNativeTools } from "./toolDefs.js"

 /**
  * Ollama LLM client implementation.
  * Wraps the Ollama SDK for chat completions with tool support.
+ * Supports both XML-based and native Ollama tool calling.
  */
 export class OllamaClient implements ILLMClient {
     private readonly client: Ollama
@@ -17,6 +20,7 @@ export class OllamaClient implements ILLMClient {
     private readonly contextWindow: number
     private readonly temperature: number
     private readonly timeout: number
+    private readonly useNativeTools: boolean
     private abortController: AbortController | null = null

     constructor(config: LLMConfig) {
@@ -26,11 +30,12 @@ export class OllamaClient implements ILLMClient {
         this.contextWindow = config.contextWindow
         this.temperature = config.temperature
         this.timeout = config.timeout
+        this.useNativeTools = config.useNativeTools ?? false
     }

     /**
      * Send messages to LLM and get response.
-     * Tool definitions should be included in the system prompt as XML format.
+     * Supports both XML-based tool calling and native Ollama tools.
      */
     async chat(messages: ChatMessage[]): Promise<LLMResponse> {
         const startTime = Date.now()
@@ -39,26 +44,11 @@ export class OllamaClient implements ILLMClient {
         try {
             const ollamaMessages = this.convertMessages(messages)

-            const response = await this.client.chat({
-                model: this.model,
-                messages: ollamaMessages,
-                options: {
-                    temperature: this.temperature,
-                },
-                stream: false,
-            })
-
-            const timeMs = Date.now() - startTime
-            const parsed = parseToolCalls(response.message.content)
-
-            return {
-                content: parsed.content,
-                toolCalls: parsed.toolCalls,
-                tokens: response.eval_count ?? estimateTokens(response.message.content),
-                timeMs,
-                truncated: false,
-                stopReason: this.determineStopReason(response, parsed.toolCalls),
-            }
+            if (this.useNativeTools) {
+                return await this.chatWithNativeTools(ollamaMessages, startTime)
+            }
+
+            return await this.chatWithXMLTools(ollamaMessages, startTime)
         } catch (error) {
             if (error instanceof Error && error.name === "AbortError") {
                 throw IpuaroError.llm("Request was aborted")
@@ -69,6 +59,131 @@ export class OllamaClient implements ILLMClient {
         }
     }
+
+    /**
+     * Chat using XML-based tool calling (legacy mode).
+     */
+    private async chatWithXMLTools(
+        ollamaMessages: Message[],
+        startTime: number,
+    ): Promise<LLMResponse> {
+        const response = await this.client.chat({
+            model: this.model,
+            messages: ollamaMessages,
+            options: {
+                temperature: this.temperature,
+            },
+            stream: false,
+        })
+
+        const timeMs = Date.now() - startTime
+        const parsed = parseToolCalls(response.message.content)
+
+        return {
+            content: parsed.content,
+            toolCalls: parsed.toolCalls,
+            tokens: response.eval_count ?? estimateTokens(response.message.content),
+            timeMs,
+            truncated: false,
+            stopReason: this.determineStopReason(response, parsed.toolCalls),
+        }
+    }
+
+    /**
+     * Chat using native Ollama tool calling.
+     */
+    private async chatWithNativeTools(
+        ollamaMessages: Message[],
+        startTime: number,
+    ): Promise<LLMResponse> {
+        const nativeTools = getOllamaNativeTools() as Tool[]
+
+        const response = await this.client.chat({
+            model: this.model,
+            messages: ollamaMessages,
+            tools: nativeTools,
+            options: {
+                temperature: this.temperature,
+            },
+            stream: false,
+        })
+
+        const timeMs = Date.now() - startTime
+        let toolCalls = this.parseNativeToolCalls(response.message.tool_calls)
+
+        // Fallback: some models return tool calls as JSON in content
+        if (toolCalls.length === 0 && response.message.content) {
+            toolCalls = this.parseToolCallsFromContent(response.message.content)
+        }
+
+        const content = toolCalls.length > 0 ? "" : response.message.content || ""
+
+        return {
+            content,
+            toolCalls,
+            tokens: response.eval_count ?? estimateTokens(response.message.content || ""),
+            timeMs,
+            truncated: false,
+            stopReason: toolCalls.length > 0 ? "tool_use" : "end",
+        }
+    }
+
+    /**
+     * Parse native Ollama tool calls into ToolCall format.
+     */
+    private parseNativeToolCalls(
+        nativeToolCalls?: { function: { name: string; arguments: Record<string, unknown> } }[],
+    ): ToolCall[] {
+        if (!nativeToolCalls || nativeToolCalls.length === 0) {
+            return []
+        }
+
+        return nativeToolCalls.map((tc, index) =>
+            createToolCall(
+                `native_${String(Date.now())}_${String(index)}`,
+                tc.function.name,
+                tc.function.arguments,
+            ),
+        )
+    }
+
+    /**
+     * Parse tool calls from content (fallback for models that return JSON in content).
+     * Supports format: {"name": "tool_name", "arguments": {...}}
+     */
+    private parseToolCallsFromContent(content: string): ToolCall[] {
+        const toolCalls: ToolCall[] = []
+
+        // Try to parse JSON objects from content
+        const jsonRegex = /\{[\s\S]*?"name"[\s\S]*?"arguments"[\s\S]*?\}/g
+        const matches = content.match(jsonRegex)
+
+        if (!matches) {
+            return toolCalls
+        }
+
+        for (const match of matches) {
+            try {
+                const parsed = JSON.parse(match) as {
+                    name?: string
+                    arguments?: Record<string, unknown>
+                }
+                if (parsed.name && typeof parsed.name === "string") {
+                    toolCalls.push(
+                        createToolCall(
+                            `json_${String(Date.now())}_${String(toolCalls.length)}`,
+                            parsed.name,
+                            parsed.arguments ?? {},
+                        ),
+                    )
+                }
+            } catch {
+                // Invalid JSON, skip
+            }
+        }
+
+        return toolCalls
+    }

     /**
      * Count tokens in text.
      * Uses estimation since Ollama doesn't provide a tokenizer endpoint.
```
||||||
|
|||||||
@@ -58,9 +58,50 @@ const VALID_TOOL_NAMES = new Set([
     "run_tests",
 ])
+
+/**
+ * Tool name aliases for common LLM typos/variations.
+ * Maps incorrect names to correct tool names.
+ */
+const TOOL_ALIASES: Record<string, string> = {
+    // get_lines aliases
+    get_functions: "get_lines",
+    read_file: "get_lines",
+    read_lines: "get_lines",
+    get_file: "get_lines",
+    read: "get_lines",
+    // get_function aliases
+    getfunction: "get_function",
+    // get_structure aliases
+    list_files: "get_structure",
+    get_files: "get_structure",
+    list_structure: "get_structure",
+    get_project_structure: "get_structure",
+    // get_todos aliases
+    find_todos: "get_todos",
+    list_todos: "get_todos",
+    // find_references aliases
+    get_references: "find_references",
+    // find_definition aliases
+    get_definition: "find_definition",
+    // edit_lines aliases
+    edit_file: "edit_lines",
+    modify_file: "edit_lines",
+    update_file: "edit_lines",
+}
+
+/**
+ * Normalize tool name using aliases.
+ */
+function normalizeToolName(name: string): string {
+    const lowerName = name.toLowerCase()
+    return TOOL_ALIASES[lowerName] ?? name
+}
+
 /**
  * Parse tool calls from LLM response text.
- * Supports XML format: <tool_call name="get_lines"><path>src/index.ts</path></tool_call>
+ * Supports both XML and JSON formats:
+ * - XML: <tool_call name="get_lines"><path>src/index.ts</path></tool_call>
+ * - JSON: {"name": "get_lines", "arguments": {"path": "src/index.ts"}}
  * Validates tool names and provides helpful error messages.
  */
 export function parseToolCalls(response: string): ParsedResponse {
@@ -68,14 +109,18 @@ export function parseToolCalls(response: string): ParsedResponse {
     const parseErrors: string[] = []
     let content = response

-    const matches = [...response.matchAll(TOOL_CALL_REGEX)]
+    // First, try XML format
+    const xmlMatches = [...response.matchAll(TOOL_CALL_REGEX)]

-    for (const match of matches) {
-        const [fullMatch, toolName, paramsXml] = match
+    for (const match of xmlMatches) {
+        const [fullMatch, rawToolName, paramsXml] = match
+
+        // Normalize tool name (handle common LLM typos/variations)
+        const toolName = normalizeToolName(rawToolName)
+
         if (!VALID_TOOL_NAMES.has(toolName)) {
             parseErrors.push(
-                `Unknown tool "${toolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
+                `Unknown tool "${rawToolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
             )
             continue
         }
@@ -91,7 +136,19 @@ export function parseToolCalls(response: string): ParsedResponse {
             content = content.replace(fullMatch, "")
         } catch (error) {
             const errorMsg = error instanceof Error ? error.message : String(error)
-            parseErrors.push(`Failed to parse tool call "${toolName}": ${errorMsg}`)
+            parseErrors.push(`Failed to parse tool call "${rawToolName}": ${errorMsg}`)
+        }
+    }
+
+    // If no XML tool calls found, try JSON format as fallback
+    if (toolCalls.length === 0) {
+        const jsonResult = parseJsonToolCalls(response)
+        toolCalls.push(...jsonResult.toolCalls)
+        parseErrors.push(...jsonResult.parseErrors)
+
+        // Remove JSON tool calls from content
+        for (const jsonMatch of jsonResult.matchedStrings) {
+            content = content.replace(jsonMatch, "")
         }
     }

@@ -105,6 +162,59 @@ export function parseToolCalls(response: string): ParsedResponse {
     }
 }

+/**
+ * JSON tool call format pattern.
+ * Matches: {"name": "tool_name", "arguments": {...}}
+ */
+const JSON_TOOL_CALL_REGEX =
+    /\{\s*"name"\s*:\s*"([^"]+)"\s*,\s*"arguments"\s*:\s*(\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\})\s*\}/g
+
+/**
+ * Parse tool calls from JSON format in response.
+ * This is a fallback for LLMs that prefer JSON over XML.
+ */
+function parseJsonToolCalls(response: string): {
+    toolCalls: ToolCall[]
+    parseErrors: string[]
+    matchedStrings: string[]
+} {
+    const toolCalls: ToolCall[] = []
+    const parseErrors: string[] = []
+    const matchedStrings: string[] = []
+
+    const matches = [...response.matchAll(JSON_TOOL_CALL_REGEX)]
+
+    for (const match of matches) {
+        const [fullMatch, rawToolName, argsJson] = match
+        matchedStrings.push(fullMatch)
+
+        // Normalize tool name
+        const toolName = normalizeToolName(rawToolName)
+
+        if (!VALID_TOOL_NAMES.has(toolName)) {
+            parseErrors.push(
+                `Unknown tool "${rawToolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
+            )
+            continue
+        }
+
+        try {
+            const args = JSON.parse(argsJson) as Record<string, unknown>
+            const toolCall = createToolCall(
+                `json_${String(Date.now())}_${String(toolCalls.length)}`,
+                toolName,
+                args,
+            )
+            toolCalls.push(toolCall)
+        } catch (error) {
+            const errorMsg = error instanceof Error ? error.message : String(error)
+            parseErrors.push(`Failed to parse JSON tool call "${rawToolName}": ${errorMsg}`)
+        }
+    }
+
+    return { toolCalls, parseErrors, matchedStrings }
+}
+
 /**
  * Parse parameters from XML content.
  */
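Taken together, the alias map and the JSON fallback in the hunks above can be exercised as a standalone sketch. This is a simplification, not the patch's real module: the `ToolCall` shape is reduced to two fields, only three aliases are reproduced, and the real code also validates names against `VALID_TOOL_NAMES` and collects parse errors.

```typescript
// Standalone sketch of the JSON tool-call fallback (simplified types, assumed subset of aliases).
interface ToolCall {
    name: string
    args: Record<string, unknown>
}

// Subset of the alias map from the patch
const TOOL_ALIASES: Record<string, string> = {
    read_file: "get_lines",
    list_files: "get_structure",
    find_todos: "get_todos",
}

function normalizeToolName(name: string): string {
    return TOOL_ALIASES[name.toLowerCase()] ?? name
}

// Same pattern as JSON_TOOL_CALL_REGEX in the patch: allows one level of
// nested braces inside the "arguments" object
const JSON_TOOL_CALL_REGEX =
    /\{\s*"name"\s*:\s*"([^"]+)"\s*,\s*"arguments"\s*:\s*(\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\})\s*\}/g

function parseJsonToolCalls(response: string): ToolCall[] {
    const calls: ToolCall[] = []
    for (const match of response.matchAll(JSON_TOOL_CALL_REGEX)) {
        const [, rawName, argsJson] = match
        try {
            calls.push({
                name: normalizeToolName(rawName),
                args: JSON.parse(argsJson) as Record<string, unknown>,
            })
        } catch {
            // Invalid arguments JSON: skip this candidate
        }
    }
    return calls
}

const calls = parseJsonToolCalls(
    'Let me check. {"name": "read_file", "arguments": {"path": "src/index.ts"}}',
)
console.log(calls[0].name) // → "get_lines" (alias resolved)
```

Note how the alias lookup runs before validation, so a model emitting `read_file` still reaches the real `get_lines` tool.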
@@ -18,105 +18,122 @@ export interface BuildContextOptions {
     includeSignatures?: boolean
     includeDepsGraph?: boolean
     includeCircularDeps?: boolean
+    includeHighImpactFiles?: boolean
     circularDeps?: string[][]
 }

 /**
  * System prompt for the ipuaro AI agent.
  */
-export const SYSTEM_PROMPT = `You are ipuaro, a local AI code assistant specialized in helping developers understand and modify their codebase. You operate within a single project directory and have access to powerful tools for reading, searching, analyzing, and editing code.
+export const SYSTEM_PROMPT = `You are ipuaro, a local AI code assistant with tools for reading, searching, analyzing, and editing code.

-## Core Principles
+## When to Use Tools

-1. **Lazy Loading**: You don't have the full code in context. Use tools to fetch exactly what you need.
-2. **Precision**: Always verify file paths and line numbers before making changes.
-3. **Safety**: Confirm destructive operations. Never execute dangerous commands.
-4. **Efficiency**: Minimize context usage. Request only necessary code sections.
+**Use tools** when the user asks about:
+- Code content (files, functions, classes)
+- Project structure
+- TODOs, complexity, dependencies
+- Git status, diffs, commits
+- Running commands or tests

-## Tool Calling Format
+**Do NOT use tools** for:
+- Greetings ("Hello", "Hi", "Thanks")
+- General questions not about this codebase
+- Clarifying questions back to the user

-When you need to use a tool, format your call as XML:
+## MANDATORY: Tools for Code Questions

-<tool_call name="tool_name">
-<param_name>value</param_name>
-<another_param>value</another_param>
-</tool_call>
+**CRITICAL:** You have ZERO code in your context. To answer ANY question about code, you MUST first call a tool.

-You can call multiple tools in one response. Always wait for tool results before making conclusions.
-
-**Examples:**
+**WRONG:**
+User: "What's in src/index.ts?"
+Assistant: "The file likely contains..." ← WRONG! Call a tool!

+**CORRECT:**
+User: "What's in src/index.ts?"
 <tool_call name="get_lines">
 <path>src/index.ts</path>
-<start>1</start>
-<end>50</end>
 </tool_call>

-<tool_call name="edit_lines">
-<path>src/utils.ts</path>
-<start>10</start>
-<end>15</end>
-<content>const newCode = "hello";</content>
+## Tool Call Format
+
+Output this XML format. Do NOT explain before calling - just output the XML:
+
+<tool_call name="TOOL_NAME">
+<param1>value1</param1>
+<param2>value2</param2>
 </tool_call>

-<tool_call name="find_references">
-<symbol>getUserById</symbol>
+## Example Interactions
+
+**Example 1 - Reading a file:**
+User: "Show me the main function in src/app.ts"
+<tool_call name="get_function">
+<path>src/app.ts</path>
+<name>main</name>
+</tool_call>
+
+**Example 2 - Finding TODOs:**
+User: "Are there any TODO comments?"
+<tool_call name="get_todos">
+</tool_call>
+
+**Example 3 - Project structure:**
+User: "What files are in this project?"
+<tool_call name="get_structure">
+<path>.</path>
 </tool_call>

 ## Available Tools

-### Reading Tools
-- \`get_lines(path, start?, end?)\`: Get specific lines from a file
-- \`get_function(path, name)\`: Get a function by name
-- \`get_class(path, name)\`: Get a class by name
-- \`get_structure(path?, depth?)\`: Get project directory structure
+### Reading
+- get_lines(path, start?, end?) - Read file lines
+- get_function(path, name) - Get function by name
+- get_class(path, name) - Get class by name
+- get_structure(path?, depth?) - List project files

-### Editing Tools (require confirmation)
-- \`edit_lines(path, start, end, content)\`: Replace specific lines in a file
-- \`create_file(path, content)\`: Create a new file
-- \`delete_file(path)\`: Delete a file
+### Analysis
+- get_todos(path?, type?) - Find TODO/FIXME comments
+- get_dependencies(path) - What this file imports
+- get_dependents(path) - What imports this file
+- get_complexity(path?) - Code complexity metrics
+- find_references(symbol) - Find all usages of a symbol
+- find_definition(symbol) - Find where symbol is defined

-### Search Tools
-- \`find_references(symbol, path?)\`: Find all usages of a symbol
-- \`find_definition(symbol)\`: Find where a symbol is defined
+### Editing (requires confirmation)
+- edit_lines(path, start, end, content) - Modify file lines
+- create_file(path, content) - Create new file
+- delete_file(path) - Delete a file

-### Analysis Tools
-- \`get_dependencies(path)\`: Get files this file imports
-- \`get_dependents(path)\`: Get files that import this file
-- \`get_complexity(path?, limit?)\`: Get complexity metrics
-- \`get_todos(path?, type?)\`: Find TODO/FIXME comments
+### Git
+- git_status() - Repository status
+- git_diff(path?, staged?) - Show changes
+- git_commit(message, files?) - Create commit

-### Git Tools
-- \`git_status()\`: Get repository status
-- \`git_diff(path?, staged?)\`: Get uncommitted changes
-- \`git_commit(message, files?)\`: Create a commit (requires confirmation)
+### Commands
+- run_command(command, timeout?) - Execute shell command
+- run_tests(path?, filter?) - Run test suite

-### Run Tools
-- \`run_command(command, timeout?)\`: Execute a shell command (security checked)
-- \`run_tests(path?, filter?, watch?)\`: Run the test suite
+## Rules

-## Response Guidelines
-
-1. **Be concise**: Don't repeat information already in context.
-2. **Show your work**: Explain what tools you're using and why.
-3. **Verify before editing**: Always read the target code before modifying it.
-4. **Handle errors gracefully**: If a tool fails, explain what went wrong and suggest alternatives.
+1. **ALWAYS call a tool first** when asked about code - you cannot see any files
+2. **Output XML directly** - don't say "I will use..." just output the tool call
+3. **Wait for results** before making conclusions
+4. **Be concise** in your responses
+5. **Verify before editing** - always read code before modifying it
+6. **Stay safe** - never execute destructive commands without user confirmation`

-## Code Editing Rules
-
-1. Always use \`get_lines\` or \`get_function\` before \`edit_lines\`.
-2. Provide exact line numbers for edits.
-3. For large changes, break into multiple small edits.
-4. After editing, suggest running tests if available.
-
-## Safety Rules
-
-1. Never execute commands that could harm the system.
-2. Never expose sensitive data (API keys, passwords).
-3. Always confirm file deletions and destructive git operations.
-4. Stay within the project directory.
-
-When you need to perform an action, use the appropriate tool. Think step by step about what information you need and which tools will provide it most efficiently.`
+/**
+ * Tool usage reminder - appended to messages to reinforce tool usage.
+ * This is added as the last system message before LLM call.
+ */
+export const TOOL_REMINDER = `⚠️ REMINDER: To answer this question, you MUST use a tool first.
+Output the <tool_call> XML directly. Do NOT describe what you will do - just call the tool.
+
+Example - if asked about a file, output:
+<tool_call name="get_lines">
+<path>the/file/path.ts</path>
+</tool_call>`

 /**
  * Build initial context from project structure and AST metadata.
@@ -132,6 +149,7 @@ export function buildInitialContext(
     const includeSignatures = options?.includeSignatures ?? true
     const includeDepsGraph = options?.includeDepsGraph ?? true
     const includeCircularDeps = options?.includeCircularDeps ?? true
+    const includeHighImpactFiles = options?.includeHighImpactFiles ?? true

     sections.push(formatProjectHeader(structure))
     sections.push(formatDirectoryTree(structure))
@@ -144,6 +162,13 @@ export function buildInitialContext(
         }
     }

+    if (includeHighImpactFiles && metas && metas.size > 0) {
+        const highImpactSection = formatHighImpactFiles(metas)
+        if (highImpactSection) {
+            sections.push(highImpactSection)
+        }
+    }
+
     if (includeCircularDeps && options?.circularDeps && options.circularDeps.length > 0) {
         const circularDepsSection = formatCircularDeps(options.circularDeps)
         if (circularDepsSection) {
@@ -568,6 +593,85 @@ export function formatCircularDeps(cycles: string[][]): string | null {
     return lines.join("\n")
 }

+/**
+ * Format high impact files table for display in context.
+ * Shows files with highest impact scores (most dependents).
+ * Includes both direct and transitive dependent counts.
+ *
+ * Format:
+ * ## High Impact Files
+ * | File | Impact | Direct | Transitive |
+ * |------|--------|--------|------------|
+ * | src/utils/validation.ts | 67% | 12 | 24 |
+ *
+ * @param metas - Map of file paths to their metadata
+ * @param limit - Maximum number of files to show (default: 10)
+ * @param minImpact - Minimum impact score to include (default: 5)
+ */
+export function formatHighImpactFiles(
+    metas: Map<string, FileMeta>,
+    limit = 10,
+    minImpact = 5,
+): string | null {
+    if (metas.size === 0) {
+        return null
+    }
+
+    // Collect files with impact score >= minImpact
+    const impactFiles: {
+        path: string
+        impact: number
+        dependents: number
+        transitive: number
+    }[] = []
+
+    for (const [path, meta] of metas) {
+        if (meta.impactScore >= minImpact) {
+            impactFiles.push({
+                path,
+                impact: meta.impactScore,
+                dependents: meta.dependents.length,
+                transitive: meta.transitiveDepCount,
+            })
+        }
+    }
+
+    if (impactFiles.length === 0) {
+        return null
+    }
+
+    // Sort by transitive count descending, then by impact, then by path
+    impactFiles.sort((a, b) => {
+        if (a.transitive !== b.transitive) {
+            return b.transitive - a.transitive
+        }
+        if (a.impact !== b.impact) {
+            return b.impact - a.impact
+        }
+        return a.path.localeCompare(b.path)
+    })
+
+    // Take top N files
+    const topFiles = impactFiles.slice(0, limit)
+
+    const lines: string[] = [
+        "## High Impact Files",
+        "",
+        "| File | Impact | Direct | Transitive |",
+        "|------|--------|--------|------------|",
+    ]
+
+    for (const file of topFiles) {
+        const shortPath = shortenPath(file.path)
+        const impact = `${String(file.impact)}%`
+        const direct = String(file.dependents)
+        const transitive = String(file.transitive)
+        lines.push(`| ${shortPath} | ${impact} | ${direct} | ${transitive} |`)
+    }
+
+    return lines.join("\n")
+}
+
 /**
  * Format line range for display.
  */
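The `formatHighImpactFiles` logic from the hunk above can be exercised standalone. This sketch is a simplification: `FileMeta` is trimmed to the three fields the table reads, and `shortenPath` is dropped so paths print as-is.

```typescript
// Standalone sketch of the high-impact-files table (trimmed FileMeta, no shortenPath).
interface FileMeta {
    impactScore: number
    dependents: string[]
    transitiveDepCount: number
}

function formatHighImpactFiles(
    metas: Map<string, FileMeta>,
    limit = 10,
    minImpact = 5,
): string | null {
    const rows = Array.from(metas.entries())
        // Keep only files at or above the impact threshold
        .filter(([, m]) => m.impactScore >= minImpact)
        .map(([path, m]) => ({
            path,
            impact: m.impactScore,
            direct: m.dependents.length,
            transitive: m.transitiveDepCount,
        }))
        // Transitive count first, then impact, then path as a stable tiebreaker
        .sort(
            (a, b) =>
                b.transitive - a.transitive ||
                b.impact - a.impact ||
                a.path.localeCompare(b.path),
        )
        .slice(0, limit)

    if (rows.length === 0) {
        return null
    }

    return [
        "## High Impact Files",
        "",
        "| File | Impact | Direct | Transitive |",
        "|------|--------|--------|------------|",
        ...rows.map((r) => `| ${r.path} | ${r.impact}% | ${r.direct} | ${r.transitive} |`),
    ].join("\n")
}

const metas = new Map<string, FileMeta>([
    ["src/utils.ts", { impactScore: 67, dependents: ["a", "b"], transitiveDepCount: 24 }],
    ["src/tiny.ts", { impactScore: 2, dependents: [], transitiveDepCount: 0 }],
])
console.log(formatHighImpactFiles(metas))
```

Returning `null` rather than an empty table lets `buildInitialContext` skip the section entirely when nothing clears the threshold.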
@@ -509,3 +509,87 @@ export function getToolsByCategory(category: string): ToolDef[] {
     return []
 }
 }
+
+/*
+ * =============================================================================
+ * Native Ollama Tools Format
+ * =============================================================================
+ */
+
+/**
+ * Ollama native tool definition format.
+ */
+export interface OllamaTool {
+    type: "function"
+    function: {
+        name: string
+        description: string
+        parameters: {
+            type: "object"
+            properties: Record<string, OllamaToolProperty>
+            required: string[]
+        }
+    }
+}
+
+interface OllamaToolProperty {
+    type: string
+    description: string
+    enum?: string[]
+    items?: { type: string }
+}
+
+/**
+ * Convert ToolDef to Ollama native format.
+ */
+function convertToOllamaTool(tool: ToolDef): OllamaTool {
+    const properties: Record<string, OllamaToolProperty> = {}
+    const required: string[] = []
+
+    for (const param of tool.parameters) {
+        const prop: OllamaToolProperty = {
+            type: param.type === "array" ? "array" : param.type,
+            description: param.description,
+        }
+
+        if (param.enum) {
+            prop.enum = param.enum
+        }
+
+        if (param.type === "array") {
+            prop.items = { type: "string" }
+        }
+
+        properties[param.name] = prop
+
+        if (param.required) {
+            required.push(param.name)
+        }
+    }
+
+    return {
+        type: "function",
+        function: {
+            name: tool.name,
+            description: tool.description,
+            parameters: {
+                type: "object",
+                properties,
+                required,
+            },
+        },
+    }
+}
+
+/**
+ * All tools in Ollama native format.
+ * Used when useNativeTools is enabled.
+ */
+export const OLLAMA_NATIVE_TOOLS: OllamaTool[] = ALL_TOOLS.map(convertToOllamaTool)
+
+/**
+ * Get native tool definitions for Ollama.
+ */
+export function getOllamaNativeTools(): OllamaTool[] {
+    return OLLAMA_NATIVE_TOOLS
+}
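The `convertToOllamaTool` hunk above maps each tool parameter into a JSON-Schema-style property object. A standalone sketch of that mapping, with `ToolDef` and `ToolParam` reduced to only the fields the converter reads (an assumption; the real interfaces carry more):

```typescript
// Standalone sketch of ToolDef → Ollama native tool conversion (reduced interfaces).
interface ToolParam {
    name: string
    type: string
    description: string
    required?: boolean
    enum?: string[]
}

interface ToolDef {
    name: string
    description: string
    parameters: ToolParam[]
}

function convertToOllamaTool(tool: ToolDef) {
    const properties: Record<string, unknown> = {}
    const required: string[] = []

    for (const param of tool.parameters) {
        properties[param.name] = {
            type: param.type,
            description: param.description,
            // Only emit optional schema fields when present
            ...(param.enum ? { enum: param.enum } : {}),
            ...(param.type === "array" ? { items: { type: "string" } } : {}),
        }
        if (param.required) {
            required.push(param.name)
        }
    }

    return {
        type: "function" as const,
        function: {
            name: tool.name,
            description: tool.description,
            parameters: { type: "object" as const, properties, required },
        },
    }
}

const tool = convertToOllamaTool({
    name: "get_lines",
    description: "Read file lines",
    parameters: [
        { name: "path", type: "string", description: "File path", required: true },
        { name: "start", type: "number", description: "First line" },
    ],
})
console.log(tool.function.parameters.required) // → contains only "path"
```

Optional parameters stay out of `required`, so the model may omit them in its native tool calls.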
@@ -20,6 +20,7 @@ export const LLMConfigSchema = z.object({
     temperature: z.number().min(0).max(2).default(0.1),
     host: z.string().default("http://localhost:11434"),
     timeout: z.number().int().positive().default(120_000),
+    useNativeTools: z.boolean().default(false),
 })

 /**
@@ -117,6 +118,7 @@ export const ContextConfigSchema = z.object({
     includeSignatures: z.boolean().default(true),
     includeDepsGraph: z.boolean().default(true),
     includeCircularDeps: z.boolean().default(true),
+    includeHighImpactFiles: z.boolean().default(true),
 })

 /**
packages/ipuaro/tests/e2e/full-workflow.test.ts (new file, 1506 lines): diff suppressed because it is too large
packages/ipuaro/tests/e2e/test-helpers.ts (new file, 351 lines):
@@ -0,0 +1,351 @@
+/**
+ * E2E Test Helpers
+ * Provides dependencies for testing the full flow with REAL LLM.
+ */
+
+import { vi } from "vitest"
+import * as fs from "node:fs/promises"
+import * as path from "node:path"
+import * as os from "node:os"
+import type { IStorage, SymbolIndex, DepsGraph } from "../../src/domain/services/IStorage.js"
+import type { ISessionStorage, SessionListItem } from "../../src/domain/services/ISessionStorage.js"
+import type { FileData } from "../../src/domain/value-objects/FileData.js"
+import type { FileAST } from "../../src/domain/value-objects/FileAST.js"
+import type { FileMeta } from "../../src/domain/value-objects/FileMeta.js"
+import type { UndoEntry } from "../../src/domain/value-objects/UndoEntry.js"
+import { Session } from "../../src/domain/entities/Session.js"
+import { ToolRegistry } from "../../src/infrastructure/tools/registry.js"
+import { OllamaClient } from "../../src/infrastructure/llm/OllamaClient.js"
+import { registerAllTools } from "../../src/cli/commands/tools-setup.js"
+import type { LLMConfig } from "../../src/shared/constants/config.js"
+
+/**
+ * Default LLM config for tests.
+ */
+export const DEFAULT_TEST_LLM_CONFIG: LLMConfig = {
+    model: "qwen2.5-coder:14b-instruct-q4_K_M",
+    contextWindow: 128_000,
+    temperature: 0.1,
+    host: "http://localhost:11434",
+    timeout: 180_000,
+    useNativeTools: true,
+}
+
+/**
+ * In-memory storage implementation for testing.
+ * Stores all data in Maps, no Redis required.
+ */
+export function createInMemoryStorage(): IStorage {
+    const files = new Map<string, FileData>()
+    const asts = new Map<string, FileAST>()
+    const metas = new Map<string, FileMeta>()
+    let symbolIndex: SymbolIndex = new Map()
+    let depsGraph: DepsGraph = { imports: new Map(), importedBy: new Map() }
+    const projectConfig = new Map<string, unknown>()
+    let connected = false
+
+    return {
+        getFile: vi.fn(async (filePath: string) => files.get(filePath) ?? null),
+        setFile: vi.fn(async (filePath: string, data: FileData) => {
+            files.set(filePath, data)
+        }),
+        deleteFile: vi.fn(async (filePath: string) => {
+            files.delete(filePath)
+        }),
+        getAllFiles: vi.fn(async () => new Map(files)),
+        getFileCount: vi.fn(async () => files.size),
+
+        getAST: vi.fn(async (filePath: string) => asts.get(filePath) ?? null),
+        setAST: vi.fn(async (filePath: string, ast: FileAST) => {
+            asts.set(filePath, ast)
+        }),
+        deleteAST: vi.fn(async (filePath: string) => {
+            asts.delete(filePath)
+        }),
+        getAllASTs: vi.fn(async () => new Map(asts)),
+
+        getMeta: vi.fn(async (filePath: string) => metas.get(filePath) ?? null),
+        setMeta: vi.fn(async (filePath: string, meta: FileMeta) => {
+            metas.set(filePath, meta)
+        }),
+        deleteMeta: vi.fn(async (filePath: string) => {
+            metas.delete(filePath)
+        }),
+        getAllMetas: vi.fn(async () => new Map(metas)),
+
+        getSymbolIndex: vi.fn(async () => symbolIndex),
+        setSymbolIndex: vi.fn(async (index: SymbolIndex) => {
+            symbolIndex = index
+        }),
+        getDepsGraph: vi.fn(async () => depsGraph),
+        setDepsGraph: vi.fn(async (graph: DepsGraph) => {
+            depsGraph = graph
+        }),
+
+        getProjectConfig: vi.fn(async (key: string) => projectConfig.get(key) ?? null),
+        setProjectConfig: vi.fn(async (key: string, value: unknown) => {
+            projectConfig.set(key, value)
+        }),
+
+        connect: vi.fn(async () => {
+            connected = true
+        }),
+        disconnect: vi.fn(async () => {
+            connected = false
+        }),
+        isConnected: vi.fn(() => connected),
+        clear: vi.fn(async () => {
+            files.clear()
+            asts.clear()
+            metas.clear()
+            symbolIndex = new Map()
+            depsGraph = { imports: new Map(), importedBy: new Map() }
+            projectConfig.clear()
+        }),
+    }
+}
+
+/**
+ * In-memory session storage for testing.
+ */
+export function createInMemorySessionStorage(): ISessionStorage {
+    const sessions = new Map<string, Session>()
+    const undoStacks = new Map<string, UndoEntry[]>()
+
+    return {
+        saveSession: vi.fn(async (session: Session) => {
+            sessions.set(session.id, session)
+        }),
+        loadSession: vi.fn(async (sessionId: string) => sessions.get(sessionId) ?? null),
+        deleteSession: vi.fn(async (sessionId: string) => {
+            sessions.delete(sessionId)
+            undoStacks.delete(sessionId)
+        }),
+        listSessions: vi.fn(async (projectName?: string): Promise<SessionListItem[]> => {
+            const items: SessionListItem[] = []
+            for (const session of sessions.values()) {
+                if (!projectName || session.projectName === projectName) {
+                    items.push({
+                        id: session.id,
+                        projectName: session.projectName,
+                        createdAt: session.createdAt,
+                        lastActivityAt: session.lastActivityAt,
+                        messageCount: session.history.length,
+                    })
+                }
+            }
+            return items
+        }),
+        getLatestSession: vi.fn(async (projectName: string) => {
+            let latest: Session | null = null
+            for (const session of sessions.values()) {
+                if (session.projectName === projectName) {
+                    if (!latest || session.lastActivityAt > latest.lastActivityAt) {
+                        latest = session
+                    }
+                }
+            }
+            return latest
+        }),
+        sessionExists: vi.fn(async (sessionId: string) => sessions.has(sessionId)),
+        pushUndoEntry: vi.fn(async (sessionId: string, entry: UndoEntry) => {
+            const stack = undoStacks.get(sessionId) ?? []
+            stack.push(entry)
+            undoStacks.set(sessionId, stack)
+        }),
+        popUndoEntry: vi.fn(async (sessionId: string) => {
+            const stack = undoStacks.get(sessionId) ?? []
+            return stack.pop() ?? null
+        }),
+        getUndoStack: vi.fn(async (sessionId: string) => undoStacks.get(sessionId) ?? []),
+        touchSession: vi.fn(async (sessionId: string) => {
+            const session = sessions.get(sessionId)
+            if (session) {
+                session.lastActivityAt = Date.now()
+            }
+        }),
+        clearAllSessions: vi.fn(async () => {
+            sessions.clear()
+            undoStacks.clear()
+        }),
+    }
+}
+
+/**
+ * Create REAL Ollama client for E2E tests.
+ */
+export function createRealOllamaClient(config?: Partial<LLMConfig>): OllamaClient {
+    return new OllamaClient({
+        ...DEFAULT_TEST_LLM_CONFIG,
+        ...config,
+    })
+}
+
+/**
+ * Create a tool registry with all 18 tools registered.
+ */
+export function createRealToolRegistry(): ToolRegistry {
+    const registry = new ToolRegistry()
+    registerAllTools(registry)
+    return registry
+}
+
+/**
+ * Create a new test session.
+ */
+export function createTestSession(projectName = "test-project"): Session {
+    return new Session(`test-${Date.now()}`, projectName)
+}
+
+/**
+ * Create a temporary test project directory with sample files.
+ */
+export async function createTestProject(): Promise<string> {
+    const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "ipuaro-e2e-"))
+
+    await fs.mkdir(path.join(tempDir, "src"), { recursive: true })
+
+    await fs.writeFile(
+        path.join(tempDir, "src", "index.ts"),
+        `/**
+ * Main entry point
+ */
+export function main(): void {
+    console.log("Hello, world!")
+}
+
+export function add(a: number, b: number): number {
+    return a + b
+}
+
+export function multiply(a: number, b: number): number {
+    return a * b
+}
+
+// TODO: Add more math functions
+main()
+`,
+    )
+
+    await fs.writeFile(
+        path.join(tempDir, "src", "utils.ts"),
+        `/**
+ * Utility functions
+ */
+import { add } from "./index.js"
+
+export function sum(numbers: number[]): number {
+    return numbers.reduce((acc, n) => add(acc, n), 0)
+}
+
+export class Calculator {
+    private result: number = 0
+
+    add(n: number): this {
+        this.result += n
+        return this
|
}
|
||||||
|
|
||||||
|
subtract(n: number): this {
|
||||||
|
this.result -= n
|
||||||
|
return this
|
||||||
|
}
|
||||||
|
|
||||||
|
getResult(): number {
|
||||||
|
return this.result
|
||||||
|
}
|
||||||
|
|
||||||
|
reset(): void {
|
||||||
|
this.result = 0
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// FIXME: Handle edge cases for negative numbers
|
||||||
|
`,
|
||||||
|
)
|
||||||
|
|
||||||
|
await fs.writeFile(
|
||||||
|
path.join(tempDir, "package.json"),
|
||||||
|
JSON.stringify(
|
||||||
|
{
|
||||||
|
name: "test-project",
|
||||||
|
version: "1.0.0",
|
||||||
|
type: "module",
|
||||||
|
scripts: {
|
||||||
|
test: "echo 'Tests passed!'",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
null,
|
||||||
|
4,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
|
||||||
|
await fs.writeFile(
|
||||||
|
path.join(tempDir, "README.md"),
|
||||||
|
`# Test Project
|
||||||
|
|
||||||
|
A sample project for E2E testing.
|
||||||
|
|
||||||
|
## Features
|
||||||
|
- Basic math functions
|
||||||
|
- Calculator class
|
||||||
|
`,
|
||||||
|
)
|
||||||
|
|
||||||
|
return tempDir
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Clean up test project directory.
|
||||||
|
*/
|
||||||
|
export async function cleanupTestProject(projectDir: string): Promise<void> {
|
||||||
|
await fs.rm(projectDir, { recursive: true, force: true })
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* All test dependencies bundled together.
|
||||||
|
*/
|
||||||
|
export interface E2ETestDependencies {
|
||||||
|
storage: IStorage
|
||||||
|
sessionStorage: ISessionStorage
|
||||||
|
llm: OllamaClient
|
||||||
|
tools: ToolRegistry
|
||||||
|
session: Session
|
||||||
|
projectRoot: string
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Create all dependencies for E2E testing with REAL Ollama.
|
||||||
|
*/
|
||||||
|
export async function createE2ETestDependencies(
|
||||||
|
llmConfig?: Partial<LLMConfig>,
|
||||||
|
): Promise<E2ETestDependencies> {
|
||||||
|
const projectRoot = await createTestProject()
|
||||||
|
|
||||||
|
return {
|
||||||
|
storage: createInMemoryStorage(),
|
||||||
|
sessionStorage: createInMemorySessionStorage(),
|
||||||
|
llm: createRealOllamaClient(llmConfig),
|
||||||
|
tools: createRealToolRegistry(),
|
||||||
|
session: createTestSession(),
|
||||||
|
projectRoot,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Check if Ollama is available.
|
||||||
|
*/
|
||||||
|
export async function isOllamaAvailable(): Promise<boolean> {
|
||||||
|
const client = createRealOllamaClient()
|
||||||
|
return client.isAvailable()
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Check if required model is available.
|
||||||
|
*/
|
||||||
|
export async function isModelAvailable(
|
||||||
|
model = "qwen2.5-coder:14b-instruct-q4_K_M",
|
||||||
|
): Promise<boolean> {
|
||||||
|
const client = createRealOllamaClient()
|
||||||
|
return client.hasModel(model)
|
||||||
|
}
|
||||||
@@ -1,5 +1,9 @@
import { describe, it, expect } from "vitest"
import {
    calculateImpactScore,
    createFileMeta,
    isHubFile,
} from "../../../../src/domain/value-objects/FileMeta.js"

describe("FileMeta", () => {
    describe("createFileMeta", () => {
@@ -15,6 +19,7 @@ describe("FileMeta", () => {
            expect(meta.isHub).toBe(false)
            expect(meta.isEntryPoint).toBe(false)
            expect(meta.fileType).toBe("unknown")
            expect(meta.impactScore).toBe(0)
        })

        it("should merge partial values", () => {
@@ -42,4 +47,51 @@ describe("FileMeta", () => {
            expect(isHubFile(0)).toBe(false)
        })
    })

    describe("calculateImpactScore", () => {
        it("should return 0 for file with 0 dependents", () => {
            expect(calculateImpactScore(0, 10)).toBe(0)
        })

        it("should return 0 when totalFiles is 0", () => {
            expect(calculateImpactScore(5, 0)).toBe(0)
        })

        it("should return 0 when totalFiles is 1", () => {
            expect(calculateImpactScore(0, 1)).toBe(0)
        })

        it("should calculate correct percentage", () => {
            // 5 dependents out of 10 files (excluding itself = 9 possible)
            // 5/9 * 100 = 55.56 → rounded to 56
            expect(calculateImpactScore(5, 10)).toBe(56)
        })

        it("should return 100 when all other files depend on it", () => {
            // 9 dependents out of 10 files (9 possible dependents)
            expect(calculateImpactScore(9, 10)).toBe(100)
        })

        it("should cap at 100", () => {
            // Edge case: more dependents than possible (shouldn't happen normally)
            expect(calculateImpactScore(20, 10)).toBe(100)
        })

        it("should round the percentage", () => {
            // 1 dependent out of 3 files (2 possible)
            // 1/2 * 100 = 50
            expect(calculateImpactScore(1, 3)).toBe(50)
        })

        it("should calculate impact for small projects", () => {
            // 1 dependent out of 2 files (1 possible)
            expect(calculateImpactScore(1, 2)).toBe(100)
        })

        it("should calculate impact for larger projects", () => {
            // 50 dependents out of 100 files (99 possible)
            // 50/99 * 100 = 50.51 → rounded to 51
            expect(calculateImpactScore(50, 100)).toBe(51)
        })
    })
})
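The expected values in the `calculateImpactScore` tests above fully pin down the scoring rule: a file's impact is the rounded percentage of *other* files that directly depend on it, capped at 100. A minimal sketch consistent with those cases (hypothetical; the real implementation in `FileMeta.ts` may be structured differently):

```typescript
// Sketch only: impact = share of other files that directly depend on this file.
function calculateImpactScore(dependentCount: number, totalFiles: number): number {
    // No dependents, or no other files that could depend on it → no impact.
    if (dependentCount === 0 || totalFiles <= 1) {
        return 0
    }
    const possibleDependents = totalFiles - 1 // every file except itself
    const percentage = (dependentCount / possibleDependents) * 100
    return Math.min(100, Math.round(percentage))
}
```

For example, 5 dependents among 10 files gives 5/9 ≈ 55.6, which rounds to 56, matching the test case above.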
@@ -3,6 +3,7 @@ import { MetaAnalyzer } from "../../../../src/infrastructure/indexer/MetaAnalyze
import { ASTParser } from "../../../../src/infrastructure/indexer/ASTParser.js"
import type { FileAST } from "../../../../src/domain/value-objects/FileAST.js"
import { createEmptyFileAST } from "../../../../src/domain/value-objects/FileAST.js"
import { createFileMeta, type FileMeta } from "../../../../src/domain/value-objects/FileMeta.js"

describe("MetaAnalyzer", () => {
    let analyzer: MetaAnalyzer
@@ -737,4 +738,368 @@ export function createComponent(): MyComponent {
            expect(meta.fileType).toBe("source")
        })
    })

    describe("computeTransitiveCounts", () => {
        it("should compute transitive dependents for a simple chain", () => {
            // A -> B -> C (A depends on B, B depends on C)
            // So C has transitive dependents: B, A
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/b.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/c.ts"],
                    dependents: ["/project/a.ts"],
                }),
            )
            metas.set(
                "/project/c.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/b.ts"],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2) // B and A
            expect(metas.get("/project/b.ts")!.transitiveDepCount).toBe(1) // A
            expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0) // none
        })

        it("should compute transitive dependencies for a simple chain", () => {
            // A -> B -> C (A depends on B, B depends on C)
            // So A has transitive dependencies: B, C
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/b.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/c.ts"],
                    dependents: ["/project/a.ts"],
                }),
            )
            metas.set(
                "/project/c.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/b.ts"],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(2) // B and C
            expect(metas.get("/project/b.ts")!.transitiveDepByCount).toBe(1) // C
            expect(metas.get("/project/c.ts")!.transitiveDepByCount).toBe(0) // none
        })

        it("should handle diamond dependency pattern", () => {
            //     A
            //    / \
            //   B   C
            //    \ /
            //     D
            // A depends on B and C, both depend on D
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/b.ts", "/project/c.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/d.ts"],
                    dependents: ["/project/a.ts"],
                }),
            )
            metas.set(
                "/project/c.ts",
                createFileMeta({
                    dependencies: ["/project/d.ts"],
                    dependents: ["/project/a.ts"],
                }),
            )
            metas.set(
                "/project/d.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/b.ts", "/project/c.ts"],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            // D is depended on by B, C, and transitively by A
            expect(metas.get("/project/d.ts")!.transitiveDepCount).toBe(3)
            // A depends on B, C, and transitively on D
            expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(3)
        })

        it("should handle circular dependencies gracefully", () => {
            // A -> B -> C -> A (circular)
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/b.ts"],
                    dependents: ["/project/c.ts"],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/c.ts"],
                    dependents: ["/project/a.ts"],
                }),
            )
            metas.set(
                "/project/c.ts",
                createFileMeta({
                    dependencies: ["/project/a.ts"],
                    dependents: ["/project/b.ts"],
                }),
            )

            // Should not throw, should handle cycles
            analyzer.computeTransitiveCounts(metas)

            // Each file has the other 2 as transitive dependents
            expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(2)
            expect(metas.get("/project/b.ts")!.transitiveDepCount).toBe(2)
            expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2)
        })

        it("should return 0 for files with no dependencies", () => {
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: [],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0)
            expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(0)
        })

        it("should handle empty metas map", () => {
            const metas = new Map<string, FileMeta>()
            // Should not throw
            expect(() => analyzer.computeTransitiveCounts(metas)).not.toThrow()
        })

        it("should handle single file", () => {
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: [],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0)
            expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(0)
        })

        it("should handle multiple roots depending on same leaf", () => {
            // A -> C, B -> C
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/c.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/c.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/c.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/a.ts", "/project/b.ts"],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2) // A and B
            expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(1) // C
            expect(metas.get("/project/b.ts")!.transitiveDepByCount).toBe(1) // C
        })

        it("should handle deep dependency chains", () => {
            // A -> B -> C -> D -> E
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/b.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/c.ts"],
                    dependents: ["/project/a.ts"],
                }),
            )
            metas.set(
                "/project/c.ts",
                createFileMeta({
                    dependencies: ["/project/d.ts"],
                    dependents: ["/project/b.ts"],
                }),
            )
            metas.set(
                "/project/d.ts",
                createFileMeta({
                    dependencies: ["/project/e.ts"],
                    dependents: ["/project/c.ts"],
                }),
            )
            metas.set(
                "/project/e.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/d.ts"],
                }),
            )

            analyzer.computeTransitiveCounts(metas)

            // E has transitive dependents: D, C, B, A
            expect(metas.get("/project/e.ts")!.transitiveDepCount).toBe(4)
            // A has transitive dependencies: B, C, D, E
            expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(4)
        })
    })

    describe("getTransitiveDependents", () => {
        it("should return empty set for file not in metas", () => {
            const metas = new Map<string, FileMeta>()
            const cache = new Map<string, Set<string>>()

            const result = analyzer.getTransitiveDependents("/project/unknown.ts", metas, cache)

            expect(result.size).toBe(0)
        })

        it("should use cache for repeated calls", () => {
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/b.ts"],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: ["/project/a.ts"],
                    dependents: [],
                }),
            )

            const cache = new Map<string, Set<string>>()
            const result1 = analyzer.getTransitiveDependents("/project/a.ts", metas, cache)
            const result2 = analyzer.getTransitiveDependents("/project/a.ts", metas, cache)

            // Should return same instance from cache
            expect(result1).toBe(result2)
            expect(result1.size).toBe(1)
        })
    })

    describe("getTransitiveDependencies", () => {
        it("should return empty set for file not in metas", () => {
            const metas = new Map<string, FileMeta>()
            const cache = new Map<string, Set<string>>()

            const result = analyzer.getTransitiveDependencies("/project/unknown.ts", metas, cache)

            expect(result.size).toBe(0)
        })

        it("should use cache for repeated calls", () => {
            const metas = new Map<string, FileMeta>()
            metas.set(
                "/project/a.ts",
                createFileMeta({
                    dependencies: ["/project/b.ts"],
                    dependents: [],
                }),
            )
            metas.set(
                "/project/b.ts",
                createFileMeta({
                    dependencies: [],
                    dependents: ["/project/a.ts"],
                }),
            )

            const cache = new Map<string, Set<string>>()
            const result1 = analyzer.getTransitiveDependencies("/project/a.ts", metas, cache)
            const result2 = analyzer.getTransitiveDependencies("/project/a.ts", metas, cache)

            // Should return same instance from cache
            expect(result1).toBe(result2)
            expect(result1.size).toBe(1)
        })
    })

    describe("analyzeAll with transitive counts", () => {
        it("should compute transitive counts in analyzeAll", () => {
            const files = new Map<string, { ast: FileAST; content: string }>()

            // A imports B, B imports C
            const aContent = `import { b } from "./b"`
            const aAST = parser.parse(aContent, "ts")
            files.set("/project/src/a.ts", { ast: aAST, content: aContent })

            const bContent = `import { c } from "./c"\nexport const b = () => c()`
            const bAST = parser.parse(bContent, "ts")
            files.set("/project/src/b.ts", { ast: bAST, content: bContent })

            const cContent = `export const c = () => 42`
            const cAST = parser.parse(cContent, "ts")
            files.set("/project/src/c.ts", { ast: cAST, content: cContent })

            const results = analyzer.analyzeAll(files)

            // C has transitive dependents: B and A
            expect(results.get("/project/src/c.ts")!.transitiveDepCount).toBe(2)
            // A has transitive dependencies: B and C
            expect(results.get("/project/src/a.ts")!.transitiveDepByCount).toBe(2)
        })
    })
})
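The diamond, cycle, and deep-chain cases above all follow from one traversal: walk the `dependents` edges with a visited set so cycles terminate, and never count the starting file in its own result. A rough sketch of such a walk (illustrative names and shape; not the actual `MetaAnalyzer` internals, which also memoize per-file results in a cache):

```typescript
interface MetaLike {
    dependents: string[]
}

// Collect every file that directly or transitively depends on `file`.
function transitiveDependents(file: string, metas: Map<string, MetaLike>): Set<string> {
    const seen = new Set<string>([file]) // pre-seeding the root guards against cycles
    const result = new Set<string>()
    const stack = [file]
    while (stack.length > 0) {
        const current = stack.pop()!
        const meta = metas.get(current)
        if (!meta) continue
        for (const dep of meta.dependents) {
            if (!seen.has(dep)) {
                seen.add(dep)
                result.add(dep)
                stack.push(dep)
            }
        }
    }
    return result // never contains `file` itself, even in a cycle
}
```

Transitive dependencies would be the same walk over the `dependencies` edges; the counts stored on each `FileMeta` are then just the sizes of these sets.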
@@ -135,6 +135,108 @@ describe("ResponseParser", () => {
        expect(result.parseErrors[0]).toContain("unknown_tool")
    })

    it("should normalize tool name aliases", () => {
        // get_functions -> get_lines (common LLM typo)
        const response1 = `<tool_call name="get_functions"><path>src/index.ts</path></tool_call>`
        const result1 = parseToolCalls(response1)
        expect(result1.toolCalls).toHaveLength(1)
        expect(result1.toolCalls[0].name).toBe("get_lines")
        expect(result1.hasParseErrors).toBe(false)

        // read_file -> get_lines
        const response2 = `<tool_call name="read_file"><path>test.ts</path></tool_call>`
        const result2 = parseToolCalls(response2)
        expect(result2.toolCalls).toHaveLength(1)
        expect(result2.toolCalls[0].name).toBe("get_lines")

        // find_todos -> get_todos
        const response3 = `<tool_call name="find_todos"></tool_call>`
        const result3 = parseToolCalls(response3)
        expect(result3.toolCalls).toHaveLength(1)
        expect(result3.toolCalls[0].name).toBe("get_todos")

        // list_files -> get_structure
        const response4 = `<tool_call name="list_files"><path>.</path></tool_call>`
        const result4 = parseToolCalls(response4)
        expect(result4.toolCalls).toHaveLength(1)
        expect(result4.toolCalls[0].name).toBe("get_structure")
    })

    // JSON format tests
    it("should parse JSON format tool calls as fallback", () => {
        const response = `{"name": "get_lines", "arguments": {"path": "src/index.ts"}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
        expect(result.toolCalls[0].params).toEqual({ path: "src/index.ts" })
        expect(result.hasParseErrors).toBe(false)
    })

    it("should parse JSON format with numeric arguments", () => {
        const response = `{"name": "get_lines", "arguments": {"path": "src/index.ts", "start": 1, "end": 50}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].params).toEqual({
            path: "src/index.ts",
            start: 1,
            end: 50,
        })
    })

    it("should parse JSON format with surrounding text", () => {
        const response = `I'll read the file for you:
{"name": "get_lines", "arguments": {"path": "src/index.ts"}}
Let me know if you need more.`

        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
        expect(result.content).toContain("I'll read the file for you:")
        expect(result.content).toContain("Let me know if you need more.")
    })

    it("should normalize tool name aliases in JSON format", () => {
        // read_file -> get_lines
        const response = `{"name": "read_file", "arguments": {"path": "test.ts"}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
    })

    it("should reject unknown tool names in JSON format", () => {
        const response = `{"name": "unknown_tool", "arguments": {"path": "test.ts"}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(0)
        expect(result.hasParseErrors).toBe(true)
        expect(result.parseErrors[0]).toContain("unknown_tool")
    })

    it("should prefer XML over JSON when both present", () => {
        const response = `<tool_call name="get_lines"><path>xml.ts</path></tool_call>
{"name": "get_function", "arguments": {"path": "json.ts", "name": "foo"}}`

        const result = parseToolCalls(response)

        // Should only parse XML since it was found first
        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("get_lines")
        expect(result.toolCalls[0].params.path).toBe("xml.ts")
    })

    it("should parse JSON with empty arguments", () => {
        const response = `{"name": "git_status", "arguments": {}}`
        const result = parseToolCalls(response)

        expect(result.toolCalls).toHaveLength(1)
        expect(result.toolCalls[0].name).toBe("git_status")
        expect(result.toolCalls[0].params).toEqual({})
    })

    it("should support CDATA for multiline content", () => {
        const response = `<tool_call name="edit_lines">
<path>src/index.ts</path>
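The JSON-fallback tests above imply an ordering: XML tool calls are extracted first, and only when none are found does the parser scan the response for a `{"name": …, "arguments": {…}}` object and normalize aliases. A rough sketch of that fallback step (the alias table and regex here are assumptions for illustration, not the actual `ResponseParser` code):

```typescript
// Illustrative subset of the alias table exercised by the tests.
const TOOL_ALIASES: Record<string, string> = {
    read_file: "get_lines",
    get_functions: "get_lines",
    read_lines: "get_lines",
    list_files: "get_structure",
    get_files: "get_structure",
    find_todos: "get_todos",
}

interface ParsedToolCall {
    name: string
    params: Record<string, unknown>
}

// Find the first {"name": ..., "arguments": {...}} object in free-form text.
function extractJsonToolCall(response: string): ParsedToolCall | null {
    const match = response.match(/\{\s*"name"\s*:\s*"[^"]+"\s*,\s*"arguments"\s*:\s*\{[^}]*\}\s*\}/)
    if (!match) {
        return null
    }
    try {
        const parsed = JSON.parse(match[0]) as { name: string; arguments: Record<string, unknown> }
        return { name: TOOL_ALIASES[parsed.name] ?? parsed.name, params: parsed.arguments }
    } catch {
        return null
    }
}
```

In the real parser the normalized name is then validated against the known tools, which is what produces the `unknown_tool` parse error asserted above.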
@@ -6,6 +6,7 @@ import {
    truncateContext,
    formatDependencyGraph,
    formatCircularDeps,
    formatHighImpactFiles,
    type ProjectStructure,
} from "../../../../src/infrastructure/llm/prompts.js"
import type { FileAST } from "../../../../src/domain/value-objects/FileAST.js"
@@ -18,10 +19,16 @@ describe("prompts", () => {
        expect(SYSTEM_PROMPT.length).toBeGreaterThan(100)
    })

    it("should contain mandatory tool usage instructions", () => {
        expect(SYSTEM_PROMPT).toContain("MANDATORY")
        expect(SYSTEM_PROMPT).toContain("Tools for Code Questions")
        expect(SYSTEM_PROMPT).toContain("ZERO code in your context")
    })

    it("should contain when to use and when not to use tools", () => {
        expect(SYSTEM_PROMPT).toContain("When to Use Tools")
        expect(SYSTEM_PROMPT).toContain("Do NOT use tools")
        expect(SYSTEM_PROMPT).toContain("Greetings")
    })

    it("should list available tools", () => {
@@ -33,8 +40,9 @@ describe("prompts", () => {
    })

    it("should include safety rules", () => {
        expect(SYSTEM_PROMPT).toContain("Stay safe")
        expect(SYSTEM_PROMPT).toContain("destructive commands")
        expect(SYSTEM_PROMPT).toContain("Verify before editing")
    })
})

@@ -2395,6 +2403,345 @@ describe("prompts", () => {
        })
    })

    describe("high impact files (0.29.0)", () => {
        describe("formatHighImpactFiles", () => {
            it("should return null for empty metas", () => {
                const metas = new Map<string, FileMeta>()

                const result = formatHighImpactFiles(metas)

                expect(result).toBeNull()
            })

            it("should return null when no files have impact score >= minImpact", () => {
                const metas = new Map<string, FileMeta>([
                    [
                        "src/low.ts",
                        {
                            complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
                            dependencies: [],
                            dependents: [],
                            isHub: false,
                            isEntryPoint: false,
                            fileType: "source",
                            impactScore: 2,
                            transitiveDepCount: 0,
                            transitiveDepByCount: 0,
                        },
                    ],
                ])

                const result = formatHighImpactFiles(metas)

                expect(result).toBeNull()
            })

            it("should format file with high impact score and transitive counts", () => {
                const metas = new Map<string, FileMeta>([
                    [
                        "src/utils/validation.ts",
                        {
                            complexity: { loc: 50, nesting: 2, cyclomaticComplexity: 5, score: 30 },
                            dependencies: [],
                            dependents: [
                                "a.ts",
                                "b.ts",
                                "c.ts",
                                "d.ts",
                                "e.ts",
                                "f.ts",
                                "g.ts",
                                "h.ts",
                                "i.ts",
                                "j.ts",
                                "k.ts",
                                "l.ts",
                            ],
                            isHub: true,
                            isEntryPoint: false,
                            fileType: "source",
                            impactScore: 67,
                            transitiveDepCount: 24,
                            transitiveDepByCount: 0,
                        },
                    ],
                ])

                const result = formatHighImpactFiles(metas)

                expect(result).toContain("## High Impact Files")
                expect(result).toContain("| File | Impact | Direct | Transitive |")
                expect(result).toContain("| utils/validation | 67% | 12 | 24 |")
            })

            it("should sort by transitive count descending, then by impact", () => {
                const metas = new Map<string, FileMeta>([
                    [
                        "src/low.ts",
                        {
                            complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
                            dependencies: [],
                            dependents: ["a.ts"],
                            isHub: false,
                            isEntryPoint: false,
                            fileType: "source",
                            impactScore: 10,
                            transitiveDepCount: 5,
                            transitiveDepByCount: 0,
                        },
                    ],
                    [
                        "src/high.ts",
                        {
                            complexity: { loc: 20, nesting: 1, cyclomaticComplexity: 1, score: 10 },
                            dependencies: [],
                            dependents: ["a.ts", "b.ts", "c.ts", "d.ts", "e.ts"],
                            isHub: false,
                            isEntryPoint: false,
                            fileType: "source",
                            impactScore: 50,
                            transitiveDepCount: 15,
                            transitiveDepByCount: 0,
                        },
                    ],
                ])

                const result = formatHighImpactFiles(metas)

                expect(result).not.toBeNull()
                const lines = result!.split("\n")
                const highIndex = lines.findIndex((l) => l.includes("high"))
                const lowIndex = lines.findIndex((l) => l.includes("low"))
                expect(highIndex).toBeLessThan(lowIndex)
            })

            it("should limit to top N files", () => {
                const metas = new Map<string, FileMeta>()
                for (let i = 0; i < 20; i++) {
                    metas.set(`src/file${String(i)}.ts`, {
                        complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
                        dependencies: [],
                        dependents: ["a.ts"],
                        isHub: false,
                        isEntryPoint: false,
                        fileType: "source",
                        impactScore: 10 + i,
                        transitiveDepCount: i,
                        transitiveDepByCount: 0,
                    })
                }

                const result = formatHighImpactFiles(metas, 5)

                expect(result).not.toBeNull()
                const dataLines = result!
                    .split("\n")
                    .filter((l) => l.startsWith("| ") && l.includes("%"))
                expect(dataLines).toHaveLength(5)
            })

            it("should filter by minImpact", () => {
                const metas = new Map<string, FileMeta>([
                    [
                        "src/high.ts",
                        {
                            complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: ["a.ts", "b.ts", "c.ts"],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: false,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 30,
|
||||||
|
transitiveDepCount: 5,
|
||||||
|
transitiveDepByCount: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"src/low.ts",
|
||||||
|
{
|
||||||
|
complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: ["a.ts"],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: false,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 5,
|
||||||
|
transitiveDepCount: 1,
|
||||||
|
transitiveDepByCount: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
const result = formatHighImpactFiles(metas, 10, 20)
|
||||||
|
|
||||||
|
expect(result).not.toBeNull()
|
||||||
|
expect(result).toContain("high")
|
||||||
|
expect(result).not.toContain("low")
|
||||||
|
})
|
||||||
|
|
||||||
|
it("should shorten src/ prefix", () => {
|
||||||
|
const metas = new Map<string, FileMeta>([
|
||||||
|
[
|
||||||
|
"src/services/user.ts",
|
||||||
|
{
|
||||||
|
complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: ["a.ts", "b.ts"],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: false,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 20,
|
||||||
|
transitiveDepCount: 5,
|
||||||
|
transitiveDepByCount: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
const result = formatHighImpactFiles(metas)
|
||||||
|
|
||||||
|
expect(result).toContain("services/user")
|
||||||
|
expect(result).not.toContain("src/")
|
||||||
|
})
|
||||||
|
|
||||||
|
it("should remove file extensions", () => {
|
||||||
|
const metas = new Map<string, FileMeta>([
|
||||||
|
[
|
||||||
|
"lib/utils.ts",
|
||||||
|
{
|
||||||
|
complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: ["a.ts", "b.ts"],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: false,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 20,
|
||||||
|
transitiveDepCount: 3,
|
||||||
|
transitiveDepByCount: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
const result = formatHighImpactFiles(metas)
|
||||||
|
|
||||||
|
expect(result).toContain("lib/utils")
|
||||||
|
expect(result).not.toContain(".ts")
|
||||||
|
})
|
||||||
|
})
|
||||||
|
|
||||||
|
describe("buildInitialContext with includeHighImpactFiles", () => {
|
||||||
|
const structure: ProjectStructure = {
|
||||||
|
name: "test-project",
|
||||||
|
rootPath: "/test",
|
||||||
|
files: ["src/index.ts"],
|
||||||
|
directories: ["src"],
|
||||||
|
}
|
||||||
|
|
||||||
|
const asts = new Map<string, FileAST>([
|
||||||
|
[
|
||||||
|
"src/index.ts",
|
||||||
|
{
|
||||||
|
imports: [],
|
||||||
|
exports: [],
|
||||||
|
functions: [],
|
||||||
|
classes: [],
|
||||||
|
interfaces: [],
|
||||||
|
typeAliases: [],
|
||||||
|
parseError: false,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
it("should include high impact files by default", () => {
|
||||||
|
const metas = new Map<string, FileMeta>([
|
||||||
|
[
|
||||||
|
"src/index.ts",
|
||||||
|
{
|
||||||
|
complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: ["a.ts", "b.ts"],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: true,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 20,
|
||||||
|
transitiveDepCount: 5,
|
||||||
|
transitiveDepByCount: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
const context = buildInitialContext(structure, asts, metas)
|
||||||
|
|
||||||
|
expect(context).toContain("## High Impact Files")
|
||||||
|
})
|
||||||
|
|
||||||
|
it("should exclude high impact files when includeHighImpactFiles is false", () => {
|
||||||
|
const metas = new Map<string, FileMeta>([
|
||||||
|
[
|
||||||
|
"src/index.ts",
|
||||||
|
{
|
||||||
|
complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: ["a.ts", "b.ts"],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: true,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 20,
|
||||||
|
transitiveDepCount: 5,
|
||||||
|
transitiveDepByCount: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
const context = buildInitialContext(structure, asts, metas, {
|
||||||
|
includeHighImpactFiles: false,
|
||||||
|
})
|
||||||
|
|
||||||
|
expect(context).not.toContain("## High Impact Files")
|
||||||
|
})
|
||||||
|
|
||||||
|
it("should not include high impact files when metas is undefined", () => {
|
||||||
|
const context = buildInitialContext(structure, asts, undefined, {
|
||||||
|
includeHighImpactFiles: true,
|
||||||
|
})
|
||||||
|
|
||||||
|
expect(context).not.toContain("## High Impact Files")
|
||||||
|
})
|
||||||
|
|
||||||
|
it("should not include high impact files when metas is empty", () => {
|
||||||
|
const emptyMetas = new Map<string, FileMeta>()
|
||||||
|
|
||||||
|
const context = buildInitialContext(structure, asts, emptyMetas, {
|
||||||
|
includeHighImpactFiles: true,
|
||||||
|
})
|
||||||
|
|
||||||
|
expect(context).not.toContain("## High Impact Files")
|
||||||
|
})
|
||||||
|
|
||||||
|
it("should not include high impact files when no files have high impact", () => {
|
||||||
|
const metas = new Map<string, FileMeta>([
|
||||||
|
[
|
||||||
|
"src/index.ts",
|
||||||
|
{
|
||||||
|
complexity: { loc: 10, nesting: 1, cyclomaticComplexity: 1, score: 10 },
|
||||||
|
dependencies: [],
|
||||||
|
dependents: [],
|
||||||
|
isHub: false,
|
||||||
|
isEntryPoint: true,
|
||||||
|
fileType: "source",
|
||||||
|
impactScore: 0,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
])
|
||||||
|
|
||||||
|
const context = buildInitialContext(structure, asts, metas, {
|
||||||
|
includeHighImpactFiles: true,
|
||||||
|
})
|
||||||
|
|
||||||
|
expect(context).not.toContain("## High Impact Files")
|
||||||
|
})
|
||||||
|
})
|
||||||
|
})
|
||||||
|
|
||||||
describe("circular dependencies (0.28.0)", () => {
|
describe("circular dependencies (0.28.0)", () => {
|
||||||
describe("formatCircularDeps", () => {
|
describe("formatCircularDeps", () => {
|
||||||
it("should return null for empty array", () => {
|
it("should return null for empty array", () => {
|
||||||
|
|||||||
@@ -18,6 +18,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: true,
             includeDepsGraph: true,
             includeCircularDeps: true,
+            includeHighImpactFiles: true,
         })
     })

@@ -32,6 +33,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: true,
             includeDepsGraph: true,
             includeCircularDeps: true,
+            includeHighImpactFiles: true,
         })
     })
 })
@@ -171,6 +173,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: true,
             includeDepsGraph: true,
             includeCircularDeps: true,
+            includeHighImpactFiles: true,
         })
     })

@@ -187,6 +190,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: true,
             includeDepsGraph: true,
             includeCircularDeps: true,
+            includeHighImpactFiles: true,
         })
     })

@@ -204,6 +208,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: true,
             includeDepsGraph: true,
             includeCircularDeps: true,
+            includeHighImpactFiles: true,
         })
     })
 })
@@ -218,6 +223,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: false,
             includeDepsGraph: false,
             includeCircularDeps: false,
+            includeHighImpactFiles: false,
         }

         const result = ContextConfigSchema.parse(config)
@@ -233,6 +239,7 @@ describe("ContextConfigSchema", () => {
             includeSignatures: true,
             includeDepsGraph: true,
             includeCircularDeps: true,
+            includeHighImpactFiles: true,
         }

         const result = ContextConfigSchema.parse(config)
@@ -314,4 +321,29 @@ describe("ContextConfigSchema", () => {
         expect(() => ContextConfigSchema.parse({ includeCircularDeps: 1 })).toThrow()
     })
 })
+
+    describe("includeHighImpactFiles", () => {
+        it("should accept true", () => {
+            const result = ContextConfigSchema.parse({ includeHighImpactFiles: true })
+            expect(result.includeHighImpactFiles).toBe(true)
+        })
+
+        it("should accept false", () => {
+            const result = ContextConfigSchema.parse({ includeHighImpactFiles: false })
+            expect(result.includeHighImpactFiles).toBe(false)
+        })
+
+        it("should default to true", () => {
+            const result = ContextConfigSchema.parse({})
+            expect(result.includeHighImpactFiles).toBe(true)
+        })
+
+        it("should reject non-boolean", () => {
+            expect(() => ContextConfigSchema.parse({ includeHighImpactFiles: "true" })).toThrow()
+        })
+
+        it("should reject number", () => {
+            expect(() => ContextConfigSchema.parse({ includeHighImpactFiles: 1 })).toThrow()
+        })
+    })
 })