Mirror of https://github.com/samiyev/puaros.git (synced 2025-12-27 23:06:54 +05:00)

Compare commits: ipuaro-v0. ... main (49 commits)
@@ -20,6 +20,21 @@ This document provides authoritative sources, academic papers, industry standard

12. [Aggregate Boundary Validation (DDD Tactical Patterns)](#12-aggregate-boundary-validation-ddd-tactical-patterns)
13. [Secret Detection & Security](#13-secret-detection--security)
14. [Severity-Based Prioritization & Technical Debt](#14-severity-based-prioritization--technical-debt)
15. [Domain Event Usage Validation](#15-domain-event-usage-validation)
16. [Value Object Immutability](#16-value-object-immutability)
17. [Command Query Separation (CQS/CQRS)](#17-command-query-separation-cqscqrs)
18. [Factory Pattern](#18-factory-pattern)
19. [Specification Pattern](#19-specification-pattern)
20. [Bounded Context](#20-bounded-context)
21. [Persistence Ignorance](#21-persistence-ignorance)
22. [Null Object Pattern](#22-null-object-pattern)
23. [Primitive Obsession](#23-primitive-obsession)
24. [Service Locator Anti-pattern](#24-service-locator-anti-pattern)
25. [Double Dispatch and Visitor Pattern](#25-double-dispatch-and-visitor-pattern)
26. [Entity Identity](#26-entity-identity)
27. [Saga Pattern](#27-saga-pattern)
28. [Anti-Corruption Layer](#28-anti-corruption-layer)
29. [Ubiquitous Language](#29-ubiquitous-language)

---

@@ -801,22 +816,840 @@ This document provides authoritative sources, academic papers, industry standard

---

## 15. Domain Event Usage Validation
|
||||
|
||||
### Eric Evans: Domain-Driven Design (2003)
|
||||
|
||||
**Original Definition:**
|
||||
- Domain Events: "Something happened that domain experts care about"
|
||||
- Events capture facts about the domain that have already occurred
|
||||
- Distinct from system events - they model business-relevant occurrences
|
||||
- Reference: [Martin Fowler - Domain Event](https://martinfowler.com/eaaDev/DomainEvent.html)
|
||||
|
||||
**Book: Domain-Driven Design** (2003)
|
||||
- Author: Eric Evans
|
||||
- Publisher: Addison-Wesley Professional
|
||||
- ISBN: 978-0321125217
|
||||
- Domain Events were not explicitly defined in the original book; the pattern evolved later within the DDD community
|
||||
- Reference: [DDD Community - Domain Events](https://www.domainlanguage.com/)
|
||||
|
||||
### Vaughn Vernon: Implementing Domain-Driven Design (2013)
|
||||
|
||||
**Chapter 8: Domain Events**
|
||||
- Author: Vaughn Vernon
|
||||
- Comprehensive coverage of Domain Events implementation
|
||||
- "Model information about activity in the domain as a series of discrete events"
|
||||
- Reference: [Amazon - Implementing DDD](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
|
||||
**Key Principles:**
|
||||
- Events should be immutable
|
||||
- Named in past tense (OrderPlaced, UserRegistered)
|
||||
- Contain all data needed by handlers
|
||||
- Enable loose coupling between aggregates
|
||||
|
||||
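A minimal TypeScript sketch of these principles (the `OrderPlaced` event and `pullDomainEvents` helper are illustrative names, not code from the cited books):

```typescript
// Immutable, past-tense event carrying everything a handler needs.
class OrderPlaced {
    constructor(
        public readonly orderId: string,
        public readonly customerId: string,
        public readonly totalAmount: number,
        public readonly occurredOn: Date = new Date(),
    ) {}
}

// The aggregate records the event instead of calling infrastructure directly;
// a dispatcher publishes collected events after the transaction commits.
class Order {
    private events: OrderPlaced[] = []

    constructor(private readonly id: string, private readonly customerId: string) {}

    place(totalAmount: number): void {
        // ...enforce business rules first...
        this.events.push(new OrderPlaced(this.id, this.customerId, totalAmount))
    }

    pullDomainEvents(): OrderPlaced[] {
        const pending = this.events
        this.events = []
        return pending
    }
}
```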
### Martin Fowler's Event Patterns
|
||||
|
||||
**Event Sourcing:**
|
||||
- "Capture all changes to an application state as a sequence of events"
|
||||
- Events become the primary source of truth
|
||||
- Reference: [Martin Fowler - Event Sourcing](https://martinfowler.com/eaaDev/EventSourcing.html)
|
||||
|
||||
**Event-Driven Architecture:**
|
||||
- Promotes loose coupling between components
|
||||
- Enables asynchronous processing
|
||||
- Reference: [Martin Fowler - Event-Driven](https://martinfowler.com/articles/201701-event-driven.html)
|
||||
|
||||
### Why Direct Infrastructure Calls Are Bad
|
||||
|
||||
**Coupling Issues:**
|
||||
- Direct calls create tight coupling between domain and infrastructure
|
||||
- Makes testing difficult (need to mock infrastructure)
|
||||
- Violates Single Responsibility Principle
|
||||
- Reference: [Microsoft - Domain Events Design](https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/domain-events-design-implementation)
|
||||
|
||||
**Benefits of Domain Events:**
|
||||
- Decouples domain from side effects
|
||||
- Enables eventual consistency
|
||||
- Improves testability
|
||||
- Supports audit logging naturally
|
||||
- Reference: [Jimmy Bogard - Domain Events](https://lostechies.com/jimmybogard/2010/04/08/strengthening-your-domain-domain-events/)
|
||||
|
||||
---
|
||||
|
||||
## 16. Value Object Immutability
|
||||
|
||||
### Eric Evans: Domain-Driven Design (2003)
|
||||
|
||||
**Value Object Definition:**
|
||||
- "An object that describes some characteristic or attribute but carries no concept of identity"
|
||||
- "Value Objects should be immutable"
|
||||
- When you care only about the attributes of an element, classify it as a Value Object
|
||||
- Reference: [Martin Fowler - Value Object](https://martinfowler.com/bliki/ValueObject.html)
|
||||
|
||||
**Immutability Requirement:**
|
||||
- "Treat the Value Object as immutable"
|
||||
- "Don't give it any identity and avoid the design complexities necessary to maintain Entities"
|
||||
- Reference: [DDD Reference - Value Objects](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
### Martin Fowler on Value Objects
|
||||
|
||||
**Blog Post: Value Object** (2016)
|
||||
- "A small simple object, like money or a date range, whose equality isn't based on identity"
|
||||
- "I consider value objects to be one of the most important building blocks of good domain models"
|
||||
- Reference: [Martin Fowler - Value Object](https://martinfowler.com/bliki/ValueObject.html)
|
||||
|
||||
**Key Properties:**
|
||||
- Equality based on attribute values, not identity
|
||||
- Should be immutable (once created, cannot be changed)
|
||||
- Side-effect free behavior
|
||||
- Self-validating (validate in constructor)
|
||||
|
||||
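A small TypeScript sketch of such a Value Object (a hypothetical `Money` class, not code from the referenced books):

```typescript
class Money {
    constructor(
        public readonly amount: number,
        public readonly currency: string,
    ) {
        // Self-validating: reject invalid state at construction time.
        if (!Number.isFinite(amount)) throw new Error("Invalid amount")
        if (!/^[A-Z]{3}$/.test(currency)) throw new Error("Invalid currency code")
    }

    // Side-effect free: operations return a new instance instead of mutating.
    add(other: Money): Money {
        if (other.currency !== this.currency) throw new Error("Currency mismatch")
        return new Money(this.amount + other.amount, this.currency)
    }

    // Equality is based on attribute values, never on identity.
    equals(other: Money): boolean {
        return this.amount === other.amount && this.currency === other.currency
    }
}
```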
### Vaughn Vernon: Implementing DDD
|
||||
|
||||
**Chapter 6: Value Objects**
|
||||
- Detailed implementation guidance
|
||||
- "Measures, quantifies, or describes a thing in the domain"
|
||||
- "Can be compared with other Value Objects using value equality"
|
||||
- "Completely replaceable when the measurement changes"
|
||||
- Reference: [Vaughn Vernon - Implementing DDD](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
|
||||
### Why Immutability Matters
|
||||
|
||||
**Thread Safety:**
|
||||
- Immutable objects are inherently thread-safe
|
||||
- No synchronization needed for concurrent access
|
||||
- Reference: [Effective Java - Item 17](https://www.amazon.com/Effective-Java-Joshua-Bloch/dp/0134685997)
|
||||
|
||||
**Reasoning About Code:**
|
||||
- Easier to understand code when objects don't change
|
||||
- No defensive copying needed
|
||||
- Simplifies caching and optimization
|
||||
- Reference: [Oracle Java Tutorials - Immutable Objects](https://docs.oracle.com/javase/tutorial/essential/concurrency/immutable.html)
|
||||
|
||||
**Functional Programming Influence:**
|
||||
- Immutability is a core principle of functional programming
|
||||
- Reduces side effects and makes code more predictable
|
||||
- Reference: [Wikipedia - Immutable Object](https://en.wikipedia.org/wiki/Immutable_object)
|
||||
|
||||
---
|
||||
|
||||
## 17. Command Query Separation (CQS/CQRS)
|
||||
|
||||
### Bertrand Meyer: Original CQS Principle
|
||||
|
||||
**Book: Object-Oriented Software Construction** (1988, 2nd Ed. 1997)
|
||||
- Author: Bertrand Meyer
|
||||
- Publisher: Prentice Hall
|
||||
- ISBN: 978-0136291558
|
||||
- Introduced Command Query Separation principle
|
||||
- Reference: [Wikipedia - CQS](https://en.wikipedia.org/wiki/Command%E2%80%93query_separation)
|
||||
|
||||
**CQS Principle:**
|
||||
- "Every method should either be a command that performs an action, or a query that returns data to the caller, but not both"
|
||||
- Commands: change state, return nothing (void)
|
||||
- Queries: return data, change nothing (side-effect free)
|
||||
- Reference: [Martin Fowler - CommandQuerySeparation](https://martinfowler.com/bliki/CommandQuerySeparation.html)
|
||||
|
||||
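In TypeScript the principle can be illustrated like this (a hypothetical `BankAccount`, not an example from Meyer's book):

```typescript
class BankAccount {
    private balance = 0

    // Command: changes state, returns nothing.
    deposit(amount: number): void {
        if (amount <= 0) throw new Error("Amount must be positive")
        this.balance += amount
    }

    // Query: returns data, changes nothing.
    getBalance(): number {
        return this.balance
    }

    // A method that both mutated state and returned data would violate CQS.
}
```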
### Greg Young: CQRS Pattern
|
||||
|
||||
**CQRS Documents** (2010)
|
||||
- Author: Greg Young
|
||||
- Extended CQS to architectural pattern
|
||||
- "CQRS is simply the creation of two objects where there was previously only one"
|
||||
- Reference: [Greg Young - CQRS Documents](https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf)
|
||||
|
||||
**Key Concepts:**
|
||||
- Separate models for reading and writing
|
||||
- Write model (commands) optimized for business logic
|
||||
- Read model (queries) optimized for display/reporting
|
||||
- Reference: [Microsoft - CQRS Pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs)
|
||||
|
||||
### Martin Fowler on CQRS
|
||||
|
||||
**Blog Post: CQRS** (2011)
|
||||
- "At its heart is the notion that you can use a different model to update information than the model you use to read information"
|
||||
- Warns against overuse: "CQRS is a significant mental leap for all concerned"
|
||||
- Reference: [Martin Fowler - CQRS](https://martinfowler.com/bliki/CQRS.html)
|
||||
|
||||
### Benefits and Trade-offs
|
||||
|
||||
**Benefits:**
|
||||
- Independent scaling of read and write workloads
|
||||
- Optimized data schemas for each side
|
||||
- Improved security (separate read/write permissions)
|
||||
- Reference: [AWS - CQRS Pattern](https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-data-persistence/cqrs-pattern.html)
|
||||
|
||||
**Trade-offs:**
|
||||
- Increased complexity
|
||||
- Eventual consistency challenges
|
||||
- More code to maintain
|
||||
- Reference: [Microsoft - CQRS Considerations](https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs#issues-and-considerations)
|
||||
|
||||
---
|
||||
|
||||
## 18. Factory Pattern
|
||||
|
||||
### Gang of Four: Design Patterns (1994)
|
||||
|
||||
**Book: Design Patterns: Elements of Reusable Object-Oriented Software**
|
||||
- Authors: Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides (Gang of Four)
|
||||
- Publisher: Addison-Wesley
|
||||
- ISBN: 978-0201633610
|
||||
- Defines Factory Method and Abstract Factory patterns
|
||||
- Reference: [Wikipedia - Design Patterns](https://en.wikipedia.org/wiki/Design_Patterns)
|
||||
|
||||
**Factory Method Pattern:**
|
||||
- "Define an interface for creating an object, but let subclasses decide which class to instantiate"
|
||||
- Lets a class defer instantiation to subclasses
|
||||
- Reference: [Refactoring Guru - Factory Method](https://refactoring.guru/design-patterns/factory-method)
|
||||
|
||||
**Abstract Factory Pattern:**
|
||||
- "Provide an interface for creating families of related or dependent objects without specifying their concrete classes"
|
||||
- Reference: [Refactoring Guru - Abstract Factory](https://refactoring.guru/design-patterns/abstract-factory)
|
||||
|
||||
### Eric Evans: Factory in DDD Context
|
||||
|
||||
**Domain-Driven Design** (2003)
|
||||
- Chapter 6: "The Life Cycle of a Domain Object"
|
||||
- Factories encapsulate complex object creation
|
||||
- "Shift the responsibility for creating instances of complex objects and Aggregates to a separate object"
|
||||
- Reference: [DDD Reference](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
**DDD Factory Guidelines:**
|
||||
- Factory should create valid objects (invariants satisfied)
|
||||
- Two types: Factory for new objects, Factory for reconstitution
|
||||
- Keep creation logic out of the entity itself
|
||||
- Reference: Already in Section 10 - Domain-Driven Design
|
||||
|
||||
### Why Factories Matter in DDD
|
||||
|
||||
**Encapsulation of Creation Logic:**
|
||||
- Complex aggregates need coordinated creation
|
||||
- Business rules should be enforced at creation time
|
||||
- Clients shouldn't know construction details
|
||||
- Reference: [Vaughn Vernon - Implementing DDD, Chapter 11](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
|
||||
**Factory vs Constructor:**
|
||||
- Constructors should be simple (assign values)
|
||||
- Factories handle complex creation logic
|
||||
- Factories can return different types
|
||||
- Reference: [Effective Java - Item 1: Static Factory Methods](https://www.amazon.com/Effective-Java-Joshua-Bloch/dp/0134685997)
|
||||
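A brief TypeScript sketch of a DDD-style factory that keeps complex creation logic out of the entity (all names are illustrative):

```typescript
class Subscription {
    // Constructor stays simple: it only assigns already-validated values.
    private constructor(
        public readonly id: string,
        public readonly plan: "basic" | "pro",
        public readonly startsOn: Date,
    ) {}

    // Factory for new objects: enforces invariants before the entity exists.
    static create(id: string, plan: "basic" | "pro", startsOn: Date): Subscription {
        if (startsOn.getTime() < Date.now()) {
            throw new Error("A new subscription cannot start in the past")
        }
        return new Subscription(id, plan, startsOn)
    }

    // Factory for reconstitution: rebuilds an existing object from storage
    // without re-running "new object" rules.
    static reconstitute(id: string, plan: "basic" | "pro", startsOn: Date): Subscription {
        return new Subscription(id, plan, startsOn)
    }
}
```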
|
||||
---
|
||||
|
||||
## 19. Specification Pattern
|
||||
|
||||
### Eric Evans & Martin Fowler
|
||||
|
||||
**Original Paper: Specifications** (1997)
|
||||
- Authors: Eric Evans and Martin Fowler
|
||||
- Introduced the Specification pattern
|
||||
- "A Specification states a constraint on the state of another object"
|
||||
- Reference: [Martin Fowler - Specification](https://martinfowler.com/apsupp/spec.pdf)
|
||||
|
||||
**Domain-Driven Design** (2003)
|
||||
- Chapter 9: "Making Implicit Concepts Explicit"
|
||||
- Specifications make business rules explicit and reusable
|
||||
- "Create explicit predicate-like Value Objects for specialized purposes"
|
||||
- Reference: [DDD Reference](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
### Pattern Definition
|
||||
|
||||
**Core Concept:**
|
||||
- Specification is a predicate that determines if an object satisfies some criteria
|
||||
- Encapsulates business rules that can be reused and combined
|
||||
- Reference: [Wikipedia - Specification Pattern](https://en.wikipedia.org/wiki/Specification_pattern)
|
||||
|
||||
**Three Main Uses:**
|
||||
1. **Selection**: Finding objects that match criteria
|
||||
2. **Validation**: Checking if object satisfies rules
|
||||
3. **Construction**: Describing what needs to be created
|
||||
- Reference: [Martin Fowler - Specification](https://martinfowler.com/apsupp/spec.pdf)
|
||||
|
||||
### Composite Specifications
|
||||
|
||||
**Combining Specifications:**
|
||||
- AND: Both specifications must be satisfied
|
||||
- OR: Either specification must be satisfied
|
||||
- NOT: Specification must not be satisfied
|
||||
- Reference: [Refactoring Guru - Specification Pattern](https://refactoring.guru/design-patterns/specification)
|
||||
|
||||
**Benefits:**
|
||||
- Reusable business rules
|
||||
- Testable in isolation
|
||||
- Readable domain language
|
||||
- Composable for complex rules
|
||||
- Reference: [Enterprise Craftsmanship - Specification Pattern](https://enterprisecraftsmanship.com/posts/specification-pattern-c-implementation/)
|
||||
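A compact TypeScript sketch of composable specifications (hypothetical `Customer` rules, not the C# implementation from the linked article):

```typescript
interface Specification<T> {
    isSatisfiedBy(candidate: T): boolean
}

// Combinators build complex rules from simple ones.
const and = <T>(a: Specification<T>, b: Specification<T>): Specification<T> => ({
    isSatisfiedBy: (c) => a.isSatisfiedBy(c) && b.isSatisfiedBy(c),
})

const not = <T>(a: Specification<T>): Specification<T> => ({
    isSatisfiedBy: (c) => !a.isSatisfiedBy(c),
})

interface Customer { isActive: boolean; overdueInvoices: number }

const active: Specification<Customer> = { isSatisfiedBy: (c) => c.isActive }
const hasOverdueInvoices: Specification<Customer> = { isSatisfiedBy: (c) => c.overdueInvoices > 0 }

// Reusable, readable business rules expressed in domain language.
const eligibleForReminder = and(active, hasOverdueInvoices)
const inGoodStanding = and(active, not(hasOverdueInvoices))
```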
|
||||
---
|
||||
|
||||
## 20. Bounded Context
|
||||
|
||||
### Eric Evans: Domain-Driven Design (2003)
|
||||
|
||||
**Original Definition:**
|
||||
- "A Bounded Context delimits the applicability of a particular model"
|
||||
- "Explicitly define the context within which a model applies"
|
||||
- Chapter 14: "Maintaining Model Integrity"
|
||||
- Reference: [Martin Fowler - Bounded Context](https://martinfowler.com/bliki/BoundedContext.html)
|
||||
|
||||
**Key Principles:**
|
||||
- Each Bounded Context has its own Ubiquitous Language
|
||||
- Same term can mean different things in different contexts
|
||||
- Models should not be shared across context boundaries
|
||||
- Reference: [DDD Reference](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
### Vaughn Vernon: Strategic Design
|
||||
|
||||
**Implementing Domain-Driven Design** (2013)
|
||||
- Chapter 2: "Domains, Subdomains, and Bounded Contexts"
|
||||
- Detailed guidance on identifying and mapping contexts
|
||||
- Reference: [Vaughn Vernon - Implementing DDD](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
|
||||
**Context Mapping Patterns:**
|
||||
- Shared Kernel
|
||||
- Customer/Supplier
|
||||
- Conformist
|
||||
- Anti-Corruption Layer
|
||||
- Open Host Service / Published Language
|
||||
- Reference: [Context Mapping Patterns](https://www.infoq.com/articles/ddd-contextmapping/)
|
||||
|
||||
### Why Bounded Contexts Matter
|
||||
|
||||
**Avoiding Big Ball of Mud:**
|
||||
- Without explicit boundaries, models become entangled
|
||||
- Different teams step on each other's models
|
||||
- Reference: [Wikipedia - Big Ball of Mud](https://en.wikipedia.org/wiki/Big_ball_of_mud)
|
||||
|
||||
**Microservices and Bounded Contexts:**
|
||||
- "Microservices should be designed around business capabilities, aligned with bounded contexts"
|
||||
- Each microservice typically represents one bounded context
|
||||
- Reference: [Microsoft - Microservices and Bounded Contexts](https://learn.microsoft.com/en-us/azure/architecture/microservices/model/domain-analysis)
|
||||
|
||||
### Cross-Context Communication
|
||||
|
||||
**Integration Patterns:**
|
||||
- Never share domain models across contexts
|
||||
- Use integration events or APIs
|
||||
- Translate between context languages
|
||||
- Reference: [Microsoft - Tactical DDD](https://learn.microsoft.com/en-us/azure/architecture/microservices/model/tactical-ddd)
|
||||
|
||||
---
|
||||
|
||||
## 21. Persistence Ignorance
|
||||
|
||||
### Definition and Principles
|
||||
|
||||
**Core Concept:**
|
||||
- Domain objects should have no knowledge of how they are persisted
|
||||
- Business logic remains pure and testable
|
||||
- Infrastructure concerns are separated from domain
|
||||
- Reference: [Microsoft - Persistence Ignorance](https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design#the-persistence-ignorance-principle)
|
||||
|
||||
**Wikipedia Definition:**
|
||||
- "Persistence ignorance is the ability of a class to be used without any underlying persistence mechanism"
|
||||
- Objects don't know if/how they'll be stored
|
||||
- Reference: [Wikipedia - Persistence Ignorance](https://en.wikipedia.org/wiki/Persistence_ignorance)
|
||||
|
||||
### Eric Evans: DDD and Persistence
|
||||
|
||||
**Domain-Driven Design** (2003)
|
||||
- Repositories abstract away persistence details
|
||||
- Domain model should not reference ORM or database concepts
|
||||
- Reference: Already covered in Section 6 - Repository Pattern
|
||||
|
||||
**Key Quote:**
|
||||
- "The domain layer should be kept clean of all technical concerns"
|
||||
- ORM annotations violate this principle
|
||||
- Reference: [Clean Architecture and DDD](https://herbertograca.com/2017/11/16/explicit-architecture-01-ddd-hexagonal-onion-clean-cqrs-how-i-put-it-all-together/)
|
||||
|
||||
### Clean Architecture Alignment
|
||||
|
||||
**Robert C. Martin:**
|
||||
- "The database is a detail"
|
||||
- Domain entities should not depend on persistence frameworks
|
||||
- Use Repository interfaces to abstract persistence
|
||||
- Reference: [Clean Architecture Book](https://www.amazon.com/Clean-Architecture-Craftsmans-Software-Structure/dp/0134494164)
|
||||
|
||||
### Practical Implementation
|
||||
|
||||
**Two-Model Approach:**
|
||||
- Domain Model: Pure business objects
|
||||
- Persistence Model: ORM-annotated entities
|
||||
- Mappers translate between them
|
||||
- Reference: [Microsoft - Infrastructure Layer](https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design)
|
||||
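A minimal TypeScript sketch of the two-model approach (the `User` domain class and `UserRecord` persistence shape are hypothetical):

```typescript
// Domain model: pure business object, no ORM or database concepts.
class User {
    constructor(public readonly id: string, private email: string) {}

    changeEmail(newEmail: string): void {
        if (!newEmail.includes("@")) throw new Error("Invalid email")
        this.email = newEmail
    }

    get currentEmail(): string {
        return this.email
    }
}

// Persistence model: shaped for storage, owned by the infrastructure layer.
interface UserRecord {
    id: string
    email: string
}

// Mappers translate between the two so the domain stays persistence-ignorant.
const toRecord = (user: User): UserRecord => ({ id: user.id, email: user.currentEmail })
const toDomain = (record: UserRecord): User => new User(record.id, record.email)
```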
|
||||
**Benefits:**
|
||||
- Domain model can evolve independently of database schema
|
||||
- Easier testing (no ORM required)
|
||||
- Database can be changed without affecting domain
|
||||
- Reference: [Enterprise Craftsmanship - Persistence Ignorance](https://enterprisecraftsmanship.com/posts/persistence-ignorance/)
|
||||
|
||||
---
|
||||
|
||||
## 22. Null Object Pattern
|
||||
|
||||
### Original Pattern
|
||||
|
||||
**Pattern Languages of Program Design 3** (1997)
|
||||
- Author: Bobby Woolf
|
||||
- Chapter: "Null Object"
|
||||
- Publisher: Addison-Wesley
|
||||
- ISBN: 978-0201310115
|
||||
- Reference: [Wikipedia - Null Object Pattern](https://en.wikipedia.org/wiki/Null_object_pattern)
|
||||
|
||||
**Definition:**
|
||||
- "A Null Object provides a 'do nothing' behavior, hiding the details from its collaborators"
|
||||
- Replaces null checks with polymorphism
|
||||
- Reference: [Refactoring Guru - Null Object](https://refactoring.guru/introduce-null-object)
|
||||
|
||||
### Martin Fowler's Coverage
|
||||
|
||||
**Refactoring Book** (1999, 2018)
|
||||
- "Introduce Null Object" refactoring
|
||||
- "Replace conditional logic that checks for null with a null object"
|
||||
- Reference: [Refactoring Catalog](https://refactoring.com/catalog/introduceNullObject.html)
|
||||
|
||||
**Special Case Pattern:**
|
||||
- More general pattern that includes Null Object
|
||||
- "A subclass that provides special behavior for particular cases"
|
||||
- Reference: [Martin Fowler - Special Case](https://martinfowler.com/eaaCatalog/specialCase.html)
|
||||
|
||||
### Benefits
|
||||
|
||||
**Eliminates Null Checks:**
|
||||
- Reduces cyclomatic complexity
|
||||
- Cleaner, more readable code
|
||||
- Follows "Tell, Don't Ask" principle
|
||||
- Reference: [SourceMaking - Null Object](https://sourcemaking.com/design_patterns/null_object)
|
||||
|
||||
**Polymorphism Over Conditionals:**
|
||||
- Null Object responds to same interface as real object
|
||||
- Default/neutral behavior instead of null checks
|
||||
- Reference: [C2 Wiki - Null Object](https://wiki.c2.com/?NullObject)
|
||||
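A short TypeScript sketch with a hypothetical `Logger` hierarchy:

```typescript
interface Logger {
    log(message: string): void
}

class ConsoleLogger implements Logger {
    log(message: string): void {
        console.log(message)
    }
}

// Null Object: same interface, deliberately does nothing.
class NullLogger implements Logger {
    log(_message: string): void {}
}

// Callers never check for null; they just call log().
function processOrder(orderId: string, logger: Logger = new NullLogger()): void {
    logger.log(`Processing ${orderId}`)
    // ...
}
```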
|
||||
### When to Use
|
||||
|
||||
**Good Candidates:**
|
||||
- Objects frequently checked for null
|
||||
- Null represents "absence" with sensible default behavior
|
||||
- Reference: [Baeldung - Null Object Pattern](https://www.baeldung.com/java-null-object-pattern)
|
||||
|
||||
**Cautions:**
|
||||
- Don't use when null has semantic meaning
|
||||
- Can hide bugs if misapplied
|
||||
- Reference: [Stack Overflow - Null Object Considerations](https://stackoverflow.com/questions/1274792/is-the-null-object-pattern-a-bad-practice)
|
||||
|
||||
---
|
||||
|
||||
## 23. Primitive Obsession
|
||||
|
||||
### Code Smell Definition
|
||||
|
||||
**Martin Fowler: Refactoring** (1999, 2018)
|
||||
- Primitive Obsession is a code smell
|
||||
- "Using primitives instead of small objects for simple tasks"
|
||||
- Reference: [Refactoring Catalog](https://refactoring.com/catalog/)
|
||||
|
||||
**Wikipedia Definition:**
|
||||
- "Using primitive data types to represent domain ideas"
|
||||
- Example: Using string for email, int for money
|
||||
- Reference: [Wikipedia - Code Smell](https://en.wikipedia.org/wiki/Code_smell)
|
||||
|
||||
### Why It's a Problem
|
||||
|
||||
**Lost Type Safety:**
|
||||
- String can contain anything, Email cannot
|
||||
- Compiler can't catch domain errors
|
||||
- Reference: [Refactoring Guru - Primitive Obsession](https://refactoring.guru/smells/primitive-obsession)
|
||||
|
||||
**Scattered Validation:**
|
||||
- Same validation repeated in multiple places
|
||||
- Violates DRY principle
|
||||
- Reference: [SourceMaking - Primitive Obsession](https://sourcemaking.com/refactoring/smells/primitive-obsession)
|
||||
|
||||
**Missing Behavior:**
|
||||
- Primitives can't have domain-specific methods
|
||||
- Logic lives in services instead of objects
|
||||
- Reference: [Enterprise Craftsmanship - Primitive Obsession](https://enterprisecraftsmanship.com/posts/functional-c-primitive-obsession/)
|
||||
|
||||
### Solutions
|
||||
|
||||
**Replace with Value Objects:**
|
||||
- Money instead of decimal
|
||||
- Email instead of string
|
||||
- PhoneNumber instead of string
|
||||
- Reference: Already covered in Section 16 - Value Object Immutability
|
||||
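For example, a hypothetical `Email` Value Object in TypeScript that replaces a raw string:

```typescript
class Email {
    private constructor(public readonly value: string) {}

    // Validation lives in one place instead of being scattered across callers.
    static create(raw: string): Email {
        const normalized = raw.trim().toLowerCase()
        if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(normalized)) {
            throw new Error(`Invalid email: ${raw}`)
        }
        return new Email(normalized)
    }
}

// The type system now rejects arbitrary strings where an Email is required.
function sendWelcome(to: Email): void {
    // ...
}
```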
|
||||
**Replace Data Value with Object:**
|
||||
- Refactoring: "Replace Data Value with Object"
|
||||
- Introduce Parameter Object for related primitives
|
||||
- Reference: [Refactoring - Replace Data Value with Object](https://refactoring.com/catalog/replaceDataValueWithObject.html)
|
||||
|
||||
### Common Primitive Obsession Examples
|
||||
|
||||
**Frequently Misused Primitives:**
|
||||
- string for: email, phone, URL, currency code, country code
|
||||
- int/decimal for: money, percentage, age, quantity
|
||||
- DateTime for: date ranges, business dates
|
||||
- Reference: [DDD - Value Objects](https://martinfowler.com/bliki/ValueObject.html)
|
||||
|
||||
---
|
||||
|
||||
## 24. Service Locator Anti-pattern
|
||||
|
||||
### Martin Fowler's Analysis
|
||||
|
||||
**Blog Post: Inversion of Control Containers and the Dependency Injection pattern** (2004)
|
||||
- Compares Service Locator with Dependency Injection
|
||||
- "With service locator the application class asks for it explicitly by a message to the locator"
|
||||
- Reference: [Martin Fowler - Inversion of Control](https://martinfowler.com/articles/injection.html)
|
||||
|
||||
**Service Locator Definition:**
|
||||
- "The basic idea behind a service locator is to have an object that knows how to get hold of all of the services that an application might need"
|
||||
- Acts as a registry that provides dependencies on demand
|
||||
- Reference: [Martin Fowler - Service Locator](https://martinfowler.com/articles/injection.html#UsingAServiceLocator)
|
||||
|
||||
### Why It's Considered an Anti-pattern
|
||||
|
||||
**Mark Seemann: Dependency Injection in .NET** (2011, 2nd Ed. 2019)
|
||||
- Author: Mark Seemann
|
||||
- Extensively covers why Service Locator is problematic
|
||||
- "Service Locator is an anti-pattern"
|
||||
- Reference: [Mark Seemann - Service Locator is an Anti-Pattern](https://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/)
|
||||
|
||||
**Hidden Dependencies:**
|
||||
- Dependencies are not visible in constructor
|
||||
- Makes code harder to understand and test
|
||||
- Violates Explicit Dependencies Principle
|
||||
- Reference: [DevIQ - Explicit Dependencies](https://deviq.com/principles/explicit-dependencies-principle)
|
||||
|
||||
**Testing Difficulties:**
|
||||
- Need to set up global locator for tests
|
||||
- Tests become coupled to locator setup
|
||||
- Reference: [Stack Overflow - Service Locator Testing](https://stackoverflow.com/questions/1557781/is-service-locator-an-anti-pattern)
|
||||
|
||||
### Dependency Injection Alternative
|
||||
|
||||
**Constructor Injection:**
|
||||
- Dependencies declared in constructor
|
||||
- Compiler enforces dependency provision
|
||||
- Clear, testable code
|
||||
- Reference: Already covered in Section 6 - Repository Pattern
|
||||
|
||||
**Benefits over Service Locator:**
|
||||
- Explicit dependencies
|
||||
- Easier testing (just pass mocks)
|
||||
- IDE support for navigation
|
||||
- Compile-time checking
|
||||
- Reference: [Martin Fowler - Constructor Injection](https://martinfowler.com/articles/injection.html#ConstructorInjectionWithPicocontainer)
|
||||
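The difference in TypeScript, using a hypothetical `ServiceLocator` and `OrderService`:

```typescript
interface OrderRepository {
    save(orderId: string): void
}

// A minimal global locator, shown only to illustrate the anti-pattern.
class ServiceLocator {
    private static services = new Map<string, unknown>()
    static register(name: string, service: unknown): void {
        this.services.set(name, service)
    }
    static resolve<T>(name: string): T {
        return this.services.get(name) as T
    }
}

// Anti-pattern: the dependency is hidden inside the method body.
class OrderServiceWithLocator {
    placeOrder(orderId: string): void {
        const repo = ServiceLocator.resolve<OrderRepository>("OrderRepository")
        repo.save(orderId)
    }
}

// Constructor injection: the dependency is explicit and easy to replace in tests.
class OrderService {
    constructor(private readonly repo: OrderRepository) {}

    placeOrder(orderId: string): void {
        this.repo.save(orderId)
    }
}
```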
|
||||
---
|
||||
|
||||
## 25. Double Dispatch and Visitor Pattern
|
||||
|
||||
### Gang of Four: Visitor Pattern
|
||||
|
||||
**Design Patterns** (1994)
|
||||
- Authors: Gang of Four
|
||||
- Visitor Pattern chapter
|
||||
- "Represent an operation to be performed on the elements of an object structure"
|
||||
- Reference: [Wikipedia - Visitor Pattern](https://en.wikipedia.org/wiki/Visitor_pattern)
|
||||
|
||||
**Intent:**
|
||||
- "Lets you define a new operation without changing the classes of the elements on which it operates"
|
||||
- Separates algorithms from object structure
|
||||
- Reference: [Refactoring Guru - Visitor](https://refactoring.guru/design-patterns/visitor)
|
||||
|
||||
### Double Dispatch Mechanism
|
||||
|
||||
**Definition:**
|
||||
- "A mechanism that dispatches a function call to different concrete functions depending on the runtime types of two objects involved in the call"
|
||||
- Visitor pattern uses double dispatch
|
||||
- Reference: [Wikipedia - Double Dispatch](https://en.wikipedia.org/wiki/Double_dispatch)
|
||||
|
||||
**How It Works:**
|
||||
1. Client calls element.accept(visitor)
|
||||
2. Element calls visitor.visit(this) - first dispatch
|
||||
3. Correct visit() overload selected - second dispatch
|
||||
- Reference: [SourceMaking - Visitor](https://sourcemaking.com/design_patterns/visitor)
|
||||
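A condensed TypeScript sketch of the two dispatches (hypothetical shape classes):

```typescript
interface ShapeVisitor {
    visitCircle(circle: Circle): void
    visitSquare(square: Square): void
}

interface Shape {
    accept(visitor: ShapeVisitor): void
}

class Circle implements Shape {
    constructor(public readonly radius: number) {}
    accept(visitor: ShapeVisitor): void {
        visitor.visitCircle(this) // second dispatch: the Circle overload is chosen
    }
}

class Square implements Shape {
    constructor(public readonly side: number) {}
    accept(visitor: ShapeVisitor): void {
        visitor.visitSquare(this)
    }
}

class AreaCalculator implements ShapeVisitor {
    total = 0
    visitCircle(circle: Circle): void {
        this.total += Math.PI * circle.radius ** 2
    }
    visitSquare(square: Square): void {
        this.total += square.side ** 2
    }
}

// First dispatch: shape.accept(visitor) picks the right accept() at runtime.
const shapes: Shape[] = [new Circle(2), new Square(3)]
const calculator = new AreaCalculator()
shapes.forEach((shape) => shape.accept(calculator))
```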
|
||||
### When to Use
|
||||
|
||||
**Good Use Cases:**
|
||||
- Operations on complex object structures
|
||||
- Many distinct operations needed
|
||||
- Object structure rarely changes but operations change often
|
||||
- Reference: [Refactoring Guru - Visitor Use Cases](https://refactoring.guru/design-patterns/visitor)
|
||||
|
||||
**Alternative to Type Checking:**
|
||||
- Replace instanceof/typeof checks with polymorphism
|
||||
- More maintainable and extensible
|
||||
- Reference: [Replace Conditional with Polymorphism](https://refactoring.guru/replace-conditional-with-polymorphism)
|
||||
|
||||
### Trade-offs
|
||||
|
||||
**Advantages:**
|
||||
- Open/Closed Principle for new operations
|
||||
- Related operations grouped in one class
|
||||
- Accumulate state while traversing
|
||||
- Reference: [GoF Design Patterns](https://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612)
|
||||
|
||||
**Disadvantages:**
|
||||
- Adding new element types requires changing all visitors
|
||||
- May break encapsulation (visitors need access to element internals)
|
||||
- Reference: [C2 Wiki - Visitor Pattern](https://wiki.c2.com/?VisitorPattern)
|
||||
|
||||
---
|
||||
|
||||
## 26. Entity Identity
|
||||
|
||||
### Eric Evans: Domain-Driven Design (2003)
|
||||
|
||||
**Entity Definition:**
|
||||
- "An object that is not defined by its attributes, but rather by a thread of continuity and its identity"
|
||||
- "Some objects are not defined primarily by their attributes. They represent a thread of identity"
|
||||
- Reference: [Martin Fowler - Evans Classification](https://martinfowler.com/bliki/EvansClassification.html)
|
||||
|
||||
**Identity Characteristics:**
|
||||
- Unique within the system
|
||||
- Stable over time (doesn't change)
|
||||
- Survives state changes
|
||||
- Reference: [DDD Reference](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
### Vaughn Vernon: Identity Implementation
|
||||
|
||||
**Implementing Domain-Driven Design** (2013)
|
||||
- Chapter 5: "Entities"
|
||||
- Detailed coverage of identity strategies
|
||||
- "The primary characteristic of an Entity is that it has a unique identity"
|
||||
- Reference: [Vaughn Vernon - Implementing DDD](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
|
||||
**Identity Types:**
|
||||
- Natural keys (SSN, email)
|
||||
- Surrogate keys (UUID, auto-increment)
|
||||
- Domain-generated IDs
|
||||
- Reference: [Microsoft - Entity Keys](https://learn.microsoft.com/en-us/ef/core/modeling/keys)
|
||||
|
||||
### Identity Best Practices
|
||||
|
||||
**Immutability of Identity:**
|
||||
- Identity should never change after creation
|
||||
- Use readonly/final fields
|
||||
- Reference: [StackExchange - Mutable Entity ID](https://softwareengineering.stackexchange.com/questions/375765/is-it-bad-practice-to-have-mutable-entity-ids)
|
||||
|
||||
**Value Object for Identity:**
|
||||
- Wrap identity in Value Object (UserId, OrderId)
|
||||
- Type safety prevents mixing IDs
|
||||
- Can include validation logic
|
||||
- Reference: [Enterprise Craftsmanship - Strongly Typed IDs](https://enterprisecraftsmanship.com/posts/strongly-typed-ids/)
|
||||
|
||||
**Equality Based on Identity:**
|
||||
- Entity equality should compare only identity
|
||||
- Not all attributes
|
||||
- Reference: [Vaughn Vernon - Entity Equality](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
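These practices in a brief TypeScript sketch (a hypothetical `UserId` wrapper and `User` entity):

```typescript
// Identity wrapped in a Value Object: UserId and OrderId can no longer be mixed up.
class UserId {
    constructor(public readonly value: string) {
        if (value.length === 0) throw new Error("UserId cannot be empty")
    }
    equals(other: UserId): boolean {
        return this.value === other.value
    }
}

class User {
    constructor(
        public readonly id: UserId, // readonly: identity never changes
        public name: string,        // other attributes may change over time
    ) {}

    // Entity equality compares identity only, not the mutable attributes.
    equals(other: User): boolean {
        return this.id.equals(other.id)
    }
}
```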
|
||||
---
|
||||
|
||||
## 27. Saga Pattern
|
||||
|
||||
### Original Research
|
||||
|
||||
**Paper: Sagas** (1987)
|
||||
- Authors: Hector Garcia-Molina and Kenneth Salem
|
||||
- Published: ACM SIGMOD Conference
|
||||
- Introduced Sagas for long-lived transactions
|
||||
- Reference: [ACM Digital Library - Sagas](https://dl.acm.org/doi/10.1145/38713.38742)
|
||||
|
||||
**Definition:**
|
||||
- "A saga is a sequence of local transactions where each transaction updates data within a single service"
|
||||
- Alternative to distributed transactions
|
||||
- Reference: [Microsoft - Saga Pattern](https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/saga/saga)
|
||||
|
||||
### Chris Richardson: Microservices Patterns
|
||||
|
||||
**Book: Microservices Patterns** (2018)
|
||||
- Author: Chris Richardson
|
||||
- Publisher: Manning
|
||||
- ISBN: 978-1617294549
|
||||
- Chapter 4: "Managing Transactions with Sagas"
|
||||
- Reference: [Manning - Microservices Patterns](https://www.manning.com/books/microservices-patterns)
|
||||
|
||||
**Saga Types:**
|
||||
1. **Choreography**: Each service publishes events that trigger next steps
|
||||
2. **Orchestration**: Central coordinator tells services what to do
|
||||
- Reference: [Microservices.io - Saga](https://microservices.io/patterns/data/saga.html)
|
||||
|
||||
### Compensating Transactions
|
||||
|
||||
**Core Concept:**
|
||||
- Each step has a compensating action to undo it
|
||||
- If step N fails, compensate steps N-1, N-2, ..., 1
|
||||
- Reference: [AWS - Saga Pattern](https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-data-persistence/saga-pattern.html)
|
||||
|
||||
**Compensation Examples:**
|
||||
- CreateOrder → DeleteOrder
|
||||
- ReserveInventory → ReleaseInventory
|
||||
- ChargePayment → RefundPayment
|
||||
- Reference: [Microsoft - Compensating Transactions](https://learn.microsoft.com/en-us/azure/architecture/patterns/compensating-transaction)
|
||||
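An orchestration-style sketch in TypeScript, with each step paired with its compensation (the step names mirror the examples above and are hypothetical):

```typescript
interface SagaStep {
    name: string
    execute(): Promise<void>
    compensate(): Promise<void>
}

// Runs steps in order; on failure, compensates completed steps in reverse.
async function runSaga(steps: SagaStep[]): Promise<void> {
    const completed: SagaStep[] = []
    try {
        for (const step of steps) {
            await step.execute()
            completed.push(step)
        }
    } catch (error) {
        for (const step of completed.reverse()) {
            await step.compensate()
        }
        throw error
    }
}

// Usage: createOrder, reserveInventory, chargePayment would each be a local
// transaction in its own service, paired with an undo action.
```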
|
||||
### Trade-offs
|
||||
|
||||
**Advantages:**
|
||||
- Works across service boundaries
|
||||
- No distributed locks
|
||||
- Services remain autonomous
|
||||
- Reference: [Chris Richardson - Saga](https://chrisrichardson.net/post/microservices/patterns/data/2019/07/22/design-sagas.html)
|
||||
|
||||
**Challenges:**
|
||||
- Complexity of compensation logic
|
||||
- Eventual consistency
|
||||
- Debugging distributed sagas
|
||||
- Reference: [Microsoft - Saga Considerations](https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/saga/saga#issues-and-considerations)
|
||||
|
||||
---
|
||||
|
||||
## 28. Anti-Corruption Layer
|
||||
|
||||
### Eric Evans: Domain-Driven Design (2003)
|
||||
|
||||
**Original Definition:**
|
||||
- Chapter 14: "Maintaining Model Integrity"
|
||||
- "Create an isolating layer to provide clients with functionality in terms of their own domain model"
|
||||
- Protects your model from external/legacy models
|
||||
- Reference: [DDD Reference](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
**Purpose:**
|
||||
- "The translation layer between a new system and an external system"
|
||||
- Prevents external model concepts from leaking in
|
||||
- Reference: [Martin Fowler - Anti-Corruption Layer](https://martinfowler.com/bliki/AntiCorruptionLayer.html)
|
||||
|
||||
### Microsoft Guidance
|
||||
|
||||
**Azure Architecture Center:**
|
||||
- "Implement a facade or adapter layer between different subsystems that don't share the same semantics"
|
||||
- Isolate subsystems by placing an anti-corruption layer between them
|
||||
- Reference: [Microsoft - ACL Pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/anti-corruption-layer)
|
||||
|
||||
**When to Use:**
|
||||
- Integrating with legacy systems
|
||||
- Migrating from monolith to microservices
|
||||
- Working with third-party APIs
|
||||
- Reference: [Microsoft - ACL When to Use](https://learn.microsoft.com/en-us/azure/architecture/patterns/anti-corruption-layer#when-to-use-this-pattern)
|
||||
|
||||
### Components of ACL
|
||||
|
||||
**Facade:**
|
||||
- Simplified interface to external system
|
||||
- Hides complexity from domain
|
||||
- Reference: [Refactoring Guru - Facade](https://refactoring.guru/design-patterns/facade)
|
||||
|
||||
**Adapter:**
|
||||
- Translates between interfaces
|
||||
- Maps external model to domain model
|
||||
- Reference: [Refactoring Guru - Adapter](https://refactoring.guru/design-patterns/adapter)
|
||||
|
||||
**Translator:**
|
||||
- Converts data structures
|
||||
- Maps field names and types
|
||||
- Handles semantic differences
|
||||
- Reference: [Evans DDD - Model Translation](https://www.domainlanguage.com/)
|
||||
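A minimal TypeScript sketch of an ACL translator mapping an external payload into the domain model (the legacy field names are invented for illustration):

```typescript
// Shape returned by a legacy billing system (external model).
interface LegacyInvoiceDto {
    INV_NO: string
    AMT_CENTS: number
    CUR: string
}

// Our domain model, expressed in our own language.
interface Invoice {
    invoiceNumber: string
    amount: { value: number; currency: string }
}

// Translator inside the ACL: the only place that knows both models.
function toDomainInvoice(dto: LegacyInvoiceDto): Invoice {
    return {
        invoiceNumber: dto.INV_NO,
        amount: { value: dto.AMT_CENTS / 100, currency: dto.CUR },
    }
}
```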
|
||||
### Benefits
|
||||
|
||||
**Isolation:**
|
||||
- Changes to external system don't ripple through domain
|
||||
- Domain model remains pure
|
||||
- Reference: [Microsoft - ACL Benefits](https://learn.microsoft.com/en-us/azure/architecture/patterns/anti-corruption-layer)
|
||||
|
||||
**Gradual Migration:**
|
||||
- Replace legacy components incrementally
|
||||
- Strangler Fig pattern compatibility
|
||||
- Reference: [Martin Fowler - Strangler Fig](https://martinfowler.com/bliki/StranglerFigApplication.html)
|
||||
|
||||
---
|
||||
|
||||
## 29. Ubiquitous Language
|
||||
|
||||
### Eric Evans: Domain-Driven Design (2003)
|
||||
|
||||
**Original Definition:**
|
||||
- Chapter 2: "Communication and the Use of Language"
|
||||
- "A language structured around the domain model and used by all team members"
|
||||
- "The vocabulary of that Ubiquitous Language includes the names of classes and prominent operations"
|
||||
- Reference: [Martin Fowler - Ubiquitous Language](https://martinfowler.com/bliki/UbiquitousLanguage.html)
|
||||
|
||||
**Key Principles:**
|
||||
- Shared by developers and domain experts
|
||||
- Used in code, conversations, and documentation
|
||||
- Changes to language reflect model changes
|
||||
- Reference: [DDD Reference](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf)
|
||||
|
||||
### Why It Matters
|
||||
|
||||
**Communication Benefits:**
|
||||
- Reduces translation between business and tech
|
||||
- Catches misunderstandings early
|
||||
- Domain experts can read code names
|
||||
- Reference: [InfoQ - Ubiquitous Language](https://www.infoq.com/articles/ddd-ubiquitous-language/)
|
||||
|
||||
**Design Benefits:**
|
||||
- Model reflects real domain concepts
|
||||
- Code becomes self-documenting
|
||||
- Easier onboarding for new team members
|
||||
- Reference: [Vaughn Vernon - Implementing DDD](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577)
|
||||
|
||||
### Building Ubiquitous Language
|
||||
|
||||
**Glossary:**
|
||||
- Document key terms and definitions
|
||||
- Keep updated as understanding evolves
|
||||
- Reference: [DDD Community - Glossary](https://thedomaindrivendesign.io/glossary/)
|
||||
|
||||
**Event Storming:**
|
||||
- Collaborative workshop technique
|
||||
- Discover domain events and concepts
|
||||
- Build shared understanding and language
|
||||
- Reference: [Alberto Brandolini - Event Storming](https://www.eventstorming.com/)
|
||||
|
||||
### Common Pitfalls
|
||||
|
||||
**Inconsistent Terminology:**
|
||||
- Same concept with different names (Customer/Client/User)
|
||||
- Different concepts with same name
|
||||
- Reference: [Domain Language - Building UL](https://www.domainlanguage.com/)
|
||||
|
||||
**Technical Terms in Domain:**
|
||||
- "DTO", "Entity", "Repository" are technical
|
||||
- Domain should use business terms
|
||||
- Reference: [Evans DDD - Model-Driven Design](https://www.domainlanguage.com/)
|
||||
|
||||
---
|
||||
|
||||
## Conclusion

The code quality detection rules implemented in Guardian are firmly grounded in:

1. **Academic Research**: Peer-reviewed papers on software maintainability, complexity metrics, code quality, technical debt prioritization, severity classification, and distributed systems (Sagas)
2. **Industry Standards**: ISO/IEC 25010, SonarQube rules, OWASP security guidelines, Google and Airbnb style guides
3. **Authoritative Books**:
    - Gang of Four's "Design Patterns" (1994)
    - Bertrand Meyer's "Object-Oriented Software Construction" (1988, 1997)
    - Robert C. Martin's "Clean Architecture" (2017)
    - Vaughn Vernon's "Implementing Domain-Driven Design" (2013)
    - Chris Richardson's "Microservices Patterns" (2018)
    - Eric Evans' "Domain-Driven Design" (2003)
    - Martin Fowler's "Patterns of Enterprise Application Architecture" (2002)
    - Martin Fowler's "Refactoring" (1999, 2018)
    - Steve McConnell's "Code Complete" (1993, 2004)
    - Joshua Bloch's "Effective Java" (2001, 2018)
    - Mark Seemann's "Dependency Injection in .NET" (2011, 2019)
    - Bobby Woolf's "Null Object" in PLoPD3 (1997)
4. **Expert Guidance**: Martin Fowler, Robert C. Martin (Uncle Bob), Eric Evans, Vaughn Vernon, Alistair Cockburn, Kent Beck, Greg Young, Bertrand Meyer, Mark Seemann, Chris Richardson, Alberto Brandolini
5. **Security Standards**: OWASP Secrets Management, GitHub Secret Scanning, GitGuardian best practices
6. **Open Source Tools**: ArchUnit, SonarQube, ESLint, Secretlint - widely adopted in enterprise environments
7. **DDD Tactical & Strategic Patterns**: Domain Events, Value Objects, Entities, Aggregates, Bounded Contexts, Anti-Corruption Layer, Ubiquitous Language, Specifications, Factories
8. **Architectural Patterns**: CQS/CQRS, Saga, Visitor/Double Dispatch, Null Object, Persistence Ignorance

These rules represent decades of software engineering wisdom, empirical research, security best practices, and battle-tested practices from the world's leading software organizations and thought leaders.

@@ -845,9 +1678,9 @@ These rules represent decades of software engineering wisdom, empirical research

---

**Document Version**: 2.0
**Last Updated**: 2025-12-04
**Based on research as of**: December 2025

**Questions or want to contribute research?**
- 📧 Email: fozilbek.samiyev@gmail.com
- 🐙 GitHub: https://github.com/samiyev/puaros/issues

packages/ipuaro/ARCHITECTURE.md (new file, 566 lines)

@@ -0,0 +1,566 @@

# ipuaro Architecture
|
||||
|
||||
This document describes the architecture, design decisions, and implementation details of ipuaro.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [Overview](#overview)
|
||||
- [Clean Architecture](#clean-architecture)
|
||||
- [Layer Details](#layer-details)
|
||||
- [Data Flow](#data-flow)
|
||||
- [Key Design Decisions](#key-design-decisions)
|
||||
- [Tech Stack](#tech-stack)
|
||||
- [Performance Considerations](#performance-considerations)
|
||||
|
||||
## Overview
|
||||
|
||||
ipuaro is a local AI agent for codebase operations built on Clean Architecture principles. It creates the feeling of an "infinite" context window through lazy loading and AST-based code understanding.
|
||||
|
||||
### Core Concepts
|
||||
|
||||
1. **Lazy Loading**: Load code on-demand via tools, not all at once
|
||||
2. **AST-Based Understanding**: Parse and index code structure for fast lookups
|
||||
3. **100% Local**: Ollama LLM + Redis storage, no cloud dependencies
|
||||
4. **Session Persistence**: Resume conversations across restarts
|
||||
5. **Tool-Based Interface**: LLM accesses code through 18 specialized tools
|
||||
|
||||
## Clean Architecture

The project follows Clean Architecture with strict dependency rules:

```
┌─────────────────────────────────────────────────┐
│                    TUI Layer                    │  ← Ink/React components
│                   (Framework)                   │
├─────────────────────────────────────────────────┤
│                    CLI Layer                    │  ← Commander.js entry
│                   (Interface)                   │
├─────────────────────────────────────────────────┤
│              Infrastructure Layer               │  ← External adapters
│    (Storage, LLM, Indexer, Tools, Security)     │
├─────────────────────────────────────────────────┤
│                Application Layer                │  ← Use cases & DTOs
│       (StartSession, HandleMessage, etc.)       │
├─────────────────────────────────────────────────┤
│                  Domain Layer                   │  ← Business logic
│  (Entities, Value Objects, Service Interfaces)  │
└─────────────────────────────────────────────────┘
```

**Dependency Rule**: Outer layers depend on inner layers, never the reverse.

## Layer Details
|
||||
|
||||
### Domain Layer (Core Business Logic)
|
||||
|
||||
**Location**: `src/domain/`
|
||||
|
||||
**Responsibilities**:
|
||||
- Define business entities and value objects
|
||||
- Declare service interfaces (ports)
|
||||
- No external dependencies (pure TypeScript)
|
||||
|
||||
**Components**:
|
||||
|
||||
```
|
||||
domain/
|
||||
├── entities/
|
||||
│ ├── Session.ts # Session entity with history and stats
|
||||
│ └── Project.ts # Project entity with metadata
|
||||
├── value-objects/
|
||||
│ ├── FileData.ts # File content with hash and size
|
||||
│ ├── FileAST.ts # Parsed AST structure
|
||||
│ ├── FileMeta.ts # Complexity, dependencies, hub detection
|
||||
│ ├── ChatMessage.ts # Message with role, content, tool calls
|
||||
│ ├── ToolCall.ts # Tool invocation with parameters
|
||||
│ ├── ToolResult.ts # Tool execution result
|
||||
│ └── UndoEntry.ts # File change for undo stack
|
||||
├── services/
|
||||
│ ├── IStorage.ts # Storage interface (port)
|
||||
│ ├── ILLMClient.ts # LLM interface (port)
|
||||
│ ├── ITool.ts # Tool interface (port)
|
||||
│ └── IIndexer.ts # Indexer interface (port)
|
||||
└── constants/
|
||||
└── index.ts # Domain constants
|
||||
```
|
||||
|
||||
**Key Design**:
|
||||
- Value objects are immutable
|
||||
- Entities have identity and lifecycle
|
||||
- Interfaces define contracts, not implementations
|
||||
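For illustration, a value object and a port in this layer might look like the sketch below (field and method names are assumptions, not necessarily the ones used in the actual source files):

```typescript
// Immutable value object: equality by value, no identity, no external dependencies.
class FileData {
    constructor(
        public readonly path: string,
        public readonly content: string,
        public readonly hash: string,
        public readonly size: number,
    ) {}
}

// Port: the domain declares what it needs; an infrastructure adapter provides it.
interface IStorage {
    getFile(path: string): Promise<FileData | null>
    setFile(file: FileData): Promise<void>
}
```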
|
||||
### Application Layer (Use Cases)
|
||||
|
||||
**Location**: `src/application/`
|
||||
|
||||
**Responsibilities**:
|
||||
- Orchestrate domain logic
|
||||
- Implement use cases (application-specific business rules)
|
||||
- Define DTOs for data transfer
|
||||
- Coordinate between domain and infrastructure
|
||||
|
||||
**Components**:
|
||||
|
||||
```
|
||||
application/
|
||||
├── use-cases/
|
||||
│ ├── StartSession.ts # Initialize or load session
|
||||
│ ├── HandleMessage.ts # Main message orchestrator
|
||||
│ ├── IndexProject.ts # Project indexing workflow
|
||||
│ ├── ExecuteTool.ts # Tool execution with validation
|
||||
│ └── UndoChange.ts # Revert file changes
|
||||
├── dtos/
|
||||
│ ├── SessionDto.ts # Session data transfer object
|
||||
│ ├── MessageDto.ts # Message DTO
|
||||
│ └── ToolCallDto.ts # Tool call DTO
|
||||
├── mappers/
|
||||
│ └── SessionMapper.ts # Domain ↔ DTO conversion
|
||||
└── interfaces/
|
||||
└── IToolRegistry.ts # Tool registry interface
|
||||
```
|
||||
|
||||
**Key Use Cases**:
|
||||
|
||||
1. **StartSession**: Creates new session or loads latest
|
||||
2. **HandleMessage**: Main flow (LLM → Tools → Response)
|
||||
3. **IndexProject**: Scan → Parse → Analyze → Store
|
||||
4. **UndoChange**: Restore file from undo stack
|
||||
|
||||
### Infrastructure Layer (External Implementations)
|
||||
|
||||
**Location**: `src/infrastructure/`
|
||||
|
||||
**Responsibilities**:
|
||||
- Implement domain interfaces
|
||||
- Handle external systems (Redis, Ollama, filesystem)
|
||||
- Provide concrete tool implementations
|
||||
- Security and validation
|
||||
|
||||
**Components**:
|
||||
|
||||
```
|
||||
infrastructure/
|
||||
├── storage/
|
||||
│ ├── RedisClient.ts # Redis connection wrapper
|
||||
│ ├── RedisStorage.ts # IStorage implementation
|
||||
│ └── schema.ts # Redis key schema
|
||||
├── llm/
|
||||
│ ├── OllamaClient.ts # ILLMClient implementation
|
||||
│ ├── prompts.ts # System prompts
|
||||
│ └── ResponseParser.ts # Parse XML tool calls
|
||||
├── indexer/
|
||||
│ ├── FileScanner.ts # Recursive file scanning
|
||||
│ ├── ASTParser.ts # tree-sitter parsing
|
||||
│ ├── MetaAnalyzer.ts # Complexity and dependencies
|
||||
│ ├── IndexBuilder.ts # Symbol index + deps graph
|
||||
│ └── Watchdog.ts # File watching (chokidar)
|
||||
├── tools/ # 18 tool implementations
|
||||
│ ├── registry.ts
|
||||
│ ├── read/ # GetLines, GetFunction, GetClass, GetStructure
|
||||
│ ├── edit/ # EditLines, CreateFile, DeleteFile
|
||||
│ ├── search/ # FindReferences, FindDefinition
|
||||
│ ├── analysis/ # GetDependencies, GetDependents, GetComplexity, GetTodos
|
||||
│ ├── git/ # GitStatus, GitDiff, GitCommit
|
||||
│ └── run/ # RunCommand, RunTests
|
||||
└── security/
|
||||
├── Blacklist.ts # Dangerous commands
|
||||
├── Whitelist.ts # Safe commands
|
||||
└── PathValidator.ts # Path traversal prevention
|
||||
```
|
||||
|
||||
**Key Implementations**:
|
||||
|
||||
1. **RedisStorage**: Uses Redis hashes for files/AST/meta, lists for undo
|
||||
2. **OllamaClient**: HTTP API client with tool calling support
|
||||
3. **ASTParser**: tree-sitter for TS/JS/TSX/JSX parsing
|
||||
4. **ToolRegistry**: Manages tool lifecycle and execution
|
||||
|
||||
### TUI Layer (Terminal UI)
|
||||
|
||||
**Location**: `src/tui/`
|
||||
|
||||
**Responsibilities**:
|
||||
- Render terminal UI with Ink (React for terminal)
|
||||
- Handle user input and hotkeys
|
||||
- Display chat history and status
|
||||
|
||||
**Components**:
|
||||
|
||||
```
|
||||
tui/
|
||||
├── App.tsx # Main app shell
|
||||
├── components/
|
||||
│ ├── StatusBar.tsx # Top status bar
|
||||
│ ├── Chat.tsx # Message history display
|
||||
│ ├── Input.tsx # User input with history
|
||||
│ ├── DiffView.tsx # Inline diff display
|
||||
│ ├── ConfirmDialog.tsx # Edit confirmation
|
||||
│ ├── ErrorDialog.tsx # Error handling
|
||||
│ └── Progress.tsx # Progress bar (indexing)
|
||||
└── hooks/
|
||||
├── useSession.ts # Session state management
|
||||
├── useHotkeys.ts # Keyboard shortcuts
|
||||
└── useCommands.ts # Slash command handling
|
||||
```
|
||||
|
||||
**Key Features**:
|
||||
|
||||
- Real-time status updates (context usage, session time)
|
||||
- Input history with ↑/↓ navigation
|
||||
- Hotkeys: Ctrl+C (interrupt), Ctrl+D (exit), Ctrl+Z (undo)
|
||||
- Diff preview for edits with confirmation
|
||||
- Error recovery with retry/skip/abort options
|
||||
|
||||
### CLI Layer (Entry Point)
|
||||
|
||||
**Location**: `src/cli/`
|
||||
|
||||
**Responsibilities**:
|
||||
- Command-line interface with Commander.js
|
||||
- Dependency injection and initialization
|
||||
- Onboarding checks (Redis, Ollama, model)
|
||||
|
||||
**Components**:
|
||||
|
||||
```
|
||||
cli/
|
||||
├── index.ts # Commander.js setup
|
||||
└── commands/
|
||||
├── start.ts # Start TUI (default command)
|
||||
├── init.ts # Create .ipuaro.json config
|
||||
└── index-cmd.ts # Index-only command
|
||||
```
|
||||
|
||||
**Commands**:
|
||||
|
||||
1. `ipuaro [path]` - Start TUI in directory
|
||||
2. `ipuaro init` - Create config file
|
||||
3. `ipuaro index` - Index without TUI
|
||||
|
||||
### Shared Module
|
||||
|
||||
**Location**: `src/shared/`
|
||||
|
||||
**Responsibilities**:
|
||||
- Cross-cutting concerns
|
||||
- Configuration management
|
||||
- Error handling
|
||||
- Utility functions
|
||||
|
||||
**Components**:
|
||||
|
||||
```
|
||||
shared/
|
||||
├── types/
|
||||
│ └── index.ts # Shared TypeScript types
|
||||
├── constants/
|
||||
│ ├── config.ts # Config schema and loader
|
||||
│ └── messages.ts # User-facing messages
|
||||
├── utils/
|
||||
│ ├── hash.ts # MD5 hashing
|
||||
│ └── tokens.ts # Token estimation
|
||||
└── errors/
|
||||
├── IpuaroError.ts # Custom error class
|
||||
└── ErrorHandler.ts # Error handling service
|
||||
```
|
||||
|
||||
## Data Flow
|
||||
|
||||
### 1. Startup Flow
|
||||
|
||||
```
|
||||
CLI Entry (bin/ipuaro.js)
|
||||
↓
|
||||
Commander.js parses arguments
|
||||
↓
|
||||
Onboarding checks (Redis, Ollama, Model)
|
||||
↓
|
||||
Initialize dependencies:
|
||||
- RedisClient connects
|
||||
- RedisStorage initialized
|
||||
- OllamaClient created
|
||||
- ToolRegistry with 18 tools
|
||||
↓
|
||||
StartSession use case:
|
||||
- Load latest session or create new
|
||||
- Initialize ContextManager
|
||||
↓
|
||||
Launch TUI (App.tsx)
|
||||
- Render StatusBar, Chat, Input
|
||||
- Set up hotkeys
|
||||
```
|
||||
|
||||
### 2. Message Flow
|
||||
|
||||
```
|
||||
User types message in Input.tsx
|
||||
↓
|
||||
useSession.handleMessage()
|
||||
↓
|
||||
HandleMessage use case:
|
||||
1. Add user message to history
|
||||
2. Build context (system prompt + structure + AST)
|
||||
3. Send to OllamaClient.chat()
|
||||
4. Parse tool calls from response
|
||||
5. For each tool call:
|
||||
- If requiresConfirmation: show ConfirmDialog
|
||||
- Execute tool via ToolRegistry
|
||||
- Collect results
|
||||
6. If tool results: goto step 3 (continue loop)
|
||||
7. Add assistant response to history
|
||||
8. Update session in Redis
|
||||
↓
|
||||
Display response in Chat.tsx
|
||||
```
|
||||
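The core of that loop can be sketched as follows (simplified; the real method names in `HandleMessage.ts` may differ):

```typescript
interface ToolCall { name: string; params: Record<string, string> }

interface Deps {
    chat(messages: string[]): Promise<string>
    parseToolCalls(response: string): ToolCall[]
    executeTool(call: ToolCall): Promise<string>
}

// Simplified message loop: call the LLM, run any requested tools,
// feed the results back, and repeat until no more tools are requested.
async function handleMessage(userText: string, deps: Deps): Promise<string> {
    const history: string[] = [userText]

    while (true) {
        const response = await deps.chat(history)
        const calls = deps.parseToolCalls(response)

        if (calls.length === 0) {
            return response // final assistant answer
        }

        history.push(response)
        for (const call of calls) {
            history.push(await deps.executeTool(call))
        }
    }
}
```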
|
||||
### 3. Edit Flow
|
||||
|
||||
```
|
||||
LLM calls edit_lines tool
|
||||
↓
|
||||
ToolRegistry.execute()
|
||||
↓
|
||||
EditLinesTool.execute():
|
||||
1. Validate path (PathValidator)
|
||||
2. Check hash conflict
|
||||
3. Build diff
|
||||
↓
|
||||
ConfirmDialog shows diff
|
||||
↓
|
||||
User chooses:
|
||||
- Apply: Continue
|
||||
- Cancel: Return error to LLM
|
||||
- Edit: Manual edit (future)
|
||||
↓
|
||||
If Apply:
|
||||
1. Create UndoEntry
|
||||
2. Push to undo stack (Redis list)
|
||||
3. Write to filesystem
|
||||
4. Update RedisStorage (lines, hash, AST, meta)
|
||||
↓
|
||||
Return success to LLM
|
||||
```
|
||||
|
||||
### 4. Indexing Flow
|
||||
|
||||
```
|
||||
FileScanner.scan()
|
||||
- Recursively walk directory
|
||||
- Filter via .gitignore + ignore patterns
|
||||
- Detect binary files (skip)
|
||||
↓
|
||||
For each file:
|
||||
ASTParser.parse()
|
||||
- tree-sitter parse
|
||||
- Extract imports, exports, functions, classes
|
||||
↓
|
||||
MetaAnalyzer.analyze()
|
||||
- Calculate complexity (LOC, nesting, cyclomatic)
|
||||
- Resolve dependencies (imports → file paths)
|
||||
- Detect hubs (>5 dependents)
|
||||
↓
|
||||
RedisStorage.setFile(), .setAST(), .setMeta()
|
||||
↓
|
||||
IndexBuilder.buildSymbolIndex()
|
||||
- Map symbol names → locations
|
||||
↓
|
||||
IndexBuilder.buildDepsGraph()
|
||||
- Build bidirectional import graph
|
||||
↓
|
||||
Store indexes in Redis
|
||||
↓
|
||||
Watchdog.start()
|
||||
- Watch for file changes
|
||||
- On change: Re-parse and update indexes
|
||||
```
|
||||
|
||||
## Key Design Decisions

### 1. Why Redis?

**Pros**:
- Fast in-memory access for frequent reads
- AOF persistence (append-only file) for durability
- Native support for hashes, lists, sets
- Simple key-value model fits our needs
- Excellent for session data

**Alternatives considered**:
- SQLite: Slower, overkill for our use case
- JSON files: No concurrent access, slow for large data
- PostgreSQL: Too heavy, we don't need relational features

### 2. Why tree-sitter?

**Pros**:
- Incremental parsing (fast re-parsing)
- Error-tolerant (works with syntax errors)
- Multi-language support
- Used by GitHub, Neovim, Atom

**Alternatives considered**:
- TypeScript Compiler API: TS-only, not error-tolerant
- Babel: JS-focused, heavy dependencies
- Regex: Fragile, inaccurate

### 3. Why Ollama?

**Pros**:
- 100% local, no API keys
- Easy installation (`brew install ollama`)
- Good model selection (qwen2.5-coder, deepseek-coder)
- Tool calling support

**Alternatives considered**:
- OpenAI: Costs money, sends code to cloud
- Anthropic Claude: Same concerns as OpenAI
- llama.cpp: Lower level, requires more setup

Planned: Support for OpenAI/Anthropic in v1.2.0 as optional providers.

### 4. Why XML for Tool Calls?

**Pros**:
- LLMs trained on XML (very common format)
- Self-describing (parameter names in tags)
- Easy to parse with regex
- More reliable than JSON for smaller models

**Alternatives considered**:
- JSON: Smaller models struggle with exact JSON syntax
- Function calling API: Not all models support it

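As an illustration only (the exact tag names ipuaro uses are not shown here), an XML-style tool call and a small regex-based parser might look like this:

```typescript
// Example of an XML-style tool call as it might appear in a model reply.
// The tag names are illustrative; ipuaro's actual format may differ.
const exampleReply = `
I'll read that function first.
<tool_call>
    <name>get_function</name>
    <param name="path">src/auth/service.ts</param>
    <param name="name">login</param>
</tool_call>`

interface ParsedCall {
    name: string
    params: Record<string, string>
}

// Small regex-based parser: tolerant of surrounding prose, no strict JSON syntax required.
function parseXmlToolCalls(reply: string): ParsedCall[] {
    const calls: ParsedCall[] = []
    for (const [, body] of reply.matchAll(/<tool_call>([\s\S]*?)<\/tool_call>/g)) {
        const name = /<name>(.*?)<\/name>/.exec(body)?.[1] ?? ""
        const params: Record<string, string> = {}
        for (const [, key, value] of body.matchAll(/<param name="(.*?)">([\s\S]*?)<\/param>/g)) {
            params[key] = value
        }
        calls.push({ name, params })
    }
    return calls
}

console.log(parseXmlToolCalls(exampleReply))
// [{ name: "get_function", params: { path: "src/auth/service.ts", name: "login" } }]
```
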
### 5. Why Clean Architecture?

**Pros**:
- Testability (domain has no external dependencies)
- Flexibility (easy to swap Redis for SQLite)
- Maintainability (clear separation of concerns)
- Scalability (layers can evolve independently)

**Cost**: More files and indirection, but worth it for long-term maintenance.

### 6. Why Lazy Loading Instead of RAG?

**RAG (Retrieval Augmented Generation)**:
- Pre-computes embeddings
- Searches embeddings for relevant chunks
- Adds chunks to context

**Lazy Loading (our approach)**:
- Agent requests specific code via tools
- More precise control over what's loaded
- Simpler implementation (no embeddings)
- Works with any LLM (no embedding model needed)

**Trade-off**: RAG might be better for semantic search ("find error handling code"), but tool-based approach gives agent explicit control.

## Tech Stack

### Core Dependencies

| Package | Purpose | Why? |
|---------|---------|------|
| `ioredis` | Redis client | Most popular, excellent TypeScript support |
| `ollama` | LLM client | Official SDK, simple API |
| `tree-sitter` | AST parsing | Fast, error-tolerant, multi-language |
| `tree-sitter-typescript` | TS/TSX parser | Official TypeScript grammar |
| `tree-sitter-javascript` | JS/JSX parser | Official JavaScript grammar |
| `ink` | Terminal UI | React for terminal, declarative |
| `ink-text-input` | Input component | Maintained ink component |
| `react` | UI framework | Required by Ink |
| `simple-git` | Git operations | Simple API, well-tested |
| `chokidar` | File watching | Cross-platform, reliable |
| `commander` | CLI framework | Industry standard |
| `zod` | Validation | Type-safe validation |
| `globby` | File globbing | ESM-native, .gitignore support |

### Development Dependencies

| Package | Purpose |
|---------|---------|
| `vitest` | Testing framework |
| `@vitest/coverage-v8` | Coverage reporting |
| `@vitest/ui` | Interactive test UI |
| `tsup` | TypeScript bundler |
| `typescript` | Type checking |

## Performance Considerations

### 1. Indexing Performance

**Problem**: Large projects (10k+ files) take time to index.

**Optimizations**:
- Incremental parsing with tree-sitter (only changed files)
- Parallel parsing (planned for v1.1.0)
- Ignore patterns (.gitignore, node_modules, dist)
- Skip binary files early

**Current**: ~1000 files/second on M1 Mac

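For the watch-and-update part, the `chokidar` dependency listed in the tech stack can drive targeted re-parsing. A hypothetical wiring, with the re-index callback left abstract:

```typescript
import chokidar from "chokidar"

// Hypothetical wiring for the watchdog: re-parse only files that changed,
// instead of re-running the full index. (Deletions would need a separate
// removal path, omitted here.)
function watchProject(root: string, reindexFile: (path: string) => Promise<void>): void {
    chokidar
        .watch(root, {
            ignored: ["**/node_modules/**", "**/dist/**", "**/.git/**"],
            ignoreInitial: true,
        })
        .on("add", (path) => void reindexFile(path))
        .on("change", (path) => void reindexFile(path))
}
```
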
### 2. Memory Usage

**Problem**: Entire AST in memory could be 100s of MB.

**Optimizations**:
- Store ASTs in Redis (out of Node.js heap)
- Load ASTs on-demand from Redis
- Lazy-load file content (not stored in session)

**Current**: ~200MB for 5000 files indexed

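A sketch of the on-demand AST load described above; the Redis key layout is an assumption for illustration:

```typescript
import Redis from "ioredis"

// Sketch of on-demand AST access: ASTs live in Redis as JSON, outside the
// Node.js heap, and are parsed only when a tool needs them.
async function loadAst(redis: Redis, path: string): Promise<unknown> {
    const raw = await redis.hget(`file:${path}`, "ast")
    return raw ? JSON.parse(raw) : null
}
```
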
### 3. Context Window Management

**Problem**: 128k token context window fills up.

**Optimizations**:
- Auto-compression at 80% usage
- LLM summarizes old messages
- Remove tool results older than 5 messages
- Only load structure + metadata initially (~10k tokens)

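The compression trigger itself is a simple usage ratio (the `ContextManager` changes later in this diff use the same check). A minimal illustration with the default numbers:

```typescript
// The defaults mentioned above, expressed directly; the real ContextManager
// also tracks per-file and per-message token counts.
const CONTEXT_WINDOW_SIZE = 128_000
const COMPRESSION_THRESHOLD = 0.8

function needsCompression(currentTokens: number, windowSize = CONTEXT_WINDOW_SIZE): boolean {
    return currentTokens / windowSize > COMPRESSION_THRESHOLD
}

// At ~110k tokens of a 128k window the usage is ~0.86, so older messages get
// summarized and stale tool results are dropped.
console.log(needsCompression(110_000)) // true
```
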
### 4. Redis Performance

**Problem**: Redis is single-threaded.

**Optimizations**:
- Pipeline commands where possible
- Use hashes for related data (fewer keys)
- AOF every second (not every command)
- Keep undo stack limited (10 entries)

**Current**: <1ms latency for most operations

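As an example of pipelining, the writes that follow an applied edit can be batched into a single round trip with ioredis. The key names below are assumptions, not ipuaro's actual schema:

```typescript
import Redis from "ioredis"

// Illustrative batching of the post-edit writes into one round trip.
async function persistEdit(redis: Redis, path: string, lines: string[], hash: string): Promise<void> {
    await redis
        .pipeline()
        .hset(`file:${path}`, "lines", JSON.stringify(lines))
        .hset(`file:${path}`, "hash", hash)
        .lpush("undo:stack", JSON.stringify({ path, hash, ts: Date.now() }))
        .ltrim("undo:stack", 0, 9) // cap the undo stack at 10 entries
        .exec()
}
```
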
### 5. Tool Execution

**Problem**: Tool execution could block LLM.

**Current**: Synchronous execution (simpler)

**Future**: Async tool execution with progress callbacks (v1.1.0)

## Future Improvements

### v1.1.0 - Performance
- Parallel AST parsing
- Incremental indexing (only changed files)
- Response caching
- Stream LLM responses

### v1.2.0 - Features
- Multiple file edits in one operation
- Batch operations
- Custom prompt templates
- OpenAI/Anthropic provider support

### v1.3.0 - Extensibility
- Plugin system for custom tools
- LSP integration
- Multi-language support (Python, Go, Rust)
- Custom indexing rules

---

**Last Updated**: 2025-12-01

**Version**: 0.16.0

File diff suppressed because it is too large
@@ -7,9 +7,9 @@
|
||||
[](https://www.npmjs.com/package/@samiyev/ipuaro)
|
||||
[](https://opensource.org/licenses/MIT)
|
||||
|
||||
> **Status:** 🚧 Early Development (v0.1.0 Foundation)
|
||||
> **Status:** 🎉 Release Candidate (v0.16.0 → v1.0.0)
|
||||
>
|
||||
> Core infrastructure is ready. Active development in progress.
|
||||
> All core features complete. Production-ready release coming soon.
|
||||
|
||||
## Vision
|
||||
|
||||
@@ -19,18 +19,20 @@ Work with codebases of any size using local AI:
|
||||
- 🔒 **100% Local**: Your code never leaves your machine
|
||||
- ⚡ **Fast**: Redis persistence + tree-sitter parsing
|
||||
|
||||
## Planned Features
|
||||
## Features
|
||||
|
||||
### 18 LLM Tools
|
||||
### 18 LLM Tools (All Implemented ✅)
|
||||
|
||||
| Category | Tools | Status |
|
||||
|----------|-------|--------|
|
||||
| **Read** | `get_lines`, `get_function`, `get_class`, `get_structure` | 🔜 v0.5.0 |
|
||||
| **Edit** | `edit_lines`, `create_file`, `delete_file` | 🔜 v0.6.0 |
|
||||
| **Search** | `find_references`, `find_definition` | 🔜 v0.7.0 |
|
||||
| **Analysis** | `get_dependencies`, `get_dependents`, `get_complexity`, `get_todos` | 🔜 v0.8.0 |
|
||||
| **Git** | `git_status`, `git_diff`, `git_commit` | 🔜 v0.9.0 |
|
||||
| **Run** | `run_command`, `run_tests` | 🔜 v0.9.0 |
|
||||
| Category | Tools | Description |
|
||||
|----------|-------|-------------|
|
||||
| **Read** | `get_lines`, `get_function`, `get_class`, `get_structure` | Read code without loading everything into context |
|
||||
| **Edit** | `edit_lines`, `create_file`, `delete_file` | Make changes with confirmation and undo support |
|
||||
| **Search** | `find_references`, `find_definition` | Find symbol definitions and usages across codebase |
|
||||
| **Analysis** | `get_dependencies`, `get_dependents`, `get_complexity`, `get_todos` | Analyze code structure, complexity, and TODOs |
|
||||
| **Git** | `git_status`, `git_diff`, `git_commit` | Git operations with safety checks |
|
||||
| **Run** | `run_command`, `run_tests` | Execute commands and tests with security validation |
|
||||
|
||||
See [Tools Documentation](#tools-reference) below for detailed usage examples.
|
||||
|
||||
### Terminal UI
|
||||
|
||||
@@ -54,6 +56,31 @@ Work with codebases of any size using local AI:
|
||||
└───────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Slash Commands
|
||||
|
||||
Control your session with built-in commands:
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `/help` | Show all commands and hotkeys |
|
||||
| `/clear` | Clear chat history (keeps session) |
|
||||
| `/undo` | Revert last file change from undo stack |
|
||||
| `/sessions [list\|load\|delete] [id]` | Manage sessions |
|
||||
| `/status` | Show system status (LLM, context, stats) |
|
||||
| `/reindex` | Force full project reindexation |
|
||||
| `/eval` | LLM self-check for hallucinations |
|
||||
| `/auto-apply [on\|off]` | Toggle auto-apply mode for edits |
|
||||
|
||||
### Hotkeys
|
||||
|
||||
| Hotkey | Action |
|
||||
|--------|--------|
|
||||
| `Ctrl+C` | Interrupt generation (1st press) / Exit (2nd press within 1s) |
|
||||
| `Ctrl+D` | Exit and save session |
|
||||
| `Ctrl+Z` | Undo last file change |
|
||||
| `↑` / `↓` | Navigate input history |
|
||||
| `Tab` | Path autocomplete (coming soon) |
|
||||
|
||||
### Key Capabilities
|
||||
|
||||
🔍 **Smart Code Understanding**
|
||||
@@ -124,6 +151,23 @@ ipuaro --model qwen2.5-coder:32b-instruct
|
||||
ipuaro --auto-apply
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
Try ipuaro with our demo project:
|
||||
|
||||
```bash
|
||||
# Navigate to demo project
|
||||
cd examples/demo-project
|
||||
|
||||
# Install dependencies
|
||||
npm install
|
||||
|
||||
# Start ipuaro
|
||||
npx @samiyev/ipuaro
|
||||
```
|
||||
|
||||
See [examples/demo-project](./examples/demo-project) for detailed usage guide and example conversations.
|
||||
|
||||
## Commands
|
||||
|
||||
| Command | Description |
|
||||
@@ -181,49 +225,263 @@ Clean Architecture with clear separation:
|
||||
|
||||
## Development Status
|
||||
|
||||
### ✅ Completed (v0.1.0)
|
||||
### ✅ Completed (v0.1.0 - v0.16.0)
|
||||
|
||||
- [x] Project setup (tsup, vitest, ESM)
|
||||
- [x] Domain entities (Session, Project)
|
||||
- [x] Value objects (FileData, FileAST, ChatMessage, etc.)
|
||||
- [x] Service interfaces (IStorage, ILLMClient, ITool, IIndexer)
|
||||
- [x] Shared module (Config, Errors, Utils)
|
||||
- [x] CLI placeholder commands
|
||||
- [x] 91 unit tests, 100% coverage
|
||||
- [x] **v0.1.0 - v0.4.0**: Foundation (domain, storage, indexer, LLM integration)
|
||||
- [x] **v0.5.0 - v0.9.0**: All 18 tools implemented
|
||||
- [x] **v0.10.0**: Session management with undo support
|
||||
- [x] **v0.11.0 - v0.12.0**: Full TUI with all components
|
||||
- [x] **v0.13.0**: Security (PathValidator, command validation)
|
||||
- [x] **v0.14.0**: 8 slash commands
|
||||
- [x] **v0.15.0**: CLI entry point with onboarding
|
||||
- [x] **v0.16.0**: Comprehensive error handling system
|
||||
- [x] **1420 tests, 98% coverage**
|
||||
|
||||
### 🔜 Next Up
|
||||
### 🔜 v1.0.0 - Production Ready
|
||||
|
||||
- [ ] **v0.2.0** - Redis Storage
|
||||
- [ ] **v0.3.0** - Indexer (file scanning, AST parsing)
|
||||
- [ ] **v0.4.0** - LLM Integration (Ollama)
|
||||
- [ ] **v0.5.0-0.9.0** - Tools implementation
|
||||
- [ ] **v0.10.0** - Session management
|
||||
- [ ] **v0.11.0** - TUI
|
||||
- [ ] Performance optimizations
|
||||
- [ ] Complete documentation
|
||||
- [ ] Working examples
|
||||
|
||||
See [ROADMAP.md](./ROADMAP.md) for detailed development plan.
|
||||
See [ROADMAP.md](./ROADMAP.md) for detailed development plan and [CHANGELOG.md](./CHANGELOG.md) for release history.
|
||||
|
||||
## API (Coming Soon)
|
||||
## Tools Reference
|
||||
|
||||
The AI agent has access to 18 tools for working with your codebase. Here are the most commonly used ones:
|
||||
|
||||
### Read Tools
|
||||
|
||||
**`get_lines(path, start?, end?)`**
|
||||
Read specific lines from a file.
|
||||
|
||||
```
|
||||
You: Show me the authentication logic
|
||||
Assistant: [get_lines src/auth/service.ts 45 67]
|
||||
# Returns lines 45-67 with line numbers
|
||||
```
|
||||
|
||||
**`get_function(path, name)`**
|
||||
Get a specific function's source code and metadata.
|
||||
|
||||
```
|
||||
You: How does the login function work?
|
||||
Assistant: [get_function src/auth/service.ts login]
|
||||
# Returns function code, params, return type, and metadata
|
||||
```
|
||||
|
||||
**`get_class(path, name)`**
|
||||
Get a specific class's source code and metadata.
|
||||
|
||||
```
|
||||
You: Show me the UserService class
|
||||
Assistant: [get_class src/services/user.ts UserService]
|
||||
# Returns class code, methods, properties, and inheritance info
|
||||
```
|
||||
|
||||
**`get_structure(path?, depth?)`**
|
||||
Get directory tree structure.
|
||||
|
||||
```
|
||||
You: What's in the src/auth directory?
|
||||
Assistant: [get_structure src/auth]
|
||||
# Returns ASCII tree with files and folders
|
||||
```
|
||||
|
||||
### Edit Tools
|
||||
|
||||
**`edit_lines(path, start, end, content)`**
|
||||
Replace lines in a file (requires confirmation).
|
||||
|
||||
```
|
||||
You: Update the timeout to 5000ms
|
||||
Assistant: [edit_lines src/config.ts 23 23 " timeout: 5000,"]
|
||||
# Shows diff, asks for confirmation
|
||||
```
|
||||
|
||||
**`create_file(path, content)`**
|
||||
Create a new file (requires confirmation).
|
||||
|
||||
```
|
||||
You: Create a new utility for date formatting
|
||||
Assistant: [create_file src/utils/date.ts "export function formatDate..."]
|
||||
# Creates file after confirmation
|
||||
```
|
||||
|
||||
**`delete_file(path)`**
|
||||
Delete a file (requires confirmation).
|
||||
|
||||
```
|
||||
You: Remove the old test file
|
||||
Assistant: [delete_file tests/old-test.test.ts]
|
||||
# Deletes after confirmation
|
||||
```
|
||||
|
||||
### Search Tools
|
||||
|
||||
**`find_references(symbol, path?)`**
|
||||
Find all usages of a symbol across the codebase.
|
||||
|
||||
```
|
||||
You: Where is getUserById used?
|
||||
Assistant: [find_references getUserById]
|
||||
# Returns all files/lines where it's called
|
||||
```
|
||||
|
||||
**`find_definition(symbol)`**
|
||||
Find where a symbol is defined.
|
||||
|
||||
```
|
||||
You: Where is ApiClient defined?
|
||||
Assistant: [find_definition ApiClient]
|
||||
# Returns file, line, and context
|
||||
```
|
||||
|
||||
### Analysis Tools
|
||||
|
||||
**`get_dependencies(path)`**
|
||||
Get files that a specific file imports.
|
||||
|
||||
```
|
||||
You: What does auth.ts depend on?
|
||||
Assistant: [get_dependencies src/auth/service.ts]
|
||||
# Returns list of imported files
|
||||
```
|
||||
|
||||
**`get_dependents(path)`**
|
||||
Get files that import a specific file.
|
||||
|
||||
```
|
||||
You: What files use the database module?
|
||||
Assistant: [get_dependents src/db/index.ts]
|
||||
# Returns list of files importing this
|
||||
```
|
||||
|
||||
**`get_complexity(path?, limit?)`**
|
||||
Get complexity metrics for files.
|
||||
|
||||
```
|
||||
You: Which files are most complex?
|
||||
Assistant: [get_complexity null 10]
|
||||
# Returns top 10 most complex files with metrics
|
||||
```
|
||||
|
||||
**`get_todos(path?, type?)`**
|
||||
Find TODO/FIXME/HACK comments.
|
||||
|
||||
```
|
||||
You: What TODOs are there?
|
||||
Assistant: [get_todos]
|
||||
# Returns all TODO comments with locations
|
||||
```
|
||||
|
||||
### Git Tools
|
||||
|
||||
**`git_status()`**
|
||||
Get current git repository status.
|
||||
|
||||
```
|
||||
You: What files have changed?
|
||||
Assistant: [git_status]
|
||||
# Returns branch, staged, modified, untracked files
|
||||
```
|
||||
|
||||
**`git_diff(path?, staged?)`**
|
||||
Get uncommitted changes.
|
||||
|
||||
```
|
||||
You: Show me what changed in auth.ts
|
||||
Assistant: [git_diff src/auth/service.ts]
|
||||
# Returns diff output
|
||||
```
|
||||
|
||||
**`git_commit(message, files?)`**
|
||||
Create a git commit (requires confirmation).
|
||||
|
||||
```
|
||||
You: Commit these auth changes
|
||||
Assistant: [git_commit "feat: add password reset flow" ["src/auth/service.ts"]]
|
||||
# Creates commit after confirmation
|
||||
```
|
||||
|
||||
### Run Tools
|
||||
|
||||
**`run_command(command, timeout?)`**
|
||||
Execute shell commands (with security validation).
|
||||
|
||||
```
|
||||
You: Run the build
|
||||
Assistant: [run_command "npm run build"]
|
||||
# Checks security, then executes
|
||||
```
|
||||
|
||||
**`run_tests(path?, filter?, watch?)`**
|
||||
Run project tests.
|
||||
|
||||
```
|
||||
You: Test the auth module
|
||||
Assistant: [run_tests "tests/auth" null false]
|
||||
# Auto-detects test runner and executes
|
||||
```
|
||||
|
||||
For complete tool documentation with all parameters and options, see [TOOLS.md](./TOOLS.md).
|
||||
|
||||
## Programmatic API
|
||||
|
||||
You can use ipuaro as a library in your own Node.js applications:
|
||||
|
||||
```typescript
|
||||
import { startSession, handleMessage } from "@samiyev/ipuaro"
|
||||
import {
|
||||
createRedisClient,
|
||||
RedisStorage,
|
||||
OllamaClient,
|
||||
ToolRegistry,
|
||||
StartSession,
|
||||
HandleMessage
|
||||
} from "@samiyev/ipuaro"
|
||||
|
||||
// Initialize dependencies
|
||||
const redis = await createRedisClient({ host: "localhost", port: 6379 })
|
||||
const storage = new RedisStorage(redis, "my-project")
|
||||
const llm = new OllamaClient({
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
contextWindow: 128000,
|
||||
temperature: 0.1
|
||||
})
|
||||
const tools = new ToolRegistry()
|
||||
|
||||
// Register tools
|
||||
tools.register(new GetLinesTool(storage, "/path/to/project"))
|
||||
// ... register other tools
|
||||
|
||||
// Start a session
|
||||
const session = await startSession({
|
||||
projectPath: "./my-project",
|
||||
model: "qwen2.5-coder:7b-instruct"
|
||||
})
|
||||
const startSession = new StartSession(storage)
|
||||
const session = await startSession.execute("my-project")
|
||||
|
||||
// Send a message
|
||||
const response = await handleMessage(session, "Explain the auth flow")
|
||||
// Handle a message
|
||||
const handleMessage = new HandleMessage(storage, llm, tools)
|
||||
await handleMessage.execute(session, "Show me the auth flow")
|
||||
|
||||
console.log(response.content)
|
||||
console.log(`Tokens: ${response.stats.tokens}`)
|
||||
console.log(`Tool calls: ${response.stats.toolCalls}`)
|
||||
// Session is automatically updated in Redis
|
||||
```
|
||||
|
||||
For full API documentation, see the TypeScript definitions in `src/` or explore the [source code](./src/).
|
||||
|
||||
## How It Works
|
||||
|
||||
### Lazy Loading Context
|
||||
### 1. Project Indexing
|
||||
|
||||
When you start ipuaro, it scans your project and builds an index:
|
||||
|
||||
```
|
||||
1. File Scanner → Recursively scans files (.ts, .js, .tsx, .jsx)
|
||||
2. AST Parser → Parses with tree-sitter (extracts functions, classes, imports)
|
||||
3. Meta Analyzer → Calculates complexity, dependencies, hub detection
|
||||
4. Index Builder → Creates symbol index and dependency graph
|
||||
5. Redis Storage → Persists everything for instant startup next time
|
||||
6. Watchdog → Watches files for changes and updates index in background
|
||||
```
|
||||
|
||||
### 2. Lazy Loading Context
|
||||
|
||||
Instead of loading entire codebase into context:
|
||||
|
||||
@@ -232,24 +490,161 @@ Traditional approach:
|
||||
├── Load all files → 500k tokens → ❌ Exceeds context window
|
||||
|
||||
ipuaro approach:
|
||||
├── Load project structure → 2k tokens
|
||||
├── Load AST metadata → 10k tokens
|
||||
├── On demand: get_function("auth.ts", "login") → 200 tokens
|
||||
├── Total: ~12k tokens → ✅ Fits in context
|
||||
├── Load project structure → ~2k tokens
|
||||
├── Load AST metadata → ~10k tokens
|
||||
├── On demand: get_function("auth.ts", "login") → ~200 tokens
|
||||
├── Total: ~12k tokens → ✅ Fits in 128k context window
|
||||
```
|
||||
|
||||
### Tool-Based Code Access
|
||||
Context automatically compresses when usage exceeds 80% by summarizing old messages.
|
||||
|
||||
### 3. Tool-Based Code Access
|
||||
|
||||
The LLM doesn't see your code initially. It only sees structure and metadata. When it needs code, it uses tools:
|
||||
|
||||
```
|
||||
User: "How does user creation work?"
|
||||
You: "How does user creation work?"
|
||||
|
||||
ipuaro:
|
||||
1. [get_structure src/] → sees user/ folder
|
||||
2. [get_function src/user/service.ts createUser] → gets function code
|
||||
Agent reasoning:
|
||||
1. [get_structure src/] → sees user/ folder exists
|
||||
2. [get_function src/user/service.ts createUser] → loads specific function
|
||||
3. [find_references createUser] → finds all usages
|
||||
4. Synthesizes answer with specific code context
|
||||
4. Synthesizes answer with only relevant code loaded
|
||||
|
||||
Total tokens used: ~2k (vs loading entire src/ which could be 50k+)
|
||||
```
|
||||
|
||||
### 4. Session Persistence
|
||||
|
||||
Everything is saved to Redis:
|
||||
- Chat history and context state
|
||||
- Undo stack (last 10 file changes)
|
||||
- Session metadata and statistics
|
||||
|
||||
Resume your session anytime with `/sessions load <id>`.
|
||||
|
||||
### 5. Security Model

Three-layer security:
1. **Blacklist**: Dangerous commands always blocked (rm -rf, sudo, etc.)
2. **Whitelist**: Safe commands auto-approved (npm, git status, etc.)
3. **Confirmation**: Unknown commands require user approval

File operations are restricted to project directory only (path traversal prevention).

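A rough sketch of how such a three-layer check can be expressed; the actual pattern lists in ipuaro are longer and live in its security layer:

```typescript
// Illustrative patterns only; the real blacklist/whitelist are more complete.
const BLACKLIST = [/\brm\s+-rf\b/, /\bsudo\b/, /\bshutdown\b/]
const WHITELIST = [/^npm\s/, /^pnpm\s/, /^git\s+(status|diff|log)\b/]

type Verdict = "blocked" | "auto-approved" | "needs-confirmation"

function classifyCommand(command: string): Verdict {
    if (BLACKLIST.some((re) => re.test(command))) return "blocked"
    if (WHITELIST.some((re) => re.test(command))) return "auto-approved"
    return "needs-confirmation"
}

console.log(classifyCommand("git status")) // "auto-approved"
console.log(classifyCommand("rm -rf /")) // "blocked"
console.log(classifyCommand("curl example.com")) // "needs-confirmation"
```
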
## Troubleshooting
|
||||
|
||||
### Redis Connection Errors
|
||||
|
||||
**Error**: `Redis connection failed`
|
||||
|
||||
**Solutions**:
|
||||
```bash
|
||||
# Check if Redis is running
|
||||
redis-cli ping # Should return "PONG"
|
||||
|
||||
# Start Redis with AOF persistence
|
||||
redis-server --appendonly yes
|
||||
|
||||
# Check Redis logs
|
||||
tail -f /usr/local/var/log/redis.log # macOS
|
||||
```
|
||||
|
||||
### Ollama Model Not Found
|
||||
|
||||
**Error**: `Model qwen2.5-coder:7b-instruct not found`
|
||||
|
||||
**Solutions**:
|
||||
```bash
|
||||
# Pull the model
|
||||
ollama pull qwen2.5-coder:7b-instruct
|
||||
|
||||
# List installed models
|
||||
ollama list
|
||||
|
||||
# Check Ollama is running
|
||||
ollama serve
|
||||
```
|
||||
|
||||
### Large Project Performance
|
||||
|
||||
**Issue**: Indexing takes too long or uses too much memory
|
||||
|
||||
**Solutions**:
|
||||
```bash
|
||||
# Index only a subdirectory
|
||||
ipuaro ./src
|
||||
|
||||
# Add more ignore patterns to .ipuaro.json
|
||||
{
|
||||
"project": {
|
||||
"ignorePatterns": ["node_modules", "dist", ".git", "coverage", "build"]
|
||||
}
|
||||
}
|
||||
|
||||
# Increase Node.js memory limit
|
||||
NODE_OPTIONS="--max-old-space-size=4096" ipuaro
|
||||
```
|
||||
|
||||
### Context Window Exceeded
|
||||
|
||||
**Issue**: `Context window exceeded` errors
|
||||
|
||||
**Solutions**:
|
||||
- Context auto-compresses at 80%, but you can manually `/clear` history
|
||||
- Use more targeted questions instead of asking about entire codebase
|
||||
- The agent will automatically use tools to load only what's needed
|
||||
|
||||
### File Changes Not Detected
|
||||
|
||||
**Issue**: Made changes but agent doesn't see them
|
||||
|
||||
**Solutions**:
|
||||
```bash
|
||||
# Force reindex
|
||||
/reindex
|
||||
|
||||
# Or restart with fresh index
|
||||
rm -rf ~/.ipuaro/cache
|
||||
ipuaro
|
||||
```
|
||||
|
||||
### Undo Not Working
|
||||
|
||||
**Issue**: `/undo` says no changes to undo
|
||||
|
||||
**Explanation**: Undo stack only tracks the last 10 file edits made through ipuaro. Manual file edits outside ipuaro cannot be undone.
|
||||
|
||||
## FAQ
|
||||
|
||||
**Q: Does ipuaro send my code to any external servers?**
|
||||
|
||||
A: No. Everything runs locally. Ollama runs on your machine, Redis stores data locally, and no network requests are made except to your local Ollama instance.
|
||||
|
||||
**Q: What languages are supported?**
|
||||
|
||||
A: Currently TypeScript, JavaScript (including TSX/JSX). More languages planned for future versions.
|
||||
|
||||
**Q: Can I use OpenAI/Anthropic/other LLM providers?**
|
||||
|
||||
A: Currently only Ollama is supported. OpenAI/Anthropic support is planned for v1.2.0.
|
||||
|
||||
**Q: How much disk space does Redis use?**
|
||||
|
||||
A: Depends on project size. A typical mid-size project (1000 files) uses ~50-100MB. Redis uses AOF persistence, so data survives restarts.
|
||||
|
||||
**Q: Can I use ipuaro in a CI/CD pipeline?**
|
||||
|
||||
A: Yes, but it's designed for interactive use. For automated code analysis, consider the programmatic API.
|
||||
|
||||
**Q: What's the difference between ipuaro and GitHub Copilot?**
|
||||
|
||||
A: Copilot is an autocomplete tool. ipuaro is a conversational agent that can read, analyze, modify files, run commands, and has full codebase understanding through AST parsing.
|
||||
|
||||
**Q: Why Redis instead of SQLite or JSON files?**
|
||||
|
||||
A: Redis provides fast in-memory access, AOF persistence, and handles concurrent access well. The session model fits Redis's data structures perfectly.
|
||||
|
||||
## Contributing
|
||||
|
||||
Contributions welcome! This project is in early development.
|
||||
|
||||
File diff suppressed because it is too large
packages/ipuaro/TOOLS.md (new file, 1605 lines; diff suppressed because it is too large)
packages/ipuaro/examples/demo-project/.gitignore (new file, 4 lines, vendored)
@@ -0,0 +1,4 @@
|
||||
node_modules/
|
||||
dist/
|
||||
*.log
|
||||
.DS_Store
|
||||
packages/ipuaro/examples/demo-project/.ipuaro.json (new file, 21 lines)
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"redis": {
|
||||
"host": "localhost",
|
||||
"port": 6379
|
||||
},
|
||||
"llm": {
|
||||
"model": "qwen2.5-coder:7b-instruct",
|
||||
"temperature": 0.1
|
||||
},
|
||||
"project": {
|
||||
"ignorePatterns": [
|
||||
"node_modules",
|
||||
"dist",
|
||||
".git",
|
||||
"*.log"
|
||||
]
|
||||
},
|
||||
"edit": {
|
||||
"autoApply": false
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,8 @@
|
||||
# Example Conversations with ipuaro
|
||||
|
||||
This document shows realistic conversations you can have with ipuaro when working with the demo project.
|
||||
|
||||
## Conversation 1: Understanding the Codebase
|
||||
|
||||
```
|
||||
You: What does this project do?
|
||||
packages/ipuaro/examples/demo-project/README.md (new file, 406 lines)
@@ -0,0 +1,406 @@
|
||||
# ipuaro Demo Project
|
||||
|
||||
This is a demo project showcasing ipuaro's capabilities as a local AI agent for codebase operations.
|
||||
|
||||
## Project Overview
|
||||
|
||||
A simple TypeScript application demonstrating:
|
||||
- User management service
|
||||
- Authentication service
|
||||
- Validation utilities
|
||||
- Logging utilities
|
||||
- Unit tests
|
||||
|
||||
The code intentionally includes various patterns (TODOs, FIXMEs, complex functions, dependencies) to demonstrate ipuaro's analysis tools.
|
||||
|
||||
## Setup
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. **Redis** - Running locally
|
||||
```bash
|
||||
# macOS
|
||||
brew install redis
|
||||
redis-server --appendonly yes
|
||||
```
|
||||
|
||||
2. **Ollama** - With qwen2.5-coder model
|
||||
```bash
|
||||
brew install ollama
|
||||
ollama serve
|
||||
ollama pull qwen2.5-coder:7b-instruct
|
||||
```
|
||||
|
||||
3. **Node.js** - v20 or higher
|
||||
|
||||
### Installation
|
||||
|
||||
```bash
|
||||
# Install dependencies
|
||||
npm install
|
||||
|
||||
# Or with pnpm
|
||||
pnpm install
|
||||
```
|
||||
|
||||
## Using ipuaro with Demo Project
|
||||
|
||||
### Start ipuaro
|
||||
|
||||
```bash
|
||||
# From this directory
|
||||
npx @samiyev/ipuaro
|
||||
|
||||
# Or if installed globally
|
||||
ipuaro
|
||||
```
|
||||
|
||||
### Example Queries
|
||||
|
||||
Try these queries to explore ipuaro's capabilities:
|
||||
|
||||
#### 1. Understanding the Codebase
|
||||
|
||||
```
|
||||
You: What is the structure of this project?
|
||||
```
|
||||
|
||||
ipuaro will use `get_structure` to show the directory tree.
|
||||
|
||||
```
|
||||
You: How does user creation work?
|
||||
```
|
||||
|
||||
ipuaro will:
|
||||
1. Use `get_structure` to find relevant files
|
||||
2. Use `get_function` to read the `createUser` function
|
||||
3. Use `find_references` to see where it's called
|
||||
4. Explain the flow
|
||||
|
||||
#### 2. Finding Issues
|
||||
|
||||
```
|
||||
You: What TODOs and FIXMEs are in the codebase?
|
||||
```
|
||||
|
||||
ipuaro will use `get_todos` to list all TODO/FIXME comments.
|
||||
|
||||
```
|
||||
You: Which files are most complex?
|
||||
```
|
||||
|
||||
ipuaro will use `get_complexity` to analyze and rank files by complexity.
|
||||
|
||||
#### 3. Understanding Dependencies
|
||||
|
||||
```
|
||||
You: What does the UserService depend on?
|
||||
```
|
||||
|
||||
ipuaro will use `get_dependencies` to show imported modules.
|
||||
|
||||
```
|
||||
You: What files use the validation utilities?
|
||||
```
|
||||
|
||||
ipuaro will use `get_dependents` to show files importing validation.ts.
|
||||
|
||||
#### 4. Code Analysis
|
||||
|
||||
```
|
||||
You: Find all references to the ValidationError class
|
||||
```
|
||||
|
||||
ipuaro will use `find_references` to locate all usages.
|
||||
|
||||
```
|
||||
You: Where is the Logger class defined?
|
||||
```
|
||||
|
||||
ipuaro will use `find_definition` to locate the definition.
|
||||
|
||||
#### 5. Making Changes
|
||||
|
||||
```
|
||||
You: Add a method to UserService to count total users
|
||||
```
|
||||
|
||||
ipuaro will:
|
||||
1. Read UserService class with `get_class`
|
||||
2. Generate the new method
|
||||
3. Use `edit_lines` to add it
|
||||
4. Show diff and ask for confirmation
|
||||
|
||||
```
|
||||
You: Fix the TODO in validation.ts about password validation
|
||||
```
|
||||
|
||||
ipuaro will:
|
||||
1. Find the TODO with `get_todos`
|
||||
2. Read the function with `get_function`
|
||||
3. Implement stronger password validation
|
||||
4. Use `edit_lines` to apply changes
|
||||
|
||||
#### 6. Testing
|
||||
|
||||
```
|
||||
You: Run the tests
|
||||
```
|
||||
|
||||
ipuaro will use `run_tests` to execute the test suite.
|
||||
|
||||
```
|
||||
You: Add a test for the getUserByEmail method
|
||||
```
|
||||
|
||||
ipuaro will:
|
||||
1. Read existing tests with `get_lines`
|
||||
2. Generate new test following the pattern
|
||||
3. Use `edit_lines` to add it
|
||||
|
||||
#### 7. Git Operations
|
||||
|
||||
```
|
||||
You: What files have I changed?
|
||||
```
|
||||
|
||||
ipuaro will use `git_status` to show modified files.
|
||||
|
||||
```
|
||||
You: Show me the diff for UserService
|
||||
```
|
||||
|
||||
ipuaro will use `git_diff` with the file path.
|
||||
|
||||
```
|
||||
You: Commit these changes with message "feat: add user count method"
|
||||
```
|
||||
|
||||
ipuaro will use `git_commit` after confirmation.
|
||||
|
||||
## Tool Demonstration Scenarios
|
||||
|
||||
### Scenario 1: Bug Fix Flow
|
||||
|
||||
```
|
||||
You: There's a bug - we need to sanitize user input before storing. Fix this in UserService.
|
||||
|
||||
Agent will:
|
||||
1. get_function("src/services/user.ts", "createUser")
|
||||
2. See that sanitization is missing
|
||||
3. find_definition("sanitizeInput") to locate the utility
|
||||
4. edit_lines to add sanitization call
|
||||
5. run_tests to verify the fix
|
||||
```
|
||||
|
||||
### Scenario 2: Refactoring Flow
|
||||
|
||||
```
|
||||
You: Extract the ID generation logic into a separate utility function
|
||||
|
||||
Agent will:
|
||||
1. get_class("src/services/user.ts", "UserService")
|
||||
2. Find generateId private method
|
||||
3. create_file("src/utils/id.ts") with the utility
|
||||
4. edit_lines to replace private method with import
|
||||
5. find_references("generateId") to check no other usages
|
||||
6. run_tests to ensure nothing broke
|
||||
```
|
||||
|
||||
### Scenario 3: Feature Addition
|
||||
|
||||
```
|
||||
You: Add password reset functionality to AuthService
|
||||
|
||||
Agent will:
|
||||
1. get_class("src/auth/service.ts", "AuthService")
|
||||
2. get_dependencies to see what's available
|
||||
3. Design the resetPassword method
|
||||
4. edit_lines to add the method
|
||||
5. Suggest creating a test
|
||||
6. create_file("tests/auth.test.ts") if needed
|
||||
```
|
||||
|
||||
### Scenario 4: Code Review
|
||||
|
||||
```
|
||||
You: Review the code for security issues
|
||||
|
||||
Agent will:
|
||||
1. get_todos to find FIXME about XSS
|
||||
2. get_complexity to find complex functions
|
||||
3. get_function for suspicious functions
|
||||
4. Suggest improvements
|
||||
5. Optionally edit_lines to fix issues
|
||||
```
|
||||
|
||||
## Slash Commands
|
||||
|
||||
While exploring, you can use these commands:
|
||||
|
||||
```
|
||||
/help # Show all commands and hotkeys
|
||||
/status # Show system status (LLM, Redis, context)
|
||||
/sessions list # List all sessions
|
||||
/undo # Undo last file change
|
||||
/clear # Clear chat history
|
||||
/reindex # Force project reindexation
|
||||
/auto-apply on # Enable auto-apply mode (skip confirmations)
|
||||
```
|
||||
|
||||
## Hotkeys
|
||||
|
||||
- `Ctrl+C` - Interrupt generation (1st) / Exit (2nd within 1s)
|
||||
- `Ctrl+D` - Exit and save session
|
||||
- `Ctrl+Z` - Undo last change
|
||||
- `↑` / `↓` - Navigate input history
|
||||
|
||||
## Project Files Overview
|
||||
|
||||
```
|
||||
demo-project/
|
||||
├── src/
|
||||
│ ├── auth/
|
||||
│ │ └── service.ts # Authentication logic (login, logout, verify)
|
||||
│ ├── services/
|
||||
│ │ └── user.ts # User CRUD operations
|
||||
│ ├── utils/
|
||||
│ │ ├── logger.ts # Logging utility (multiple methods)
|
||||
│ │ └── validation.ts # Input validation (with TODOs/FIXMEs)
|
||||
│ ├── types/
|
||||
│ │ └── user.ts # TypeScript type definitions
|
||||
│ └── index.ts # Application entry point
|
||||
├── tests/
|
||||
│ └── user.test.ts # User service tests (vitest)
|
||||
├── package.json # Project configuration
|
||||
├── tsconfig.json # TypeScript configuration
|
||||
├── vitest.config.ts # Test configuration
|
||||
└── .ipuaro.json # ipuaro configuration
|
||||
```
|
||||
|
||||
## What ipuaro Can Do With This Project
|
||||
|
||||
### Read Tools ✅
|
||||
- **get_lines**: Read any file or specific line ranges
|
||||
- **get_function**: Extract specific functions (login, createUser, etc.)
|
||||
- **get_class**: Extract classes (UserService, AuthService, Logger, etc.)
|
||||
- **get_structure**: See directory tree
|
||||
|
||||
### Edit Tools ✅
|
||||
- **edit_lines**: Modify functions, fix bugs, add features
|
||||
- **create_file**: Add new utilities, tests, services
|
||||
- **delete_file**: Remove unused files
|
||||
|
||||
### Search Tools ✅
|
||||
- **find_references**: Find all usages of ValidationError, User, etc.
|
||||
- **find_definition**: Locate where Logger, UserService are defined
|
||||
|
||||
### Analysis Tools ✅
|
||||
- **get_dependencies**: See what UserService imports
|
||||
- **get_dependents**: See what imports validation.ts (multiple files!)
|
||||
- **get_complexity**: Identify complex functions (createUser has moderate complexity)
|
||||
- **get_todos**: Find 2 TODOs and 1 FIXME in the project
|
||||
|
||||
### Git Tools ✅
|
||||
- **git_status**: Check working tree
|
||||
- **git_diff**: See changes
|
||||
- **git_commit**: Commit with AI-generated messages
|
||||
|
||||
### Run Tools ✅
|
||||
- **run_command**: Execute npm scripts
|
||||
- **run_tests**: Run vitest tests
|
||||
|
||||
## Tips for Best Experience
|
||||
|
||||
1. **Start Small**: Ask about structure first, then dive into specific files
|
||||
2. **Be Specific**: "Show me the createUser function" vs "How does this work?"
|
||||
3. **Use Tools Implicitly**: Just ask questions, let ipuaro choose the right tools
|
||||
4. **Review Changes**: Always review diffs before applying edits
|
||||
5. **Test Often**: Ask ipuaro to run tests after making changes
|
||||
6. **Commit Incrementally**: Use git_commit for each logical change
|
||||
|
||||
## Advanced Workflows
|
||||
|
||||
### Workflow 1: Add New Feature
|
||||
|
||||
```
|
||||
You: Add email verification to the authentication flow
|
||||
|
||||
Agent will:
|
||||
1. Analyze current auth flow
|
||||
2. Propose design (new fields, methods)
|
||||
3. Edit AuthService to add verification
|
||||
4. Edit User types to add verified field
|
||||
5. Create tests for verification
|
||||
6. Run tests
|
||||
7. Offer to commit
|
||||
```
|
||||
|
||||
### Workflow 2: Performance Optimization

```
You: The user lookup is slow when we have many users. Optimize it.

Agent will:
1. Analyze UserService.getUserByEmail
2. See it's using Array.find (O(n))
3. Suggest adding an email index
4. Edit to add private emailIndex: Map<string, User>
5. Update createUser to populate index
6. Update deleteUser to maintain index
7. Run tests to verify
```

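A hypothetical sketch of where that refactor could land. This is not the demo project's current `UserService`, which keeps users in a single `Map` keyed by id:

```typescript
interface User {
    id: string
    email: string
    name: string
}

// Hypothetical result of the optimization: a secondary Map keyed by email makes
// lookups O(1) instead of scanning all users with Array.find.
class IndexedUserStore {
    private readonly users = new Map<string, User>() // id -> user
    private readonly emailIndex = new Map<string, User>() // email -> user

    add(user: User): void {
        this.users.set(user.id, user)
        this.emailIndex.set(user.email, user)
    }

    getByEmail(email: string): User | null {
        return this.emailIndex.get(email) ?? null
    }

    delete(id: string): void {
        const user = this.users.get(id)
        if (user) {
            this.emailIndex.delete(user.email)
            this.users.delete(id)
        }
    }
}
```
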
### Workflow 3: Security Audit
|
||||
|
||||
```
|
||||
You: Audit the code for security vulnerabilities
|
||||
|
||||
Agent will:
|
||||
1. get_todos to find FIXME about XSS
|
||||
2. Review sanitizeInput implementation
|
||||
3. Check password validation strength
|
||||
4. Look for SQL injection risks (none here)
|
||||
5. Suggest improvements
|
||||
6. Optionally implement fixes
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
After exploring the demo project, try:
|
||||
|
||||
1. **Your Own Project**: Run `ipuaro` in your real codebase
|
||||
2. **Customize Config**: Edit `.ipuaro.json` to fit your needs
|
||||
3. **Different Model**: Try `--model qwen2.5-coder:32b-instruct` for better results
|
||||
4. **Auto-Apply Mode**: Use `--auto-apply` for faster iterations (with caution!)
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Redis Not Connected
|
||||
```bash
|
||||
# Start Redis with persistence
|
||||
redis-server --appendonly yes
|
||||
```
|
||||
|
||||
### Ollama Model Not Found
|
||||
```bash
|
||||
# Pull the model
|
||||
ollama pull qwen2.5-coder:7b-instruct
|
||||
|
||||
# Check it's installed
|
||||
ollama list
|
||||
```
|
||||
|
||||
### Indexing Takes Long
|
||||
The project is small (~10 files) so indexing should be instant. For larger projects, use ignore patterns in `.ipuaro.json`.
|
||||
|
||||
## Learn More
|
||||
|
||||
- [ipuaro Documentation](../../README.md)
|
||||
- [Architecture Guide](../../ARCHITECTURE.md)
|
||||
- [Tools Reference](../../TOOLS.md)
|
||||
- [GitHub Repository](https://github.com/samiyev/puaros)
|
||||
|
||||
---
|
||||
|
||||
**Happy coding with ipuaro!** 🎩✨
|
||||
packages/ipuaro/examples/demo-project/package.json (new file, 20 lines)
@@ -0,0 +1,20 @@
|
||||
{
|
||||
"name": "ipuaro-demo-project",
|
||||
"version": "1.0.0",
|
||||
"description": "Demo project for ipuaro - showcasing AI agent capabilities",
|
||||
"private": true,
|
||||
"type": "module",
|
||||
"scripts": {
|
||||
"dev": "tsx src/index.ts",
|
||||
"test": "vitest",
|
||||
"test:run": "vitest run",
|
||||
"build": "tsc"
|
||||
},
|
||||
"dependencies": {},
|
||||
"devDependencies": {
|
||||
"@types/node": "^22.10.1",
|
||||
"tsx": "^4.19.2",
|
||||
"typescript": "^5.7.2",
|
||||
"vitest": "^1.6.0"
|
||||
}
|
||||
}
|
||||
packages/ipuaro/examples/demo-project/src/auth/service.ts (new file, 85 lines)
@@ -0,0 +1,85 @@
|
||||
/**
|
||||
* Authentication service
|
||||
*/
|
||||
|
||||
import type { User, AuthToken } from "../types/user"
|
||||
import { UserService } from "../services/user"
|
||||
import { createLogger } from "../utils/logger"
|
||||
|
||||
const logger = createLogger("AuthService")
|
||||
|
||||
export class AuthService {
|
||||
private tokens: Map<string, AuthToken> = new Map()
|
||||
|
||||
constructor(private userService: UserService) {}
|
||||
|
||||
async login(email: string, password: string): Promise<AuthToken> {
|
||||
logger.info("Login attempt", { email })
|
||||
|
||||
// Get user
|
||||
const user = await this.userService.getUserByEmail(email)
|
||||
if (!user) {
|
||||
logger.warn("Login failed - user not found", { email })
|
||||
throw new Error("Invalid credentials")
|
||||
}
|
||||
|
||||
// TODO: Implement actual password verification
|
||||
// For demo purposes, we just check if password is provided
|
||||
if (!password) {
|
||||
logger.warn("Login failed - no password", { email })
|
||||
throw new Error("Invalid credentials")
|
||||
}
|
||||
|
||||
// Generate token
|
||||
const token = this.generateToken(user)
|
||||
this.tokens.set(token.token, token)
|
||||
|
||||
logger.info("Login successful", { userId: user.id })
|
||||
return token
|
||||
}
|
||||
|
||||
async logout(tokenString: string): Promise<void> {
|
||||
logger.info("Logout", { token: tokenString.substring(0, 10) + "..." })
|
||||
|
||||
const token = this.tokens.get(tokenString)
|
||||
if (!token) {
|
||||
throw new Error("Invalid token")
|
||||
}
|
||||
|
||||
this.tokens.delete(tokenString)
|
||||
logger.info("Logout successful", { userId: token.userId })
|
||||
}
|
||||
|
||||
async verifyToken(tokenString: string): Promise<User> {
|
||||
logger.debug("Verifying token")
|
||||
|
||||
const token = this.tokens.get(tokenString)
|
||||
if (!token) {
|
||||
throw new Error("Invalid token")
|
||||
}
|
||||
|
||||
if (token.expiresAt < new Date()) {
|
||||
this.tokens.delete(tokenString)
|
||||
throw new Error("Token expired")
|
||||
}
|
||||
|
||||
const user = await this.userService.getUserById(token.userId)
|
||||
if (!user) {
|
||||
throw new Error("User not found")
|
||||
}
|
||||
|
||||
return user
|
||||
}
|
||||
|
||||
private generateToken(user: User): AuthToken {
|
||||
const token = `tok_${Date.now()}_${Math.random().toString(36).substring(7)}`
|
||||
const expiresAt = new Date()
|
||||
expiresAt.setHours(expiresAt.getHours() + 24) // 24 hours
|
||||
|
||||
return {
|
||||
token,
|
||||
expiresAt,
|
||||
userId: user.id,
|
||||
}
|
||||
}
|
||||
}
|
||||
packages/ipuaro/examples/demo-project/src/index.ts (new file, 48 lines)
@@ -0,0 +1,48 @@
|
||||
/**
|
||||
* Demo application entry point
|
||||
*/
|
||||
|
||||
import { UserService } from "./services/user"
|
||||
import { AuthService } from "./auth/service"
|
||||
import { createLogger } from "./utils/logger"
|
||||
|
||||
const logger = createLogger("App")
|
||||
|
||||
async function main(): Promise<void> {
|
||||
logger.info("Starting demo application")
|
||||
|
||||
// Initialize services
|
||||
const userService = new UserService()
|
||||
const authService = new AuthService(userService)
|
||||
|
||||
try {
|
||||
// Create a demo user
|
||||
const user = await userService.createUser({
|
||||
email: "demo@example.com",
|
||||
name: "Demo User",
|
||||
password: "password123",
|
||||
role: "admin",
|
||||
})
|
||||
|
||||
logger.info("Demo user created", { userId: user.id })
|
||||
|
||||
// Login
|
||||
const token = await authService.login("demo@example.com", "password123")
|
||||
logger.info("Login successful", { token: token.token })
|
||||
|
||||
// Verify token
|
||||
const verifiedUser = await authService.verifyToken(token.token)
|
||||
logger.info("Token verified", { userId: verifiedUser.id })
|
||||
|
||||
// Logout
|
||||
await authService.logout(token.token)
|
||||
logger.info("Logout successful")
|
||||
} catch (error) {
|
||||
logger.error("Application error", error as Error)
|
||||
process.exit(1)
|
||||
}
|
||||
|
||||
logger.info("Demo application finished")
|
||||
}
|
||||
|
||||
main()
|
||||
packages/ipuaro/examples/demo-project/src/services/user.ts (new file, 100 lines)
@@ -0,0 +1,100 @@
|
||||
/**
|
||||
* User service - handles user-related operations
|
||||
*/
|
||||
|
||||
import type { User, CreateUserDto, UpdateUserDto } from "../types/user"
|
||||
import { isValidEmail, isStrongPassword, ValidationError } from "../utils/validation"
|
||||
import { createLogger } from "../utils/logger"
|
||||
|
||||
const logger = createLogger("UserService")
|
||||
|
||||
export class UserService {
|
||||
private users: Map<string, User> = new Map()
|
||||
|
||||
async createUser(dto: CreateUserDto): Promise<User> {
|
||||
logger.info("Creating user", { email: dto.email })
|
||||
|
||||
// Validate email
|
||||
if (!isValidEmail(dto.email)) {
|
||||
throw new ValidationError("Invalid email address", "email")
|
||||
}
|
||||
|
||||
// Validate password
|
||||
if (!isStrongPassword(dto.password)) {
|
||||
throw new ValidationError("Password must be at least 8 characters", "password")
|
||||
}
|
||||
|
||||
// Check if user already exists
|
||||
const existingUser = Array.from(this.users.values()).find((u) => u.email === dto.email)
|
||||
|
||||
if (existingUser) {
|
||||
throw new Error("User with this email already exists")
|
||||
}
|
||||
|
||||
// Create user
|
||||
const user: User = {
|
||||
id: this.generateId(),
|
||||
email: dto.email,
|
||||
name: dto.name,
|
||||
role: dto.role || "user",
|
||||
createdAt: new Date(),
|
||||
updatedAt: new Date(),
|
||||
}
|
||||
|
||||
this.users.set(user.id, user)
|
||||
logger.info("User created", { userId: user.id })
|
||||
|
||||
return user
|
||||
}
|
||||
|
||||
async getUserById(id: string): Promise<User | null> {
|
||||
logger.debug("Getting user by ID", { userId: id })
|
||||
return this.users.get(id) || null
|
||||
}
|
||||
|
||||
async getUserByEmail(email: string): Promise<User | null> {
|
||||
logger.debug("Getting user by email", { email })
|
||||
return Array.from(this.users.values()).find((u) => u.email === email) || null
|
||||
}
|
||||
|
||||
async updateUser(id: string, dto: UpdateUserDto): Promise<User> {
|
||||
logger.info("Updating user", { userId: id })
|
||||
|
||||
const user = this.users.get(id)
|
||||
if (!user) {
|
||||
throw new Error("User not found")
|
||||
}
|
||||
|
||||
const updated: User = {
|
||||
...user,
|
||||
...(dto.name && { name: dto.name }),
|
||||
...(dto.role && { role: dto.role }),
|
||||
updatedAt: new Date(),
|
||||
}
|
||||
|
||||
this.users.set(id, updated)
|
||||
logger.info("User updated", { userId: id })
|
||||
|
||||
return updated
|
||||
}
|
||||
|
||||
async deleteUser(id: string): Promise<void> {
|
||||
logger.info("Deleting user", { userId: id })
|
||||
|
||||
if (!this.users.has(id)) {
|
||||
throw new Error("User not found")
|
||||
}
|
||||
|
||||
this.users.delete(id)
|
||||
logger.info("User deleted", { userId: id })
|
||||
}
|
||||
|
||||
async listUsers(): Promise<User[]> {
|
||||
logger.debug("Listing all users")
|
||||
return Array.from(this.users.values())
|
||||
}
|
||||
|
||||
private generateId(): string {
|
||||
return `user_${Date.now()}_${Math.random().toString(36).substring(7)}`
|
||||
}
|
||||
}
|
||||
packages/ipuaro/examples/demo-project/src/types/user.ts (new file, 32 lines)
@@ -0,0 +1,32 @@
|
||||
/**
|
||||
* User-related type definitions
|
||||
*/
|
||||
|
||||
export interface User {
|
||||
id: string
|
||||
email: string
|
||||
name: string
|
||||
role: UserRole
|
||||
createdAt: Date
|
||||
updatedAt: Date
|
||||
}
|
||||
|
||||
export type UserRole = "admin" | "user" | "guest"
|
||||
|
||||
export interface CreateUserDto {
|
||||
email: string
|
||||
name: string
|
||||
password: string
|
||||
role?: UserRole
|
||||
}
|
||||
|
||||
export interface UpdateUserDto {
|
||||
name?: string
|
||||
role?: UserRole
|
||||
}
|
||||
|
||||
export interface AuthToken {
|
||||
token: string
|
||||
expiresAt: Date
|
||||
userId: string
|
||||
}
|
||||
packages/ipuaro/examples/demo-project/src/utils/logger.ts (new file, 41 lines)
@@ -0,0 +1,41 @@
|
||||
/**
|
||||
* Simple logging utility
|
||||
*/
|
||||
|
||||
export type LogLevel = "debug" | "info" | "warn" | "error"
|
||||
|
||||
export class Logger {
|
||||
constructor(private context: string) {}
|
||||
|
||||
debug(message: string, meta?: Record<string, unknown>): void {
|
||||
this.log("debug", message, meta)
|
||||
}
|
||||
|
||||
info(message: string, meta?: Record<string, unknown>): void {
|
||||
this.log("info", message, meta)
|
||||
}
|
||||
|
||||
warn(message: string, meta?: Record<string, unknown>): void {
|
||||
this.log("warn", message, meta)
|
||||
}
|
||||
|
||||
error(message: string, error?: Error, meta?: Record<string, unknown>): void {
|
||||
this.log("error", message, { ...meta, error: error?.message })
|
||||
}
|
||||
|
||||
private log(level: LogLevel, message: string, meta?: Record<string, unknown>): void {
|
||||
const timestamp = new Date().toISOString()
|
||||
const logEntry = {
|
||||
timestamp,
|
||||
level,
|
||||
context: this.context,
|
||||
message,
|
||||
...(meta && { meta }),
|
||||
}
|
||||
console.log(JSON.stringify(logEntry))
|
||||
}
|
||||
}
|
||||
|
||||
export function createLogger(context: string): Logger {
|
||||
return new Logger(context)
|
||||
}
|
||||
@@ -0,0 +1,28 @@
|
||||
/**
|
||||
* Validation utilities
|
||||
*/
|
||||
|
||||
export function isValidEmail(email: string): boolean {
|
||||
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/
|
||||
return emailRegex.test(email)
|
||||
}
|
||||
|
||||
export function isStrongPassword(password: string): boolean {
|
||||
// TODO: Add more sophisticated password validation
|
||||
return password.length >= 8
|
||||
}
|
||||
|
||||
export function sanitizeInput(input: string): string {
|
||||
// FIXME: This is a basic implementation, needs XSS protection
|
||||
return input.trim().replace(/[<>]/g, "")
|
||||
}
|
||||
|
||||
export class ValidationError extends Error {
|
||||
constructor(
|
||||
message: string,
|
||||
public field: string,
|
||||
) {
|
||||
super(message)
|
||||
this.name = "ValidationError"
|
||||
}
|
||||
}
|
||||
packages/ipuaro/examples/demo-project/tests/user.test.ts (new file, 141 lines)
@@ -0,0 +1,141 @@
|
||||
/**
|
||||
* User service tests
|
||||
*/
|
||||
|
||||
import { describe, it, expect, beforeEach } from "vitest"
|
||||
import { UserService } from "../src/services/user"
|
||||
import { ValidationError } from "../src/utils/validation"
|
||||
|
||||
describe("UserService", () => {
|
||||
let userService: UserService
|
||||
|
||||
beforeEach(() => {
|
||||
userService = new UserService()
|
||||
})
|
||||
|
||||
describe("createUser", () => {
|
||||
it("should create a new user", async () => {
|
||||
const user = await userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Test User",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
expect(user).toBeDefined()
|
||||
expect(user.email).toBe("test@example.com")
|
||||
expect(user.name).toBe("Test User")
|
||||
expect(user.role).toBe("user")
|
||||
})
|
||||
|
||||
it("should reject invalid email", async () => {
|
||||
await expect(
|
||||
userService.createUser({
|
||||
email: "invalid-email",
|
||||
name: "Test User",
|
||||
password: "password123",
|
||||
}),
|
||||
).rejects.toThrow(ValidationError)
|
||||
})
|
||||
|
||||
it("should reject weak password", async () => {
|
||||
await expect(
|
||||
userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Test User",
|
||||
password: "weak",
|
||||
}),
|
||||
).rejects.toThrow(ValidationError)
|
||||
})
|
||||
|
||||
it("should prevent duplicate emails", async () => {
|
||||
await userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Test User",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
await expect(
|
||||
userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Another User",
|
||||
password: "password123",
|
||||
}),
|
||||
).rejects.toThrow("already exists")
|
||||
})
|
||||
})
|
||||
|
||||
describe("getUserById", () => {
|
||||
it("should return user by ID", async () => {
|
||||
const created = await userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Test User",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
const found = await userService.getUserById(created.id)
|
||||
expect(found).toEqual(created)
|
||||
})
|
||||
|
||||
it("should return null for non-existent ID", async () => {
|
||||
const found = await userService.getUserById("non-existent")
|
||||
expect(found).toBeNull()
|
||||
})
|
||||
})
|
||||
|
||||
describe("updateUser", () => {
|
||||
it("should update user name", async () => {
|
||||
const user = await userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Test User",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
const updated = await userService.updateUser(user.id, {
|
||||
name: "Updated Name",
|
||||
})
|
||||
|
||||
expect(updated.name).toBe("Updated Name")
|
||||
expect(updated.email).toBe(user.email)
|
||||
})
|
||||
|
||||
it("should throw error for non-existent user", async () => {
|
||||
await expect(userService.updateUser("non-existent", { name: "Test" })).rejects.toThrow(
|
||||
"not found",
|
||||
)
|
||||
})
|
||||
})
|
||||
|
||||
describe("deleteUser", () => {
|
||||
it("should delete user", async () => {
|
||||
const user = await userService.createUser({
|
||||
email: "test@example.com",
|
||||
name: "Test User",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
await userService.deleteUser(user.id)
|
||||
|
||||
const found = await userService.getUserById(user.id)
|
||||
expect(found).toBeNull()
|
||||
})
|
||||
})
|
||||
|
||||
describe("listUsers", () => {
|
||||
it("should return all users", async () => {
|
||||
await userService.createUser({
|
||||
email: "user1@example.com",
|
||||
name: "User 1",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
await userService.createUser({
|
||||
email: "user2@example.com",
|
||||
name: "User 2",
|
||||
password: "password123",
|
||||
})
|
||||
|
||||
const users = await userService.listUsers()
|
||||
expect(users).toHaveLength(2)
|
||||
})
|
||||
})
|
||||
})
|
||||
packages/ipuaro/examples/demo-project/tsconfig.json (new file, 16 lines)
@@ -0,0 +1,16 @@
|
||||
{
|
||||
"compilerOptions": {
|
||||
"target": "ES2023",
|
||||
"module": "ESNext",
|
||||
"lib": ["ES2023"],
|
||||
"moduleResolution": "Bundler",
|
||||
"esModuleInterop": true,
|
||||
"strict": true,
|
||||
"skipLibCheck": true,
|
||||
"resolveJsonModule": true,
|
||||
"outDir": "dist",
|
||||
"rootDir": "src"
|
||||
},
|
||||
"include": ["src/**/*"],
|
||||
"exclude": ["node_modules", "dist", "tests"]
|
||||
}
|
||||
packages/ipuaro/examples/demo-project/vitest.config.ts (new file, 8 lines)
@@ -0,0 +1,8 @@
|
||||
import { defineConfig } from "vitest/config"
|
||||
|
||||
export default defineConfig({
|
||||
test: {
|
||||
globals: true,
|
||||
environment: "node",
|
||||
},
|
||||
})
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@samiyev/ipuaro",
|
||||
"version": "0.10.0",
|
||||
"version": "0.30.1",
|
||||
"description": "Local AI agent for codebase operations with infinite context feeling",
|
||||
"author": "Fozilbek Samiyev <fozilbek.samiyev@gmail.com>",
|
||||
"license": "MIT",
|
||||
@@ -8,7 +8,7 @@
|
||||
"main": "./dist/index.js",
|
||||
"types": "./dist/index.d.ts",
|
||||
"bin": {
|
||||
"ipuaro": "./bin/ipuaro.js"
|
||||
"ipuaro": "bin/ipuaro.js"
|
||||
},
|
||||
"exports": {
|
||||
".": {
|
||||
@@ -44,14 +44,20 @@
|
||||
"simple-git": "^3.27.0",
|
||||
"tree-sitter": "^0.21.1",
|
||||
"tree-sitter-javascript": "^0.21.0",
|
||||
"tree-sitter-json": "^0.24.8",
|
||||
"tree-sitter-typescript": "^0.21.2",
|
||||
"yaml": "^2.8.2",
|
||||
"zod": "^3.23.8"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@testing-library/react": "^16.3.0",
|
||||
"@types/jsdom": "^27.0.0",
|
||||
"@types/node": "^22.10.1",
|
||||
"@types/react": "^18.2.0",
|
||||
"@vitest/coverage-v8": "^1.6.0",
|
||||
"@vitest/ui": "^1.6.0",
|
||||
"jsdom": "^27.2.0",
|
||||
"react-dom": "18.3.1",
|
||||
"tsup": "^8.3.5",
|
||||
"typescript": "^5.7.2",
|
||||
"vitest": "^1.6.0"
|
||||
|
||||
@@ -2,6 +2,7 @@ import type { ContextState, Session } from "../../domain/entities/Session.js"
|
||||
import type { ILLMClient } from "../../domain/services/ILLMClient.js"
|
||||
import { type ChatMessage, createSystemMessage } from "../../domain/value-objects/ChatMessage.js"
|
||||
import { CONTEXT_COMPRESSION_THRESHOLD, CONTEXT_WINDOW_SIZE } from "../../domain/constants/index.js"
|
||||
import type { ContextConfig } from "../../shared/constants/config.js"
|
||||
|
||||
/**
|
||||
* File in context with token count.
|
||||
@@ -39,9 +40,13 @@ export class ContextManager {
|
||||
private readonly filesInContext = new Map<string, FileContext>()
|
||||
private currentTokens = 0
|
||||
private readonly contextWindowSize: number
|
||||
private readonly compressionThreshold: number
|
||||
private readonly compressionMethod: "llm-summary" | "truncate"
|
||||
|
||||
constructor(contextWindowSize: number = CONTEXT_WINDOW_SIZE) {
|
||||
constructor(contextWindowSize: number = CONTEXT_WINDOW_SIZE, config?: ContextConfig) {
|
||||
this.contextWindowSize = contextWindowSize
|
||||
this.compressionThreshold = config?.autoCompressAt ?? CONTEXT_COMPRESSION_THRESHOLD
|
||||
this.compressionMethod = config?.compressionMethod ?? "llm-summary"
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -97,7 +102,7 @@ export class ContextManager {
|
||||
* Check if compression is needed.
|
||||
*/
|
||||
needsCompression(): boolean {
|
||||
return this.getUsage() > CONTEXT_COMPRESSION_THRESHOLD
|
||||
return this.getUsage() > this.compressionThreshold
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
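The ContextManager hunk above makes the compression threshold and method configurable through an optional ContextConfig instead of the hard-coded CONTEXT_COMPRESSION_THRESHOLD. A minimal sketch of a caller passing that config, assuming ContextConfig needs only the two fields referenced in the constructor and that autoCompressAt uses the same scale as getUsage():

```typescript
import { ContextManager } from "./ContextManager.js"
import type { ContextConfig } from "../../shared/constants/config.js"

// Assumed shape: only the two fields referenced in the constructor above.
const contextConfig: ContextConfig = {
    autoCompressAt: 0.8, // compress once usage passes 80% (illustrative value)
    compressionMethod: "truncate",
}

const manager = new ContextManager(32_000, contextConfig)
manager.addTokens(28_000)

if (manager.needsCompression()) {
    // getUsage() exceeded the configured threshold, so the caller compresses here.
}
```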
224
packages/ipuaro/src/application/use-cases/ExecuteTool.ts
Normal file
@@ -0,0 +1,224 @@
|
||||
import { randomUUID } from "node:crypto"
|
||||
import type { Session } from "../../domain/entities/Session.js"
|
||||
import type { ISessionStorage } from "../../domain/services/ISessionStorage.js"
|
||||
import type { IStorage } from "../../domain/services/IStorage.js"
|
||||
import type { DiffInfo, ToolContext } from "../../domain/services/ITool.js"
|
||||
import type { ToolCall } from "../../domain/value-objects/ToolCall.js"
|
||||
import { createErrorResult, type ToolResult } from "../../domain/value-objects/ToolResult.js"
|
||||
import { createUndoEntry } from "../../domain/value-objects/UndoEntry.js"
|
||||
import type { IToolRegistry } from "../interfaces/IToolRegistry.js"
|
||||
|
||||
/**
|
||||
* Result of confirmation dialog.
|
||||
*/
|
||||
export interface ConfirmationResult {
|
||||
confirmed: boolean
|
||||
editedContent?: string[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Confirmation handler callback type.
|
||||
* Can return either a boolean (for backward compatibility) or a ConfirmationResult.
|
||||
*/
|
||||
export type ConfirmationHandler = (
|
||||
message: string,
|
||||
diff?: DiffInfo,
|
||||
) => Promise<boolean | ConfirmationResult>
|
||||
|
||||
/**
|
||||
* Progress handler callback type.
|
||||
*/
|
||||
export type ProgressHandler = (message: string) => void
|
||||
|
||||
/**
|
||||
* Options for ExecuteTool.
|
||||
*/
|
||||
export interface ExecuteToolOptions {
|
||||
/** Auto-apply edits without confirmation */
|
||||
autoApply?: boolean
|
||||
/** Confirmation handler */
|
||||
onConfirmation?: ConfirmationHandler
|
||||
/** Progress handler */
|
||||
onProgress?: ProgressHandler
|
||||
}
|
||||
|
||||
/**
|
||||
* Result of tool execution.
|
||||
*/
|
||||
export interface ExecuteToolResult {
|
||||
result: ToolResult
|
||||
undoEntryCreated: boolean
|
||||
undoEntryId?: string
|
||||
}
|
||||
|
||||
/**
|
||||
* Use case for executing a single tool.
|
||||
* Orchestrates tool execution with:
|
||||
* - Parameter validation
|
||||
* - Confirmation flow
|
||||
* - Undo stack management
|
||||
* - Storage updates
|
||||
*/
|
||||
export class ExecuteTool {
|
||||
private readonly storage: IStorage
|
||||
private readonly sessionStorage: ISessionStorage
|
||||
private readonly tools: IToolRegistry
|
||||
private readonly projectRoot: string
|
||||
private lastUndoEntryId?: string
|
||||
|
||||
constructor(
|
||||
storage: IStorage,
|
||||
sessionStorage: ISessionStorage,
|
||||
tools: IToolRegistry,
|
||||
projectRoot: string,
|
||||
) {
|
||||
this.storage = storage
|
||||
this.sessionStorage = sessionStorage
|
||||
this.tools = tools
|
||||
this.projectRoot = projectRoot
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute a tool call.
|
||||
*
|
||||
* @param toolCall - The tool call to execute
|
||||
* @param session - Current session (for undo stack)
|
||||
* @param options - Execution options
|
||||
* @returns Execution result
|
||||
*/
|
||||
async execute(
|
||||
toolCall: ToolCall,
|
||||
session: Session,
|
||||
options: ExecuteToolOptions = {},
|
||||
): Promise<ExecuteToolResult> {
|
||||
this.lastUndoEntryId = undefined
|
||||
const startTime = Date.now()
|
||||
const tool = this.tools.get(toolCall.name)
|
||||
|
||||
if (!tool) {
|
||||
return {
|
||||
result: createErrorResult(
|
||||
toolCall.id,
|
||||
`Unknown tool: ${toolCall.name}`,
|
||||
Date.now() - startTime,
|
||||
),
|
||||
undoEntryCreated: false,
|
||||
}
|
||||
}
|
||||
|
||||
const validationError = tool.validateParams(toolCall.params)
|
||||
if (validationError) {
|
||||
return {
|
||||
result: createErrorResult(toolCall.id, validationError, Date.now() - startTime),
|
||||
undoEntryCreated: false,
|
||||
}
|
||||
}
|
||||
|
||||
const context = this.buildToolContext(toolCall, session, options)
|
||||
|
||||
try {
|
||||
const result = await tool.execute(toolCall.params, context)
|
||||
|
||||
return {
|
||||
result,
|
||||
undoEntryCreated: this.lastUndoEntryId !== undefined,
|
||||
undoEntryId: this.lastUndoEntryId,
|
||||
}
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : String(error)
|
||||
return {
|
||||
result: createErrorResult(toolCall.id, errorMessage, Date.now() - startTime),
|
||||
undoEntryCreated: false,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Build tool context for execution.
|
||||
*/
|
||||
private buildToolContext(
|
||||
toolCall: ToolCall,
|
||||
session: Session,
|
||||
options: ExecuteToolOptions,
|
||||
): ToolContext {
|
||||
return {
|
||||
projectRoot: this.projectRoot,
|
||||
storage: this.storage,
|
||||
requestConfirmation: async (msg: string, diff?: DiffInfo) => {
|
||||
return this.handleConfirmation(msg, diff, toolCall, session, options)
|
||||
},
|
||||
onProgress: (msg: string) => {
|
||||
options.onProgress?.(msg)
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle confirmation for tool actions.
|
||||
* Supports edited content from user.
|
||||
*/
|
||||
private async handleConfirmation(
|
||||
msg: string,
|
||||
diff: DiffInfo | undefined,
|
||||
toolCall: ToolCall,
|
||||
session: Session,
|
||||
options: ExecuteToolOptions,
|
||||
): Promise<boolean> {
|
||||
if (options.autoApply) {
|
||||
if (diff) {
|
||||
this.lastUndoEntryId = await this.createUndoEntry(diff, toolCall, session)
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
if (options.onConfirmation) {
|
||||
const result = await options.onConfirmation(msg, diff)
|
||||
|
||||
const confirmed = typeof result === "boolean" ? result : result.confirmed
|
||||
const editedContent = typeof result === "boolean" ? undefined : result.editedContent
|
||||
|
||||
if (confirmed && diff) {
|
||||
if (editedContent && editedContent.length > 0) {
|
||||
diff.newLines = editedContent
|
||||
if (toolCall.params.content && typeof toolCall.params.content === "string") {
|
||||
toolCall.params.content = editedContent.join("\n")
|
||||
}
|
||||
}
|
||||
|
||||
this.lastUndoEntryId = await this.createUndoEntry(diff, toolCall, session)
|
||||
}
|
||||
|
||||
return confirmed
|
||||
}
|
||||
|
||||
if (diff) {
|
||||
this.lastUndoEntryId = await this.createUndoEntry(diff, toolCall, session)
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
/**
|
||||
* Create undo entry from diff.
|
||||
*/
|
||||
private async createUndoEntry(
|
||||
diff: DiffInfo,
|
||||
toolCall: ToolCall,
|
||||
session: Session,
|
||||
): Promise<string> {
|
||||
const entryId = randomUUID()
|
||||
const entry = createUndoEntry(
|
||||
entryId,
|
||||
diff.filePath,
|
||||
diff.oldLines,
|
||||
diff.newLines,
|
||||
`${toolCall.name}: ${diff.filePath}`,
|
||||
toolCall.id,
|
||||
)
|
||||
|
||||
session.addUndoEntry(entry)
|
||||
await this.sessionStorage.pushUndoEntry(session.id, entry)
|
||||
session.stats.editsApplied++
|
||||
|
||||
return entryId
|
||||
}
|
||||
}
|
||||
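ExecuteTool above centralizes the validation, confirmation and undo handling that previously lived inside HandleMessage. A sketch of a caller, based only on the signatures visible in this file; `executeTool`, `toolCall` and `session` are assumed to come from the application wiring shown later in start.ts, and the function name runSingleTool is illustrative:

```typescript
import type { Session } from "../../domain/entities/Session.js"
import type { ToolCall } from "../../domain/value-objects/ToolCall.js"
import { ExecuteTool } from "./ExecuteTool.js"

// Sketch only: the concrete storage, session storage and registry instances
// behind `executeTool` are constructed elsewhere (see start.ts below).
async function runSingleTool(
    executeTool: ExecuteTool,
    toolCall: ToolCall,
    session: Session,
): Promise<void> {
    const { result, undoEntryCreated, undoEntryId } = await executeTool.execute(toolCall, session, {
        autoApply: false,
        // Returning a ConfirmationResult (instead of a bare boolean) lets the UI
        // hand back user-edited lines for the pending diff.
        onConfirmation: async (message, diff) => ({
            confirmed: true,
            editedContent: diff?.newLines,
        }),
        onProgress: (message) => console.warn(message),
    })

    if (undoEntryCreated) {
        console.warn(`Undo entry ${undoEntryId ?? "unknown"} recorded for ${toolCall.name}`)
    }
    console.warn(`Tool ${toolCall.name} finished`, result)
}
```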
@@ -1,9 +1,8 @@
|
||||
import { randomUUID } from "node:crypto"
|
||||
import type { Session } from "../../domain/entities/Session.js"
|
||||
import type { ILLMClient } from "../../domain/services/ILLMClient.js"
|
||||
import type { ISessionStorage } from "../../domain/services/ISessionStorage.js"
|
||||
import type { IStorage } from "../../domain/services/IStorage.js"
|
||||
import type { DiffInfo, ToolContext } from "../../domain/services/ITool.js"
|
||||
import type { DiffInfo } from "../../domain/services/ITool.js"
|
||||
import {
|
||||
type ChatMessage,
|
||||
createAssistantMessage,
|
||||
@@ -12,18 +11,19 @@ import {
|
||||
createUserMessage,
|
||||
} from "../../domain/value-objects/ChatMessage.js"
|
||||
import type { ToolCall } from "../../domain/value-objects/ToolCall.js"
|
||||
import { createErrorResult, type ToolResult } from "../../domain/value-objects/ToolResult.js"
|
||||
import { createUndoEntry, type UndoEntry } from "../../domain/value-objects/UndoEntry.js"
|
||||
import { IpuaroError } from "../../shared/errors/IpuaroError.js"
|
||||
import type { ErrorChoice } from "../../shared/types/index.js"
|
||||
import type { ToolResult } from "../../domain/value-objects/ToolResult.js"
|
||||
import type { UndoEntry } from "../../domain/value-objects/UndoEntry.js"
|
||||
import { type ErrorOption, IpuaroError } from "../../shared/errors/IpuaroError.js"
|
||||
import {
|
||||
buildInitialContext,
|
||||
type ProjectStructure,
|
||||
SYSTEM_PROMPT,
|
||||
TOOL_REMINDER,
|
||||
} from "../../infrastructure/llm/prompts.js"
|
||||
import { parseToolCalls } from "../../infrastructure/llm/ResponseParser.js"
|
||||
import type { IToolRegistry } from "../interfaces/IToolRegistry.js"
|
||||
import { ContextManager } from "./ContextManager.js"
|
||||
import { type ConfirmationResult, ExecuteTool } from "./ExecuteTool.js"
|
||||
|
||||
/**
|
||||
* Status during message handling.
|
||||
@@ -57,8 +57,8 @@ export interface HandleMessageEvents {
|
||||
onMessage?: (message: ChatMessage) => void
|
||||
onToolCall?: (call: ToolCall) => void
|
||||
onToolResult?: (result: ToolResult) => void
|
||||
onConfirmation?: (message: string, diff?: DiffInfo) => Promise<boolean>
|
||||
onError?: (error: IpuaroError) => Promise<ErrorChoice>
|
||||
onConfirmation?: (message: string, diff?: DiffInfo) => Promise<boolean | ConfirmationResult>
|
||||
onError?: (error: IpuaroError) => Promise<ErrorOption>
|
||||
onStatusChange?: (status: HandleMessageStatus) => void
|
||||
onUndoEntry?: (entry: UndoEntry) => void
|
||||
}
|
||||
@@ -69,6 +69,9 @@ export interface HandleMessageEvents {
|
||||
export interface HandleMessageOptions {
|
||||
autoApply?: boolean
|
||||
maxToolCalls?: number
|
||||
maxHistoryMessages?: number
|
||||
saveInputHistory?: boolean
|
||||
contextConfig?: import("../../shared/constants/config.js").ContextConfig
|
||||
}
|
||||
|
||||
const DEFAULT_MAX_TOOL_CALLS = 20
|
||||
@@ -83,6 +86,7 @@ export class HandleMessage {
|
||||
private readonly llm: ILLMClient
|
||||
private readonly tools: IToolRegistry
|
||||
private readonly contextManager: ContextManager
|
||||
private readonly executeTool: ExecuteTool
|
||||
private readonly projectRoot: string
|
||||
private projectStructure?: ProjectStructure
|
||||
|
||||
@@ -96,13 +100,15 @@ export class HandleMessage {
|
||||
llm: ILLMClient,
|
||||
tools: IToolRegistry,
|
||||
projectRoot: string,
|
||||
contextConfig?: import("../../shared/constants/config.js").ContextConfig,
|
||||
) {
|
||||
this.storage = storage
|
||||
this.sessionStorage = sessionStorage
|
||||
this.llm = llm
|
||||
this.tools = tools
|
||||
this.projectRoot = projectRoot
|
||||
this.contextManager = new ContextManager(llm.getContextWindowSize())
|
||||
this.contextManager = new ContextManager(llm.getContextWindowSize(), contextConfig)
|
||||
this.executeTool = new ExecuteTool(storage, sessionStorage, tools, projectRoot)
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -134,6 +140,15 @@ export class HandleMessage {
|
||||
this.llm.abort()
|
||||
}
|
||||
|
||||
/**
|
||||
* Truncate session history if maxHistoryMessages is set.
|
||||
*/
|
||||
private truncateHistoryIfNeeded(session: Session): void {
|
||||
if (this.options.maxHistoryMessages !== undefined) {
|
||||
session.truncateHistory(this.options.maxHistoryMessages)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the message handling flow.
|
||||
*/
|
||||
@@ -144,7 +159,12 @@ export class HandleMessage {
|
||||
if (message.trim()) {
|
||||
const userMessage = createUserMessage(message)
|
||||
session.addMessage(userMessage)
|
||||
this.truncateHistoryIfNeeded(session)
|
||||
|
||||
if (this.options.saveInputHistory !== false) {
|
||||
session.addInputToHistory(message)
|
||||
}
|
||||
|
||||
this.emitMessage(userMessage)
|
||||
}
|
||||
|
||||
@@ -182,6 +202,7 @@ export class HandleMessage {
|
||||
toolCalls: 0,
|
||||
})
|
||||
session.addMessage(assistantMessage)
|
||||
this.truncateHistoryIfNeeded(session)
|
||||
this.emitMessage(assistantMessage)
|
||||
this.contextManager.addTokens(response.tokens)
|
||||
this.contextManager.updateSession(session)
|
||||
@@ -196,6 +217,7 @@ export class HandleMessage {
|
||||
toolCalls: parsed.toolCalls.length,
|
||||
})
|
||||
session.addMessage(assistantMessage)
|
||||
this.truncateHistoryIfNeeded(session)
|
||||
this.emitMessage(assistantMessage)
|
||||
|
||||
toolCallCount += parsed.toolCalls.length
|
||||
@@ -203,6 +225,7 @@ export class HandleMessage {
|
||||
const errorMsg = `Maximum tool calls (${String(maxToolCalls)}) exceeded`
|
||||
const errorMessage = createSystemMessage(errorMsg)
|
||||
session.addMessage(errorMessage)
|
||||
this.truncateHistoryIfNeeded(session)
|
||||
this.emitMessage(errorMessage)
|
||||
this.emitStatus("ready")
|
||||
return
|
||||
@@ -226,6 +249,7 @@ export class HandleMessage {
|
||||
|
||||
const toolMessage = createToolMessage(results)
|
||||
session.addMessage(toolMessage)
|
||||
this.truncateHistoryIfNeeded(session)
|
||||
|
||||
this.contextManager.addTokens(response.tokens)
|
||||
|
||||
@@ -254,91 +278,42 @@ export class HandleMessage {
|
||||
|
||||
messages.push(...session.history)
|
||||
|
||||
// Add tool reminder if last message is from user (first LLM call for this query)
|
||||
const lastMessage = session.history[session.history.length - 1]
|
||||
if (lastMessage?.role === "user") {
|
||||
messages.push(createSystemMessage(TOOL_REMINDER))
|
||||
}
|
||||
|
||||
return messages
|
||||
}
|
||||
|
||||
private async executeToolCall(toolCall: ToolCall, session: Session): Promise<ToolResult> {
|
||||
const startTime = Date.now()
|
||||
const tool = this.tools.get(toolCall.name)
|
||||
|
||||
if (!tool) {
|
||||
return createErrorResult(
|
||||
toolCall.id,
|
||||
`Unknown tool: ${toolCall.name}`,
|
||||
Date.now() - startTime,
|
||||
)
|
||||
const { result, undoEntryCreated, undoEntryId } = await this.executeTool.execute(
|
||||
toolCall,
|
||||
session,
|
||||
{
|
||||
autoApply: this.options.autoApply,
|
||||
onConfirmation: async (msg: string, diff?: DiffInfo) => {
|
||||
this.emitStatus("awaiting_confirmation")
|
||||
if (this.events.onConfirmation) {
|
||||
return this.events.onConfirmation(msg, diff)
|
||||
}
|
||||
|
||||
const context: ToolContext = {
|
||||
projectRoot: this.projectRoot,
|
||||
storage: this.storage,
|
||||
requestConfirmation: async (msg: string, diff?: DiffInfo) => {
|
||||
return this.handleConfirmation(msg, diff, toolCall, session)
|
||||
return true
|
||||
},
|
||||
onProgress: (_msg: string) => {
|
||||
this.events.onStatusChange?.("tool_call")
|
||||
},
|
||||
}
|
||||
|
||||
try {
|
||||
const validationError = tool.validateParams(toolCall.params)
|
||||
if (validationError) {
|
||||
return createErrorResult(toolCall.id, validationError, Date.now() - startTime)
|
||||
}
|
||||
|
||||
const result = await tool.execute(toolCall.params, context)
|
||||
return result
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(toolCall.id, errorMessage, Date.now() - startTime)
|
||||
}
|
||||
}
|
||||
|
||||
private async handleConfirmation(
|
||||
msg: string,
|
||||
diff: DiffInfo | undefined,
|
||||
toolCall: ToolCall,
|
||||
session: Session,
|
||||
): Promise<boolean> {
|
||||
if (this.options.autoApply) {
|
||||
if (diff) {
|
||||
this.createUndoEntryFromDiff(diff, toolCall, session)
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
this.emitStatus("awaiting_confirmation")
|
||||
|
||||
if (this.events.onConfirmation) {
|
||||
const confirmed = await this.events.onConfirmation(msg, diff)
|
||||
|
||||
if (confirmed && diff) {
|
||||
this.createUndoEntryFromDiff(diff, toolCall, session)
|
||||
}
|
||||
|
||||
return confirmed
|
||||
}
|
||||
|
||||
if (diff) {
|
||||
this.createUndoEntryFromDiff(diff, toolCall, session)
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
private createUndoEntryFromDiff(diff: DiffInfo, toolCall: ToolCall, session: Session): void {
|
||||
const entry = createUndoEntry(
|
||||
randomUUID(),
|
||||
diff.filePath,
|
||||
diff.oldLines,
|
||||
diff.newLines,
|
||||
`${toolCall.name}: ${diff.filePath}`,
|
||||
toolCall.id,
|
||||
},
|
||||
)
|
||||
|
||||
session.addUndoEntry(entry)
|
||||
void this.sessionStorage.pushUndoEntry(session.id, entry)
|
||||
session.stats.editsApplied++
|
||||
this.events.onUndoEntry?.(entry)
|
||||
if (undoEntryCreated && undoEntryId) {
|
||||
const undoEntry = session.undoStack.find((entry) => entry.id === undoEntryId)
|
||||
if (undoEntry) {
|
||||
this.events.onUndoEntry?.(undoEntry)
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
private async handleLLMError(error: unknown, session: Session): Promise<void> {
|
||||
@@ -360,6 +335,7 @@ export class HandleMessage {
|
||||
|
||||
const errorMessage = createSystemMessage(`Error: ${ipuaroError.message}`)
|
||||
session.addMessage(errorMessage)
|
||||
this.truncateHistoryIfNeeded(session)
|
||||
this.emitMessage(errorMessage)
|
||||
|
||||
this.emitStatus("ready")
|
||||
|
||||
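The HandleMessage changes above add maxHistoryMessages, saveInputHistory and contextConfig to HandleMessageOptions and route tool execution through ExecuteTool. An illustrative options object exercising the new fields; how the options object reaches HandleMessage is not shown in this excerpt, so only the shape is sketched, and the concrete values are placeholders:

```typescript
import type { HandleMessageOptions } from "./HandleMessage.js"

const options: HandleMessageOptions = {
    autoApply: false,
    maxToolCalls: 20,
    // Keep at most the 50 most recent messages; older ones are dropped by
    // truncateHistoryIfNeeded() after every addMessage call.
    maxHistoryMessages: 50,
    // Skip recording free-form prompts into the session input history.
    saveInputHistory: false,
    // Assumes ContextConfig only needs the two fields used by ContextManager.
    contextConfig: {
        autoCompressAt: 0.8,
        compressionMethod: "llm-summary",
    },
}
```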
184
packages/ipuaro/src/application/use-cases/IndexProject.ts
Normal file
@@ -0,0 +1,184 @@
|
||||
import * as path from "node:path"
|
||||
import type { IStorage } from "../../domain/services/IStorage.js"
|
||||
import type { IndexingStats, IndexProgress } from "../../domain/services/IIndexer.js"
|
||||
import { FileScanner } from "../../infrastructure/indexer/FileScanner.js"
|
||||
import { ASTParser } from "../../infrastructure/indexer/ASTParser.js"
|
||||
import { MetaAnalyzer } from "../../infrastructure/indexer/MetaAnalyzer.js"
|
||||
import { IndexBuilder } from "../../infrastructure/indexer/IndexBuilder.js"
|
||||
import { createFileData, type FileData } from "../../domain/value-objects/FileData.js"
|
||||
import type { FileAST } from "../../domain/value-objects/FileAST.js"
|
||||
import { md5 } from "../../shared/utils/hash.js"
|
||||
|
||||
/**
|
||||
* Options for indexing a project.
|
||||
*/
|
||||
export interface IndexProjectOptions {
|
||||
/** Additional ignore patterns */
|
||||
additionalIgnore?: string[]
|
||||
/** Progress callback */
|
||||
onProgress?: (progress: IndexProgress) => void
|
||||
}
|
||||
|
||||
/**
|
||||
* Use case for indexing a project.
|
||||
* Orchestrates the full indexing pipeline:
|
||||
* 1. Scan files
|
||||
* 2. Parse AST
|
||||
* 3. Analyze metadata
|
||||
* 4. Build indexes
|
||||
* 5. Store in Redis
|
||||
*/
|
||||
export class IndexProject {
|
||||
private readonly storage: IStorage
|
||||
private readonly scanner: FileScanner
|
||||
private readonly parser: ASTParser
|
||||
private readonly metaAnalyzer: MetaAnalyzer
|
||||
private readonly indexBuilder: IndexBuilder
|
||||
|
||||
constructor(storage: IStorage, projectRoot: string) {
|
||||
this.storage = storage
|
||||
this.scanner = new FileScanner()
|
||||
this.parser = new ASTParser()
|
||||
this.metaAnalyzer = new MetaAnalyzer(projectRoot)
|
||||
this.indexBuilder = new IndexBuilder(projectRoot)
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the indexing pipeline.
|
||||
*
|
||||
* @param projectRoot - Absolute path to project root
|
||||
* @param options - Optional configuration
|
||||
* @returns Indexing statistics
|
||||
*/
|
||||
async execute(projectRoot: string, options: IndexProjectOptions = {}): Promise<IndexingStats> {
|
||||
const startTime = Date.now()
|
||||
const stats: IndexingStats = {
|
||||
filesScanned: 0,
|
||||
filesParsed: 0,
|
||||
parseErrors: 0,
|
||||
timeMs: 0,
|
||||
}
|
||||
|
||||
const fileDataMap = new Map<string, FileData>()
|
||||
const astMap = new Map<string, FileAST>()
|
||||
const contentMap = new Map<string, string>()
|
||||
|
||||
// Phase 1: Scanning
|
||||
this.reportProgress(options.onProgress, 0, 0, "", "scanning")
|
||||
|
||||
const scanResults = await this.scanner.scanAll(projectRoot)
|
||||
stats.filesScanned = scanResults.length
|
||||
|
||||
// Phase 2: Parsing
|
||||
let current = 0
|
||||
const total = scanResults.length
|
||||
|
||||
for (const scanResult of scanResults) {
|
||||
current++
|
||||
const fullPath = path.join(projectRoot, scanResult.path)
|
||||
this.reportProgress(options.onProgress, current, total, scanResult.path, "parsing")
|
||||
|
||||
const content = await FileScanner.readFileContent(fullPath)
|
||||
if (!content) {
|
||||
continue
|
||||
}
|
||||
|
||||
contentMap.set(scanResult.path, content)
|
||||
|
||||
const lines = content.split("\n")
|
||||
const hash = md5(content)
|
||||
|
||||
const fileData = createFileData(lines, hash, scanResult.size, scanResult.lastModified)
|
||||
fileDataMap.set(scanResult.path, fileData)
|
||||
|
||||
const language = this.detectLanguage(scanResult.path)
|
||||
if (!language) {
|
||||
continue
|
||||
}
|
||||
|
||||
const ast = this.parser.parse(content, language)
|
||||
astMap.set(scanResult.path, ast)
|
||||
|
||||
stats.filesParsed++
|
||||
if (ast.parseError) {
|
||||
stats.parseErrors++
|
||||
}
|
||||
}
|
||||
|
||||
// Phase 3: Analyzing metadata
|
||||
current = 0
|
||||
for (const [filePath, ast] of astMap) {
|
||||
current++
|
||||
this.reportProgress(options.onProgress, current, astMap.size, filePath, "analyzing")
|
||||
|
||||
const content = contentMap.get(filePath)
|
||||
if (!content) {
|
||||
continue
|
||||
}
|
||||
|
||||
const fullPath = path.join(projectRoot, filePath)
|
||||
const meta = this.metaAnalyzer.analyze(fullPath, ast, content, astMap)
|
||||
|
||||
await this.storage.setMeta(filePath, meta)
|
||||
}
|
||||
|
||||
// Phase 4: Building indexes
|
||||
this.reportProgress(options.onProgress, 1, 1, "Building indexes", "indexing")
|
||||
|
||||
const symbolIndex = this.indexBuilder.buildSymbolIndex(astMap)
|
||||
const depsGraph = this.indexBuilder.buildDepsGraph(astMap)
|
||||
|
||||
// Phase 5: Store everything
|
||||
for (const [filePath, fileData] of fileDataMap) {
|
||||
await this.storage.setFile(filePath, fileData)
|
||||
}
|
||||
|
||||
for (const [filePath, ast] of astMap) {
|
||||
await this.storage.setAST(filePath, ast)
|
||||
}
|
||||
|
||||
await this.storage.setSymbolIndex(symbolIndex)
|
||||
await this.storage.setDepsGraph(depsGraph)
|
||||
|
||||
// Store last indexed timestamp
|
||||
await this.storage.setProjectConfig("last_indexed", Date.now())
|
||||
|
||||
stats.timeMs = Date.now() - startTime
|
||||
|
||||
return stats
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect language from file extension.
|
||||
*/
|
||||
private detectLanguage(filePath: string): "ts" | "tsx" | "js" | "jsx" | null {
|
||||
const ext = path.extname(filePath).toLowerCase()
|
||||
switch (ext) {
|
||||
case ".ts":
|
||||
return "ts"
|
||||
case ".tsx":
|
||||
return "tsx"
|
||||
case ".js":
|
||||
return "js"
|
||||
case ".jsx":
|
||||
return "jsx"
|
||||
default:
|
||||
return null
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Report progress to callback if provided.
|
||||
*/
|
||||
private reportProgress(
|
||||
callback: ((progress: IndexProgress) => void) | undefined,
|
||||
current: number,
|
||||
total: number,
|
||||
currentFile: string,
|
||||
phase: IndexProgress["phase"],
|
||||
): void {
|
||||
if (callback) {
|
||||
callback({ current, total, currentFile, phase })
|
||||
}
|
||||
}
|
||||
}
|
||||
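IndexProject.ts above runs the five indexing phases (scan, parse, analyze, build indexes, store). A compact sketch of driving it directly, assuming `storage` is an already-connected IStorage implementation such as RedisStorage; the CLI integration in index-cmd.ts later in this diff is the real entry point:

```typescript
import type { IStorage } from "../../domain/services/IStorage.js"
import { IndexProject } from "./IndexProject.js"

// Sketch: re-index a project and print the resulting statistics.
async function reindex(storage: IStorage, projectRoot: string): Promise<void> {
    const indexProject = new IndexProject(storage, projectRoot)
    const stats = await indexProject.execute(projectRoot, {
        onProgress: ({ phase, current, total, currentFile }) => {
            console.warn(`[${phase}] ${current}/${total} ${currentFile}`)
        },
    })
    console.warn(
        `scanned=${stats.filesScanned} parsed=${stats.filesParsed} errors=${stats.parseErrors} in ${stats.timeMs}ms`,
    )
}
```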
@@ -4,3 +4,5 @@ export * from "./StartSession.js"
export * from "./HandleMessage.js"
export * from "./UndoChange.js"
export * from "./ContextManager.js"
export * from "./IndexProject.js"
export * from "./ExecuteTool.js"
148
packages/ipuaro/src/cli/commands/index-cmd.ts
Normal file
@@ -0,0 +1,148 @@
|
||||
/**
|
||||
* Index command implementation.
|
||||
* Indexes project without starting TUI.
|
||||
*/
|
||||
|
||||
import * as path from "node:path"
|
||||
import { RedisClient } from "../../infrastructure/storage/RedisClient.js"
|
||||
import { RedisStorage } from "../../infrastructure/storage/RedisStorage.js"
|
||||
import { generateProjectName } from "../../infrastructure/storage/schema.js"
|
||||
import { IndexProject } from "../../application/use-cases/IndexProject.js"
|
||||
import { type Config, DEFAULT_CONFIG } from "../../shared/constants/config.js"
|
||||
import { checkRedis } from "./onboarding.js"
|
||||
|
||||
/**
|
||||
* Result of index command.
|
||||
*/
|
||||
export interface IndexResult {
|
||||
success: boolean
|
||||
filesIndexed: number
|
||||
filesSkipped: number
|
||||
errors: string[]
|
||||
duration: number
|
||||
}
|
||||
|
||||
/**
|
||||
* Progress callback for indexing.
|
||||
*/
|
||||
export type IndexProgressCallback = (
|
||||
phase: "scanning" | "parsing" | "analyzing" | "storing",
|
||||
current: number,
|
||||
total: number,
|
||||
currentFile?: string,
|
||||
) => void
|
||||
|
||||
/**
|
||||
* Execute the index command.
|
||||
*/
|
||||
export async function executeIndex(
|
||||
projectPath: string,
|
||||
config: Config = DEFAULT_CONFIG,
|
||||
onProgress?: IndexProgressCallback,
|
||||
): Promise<IndexResult> {
|
||||
const startTime = Date.now()
|
||||
const resolvedPath = path.resolve(projectPath)
|
||||
const projectName = generateProjectName(resolvedPath)
|
||||
|
||||
console.warn(`📁 Indexing project: ${resolvedPath}`)
|
||||
console.warn(` Project name: ${projectName}\n`)
|
||||
|
||||
const redisResult = await checkRedis(config.redis)
|
||||
if (!redisResult.ok) {
|
||||
console.error(`❌ ${redisResult.error ?? "Redis unavailable"}`)
|
||||
return {
|
||||
success: false,
|
||||
filesIndexed: 0,
|
||||
filesSkipped: 0,
|
||||
errors: [redisResult.error ?? "Redis unavailable"],
|
||||
duration: Date.now() - startTime,
|
||||
}
|
||||
}
|
||||
|
||||
let redisClient: RedisClient | null = null
|
||||
|
||||
try {
|
||||
redisClient = new RedisClient(config.redis)
|
||||
await redisClient.connect()
|
||||
|
||||
const storage = new RedisStorage(redisClient, projectName)
|
||||
const indexProject = new IndexProject(storage, resolvedPath)
|
||||
|
||||
let lastPhase: "scanning" | "parsing" | "analyzing" | "indexing" = "scanning"
|
||||
let lastProgress = 0
|
||||
|
||||
const stats = await indexProject.execute(resolvedPath, {
|
||||
onProgress: (progress) => {
|
||||
if (progress.phase !== lastPhase) {
|
||||
if (lastPhase === "scanning") {
|
||||
console.warn(` Found ${String(progress.total)} files\n`)
|
||||
} else if (lastProgress > 0) {
|
||||
console.warn("")
|
||||
}
|
||||
|
||||
const phaseLabels = {
|
||||
scanning: "🔍 Scanning files...",
|
||||
parsing: "📝 Parsing files...",
|
||||
analyzing: "📊 Analyzing metadata...",
|
||||
indexing: "🏗️ Building indexes...",
|
||||
}
|
||||
console.warn(phaseLabels[progress.phase])
|
||||
lastPhase = progress.phase
|
||||
}
|
||||
|
||||
if (progress.phase === "indexing") {
|
||||
onProgress?.("storing", progress.current, progress.total)
|
||||
} else {
|
||||
onProgress?.(
|
||||
progress.phase,
|
||||
progress.current,
|
||||
progress.total,
|
||||
progress.currentFile,
|
||||
)
|
||||
}
|
||||
|
||||
if (
|
||||
progress.current % 50 === 0 &&
|
||||
progress.phase !== "scanning" &&
|
||||
progress.phase !== "indexing"
|
||||
) {
|
||||
process.stdout.write(
|
||||
`\r ${progress.phase === "parsing" ? "Parsed" : "Analyzed"} ${String(progress.current)}/${String(progress.total)} files...`,
|
||||
)
|
||||
}
|
||||
lastProgress = progress.current
|
||||
},
|
||||
})
|
||||
|
||||
const symbolIndex = await storage.getSymbolIndex()
|
||||
const durationSec = (stats.timeMs / 1000).toFixed(2)
|
||||
|
||||
console.warn(`\n✅ Indexing complete in ${durationSec}s`)
|
||||
console.warn(` Files scanned: ${String(stats.filesScanned)}`)
|
||||
console.warn(` Files parsed: ${String(stats.filesParsed)}`)
|
||||
console.warn(` Parse errors: ${String(stats.parseErrors)}`)
|
||||
console.warn(` Symbols: ${String(symbolIndex.size)}`)
|
||||
|
||||
return {
|
||||
success: true,
|
||||
filesIndexed: stats.filesParsed,
|
||||
filesSkipped: stats.filesScanned - stats.filesParsed,
|
||||
errors: [],
|
||||
duration: stats.timeMs,
|
||||
}
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
console.error(`❌ Indexing failed: ${message}`)
|
||||
return {
|
||||
success: false,
|
||||
filesIndexed: 0,
|
||||
filesSkipped: 0,
|
||||
errors: [message],
|
||||
duration: Date.now() - startTime,
|
||||
}
|
||||
} finally {
|
||||
if (redisClient) {
|
||||
await redisClient.disconnect()
|
||||
}
|
||||
}
|
||||
}
|
||||
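index-cmd.ts above exposes executeIndex together with an optional IndexProgressCallback that maps the internal "indexing" phase to "storing". The CLI entry point later in this diff calls it without a callback; a hedged sketch of passing one (paths and values are illustrative):

```typescript
import { loadConfig } from "../../shared/config/loader.js"
import { executeIndex, type IndexProgressCallback } from "./index-cmd.js"

// Print every progress update; the callback receives the remapped "storing" phase.
const onProgress: IndexProgressCallback = (phase, current, total, currentFile) => {
    console.warn(`[${phase}] ${current}/${total} ${currentFile ?? ""}`)
}

const result = await executeIndex("./", loadConfig("./"), onProgress)
if (!result.success) {
    process.exit(1)
}
```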
18
packages/ipuaro/src/cli/commands/index.ts
Normal file
@@ -0,0 +1,18 @@
/**
 * CLI commands module.
 */

export { executeStart, type StartOptions, type StartResult } from "./start.js"
export { executeInit, type InitOptions, type InitResult } from "./init.js"
export { executeIndex, type IndexResult, type IndexProgressCallback } from "./index-cmd.js"
export {
    runOnboarding,
    checkRedis,
    checkOllama,
    checkModel,
    checkProjectSize,
    pullModel,
    type OnboardingResult,
    type OnboardingOptions,
} from "./onboarding.js"
export { registerAllTools } from "./tools-setup.js"
114
packages/ipuaro/src/cli/commands/init.ts
Normal file
@@ -0,0 +1,114 @@
|
||||
/**
|
||||
* Init command implementation.
|
||||
* Creates .ipuaro.json configuration file.
|
||||
*/
|
||||
|
||||
import * as fs from "node:fs/promises"
|
||||
import * as path from "node:path"
|
||||
|
||||
/**
|
||||
* Default configuration template for .ipuaro.json
|
||||
*/
|
||||
const CONFIG_TEMPLATE = {
|
||||
$schema: "https://raw.githubusercontent.com/samiyev/puaros/main/packages/ipuaro/schema.json",
|
||||
redis: {
|
||||
host: "localhost",
|
||||
port: 6379,
|
||||
db: 0,
|
||||
},
|
||||
llm: {
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
},
|
||||
project: {
|
||||
ignorePatterns: [],
|
||||
},
|
||||
edit: {
|
||||
autoApply: false,
|
||||
},
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for init command.
|
||||
*/
|
||||
export interface InitOptions {
|
||||
force?: boolean
|
||||
}
|
||||
|
||||
/**
|
||||
* Result of init command.
|
||||
*/
|
||||
export interface InitResult {
|
||||
success: boolean
|
||||
filePath?: string
|
||||
error?: string
|
||||
skipped?: boolean
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the init command.
|
||||
* Creates a .ipuaro.json file in the specified directory.
|
||||
*/
|
||||
export async function executeInit(
|
||||
projectPath = ".",
|
||||
options: InitOptions = {},
|
||||
): Promise<InitResult> {
|
||||
const resolvedPath = path.resolve(projectPath)
|
||||
const configPath = path.join(resolvedPath, ".ipuaro.json")
|
||||
|
||||
try {
|
||||
const exists = await fileExists(configPath)
|
||||
|
||||
if (exists && !options.force) {
|
||||
console.warn(`⚠️ Configuration file already exists: ${configPath}`)
|
||||
console.warn(" Use --force to overwrite.")
|
||||
return {
|
||||
success: true,
|
||||
skipped: true,
|
||||
filePath: configPath,
|
||||
}
|
||||
}
|
||||
|
||||
const dirExists = await fileExists(resolvedPath)
|
||||
if (!dirExists) {
|
||||
await fs.mkdir(resolvedPath, { recursive: true })
|
||||
}
|
||||
|
||||
const content = JSON.stringify(CONFIG_TEMPLATE, null, 4)
|
||||
await fs.writeFile(configPath, content, "utf-8")
|
||||
|
||||
console.warn(`✅ Created ${configPath}`)
|
||||
console.warn("\nConfiguration options:")
|
||||
console.warn(" redis.host - Redis server host (default: localhost)")
|
||||
console.warn(" redis.port - Redis server port (default: 6379)")
|
||||
console.warn(" llm.model - Ollama model name (default: qwen2.5-coder:7b-instruct)")
|
||||
console.warn(" llm.temperature - LLM temperature (default: 0.1)")
|
||||
console.warn(" edit.autoApply - Auto-apply edits without confirmation (default: false)")
|
||||
console.warn("\nRun `ipuaro` to start the AI agent.")
|
||||
|
||||
return {
|
||||
success: true,
|
||||
filePath: configPath,
|
||||
}
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
console.error(`❌ Failed to create configuration: ${message}`)
|
||||
return {
|
||||
success: false,
|
||||
error: message,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a file or directory exists.
|
||||
*/
|
||||
async function fileExists(filePath: string): Promise<boolean> {
|
||||
try {
|
||||
await fs.access(filePath)
|
||||
return true
|
||||
} catch {
|
||||
return false
|
||||
}
|
||||
}
|
||||
290
packages/ipuaro/src/cli/commands/onboarding.ts
Normal file
@@ -0,0 +1,290 @@
|
||||
/**
|
||||
* Onboarding checks for CLI.
|
||||
* Validates environment before starting ipuaro.
|
||||
*/
|
||||
|
||||
import { RedisClient } from "../../infrastructure/storage/RedisClient.js"
|
||||
import { OllamaClient } from "../../infrastructure/llm/OllamaClient.js"
|
||||
import { FileScanner } from "../../infrastructure/indexer/FileScanner.js"
|
||||
import type { LLMConfig, RedisConfig } from "../../shared/constants/config.js"
|
||||
|
||||
/**
|
||||
* Result of onboarding checks.
|
||||
*/
|
||||
export interface OnboardingResult {
|
||||
success: boolean
|
||||
redisOk: boolean
|
||||
ollamaOk: boolean
|
||||
modelOk: boolean
|
||||
projectOk: boolean
|
||||
fileCount: number
|
||||
errors: string[]
|
||||
warnings: string[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for onboarding checks.
|
||||
*/
|
||||
export interface OnboardingOptions {
|
||||
redisConfig: RedisConfig
|
||||
llmConfig: LLMConfig
|
||||
projectPath: string
|
||||
maxFiles?: number
|
||||
skipRedis?: boolean
|
||||
skipOllama?: boolean
|
||||
skipModel?: boolean
|
||||
skipProject?: boolean
|
||||
}
|
||||
|
||||
const DEFAULT_MAX_FILES = 10_000
|
||||
|
||||
/**
|
||||
* Check Redis availability.
|
||||
*/
|
||||
export async function checkRedis(config: RedisConfig): Promise<{
|
||||
ok: boolean
|
||||
error?: string
|
||||
}> {
|
||||
const client = new RedisClient(config)
|
||||
|
||||
try {
|
||||
await client.connect()
|
||||
const pingOk = await client.ping()
|
||||
await client.disconnect()
|
||||
|
||||
if (!pingOk) {
|
||||
return {
|
||||
ok: false,
|
||||
error: "Redis ping failed. Server may be overloaded.",
|
||||
}
|
||||
}
|
||||
|
||||
return { ok: true }
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return {
|
||||
ok: false,
|
||||
error: `Cannot connect to Redis: ${message}
|
||||
|
||||
Redis is required for ipuaro to store project indexes and session data.
|
||||
|
||||
Install Redis:
|
||||
macOS: brew install redis && brew services start redis
|
||||
Ubuntu: sudo apt install redis-server && sudo systemctl start redis
|
||||
Docker: docker run -d -p 6379:6379 redis`,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check Ollama availability.
|
||||
*/
|
||||
export async function checkOllama(config: LLMConfig): Promise<{
|
||||
ok: boolean
|
||||
error?: string
|
||||
}> {
|
||||
const client = new OllamaClient(config)
|
||||
|
||||
try {
|
||||
const available = await client.isAvailable()
|
||||
|
||||
if (!available) {
|
||||
return {
|
||||
ok: false,
|
||||
error: `Cannot connect to Ollama at ${config.host}
|
||||
|
||||
Ollama is required for ipuaro to process your requests using local LLMs.
|
||||
|
||||
Install Ollama:
|
||||
macOS: brew install ollama && ollama serve
|
||||
Linux: curl -fsSL https://ollama.com/install.sh | sh && ollama serve
|
||||
Manual: https://ollama.com/download
|
||||
|
||||
After installing, ensure Ollama is running with: ollama serve`,
|
||||
}
|
||||
}
|
||||
|
||||
return { ok: true }
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return {
|
||||
ok: false,
|
||||
error: `Ollama check failed: ${message}`,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check model availability.
|
||||
*/
|
||||
export async function checkModel(config: LLMConfig): Promise<{
|
||||
ok: boolean
|
||||
needsPull: boolean
|
||||
error?: string
|
||||
}> {
|
||||
const client = new OllamaClient(config)
|
||||
|
||||
try {
|
||||
const hasModel = await client.hasModel(config.model)
|
||||
|
||||
if (!hasModel) {
|
||||
return {
|
||||
ok: false,
|
||||
needsPull: true,
|
||||
error: `Model "${config.model}" is not installed.
|
||||
|
||||
Would you like to pull it? This may take a few minutes.
|
||||
Run: ollama pull ${config.model}`,
|
||||
}
|
||||
}
|
||||
|
||||
return { ok: true, needsPull: false }
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return {
|
||||
ok: false,
|
||||
needsPull: false,
|
||||
error: `Model check failed: ${message}`,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Pull model from Ollama.
|
||||
*/
|
||||
export async function pullModel(
|
||||
config: LLMConfig,
|
||||
onProgress?: (status: string) => void,
|
||||
): Promise<{ ok: boolean; error?: string }> {
|
||||
const client = new OllamaClient(config)
|
||||
|
||||
try {
|
||||
onProgress?.(`Pulling model "${config.model}"...`)
|
||||
await client.pullModel(config.model)
|
||||
onProgress?.(`Model "${config.model}" pulled successfully.`)
|
||||
return { ok: true }
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return {
|
||||
ok: false,
|
||||
error: `Failed to pull model: ${message}`,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check project size.
|
||||
*/
|
||||
export async function checkProjectSize(
|
||||
projectPath: string,
|
||||
maxFiles: number = DEFAULT_MAX_FILES,
|
||||
): Promise<{
|
||||
ok: boolean
|
||||
fileCount: number
|
||||
warning?: string
|
||||
}> {
|
||||
const scanner = new FileScanner()
|
||||
|
||||
try {
|
||||
const files = await scanner.scanAll(projectPath)
|
||||
const fileCount = files.length
|
||||
|
||||
if (fileCount > maxFiles) {
|
||||
return {
|
||||
ok: true,
|
||||
fileCount,
|
||||
warning: `Project has ${fileCount.toLocaleString()} files (>${maxFiles.toLocaleString()}).
|
||||
This may take a while to index and use more memory.
|
||||
|
||||
Consider:
|
||||
1. Running ipuaro in a subdirectory: ipuaro ./src
|
||||
2. Adding patterns to .gitignore to exclude unnecessary files
|
||||
3. Using a smaller project for better performance`,
|
||||
}
|
||||
}
|
||||
|
||||
if (fileCount === 0) {
|
||||
return {
|
||||
ok: false,
|
||||
fileCount: 0,
|
||||
warning: `No supported files found in "${projectPath}".
|
||||
|
||||
ipuaro supports: .ts, .tsx, .js, .jsx, .json, .yaml, .yml
|
||||
|
||||
Ensure you're running ipuaro in a project directory with source files.`,
|
||||
}
|
||||
}
|
||||
|
||||
return { ok: true, fileCount }
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return {
|
||||
ok: false,
|
||||
fileCount: 0,
|
||||
warning: `Failed to scan project: ${message}`,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Run all onboarding checks.
|
||||
*/
|
||||
export async function runOnboarding(options: OnboardingOptions): Promise<OnboardingResult> {
|
||||
const errors: string[] = []
|
||||
const warnings: string[] = []
|
||||
const maxFiles = options.maxFiles ?? DEFAULT_MAX_FILES
|
||||
|
||||
let redisOk = true
|
||||
let ollamaOk = true
|
||||
let modelOk = true
|
||||
let projectOk = true
|
||||
let fileCount = 0
|
||||
|
||||
if (!options.skipRedis) {
|
||||
const redisResult = await checkRedis(options.redisConfig)
|
||||
redisOk = redisResult.ok
|
||||
if (!redisOk && redisResult.error) {
|
||||
errors.push(redisResult.error)
|
||||
}
|
||||
}
|
||||
|
||||
if (!options.skipOllama) {
|
||||
const ollamaResult = await checkOllama(options.llmConfig)
|
||||
ollamaOk = ollamaResult.ok
|
||||
if (!ollamaOk && ollamaResult.error) {
|
||||
errors.push(ollamaResult.error)
|
||||
}
|
||||
}
|
||||
|
||||
if (!options.skipModel && ollamaOk) {
|
||||
const modelResult = await checkModel(options.llmConfig)
|
||||
modelOk = modelResult.ok
|
||||
if (!modelOk && modelResult.error) {
|
||||
errors.push(modelResult.error)
|
||||
}
|
||||
}
|
||||
|
||||
if (!options.skipProject) {
|
||||
const projectResult = await checkProjectSize(options.projectPath, maxFiles)
|
||||
projectOk = projectResult.ok
|
||||
fileCount = projectResult.fileCount
|
||||
if (projectResult.warning) {
|
||||
if (projectResult.ok) {
|
||||
warnings.push(projectResult.warning)
|
||||
} else {
|
||||
errors.push(projectResult.warning)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
success: redisOk && ollamaOk && modelOk && projectOk && errors.length === 0,
|
||||
redisOk,
|
||||
ollamaOk,
|
||||
modelOk,
|
||||
projectOk,
|
||||
fileCount,
|
||||
errors,
|
||||
warnings,
|
||||
}
|
||||
}
|
||||
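onboarding.ts above bundles the Redis, Ollama, model and project-size checks into runOnboarding. A sketch of invoking only a subset of the checks via the skip flags, assuming DEFAULT_CONFIG carries the redis and llm sections used elsewhere in this diff; the path and maxFiles values are illustrative:

```typescript
import { DEFAULT_CONFIG } from "../../shared/constants/config.js"
import { runOnboarding } from "./onboarding.js"

// Run only the Redis and project-size checks, e.g. for a command that never
// talks to the LLM.
const result = await runOnboarding({
    redisConfig: DEFAULT_CONFIG.redis,
    llmConfig: DEFAULT_CONFIG.llm,
    projectPath: "/path/to/project",
    maxFiles: 5_000,
    skipOllama: true,
    skipModel: true,
})

for (const warning of result.warnings) console.warn(`⚠️ ${warning}`)
if (!result.success) {
    for (const error of result.errors) console.error(`❌ ${error}`)
    process.exit(1)
}
```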
162
packages/ipuaro/src/cli/commands/start.ts
Normal file
@@ -0,0 +1,162 @@
|
||||
/**
|
||||
* Start command implementation.
|
||||
* Launches the ipuaro TUI.
|
||||
*/
|
||||
|
||||
import * as path from "node:path"
|
||||
import * as readline from "node:readline"
|
||||
import { render } from "ink"
|
||||
import React from "react"
|
||||
import { App, type AppDependencies } from "../../tui/App.js"
|
||||
import { RedisClient } from "../../infrastructure/storage/RedisClient.js"
|
||||
import { RedisStorage } from "../../infrastructure/storage/RedisStorage.js"
|
||||
import { RedisSessionStorage } from "../../infrastructure/storage/RedisSessionStorage.js"
|
||||
import { OllamaClient } from "../../infrastructure/llm/OllamaClient.js"
|
||||
import { ToolRegistry } from "../../infrastructure/tools/registry.js"
|
||||
import { generateProjectName } from "../../infrastructure/storage/schema.js"
|
||||
import { type Config, DEFAULT_CONFIG } from "../../shared/constants/config.js"
|
||||
import { checkModel, pullModel, runOnboarding } from "./onboarding.js"
|
||||
import { registerAllTools } from "./tools-setup.js"
|
||||
|
||||
/**
|
||||
* Options for start command.
|
||||
*/
|
||||
export interface StartOptions {
|
||||
autoApply?: boolean
|
||||
model?: string
|
||||
}
|
||||
|
||||
/**
|
||||
* Result of start command.
|
||||
*/
|
||||
export interface StartResult {
|
||||
success: boolean
|
||||
error?: string
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute the start command.
|
||||
*/
|
||||
export async function executeStart(
|
||||
projectPath: string,
|
||||
options: StartOptions,
|
||||
config: Config = DEFAULT_CONFIG,
|
||||
): Promise<StartResult> {
|
||||
const resolvedPath = path.resolve(projectPath)
|
||||
const projectName = generateProjectName(resolvedPath)
|
||||
|
||||
const llmConfig = {
|
||||
...config.llm,
|
||||
model: options.model ?? config.llm.model,
|
||||
}
|
||||
|
||||
console.warn("🔍 Running pre-flight checks...\n")
|
||||
|
||||
const onboardingResult = await runOnboarding({
|
||||
redisConfig: config.redis,
|
||||
llmConfig,
|
||||
projectPath: resolvedPath,
|
||||
})
|
||||
|
||||
for (const warning of onboardingResult.warnings) {
|
||||
console.warn(`⚠️ ${warning}\n`)
|
||||
}
|
||||
|
||||
if (!onboardingResult.success) {
|
||||
for (const error of onboardingResult.errors) {
|
||||
console.error(`❌ ${error}\n`)
|
||||
}
|
||||
|
||||
if (!onboardingResult.modelOk && onboardingResult.ollamaOk) {
|
||||
const shouldPull = await promptYesNo(
|
||||
`Would you like to pull "${llmConfig.model}"? (y/n): `,
|
||||
)
|
||||
|
||||
if (shouldPull) {
|
||||
const pullResult = await pullModel(llmConfig, console.warn)
|
||||
if (!pullResult.ok) {
|
||||
console.error(`❌ ${pullResult.error ?? "Unknown error"}`)
|
||||
return { success: false, error: pullResult.error }
|
||||
}
|
||||
|
||||
const recheckModel = await checkModel(llmConfig)
|
||||
if (!recheckModel.ok) {
|
||||
console.error("❌ Model still not available after pull.")
|
||||
return { success: false, error: "Model pull failed" }
|
||||
}
|
||||
} else {
|
||||
return { success: false, error: "Model not available" }
|
||||
}
|
||||
} else {
|
||||
return {
|
||||
success: false,
|
||||
error: onboardingResult.errors.join("\n"),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
console.warn(`✅ All checks passed. Found ${String(onboardingResult.fileCount)} files.\n`)
|
||||
console.warn("🚀 Starting ipuaro...\n")
|
||||
|
||||
const redisClient = new RedisClient(config.redis)
|
||||
|
||||
try {
|
||||
await redisClient.connect()
|
||||
|
||||
const storage = new RedisStorage(redisClient, projectName)
|
||||
const sessionStorage = new RedisSessionStorage(redisClient)
|
||||
const llm = new OllamaClient(llmConfig)
|
||||
const tools = new ToolRegistry()
|
||||
|
||||
registerAllTools(tools)
|
||||
|
||||
const deps: AppDependencies = {
|
||||
storage,
|
||||
sessionStorage,
|
||||
llm,
|
||||
tools,
|
||||
}
|
||||
|
||||
const handleExit = (): void => {
|
||||
void redisClient.disconnect()
|
||||
}
|
||||
|
||||
const { waitUntilExit } = render(
|
||||
React.createElement(App, {
|
||||
projectPath: resolvedPath,
|
||||
autoApply: options.autoApply ?? config.edit.autoApply,
|
||||
deps,
|
||||
onExit: handleExit,
|
||||
}),
|
||||
)
|
||||
|
||||
await waitUntilExit()
|
||||
await redisClient.disconnect()
|
||||
|
||||
return { success: true }
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
console.error(`❌ Failed to start ipuaro: ${message}`)
|
||||
await redisClient.disconnect()
|
||||
return { success: false, error: message }
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Simple yes/no prompt for CLI.
|
||||
*/
|
||||
async function promptYesNo(question: string): Promise<boolean> {
|
||||
return new Promise((resolve) => {
|
||||
process.stdout.write(question)
|
||||
|
||||
const rl = readline.createInterface({
|
||||
input: process.stdin,
|
||||
output: process.stdout,
|
||||
})
|
||||
|
||||
rl.once("line", (answer: string) => {
|
||||
rl.close()
|
||||
resolve(answer.toLowerCase() === "y" || answer.toLowerCase() === "yes")
|
||||
})
|
||||
})
|
||||
}
|
||||
59
packages/ipuaro/src/cli/commands/tools-setup.ts
Normal file
@@ -0,0 +1,59 @@
|
||||
/**
|
||||
* Tool registration helper for CLI.
|
||||
* Registers all 18 tools with the tool registry.
|
||||
*/
|
||||
|
||||
import type { IToolRegistry } from "../../application/interfaces/IToolRegistry.js"
|
||||
|
||||
import { GetLinesTool } from "../../infrastructure/tools/read/GetLinesTool.js"
|
||||
import { GetFunctionTool } from "../../infrastructure/tools/read/GetFunctionTool.js"
|
||||
import { GetClassTool } from "../../infrastructure/tools/read/GetClassTool.js"
|
||||
import { GetStructureTool } from "../../infrastructure/tools/read/GetStructureTool.js"
|
||||
|
||||
import { EditLinesTool } from "../../infrastructure/tools/edit/EditLinesTool.js"
|
||||
import { CreateFileTool } from "../../infrastructure/tools/edit/CreateFileTool.js"
|
||||
import { DeleteFileTool } from "../../infrastructure/tools/edit/DeleteFileTool.js"
|
||||
|
||||
import { FindReferencesTool } from "../../infrastructure/tools/search/FindReferencesTool.js"
|
||||
import { FindDefinitionTool } from "../../infrastructure/tools/search/FindDefinitionTool.js"
|
||||
|
||||
import { GetDependenciesTool } from "../../infrastructure/tools/analysis/GetDependenciesTool.js"
|
||||
import { GetDependentsTool } from "../../infrastructure/tools/analysis/GetDependentsTool.js"
|
||||
import { GetComplexityTool } from "../../infrastructure/tools/analysis/GetComplexityTool.js"
|
||||
import { GetTodosTool } from "../../infrastructure/tools/analysis/GetTodosTool.js"
|
||||
|
||||
import { GitStatusTool } from "../../infrastructure/tools/git/GitStatusTool.js"
|
||||
import { GitDiffTool } from "../../infrastructure/tools/git/GitDiffTool.js"
|
||||
import { GitCommitTool } from "../../infrastructure/tools/git/GitCommitTool.js"
|
||||
|
||||
import { RunCommandTool } from "../../infrastructure/tools/run/RunCommandTool.js"
|
||||
import { RunTestsTool } from "../../infrastructure/tools/run/RunTestsTool.js"
|
||||
|
||||
/**
|
||||
* Register all 18 tools with the tool registry.
|
||||
*/
|
||||
export function registerAllTools(registry: IToolRegistry): void {
|
||||
registry.register(new GetLinesTool())
|
||||
registry.register(new GetFunctionTool())
|
||||
registry.register(new GetClassTool())
|
||||
registry.register(new GetStructureTool())
|
||||
|
||||
registry.register(new EditLinesTool())
|
||||
registry.register(new CreateFileTool())
|
||||
registry.register(new DeleteFileTool())
|
||||
|
||||
registry.register(new FindReferencesTool())
|
||||
registry.register(new FindDefinitionTool())
|
||||
|
||||
registry.register(new GetDependenciesTool())
|
||||
registry.register(new GetDependentsTool())
|
||||
registry.register(new GetComplexityTool())
|
||||
registry.register(new GetTodosTool())
|
||||
|
||||
registry.register(new GitStatusTool())
|
||||
registry.register(new GitDiffTool())
|
||||
registry.register(new GitCommitTool())
|
||||
|
||||
registry.register(new RunCommandTool())
|
||||
registry.register(new RunTestsTool())
|
||||
}
|
||||
@@ -1,44 +1,63 @@
|
||||
#!/usr/bin/env node
|
||||
|
||||
/**
|
||||
* ipuaro CLI entry point.
|
||||
* Local AI agent for codebase operations with infinite context feeling.
|
||||
*/
|
||||
|
||||
import { createRequire } from "node:module"
|
||||
import { Command } from "commander"
|
||||
import { executeStart } from "./commands/start.js"
|
||||
import { executeInit } from "./commands/init.js"
|
||||
import { executeIndex } from "./commands/index-cmd.js"
|
||||
import { loadConfig } from "../shared/config/loader.js"
|
||||
|
||||
const require = createRequire(import.meta.url)
|
||||
const pkg = require("../../package.json") as { version: string }
|
||||
|
||||
const program = new Command()
|
||||
|
||||
program
|
||||
.name("ipuaro")
|
||||
.description("Local AI agent for codebase operations with infinite context feeling")
|
||||
.version("0.1.0")
|
||||
.version(pkg.version)
|
||||
|
||||
program
|
||||
.command("start")
|
||||
.command("start", { isDefault: true })
|
||||
.description("Start ipuaro TUI in the current directory")
|
||||
.argument("[path]", "Project path", ".")
|
||||
.option("--auto-apply", "Enable auto-apply mode for edits")
|
||||
.option("--model <name>", "Override LLM model", "qwen2.5-coder:7b-instruct")
|
||||
.action((path: string, options: { autoApply?: boolean; model?: string }) => {
|
||||
const model = options.model ?? "default"
|
||||
const autoApply = options.autoApply ?? false
|
||||
console.warn(`Starting ipuaro in ${path}...`)
|
||||
console.warn(`Model: ${model}`)
|
||||
console.warn(`Auto-apply: ${autoApply ? "enabled" : "disabled"}`)
|
||||
console.warn("\nNot implemented yet. Coming in version 0.11.0!")
|
||||
.option("--model <name>", "Override LLM model")
|
||||
.action(async (projectPath: string, options: { autoApply?: boolean; model?: string }) => {
|
||||
const config = loadConfig(projectPath)
|
||||
const result = await executeStart(projectPath, options, config)
|
||||
if (!result.success) {
|
||||
process.exit(1)
|
||||
}
|
||||
})
|
||||
|
||||
program
|
||||
.command("init")
|
||||
.description("Create .ipuaro.json config file")
|
||||
.action(() => {
|
||||
console.warn("Creating .ipuaro.json...")
|
||||
console.warn("\nNot implemented yet. Coming in version 0.17.0!")
|
||||
.argument("[path]", "Project path", ".")
|
||||
.option("--force", "Overwrite existing config file")
|
||||
.action(async (projectPath: string, options: { force?: boolean }) => {
|
||||
const result = await executeInit(projectPath, options)
|
||||
if (!result.success) {
|
||||
process.exit(1)
|
||||
}
|
||||
})
|
||||
|
||||
program
|
||||
.command("index")
|
||||
.description("Index project without starting TUI")
|
||||
.argument("[path]", "Project path", ".")
|
||||
.action((path: string) => {
|
||||
console.warn(`Indexing ${path}...`)
|
||||
console.warn("\nNot implemented yet. Coming in version 0.3.0!")
|
||||
.action(async (projectPath: string) => {
|
||||
const config = loadConfig(projectPath)
|
||||
const result = await executeIndex(projectPath, config)
|
||||
if (!result.success) {
|
||||
process.exit(1)
|
||||
}
|
||||
})
|
||||
|
||||
program.parse()
|
||||
|
||||
@@ -94,6 +94,12 @@ export class Session {
|
||||
}
|
||||
}
|
||||
|
||||
truncateHistory(maxMessages: number): void {
|
||||
if (this.history.length > maxMessages) {
|
||||
this.history = this.history.slice(-maxMessages)
|
||||
}
|
||||
}
|
||||
|
||||
clearHistory(): void {
|
||||
this.history = []
|
||||
this.context = {
|
||||
|
||||
@@ -21,6 +21,7 @@ export interface ScanResult {
|
||||
type: "file" | "directory" | "symlink"
|
||||
size: number
|
||||
lastModified: number
|
||||
symlinkTarget?: string
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -46,7 +47,7 @@ export interface IIndexer {
|
||||
/**
|
||||
* Parse file content into AST.
|
||||
*/
|
||||
parseFile(content: string, language: "ts" | "tsx" | "js" | "jsx"): FileAST
|
||||
parseFile(content: string, language: "ts" | "tsx" | "js" | "jsx" | "json" | "yaml"): FileAST
|
||||
|
||||
/**
|
||||
* Analyze file and compute metadata.
|
||||
|
||||
@@ -1,26 +1,6 @@
|
||||
import type { ChatMessage } from "../value-objects/ChatMessage.js"
|
||||
import type { ToolCall } from "../value-objects/ToolCall.js"
|
||||
|
||||
/**
|
||||
* Tool parameter definition for LLM.
|
||||
*/
|
||||
export interface ToolParameter {
|
||||
name: string
|
||||
type: "string" | "number" | "boolean" | "array" | "object"
|
||||
description: string
|
||||
required: boolean
|
||||
enum?: string[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Tool definition for LLM function calling.
|
||||
*/
|
||||
export interface ToolDef {
|
||||
name: string
|
||||
description: string
|
||||
parameters: ToolParameter[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Response from LLM.
|
||||
*/
|
||||
@@ -42,12 +22,16 @@ export interface LLMResponse {
|
||||
/**
|
||||
* LLM client service interface (port).
|
||||
* Abstracts the LLM provider.
|
||||
*
|
||||
* Tool definitions should be included in the system prompt as XML format,
|
||||
* not passed as a separate parameter.
|
||||
*/
|
||||
export interface ILLMClient {
|
||||
/**
|
||||
* Send messages to LLM and get response.
|
||||
* Tool calls are extracted from the response content using XML parsing.
|
||||
*/
|
||||
chat(messages: ChatMessage[], tools?: ToolDef[]): Promise<LLMResponse>
|
||||
chat(messages: ChatMessage[]): Promise<LLMResponse>
|
||||
|
||||
/**
|
||||
* Count tokens in text.
|
||||
|
||||
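The ILLMClient hunk above drops the separate tools parameter: tool definitions are embedded in the system prompt and tool calls are parsed back out of the response text. Purely as illustration of that round-trip, a hypothetical response snippet is shown below; the real tag names and parsing rules live in prompts.ts and ResponseParser.ts, which are not part of this excerpt:

```typescript
// Hypothetical tag format, not the one actually defined in prompts.ts.
const responseContent = `
I'll read the file first.
<tool_call>
    <name>get_lines</name>
    <params>{"path": "src/index.ts", "start": 1, "end": 40}</params>
</tool_call>
`

// parseToolCalls(responseContent) would return ToolCall objects (id, name, params)
// that ExecuteTool can run; no ToolDef array is passed to chat() anymore.
```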
@@ -52,6 +52,8 @@ export interface FunctionInfo {
isExported: boolean
/** Return type (if available) */
returnType?: string
/** Decorators applied to the function (e.g., ["@Get(':id')", "@Auth()"]) */
decorators?: string[]
}

export interface MethodInfo {
@@ -69,6 +71,8 @@ export interface MethodInfo {
visibility: "public" | "private" | "protected"
/** Whether it's static */
isStatic: boolean
/** Decorators applied to the method (e.g., ["@Get(':id')", "@UseGuards(AuthGuard)"]) */
decorators?: string[]
}

export interface PropertyInfo {
@@ -105,6 +109,8 @@ export interface ClassInfo {
isExported: boolean
/** Whether class is abstract */
isAbstract: boolean
/** Decorators applied to the class (e.g., ["@Controller('users')", "@Injectable()"]) */
decorators?: string[]
}

export interface InterfaceInfo {
@@ -129,6 +135,30 @@ export interface TypeAliasInfo {
line: number
/** Whether it's exported */
isExported: boolean
/** Type definition (e.g., "string", "User & Admin", "{ id: string }") */
definition?: string
}

export interface EnumMemberInfo {
/** Member name */
name: string
/** Member value (string or number, if specified) */
value?: string | number
}

export interface EnumInfo {
/** Enum name */
name: string
/** Start line number */
lineStart: number
/** End line number */
lineEnd: number
/** Enum members with values */
members: EnumMemberInfo[]
/** Whether it's exported */
isExported: boolean
/** Whether it's a const enum */
isConst: boolean
}

export interface FileAST {
@@ -144,6 +174,8 @@ export interface FileAST {
interfaces: InterfaceInfo[]
/** Type alias declarations */
typeAliases: TypeAliasInfo[]
/** Enum declarations */
enums: EnumInfo[]
/** Whether parsing encountered errors */
parseError: boolean
/** Parse error message if any */
@@ -158,6 +190,7 @@ export function createEmptyFileAST(): FileAST {
classes: [],
interfaces: [],
typeAliases: [],
enums: [],
parseError: false,
}
}

@@ -26,6 +26,12 @@ export interface FileMeta {
isEntryPoint: boolean
/** File type classification */
fileType: "source" | "test" | "config" | "types" | "unknown"
/** Impact score (0-100): percentage of codebase that depends on this file */
impactScore: number
/** Count of files that depend on this file transitively (including indirect dependents) */
transitiveDepCount: number
/** Count of files this file depends on transitively (including indirect dependencies) */
transitiveDepByCount: number
}

export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
@@ -41,6 +47,9 @@ export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
isHub: false,
isEntryPoint: false,
fileType: "unknown",
impactScore: 0,
transitiveDepCount: 0,
transitiveDepByCount: 0,
...partial,
}
}
@@ -48,3 +57,20 @@ export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
export function isHubFile(dependentCount: number): boolean {
return dependentCount > 5
}

/**
* Calculate impact score based on number of dependents and total files.
* Impact score represents what percentage of the codebase depends on this file.
* @param dependentCount - Number of files that depend on this file
* @param totalFiles - Total number of files in the project
* @returns Impact score from 0 to 100
*/
export function calculateImpactScore(dependentCount: number, totalFiles: number): number {
if (totalFiles <= 1) {
return 0
}
// Exclude the file itself from the total
const maxPossibleDependents = totalFiles - 1
const score = (dependentCount / maxPossibleDependents) * 100
return Math.round(Math.min(100, score))
}

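A minimal usage sketch of the impact-score helpers above (illustrative only, not part of the diff; the import path is an assumption based on the surrounding hunks):

// Hypothetical example: 10 of 50 indexed files depend on a given file.
import { calculateImpactScore, isHubFile } from "./domain/value-objects/FileMeta.js"

const score = calculateImpactScore(10, 50) // Math.round((10 / 49) * 100) === 20
const hub = isHubFile(10) // true: more than 5 direct dependents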
@@ -21,5 +21,8 @@ export * from "./shared/index.js"
// Infrastructure exports
export * from "./infrastructure/index.js"

// TUI exports
export * from "./tui/index.js"

// Version
export const VERSION = pkg.version

@@ -3,3 +3,4 @@ export * from "./storage/index.js"
export * from "./indexer/index.js"
export * from "./llm/index.js"
export * from "./tools/index.js"
export * from "./security/index.js"

@@ -2,8 +2,11 @@ import { builtinModules } from "node:module"
import Parser from "tree-sitter"
import TypeScript from "tree-sitter-typescript"
import JavaScript from "tree-sitter-javascript"
import JSON from "tree-sitter-json"
import * as yamlParser from "yaml"
import {
createEmptyFileAST,
type EnumMemberInfo,
type ExportInfo,
type FileAST,
type ImportInfo,
@@ -13,7 +16,7 @@ import {
} from "../../domain/value-objects/FileAST.js"
import { FieldName, NodeType } from "./tree-sitter-types.js"

type Language = "ts" | "tsx" | "js" | "jsx"
type Language = "ts" | "tsx" | "js" | "jsx" | "json" | "yaml"
type SyntaxNode = Parser.SyntaxNode

/**
@@ -39,12 +42,20 @@ export class ASTParser {
jsParser.setLanguage(JavaScript)
this.parsers.set("js", jsParser)
this.parsers.set("jsx", jsParser)

const jsonParser = new Parser()
jsonParser.setLanguage(JSON)
this.parsers.set("json", jsonParser)
}

/**
* Parse source code and extract AST information.
*/
parse(content: string, language: Language): FileAST {
if (language === "yaml") {
return this.parseYAML(content)
}

const parser = this.parsers.get(language)
if (!parser) {
return {
@@ -75,8 +86,77 @@ export class ASTParser {
}
}

/**
* Parse YAML content using yaml package.
*/
private parseYAML(content: string): FileAST {
const ast = createEmptyFileAST()

try {
const doc = yamlParser.parseDocument(content)

if (doc.errors.length > 0) {
return {
...createEmptyFileAST(),
parseError: true,
parseErrorMessage: doc.errors[0].message,
}
}

const contents = doc.contents

if (yamlParser.isSeq(contents)) {
ast.exports.push({
name: "(array)",
line: 1,
isDefault: false,
kind: "variable",
})
} else if (yamlParser.isMap(contents)) {
for (const item of contents.items) {
if (yamlParser.isPair(item) && yamlParser.isScalar(item.key)) {
const keyRange = item.key.range
const line = keyRange ? this.getLineFromOffset(content, keyRange[0]) : 1
ast.exports.push({
name: String(item.key.value),
line,
isDefault: false,
kind: "variable",
})
}
}
}

return ast
} catch (error) {
return {
...createEmptyFileAST(),
parseError: true,
parseErrorMessage: error instanceof Error ? error.message : "YAML parse error",
}
}
}

/**
* Get line number from character offset.
*/
private getLineFromOffset(content: string, offset: number): number {
let line = 1
for (let i = 0; i < offset && i < content.length; i++) {
if (content[i] === "\n") {
line++
}
}
return line
}

private extractAST(root: SyntaxNode, language: Language): FileAST {
const ast = createEmptyFileAST()

if (language === "json") {
return this.extractJSONStructure(root, ast)
}

const isTypeScript = language === "ts" || language === "tsx"

for (const child of root.children) {
@@ -113,6 +193,11 @@ export class ASTParser {
this.extractTypeAlias(node, ast, false)
}
break
case NodeType.ENUM_DECLARATION:
if (isTypeScript) {
this.extractEnum(node, ast, false)
}
break
}
}

@@ -179,13 +264,15 @@ export class ASTParser {
const declaration = node.childForFieldName(FieldName.DECLARATION)

if (declaration) {
const decorators = this.extractDecoratorsFromSiblings(declaration)

switch (declaration.type) {
case NodeType.FUNCTION_DECLARATION:
this.extractFunction(declaration, ast, true)
this.extractFunction(declaration, ast, true, decorators)
this.addExportInfo(ast, declaration, "function", isDefault)
break
case NodeType.CLASS_DECLARATION:
this.extractClass(declaration, ast, true)
this.extractClass(declaration, ast, true, decorators)
this.addExportInfo(ast, declaration, "class", isDefault)
break
case NodeType.INTERFACE_DECLARATION:
@@ -196,6 +283,10 @@ export class ASTParser {
this.extractTypeAlias(declaration, ast, true)
this.addExportInfo(ast, declaration, "type", isDefault)
break
case NodeType.ENUM_DECLARATION:
this.extractEnum(declaration, ast, true)
this.addExportInfo(ast, declaration, "type", isDefault)
break
case NodeType.LEXICAL_DECLARATION:
this.extractLexicalDeclaration(declaration, ast, true)
break
@@ -220,7 +311,12 @@ export class ASTParser {
}
}

private extractFunction(node: SyntaxNode, ast: FileAST, isExported: boolean): void {
private extractFunction(
node: SyntaxNode,
ast: FileAST,
isExported: boolean,
externalDecorators: string[] = [],
): void {
const nameNode = node.childForFieldName(FieldName.NAME)
if (!nameNode) {
return
@@ -230,6 +326,9 @@ export class ASTParser {
const isAsync = node.children.some((c) => c.type === NodeType.ASYNC)
const returnTypeNode = node.childForFieldName(FieldName.RETURN_TYPE)

const nodeDecorators = this.extractNodeDecorators(node)
const decorators = [...externalDecorators, ...nodeDecorators]

ast.functions.push({
name: nameNode.text,
lineStart: node.startPosition.row + 1,
@@ -238,6 +337,7 @@ export class ASTParser {
isAsync,
isExported,
returnType: returnTypeNode?.text?.replace(/^:\s*/, ""),
decorators,
})
}

@@ -253,6 +353,7 @@ export class ASTParser {
|
||||
) {
|
||||
const params = this.extractParameters(valueNode)
|
||||
const isAsync = valueNode.children.some((c) => c.type === NodeType.ASYNC)
|
||||
const returnTypeNode = valueNode.childForFieldName(FieldName.RETURN_TYPE)
|
||||
|
||||
ast.functions.push({
|
||||
name: nameNode?.text ?? "",
|
||||
@@ -261,6 +362,8 @@ export class ASTParser {
|
||||
params,
|
||||
isAsync,
|
||||
isExported,
|
||||
returnType: returnTypeNode?.text?.replace(/^:\s*/, ""),
|
||||
decorators: [],
|
||||
})
|
||||
|
||||
if (isExported) {
|
||||
@@ -283,7 +386,12 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
private extractClass(node: SyntaxNode, ast: FileAST, isExported: boolean): void {
|
||||
private extractClass(
|
||||
node: SyntaxNode,
|
||||
ast: FileAST,
|
||||
isExported: boolean,
|
||||
externalDecorators: string[] = [],
|
||||
): void {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
if (!nameNode) {
|
||||
return
|
||||
@@ -294,14 +402,19 @@ export class ASTParser {
|
||||
const properties: PropertyInfo[] = []
|
||||
|
||||
if (body) {
|
||||
let pendingDecorators: string[] = []
|
||||
for (const member of body.children) {
|
||||
if (member.type === NodeType.METHOD_DEFINITION) {
|
||||
methods.push(this.extractMethod(member))
|
||||
if (member.type === NodeType.DECORATOR) {
|
||||
pendingDecorators.push(this.formatDecorator(member))
|
||||
} else if (member.type === NodeType.METHOD_DEFINITION) {
|
||||
methods.push(this.extractMethod(member, pendingDecorators))
|
||||
pendingDecorators = []
|
||||
} else if (
|
||||
member.type === NodeType.PUBLIC_FIELD_DEFINITION ||
|
||||
member.type === NodeType.FIELD_DEFINITION
|
||||
) {
|
||||
properties.push(this.extractProperty(member))
|
||||
pendingDecorators = []
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -309,6 +422,9 @@ export class ASTParser {
|
||||
const { extendsName, implementsList } = this.extractClassHeritage(node)
|
||||
const isAbstract = node.children.some((c) => c.type === NodeType.ABSTRACT)
|
||||
|
||||
const nodeDecorators = this.extractNodeDecorators(node)
|
||||
const decorators = [...externalDecorators, ...nodeDecorators]
|
||||
|
||||
ast.classes.push({
|
||||
name: nameNode.text,
|
||||
lineStart: node.startPosition.row + 1,
|
||||
@@ -319,6 +435,7 @@ export class ASTParser {
|
||||
implements: implementsList,
|
||||
isExported,
|
||||
isAbstract,
|
||||
decorators,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -372,7 +489,7 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
private extractMethod(node: SyntaxNode): MethodInfo {
|
||||
private extractMethod(node: SyntaxNode, decorators: string[] = []): MethodInfo {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
const params = this.extractParameters(node)
|
||||
const isAsync = node.children.some((c) => c.type === NodeType.ASYNC)
|
||||
@@ -394,6 +511,7 @@ export class ASTParser {
|
||||
isAsync,
|
||||
visibility,
|
||||
isStatic,
|
||||
decorators,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -473,13 +591,86 @@ export class ASTParser {
|
||||
return
|
||||
}
|
||||
|
||||
const valueNode = node.childForFieldName(FieldName.VALUE)
|
||||
const definition = valueNode?.text
|
||||
|
||||
ast.typeAliases.push({
|
||||
name: nameNode.text,
|
||||
line: node.startPosition.row + 1,
|
||||
isExported,
|
||||
definition,
|
||||
})
|
||||
}
|
||||
|
||||
private extractEnum(node: SyntaxNode, ast: FileAST, isExported: boolean): void {
|
||||
const nameNode = node.childForFieldName(FieldName.NAME)
|
||||
if (!nameNode) {
|
||||
return
|
||||
}
|
||||
|
||||
const body = node.childForFieldName(FieldName.BODY)
|
||||
const members: EnumMemberInfo[] = []
|
||||
|
||||
if (body) {
|
||||
for (const child of body.children) {
|
||||
if (child.type === NodeType.ENUM_ASSIGNMENT) {
|
||||
const memberName = child.childForFieldName(FieldName.NAME)
|
||||
const memberValue = child.childForFieldName(FieldName.VALUE)
|
||||
if (memberName) {
|
||||
members.push({
|
||||
name: memberName.text,
|
||||
value: this.parseEnumValue(memberValue),
|
||||
})
|
||||
}
|
||||
} else if (
|
||||
child.type === NodeType.IDENTIFIER ||
|
||||
child.type === NodeType.PROPERTY_IDENTIFIER
|
||||
) {
|
||||
members.push({
|
||||
name: child.text,
|
||||
value: undefined,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const isConst = node.children.some((c) => c.text === "const")
|
||||
|
||||
ast.enums.push({
|
||||
name: nameNode.text,
|
||||
lineStart: node.startPosition.row + 1,
|
||||
lineEnd: node.endPosition.row + 1,
|
||||
members,
|
||||
isExported,
|
||||
isConst,
|
||||
})
|
||||
}
|
||||
|
||||
private parseEnumValue(valueNode: SyntaxNode | null): string | number | undefined {
|
||||
if (!valueNode) {
|
||||
return undefined
|
||||
}
|
||||
|
||||
const text = valueNode.text
|
||||
|
||||
if (valueNode.type === "number") {
|
||||
return Number(text)
|
||||
}
|
||||
|
||||
if (valueNode.type === "string") {
|
||||
return this.getStringValue(valueNode)
|
||||
}
|
||||
|
||||
if (valueNode.type === "unary_expression" && text.startsWith("-")) {
|
||||
const num = Number(text)
|
||||
if (!isNaN(num)) {
|
||||
return num
|
||||
}
|
||||
}
|
||||
|
||||
return text
|
||||
}
|
||||
|
||||
private extractParameters(node: SyntaxNode): ParameterInfo[] {
|
||||
const params: ParameterInfo[] = []
|
||||
const paramsNode = node.childForFieldName(FieldName.PARAMETERS)
|
||||
@@ -528,6 +719,49 @@ export class ASTParser {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a decorator node to a string like "@Get(':id')" or "@Injectable()".
|
||||
*/
|
||||
private formatDecorator(node: SyntaxNode): string {
|
||||
return node.text.replace(/\s+/g, " ").trim()
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract decorators that are direct children of a node.
|
||||
* In tree-sitter, decorators are children of the class/function declaration.
|
||||
*/
|
||||
private extractNodeDecorators(node: SyntaxNode): string[] {
|
||||
const decorators: string[] = []
|
||||
for (const child of node.children) {
|
||||
if (child.type === NodeType.DECORATOR) {
|
||||
decorators.push(this.formatDecorator(child))
|
||||
}
|
||||
}
|
||||
return decorators
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract decorators from sibling nodes before the current node.
|
||||
* Decorators appear as children before the declaration in export statements.
|
||||
*/
|
||||
private extractDecoratorsFromSiblings(node: SyntaxNode): string[] {
|
||||
const decorators: string[] = []
|
||||
const parent = node.parent
|
||||
if (!parent) {
|
||||
return decorators
|
||||
}
|
||||
|
||||
for (const sibling of parent.children) {
|
||||
if (sibling.type === NodeType.DECORATOR) {
|
||||
decorators.push(this.formatDecorator(sibling))
|
||||
} else if (sibling === node) {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return decorators
|
||||
}
|
||||
|
||||
private classifyImport(from: string): ImportInfo["type"] {
|
||||
if (from.startsWith(".") || from.startsWith("/")) {
|
||||
return "internal"
|
||||
@@ -548,4 +782,37 @@ export class ASTParser {
|
||||
}
|
||||
return text
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract structure from JSON file.
|
||||
* For JSON files, we extract top-level keys from objects.
|
||||
*/
|
||||
private extractJSONStructure(root: SyntaxNode, ast: FileAST): FileAST {
|
||||
for (const child of root.children) {
|
||||
if (child.type === "object") {
|
||||
this.extractJSONKeys(child, ast)
|
||||
}
|
||||
}
|
||||
return ast
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract keys from JSON object.
|
||||
*/
|
||||
private extractJSONKeys(node: SyntaxNode, ast: FileAST): void {
|
||||
for (const child of node.children) {
|
||||
if (child.type === "pair") {
|
||||
const keyNode = child.childForFieldName("key")
|
||||
if (keyNode) {
|
||||
const keyName = this.getStringValue(keyNode)
|
||||
ast.exports.push({
|
||||
name: keyName,
|
||||
line: keyNode.startPosition.row + 1,
|
||||
isDefault: false,
|
||||
kind: "variable",
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -96,12 +96,27 @@ export class FileScanner {
|
||||
const stats = await this.safeStats(fullPath)
|
||||
|
||||
if (stats) {
|
||||
yield {
|
||||
const type = stats.isSymbolicLink()
|
||||
? "symlink"
|
||||
: stats.isDirectory()
|
||||
? "directory"
|
||||
: "file"
|
||||
|
||||
const result: ScanResult = {
|
||||
path: relativePath,
|
||||
type: "file",
|
||||
type,
|
||||
size: stats.size,
|
||||
lastModified: stats.mtimeMs,
|
||||
}
|
||||
|
||||
if (type === "symlink") {
|
||||
const target = await this.safeReadlink(fullPath)
|
||||
if (target) {
|
||||
result.symlinkTarget = target
|
||||
}
|
||||
}
|
||||
|
||||
yield result
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -127,10 +142,22 @@ export class FileScanner {
|
||||
|
||||
/**
|
||||
* Safely get file stats without throwing.
|
||||
* Uses lstat to get information about symlinks themselves.
|
||||
*/
|
||||
private async safeStats(filePath: string): Promise<Stats | null> {
|
||||
try {
|
||||
return await fs.stat(filePath)
|
||||
return await fs.lstat(filePath)
|
||||
} catch {
|
||||
return null
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Safely read symlink target without throwing.
|
||||
*/
|
||||
private async safeReadlink(filePath: string): Promise<string | null> {
|
||||
try {
|
||||
return await fs.readlink(filePath)
|
||||
} catch {
|
||||
return null
|
||||
}
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import * as path from "node:path"
|
||||
import {
|
||||
calculateImpactScore,
|
||||
type ComplexityMetrics,
|
||||
createFileMeta,
|
||||
type FileMeta,
|
||||
@@ -430,6 +431,7 @@ export class MetaAnalyzer {
|
||||
|
||||
/**
|
||||
* Batch analyze multiple files.
|
||||
* Computes impact scores and transitive dependencies after all files are analyzed.
|
||||
*/
|
||||
analyzeAll(files: Map<string, { ast: FileAST; content: string }>): Map<string, FileMeta> {
|
||||
const allASTs = new Map<string, FileAST>()
|
||||
@@ -443,6 +445,171 @@ export class MetaAnalyzer {
|
||||
results.set(filePath, meta)
|
||||
}
|
||||
|
||||
// Compute impact scores now that we know total file count
|
||||
const totalFiles = results.size
|
||||
for (const [, meta] of results) {
|
||||
meta.impactScore = calculateImpactScore(meta.dependents.length, totalFiles)
|
||||
}
|
||||
|
||||
// Compute transitive dependency counts
|
||||
this.computeTransitiveCounts(results)
|
||||
|
||||
return results
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute transitive dependency counts for all files.
|
||||
* Uses DFS with memoization for efficiency.
|
||||
*/
|
||||
computeTransitiveCounts(metas: Map<string, FileMeta>): void {
|
||||
// Memoization caches
|
||||
const transitiveDepCache = new Map<string, Set<string>>()
|
||||
const transitiveDepByCache = new Map<string, Set<string>>()
|
||||
|
||||
// Compute transitive dependents (files that depend on this file, directly or transitively)
|
||||
for (const [filePath, meta] of metas) {
|
||||
const transitiveDeps = this.getTransitiveDependents(filePath, metas, transitiveDepCache)
|
||||
// Exclude the file itself from count (can happen in cycles)
|
||||
meta.transitiveDepCount = transitiveDeps.has(filePath)
|
||||
? transitiveDeps.size - 1
|
||||
: transitiveDeps.size
|
||||
}
|
||||
|
||||
// Compute transitive dependencies (files this file depends on, directly or transitively)
|
||||
for (const [filePath, meta] of metas) {
|
||||
const transitiveDepsBy = this.getTransitiveDependencies(
|
||||
filePath,
|
||||
metas,
|
||||
transitiveDepByCache,
|
||||
)
|
||||
// Exclude the file itself from count (can happen in cycles)
|
||||
meta.transitiveDepByCount = transitiveDepsBy.has(filePath)
|
||||
? transitiveDepsBy.size - 1
|
||||
: transitiveDepsBy.size
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all files that depend on the given file transitively.
|
||||
* Uses DFS with cycle detection. Caching only at the top level.
|
||||
*/
|
||||
getTransitiveDependents(
|
||||
filePath: string,
|
||||
metas: Map<string, FileMeta>,
|
||||
cache: Map<string, Set<string>>,
|
||||
visited?: Set<string>,
|
||||
): Set<string> {
|
||||
// Return cached result if available (only valid for top-level calls)
|
||||
if (!visited) {
|
||||
const cached = cache.get(filePath)
|
||||
if (cached) {
|
||||
return cached
|
||||
}
|
||||
}
|
||||
|
||||
const isTopLevel = !visited
|
||||
if (!visited) {
|
||||
visited = new Set()
|
||||
}
|
||||
|
||||
// Detect cycles
|
||||
if (visited.has(filePath)) {
|
||||
return new Set()
|
||||
}
|
||||
|
||||
visited.add(filePath)
|
||||
const result = new Set<string>()
|
||||
|
||||
const meta = metas.get(filePath)
|
||||
if (!meta) {
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// Add direct dependents
|
||||
for (const dependent of meta.dependents) {
|
||||
result.add(dependent)
|
||||
|
||||
// Recursively add transitive dependents
|
||||
const transitive = this.getTransitiveDependents(
|
||||
dependent,
|
||||
metas,
|
||||
cache,
|
||||
new Set(visited),
|
||||
)
|
||||
for (const t of transitive) {
|
||||
result.add(t)
|
||||
}
|
||||
}
|
||||
|
||||
// Only cache top-level results (not intermediate results during recursion)
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all files that the given file depends on transitively.
|
||||
* Uses DFS with cycle detection. Caching only at the top level.
|
||||
*/
|
||||
getTransitiveDependencies(
|
||||
filePath: string,
|
||||
metas: Map<string, FileMeta>,
|
||||
cache: Map<string, Set<string>>,
|
||||
visited?: Set<string>,
|
||||
): Set<string> {
|
||||
// Return cached result if available (only valid for top-level calls)
|
||||
if (!visited) {
|
||||
const cached = cache.get(filePath)
|
||||
if (cached) {
|
||||
return cached
|
||||
}
|
||||
}
|
||||
|
||||
const isTopLevel = !visited
|
||||
if (!visited) {
|
||||
visited = new Set()
|
||||
}
|
||||
|
||||
// Detect cycles
|
||||
if (visited.has(filePath)) {
|
||||
return new Set()
|
||||
}
|
||||
|
||||
visited.add(filePath)
|
||||
const result = new Set<string>()
|
||||
|
||||
const meta = metas.get(filePath)
|
||||
if (!meta) {
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// Add direct dependencies
|
||||
for (const dependency of meta.dependencies) {
|
||||
result.add(dependency)
|
||||
|
||||
// Recursively add transitive dependencies
|
||||
const transitive = this.getTransitiveDependencies(
|
||||
dependency,
|
||||
metas,
|
||||
cache,
|
||||
new Set(visited),
|
||||
)
|
||||
for (const t of transitive) {
|
||||
result.add(t)
|
||||
}
|
||||
}
|
||||
|
||||
// Only cache top-level results (not intermediate results during recursion)
|
||||
if (isTopLevel) {
|
||||
cache.set(filePath, result)
|
||||
}
|
||||
return result
|
||||
}
|
||||
}
|
||||
|
||||
@@ -16,6 +16,7 @@ export const NodeType = {
|
||||
CLASS_DECLARATION: "class_declaration",
|
||||
INTERFACE_DECLARATION: "interface_declaration",
|
||||
TYPE_ALIAS_DECLARATION: "type_alias_declaration",
|
||||
ENUM_DECLARATION: "enum_declaration",
|
||||
|
||||
// Clauses
|
||||
IMPORT_CLAUSE: "import_clause",
|
||||
@@ -37,6 +38,11 @@ export const NodeType = {
|
||||
FIELD_DEFINITION: "field_definition",
|
||||
PROPERTY_SIGNATURE: "property_signature",
|
||||
|
||||
// Enum members
|
||||
ENUM_BODY: "enum_body",
|
||||
ENUM_ASSIGNMENT: "enum_assignment",
|
||||
PROPERTY_IDENTIFIER: "property_identifier",
|
||||
|
||||
// Parameters
|
||||
REQUIRED_PARAMETER: "required_parameter",
|
||||
OPTIONAL_PARAMETER: "optional_parameter",
|
||||
@@ -57,6 +63,9 @@ export const NodeType = {
|
||||
DEFAULT: "default",
|
||||
ACCESSIBILITY_MODIFIER: "accessibility_modifier",
|
||||
READONLY: "readonly",
|
||||
|
||||
// Decorators
|
||||
DECORATOR: "decorator",
|
||||
} as const
|
||||
|
||||
export type NodeTypeValue = (typeof NodeType)[keyof typeof NodeType]
|
||||
|
||||
@@ -1,19 +1,17 @@
|
||||
import { type Message, Ollama, type Tool } from "ollama"
|
||||
import type {
|
||||
ILLMClient,
|
||||
LLMResponse,
|
||||
ToolDef,
|
||||
ToolParameter,
|
||||
} from "../../domain/services/ILLMClient.js"
|
||||
import type { ILLMClient, LLMResponse } from "../../domain/services/ILLMClient.js"
|
||||
import type { ChatMessage } from "../../domain/value-objects/ChatMessage.js"
|
||||
import { createToolCall, type ToolCall } from "../../domain/value-objects/ToolCall.js"
|
||||
import type { LLMConfig } from "../../shared/constants/config.js"
|
||||
import { IpuaroError } from "../../shared/errors/IpuaroError.js"
|
||||
import { estimateTokens } from "../../shared/utils/tokens.js"
|
||||
import { parseToolCalls } from "./ResponseParser.js"
|
||||
import { getOllamaNativeTools } from "./toolDefs.js"
|
||||
|
||||
/**
|
||||
* Ollama LLM client implementation.
|
||||
* Wraps the Ollama SDK for chat completions with tool support.
|
||||
* Supports both XML-based and native Ollama tool calling.
|
||||
*/
|
||||
export class OllamaClient implements ILLMClient {
|
||||
private readonly client: Ollama
|
||||
@@ -22,6 +20,7 @@ export class OllamaClient implements ILLMClient {
|
||||
private readonly contextWindow: number
|
||||
private readonly temperature: number
|
||||
private readonly timeout: number
|
||||
private readonly useNativeTools: boolean
|
||||
private abortController: AbortController | null = null
|
||||
|
||||
constructor(config: LLMConfig) {
|
||||
@@ -31,40 +30,25 @@ export class OllamaClient implements ILLMClient {
|
||||
this.contextWindow = config.contextWindow
|
||||
this.temperature = config.temperature
|
||||
this.timeout = config.timeout
|
||||
this.useNativeTools = config.useNativeTools ?? false
|
||||
}
|
||||
|
||||
/**
|
||||
* Send messages to LLM and get response.
|
||||
* Supports both XML-based tool calling and native Ollama tools.
|
||||
*/
|
||||
async chat(messages: ChatMessage[], tools?: ToolDef[]): Promise<LLMResponse> {
|
||||
async chat(messages: ChatMessage[]): Promise<LLMResponse> {
|
||||
const startTime = Date.now()
|
||||
this.abortController = new AbortController()
|
||||
|
||||
try {
|
||||
const ollamaMessages = this.convertMessages(messages)
|
||||
const ollamaTools = tools ? this.convertTools(tools) : undefined
|
||||
|
||||
const response = await this.client.chat({
|
||||
model: this.model,
|
||||
messages: ollamaMessages,
|
||||
tools: ollamaTools,
|
||||
options: {
|
||||
temperature: this.temperature,
|
||||
},
|
||||
stream: false,
|
||||
})
|
||||
|
||||
const timeMs = Date.now() - startTime
|
||||
const toolCalls = this.extractToolCalls(response.message)
|
||||
|
||||
return {
|
||||
content: response.message.content,
|
||||
toolCalls,
|
||||
tokens: response.eval_count ?? estimateTokens(response.message.content),
|
||||
timeMs,
|
||||
truncated: false,
|
||||
stopReason: this.determineStopReason(response, toolCalls),
|
||||
if (this.useNativeTools) {
|
||||
return await this.chatWithNativeTools(ollamaMessages, startTime)
|
||||
}
|
||||
|
||||
return await this.chatWithXMLTools(ollamaMessages, startTime)
|
||||
} catch (error) {
|
||||
if (error instanceof Error && error.name === "AbortError") {
|
||||
throw IpuaroError.llm("Request was aborted")
|
||||
@@ -75,6 +59,131 @@ export class OllamaClient implements ILLMClient {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Chat using XML-based tool calling (legacy mode).
|
||||
*/
|
||||
private async chatWithXMLTools(
|
||||
ollamaMessages: Message[],
|
||||
startTime: number,
|
||||
): Promise<LLMResponse> {
|
||||
const response = await this.client.chat({
|
||||
model: this.model,
|
||||
messages: ollamaMessages,
|
||||
options: {
|
||||
temperature: this.temperature,
|
||||
},
|
||||
stream: false,
|
||||
})
|
||||
|
||||
const timeMs = Date.now() - startTime
|
||||
const parsed = parseToolCalls(response.message.content)
|
||||
|
||||
return {
|
||||
content: parsed.content,
|
||||
toolCalls: parsed.toolCalls,
|
||||
tokens: response.eval_count ?? estimateTokens(response.message.content),
|
||||
timeMs,
|
||||
truncated: false,
|
||||
stopReason: this.determineStopReason(response, parsed.toolCalls),
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Chat using native Ollama tool calling.
|
||||
*/
|
||||
private async chatWithNativeTools(
|
||||
ollamaMessages: Message[],
|
||||
startTime: number,
|
||||
): Promise<LLMResponse> {
|
||||
const nativeTools = getOllamaNativeTools() as Tool[]
|
||||
|
||||
const response = await this.client.chat({
|
||||
model: this.model,
|
||||
messages: ollamaMessages,
|
||||
tools: nativeTools,
|
||||
options: {
|
||||
temperature: this.temperature,
|
||||
},
|
||||
stream: false,
|
||||
})
|
||||
|
||||
const timeMs = Date.now() - startTime
|
||||
let toolCalls = this.parseNativeToolCalls(response.message.tool_calls)
|
||||
|
||||
// Fallback: some models return tool calls as JSON in content
|
||||
if (toolCalls.length === 0 && response.message.content) {
|
||||
toolCalls = this.parseToolCallsFromContent(response.message.content)
|
||||
}
|
||||
|
||||
const content = toolCalls.length > 0 ? "" : response.message.content || ""
|
||||
|
||||
return {
|
||||
content,
|
||||
toolCalls,
|
||||
tokens: response.eval_count ?? estimateTokens(response.message.content || ""),
|
||||
timeMs,
|
||||
truncated: false,
|
||||
stopReason: toolCalls.length > 0 ? "tool_use" : "end",
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse native Ollama tool calls into ToolCall format.
|
||||
*/
|
||||
private parseNativeToolCalls(
|
||||
nativeToolCalls?: { function: { name: string; arguments: Record<string, unknown> } }[],
|
||||
): ToolCall[] {
|
||||
if (!nativeToolCalls || nativeToolCalls.length === 0) {
|
||||
return []
|
||||
}
|
||||
|
||||
return nativeToolCalls.map((tc, index) =>
|
||||
createToolCall(
|
||||
`native_${String(Date.now())}_${String(index)}`,
|
||||
tc.function.name,
|
||||
tc.function.arguments,
|
||||
),
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse tool calls from content (fallback for models that return JSON in content).
|
||||
* Supports format: {"name": "tool_name", "arguments": {...}}
|
||||
*/
|
||||
private parseToolCallsFromContent(content: string): ToolCall[] {
|
||||
const toolCalls: ToolCall[] = []
|
||||
|
||||
// Try to parse JSON objects from content
|
||||
const jsonRegex = /\{[\s\S]*?"name"[\s\S]*?"arguments"[\s\S]*?\}/g
|
||||
const matches = content.match(jsonRegex)
|
||||
|
||||
if (!matches) {
|
||||
return toolCalls
|
||||
}
|
||||
|
||||
for (const match of matches) {
|
||||
try {
|
||||
const parsed = JSON.parse(match) as {
|
||||
name?: string
|
||||
arguments?: Record<string, unknown>
|
||||
}
|
||||
if (parsed.name && typeof parsed.name === "string") {
|
||||
toolCalls.push(
|
||||
createToolCall(
|
||||
`json_${String(Date.now())}_${String(toolCalls.length)}`,
|
||||
parsed.name,
|
||||
parsed.arguments ?? {},
|
||||
),
|
||||
)
|
||||
}
|
||||
} catch {
|
||||
// Invalid JSON, skip
|
||||
}
|
||||
}
|
||||
|
||||
return toolCalls
|
||||
}
|
||||
|
||||
/**
|
||||
* Count tokens in text.
|
||||
* Uses estimation since Ollama doesn't provide a tokenizer endpoint.
|
||||
@@ -205,69 +314,12 @@ export class OllamaClient implements ILLMClient {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert ToolDef array to Ollama Tool format.
|
||||
*/
|
||||
private convertTools(tools: ToolDef[]): Tool[] {
|
||||
return tools.map(
|
||||
(tool): Tool => ({
|
||||
type: "function",
|
||||
function: {
|
||||
name: tool.name,
|
||||
description: tool.description,
|
||||
parameters: {
|
||||
type: "object",
|
||||
properties: this.convertParameters(tool.parameters),
|
||||
required: tool.parameters.filter((p) => p.required).map((p) => p.name),
|
||||
},
|
||||
},
|
||||
}),
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert ToolParameter array to JSON Schema properties.
|
||||
*/
|
||||
private convertParameters(
|
||||
params: ToolParameter[],
|
||||
): Record<string, { type: string; description: string; enum?: string[] }> {
|
||||
const properties: Record<string, { type: string; description: string; enum?: string[] }> =
|
||||
{}
|
||||
|
||||
for (const param of params) {
|
||||
properties[param.name] = {
|
||||
type: param.type,
|
||||
description: param.description,
|
||||
...(param.enum && { enum: param.enum }),
|
||||
}
|
||||
}
|
||||
|
||||
return properties
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract tool calls from Ollama response message.
|
||||
*/
|
||||
private extractToolCalls(message: Message): ToolCall[] {
|
||||
if (!message.tool_calls || message.tool_calls.length === 0) {
|
||||
return []
|
||||
}
|
||||
|
||||
return message.tool_calls.map((tc, index) =>
|
||||
createToolCall(
|
||||
`call_${String(Date.now())}_${String(index)}`,
|
||||
tc.function.name,
|
||||
tc.function.arguments,
|
||||
),
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Determine stop reason from response.
|
||||
*/
|
||||
private determineStopReason(
|
||||
response: { done_reason?: string },
|
||||
toolCalls: ToolCall[],
|
||||
toolCalls: { name: string; params: Record<string, unknown> }[],
|
||||
): "end" | "length" | "tool_use" {
|
||||
if (toolCalls.length > 0) {
|
||||
return "tool_use"
|
||||
|
||||
@@ -27,19 +27,103 @@ const TOOL_CALL_REGEX = /<tool_call\s+name\s*=\s*"([^"]+)">([\s\S]*?)<\/tool_cal
|
||||
const PARAM_REGEX_NAMED = /<param\s+name\s*=\s*"([^"]+)">([\s\S]*?)<\/param>/gi
|
||||
const PARAM_REGEX_ELEMENT = /<([a-z_][a-z0-9_]*)>([\s\S]*?)<\/\1>/gi
|
||||
|
||||
/**
|
||||
* CDATA section pattern.
|
||||
* Matches: <![CDATA[...]]>
|
||||
*/
|
||||
const CDATA_REGEX = /<!\[CDATA\[([\s\S]*?)\]\]>/g
|
||||
|
||||
/**
|
||||
* Valid tool names.
|
||||
* Used for validation to catch typos or hallucinations.
|
||||
*/
|
||||
const VALID_TOOL_NAMES = new Set([
|
||||
"get_lines",
|
||||
"get_function",
|
||||
"get_class",
|
||||
"get_structure",
|
||||
"edit_lines",
|
||||
"create_file",
|
||||
"delete_file",
|
||||
"find_references",
|
||||
"find_definition",
|
||||
"get_dependencies",
|
||||
"get_dependents",
|
||||
"get_complexity",
|
||||
"get_todos",
|
||||
"git_status",
|
||||
"git_diff",
|
||||
"git_commit",
|
||||
"run_command",
|
||||
"run_tests",
|
||||
])
|
||||
|
||||
/**
|
||||
* Tool name aliases for common LLM typos/variations.
|
||||
* Maps incorrect names to correct tool names.
|
||||
*/
|
||||
const TOOL_ALIASES: Record<string, string> = {
|
||||
// get_lines aliases
|
||||
get_functions: "get_lines",
|
||||
read_file: "get_lines",
|
||||
read_lines: "get_lines",
|
||||
get_file: "get_lines",
|
||||
read: "get_lines",
|
||||
// get_function aliases
|
||||
getfunction: "get_function",
|
||||
// get_structure aliases
|
||||
list_files: "get_structure",
|
||||
get_files: "get_structure",
|
||||
list_structure: "get_structure",
|
||||
get_project_structure: "get_structure",
|
||||
// get_todos aliases
|
||||
find_todos: "get_todos",
|
||||
list_todos: "get_todos",
|
||||
// find_references aliases
|
||||
get_references: "find_references",
|
||||
// find_definition aliases
|
||||
get_definition: "find_definition",
|
||||
// edit_lines aliases
|
||||
edit_file: "edit_lines",
|
||||
modify_file: "edit_lines",
|
||||
update_file: "edit_lines",
|
||||
}
|
||||
|
||||
/**
|
||||
* Normalize tool name using aliases.
|
||||
*/
|
||||
function normalizeToolName(name: string): string {
|
||||
const lowerName = name.toLowerCase()
|
||||
return TOOL_ALIASES[lowerName] ?? name
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse tool calls from LLM response text.
|
||||
* Supports XML format: <tool_call name="get_lines"><path>src/index.ts</path></tool_call>
|
||||
* Supports both XML and JSON formats:
|
||||
* - XML: <tool_call name="get_lines"><path>src/index.ts</path></tool_call>
|
||||
* - JSON: {"name": "get_lines", "arguments": {"path": "src/index.ts"}}
|
||||
* Validates tool names and provides helpful error messages.
|
||||
*/
|
||||
export function parseToolCalls(response: string): ParsedResponse {
|
||||
const toolCalls: ToolCall[] = []
|
||||
const parseErrors: string[] = []
|
||||
let content = response
|
||||
|
||||
const matches = [...response.matchAll(TOOL_CALL_REGEX)]
|
||||
// First, try XML format
|
||||
const xmlMatches = [...response.matchAll(TOOL_CALL_REGEX)]
|
||||
|
||||
for (const match of matches) {
|
||||
const [fullMatch, toolName, paramsXml] = match
|
||||
for (const match of xmlMatches) {
|
||||
const [fullMatch, rawToolName, paramsXml] = match
|
||||
|
||||
// Normalize tool name (handle common LLM typos/variations)
|
||||
const toolName = normalizeToolName(rawToolName)
|
||||
|
||||
if (!VALID_TOOL_NAMES.has(toolName)) {
|
||||
parseErrors.push(
|
||||
`Unknown tool "${rawToolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
|
||||
)
|
||||
continue
|
||||
}
|
||||
|
||||
try {
|
||||
const params = parseParameters(paramsXml)
|
||||
@@ -52,7 +136,19 @@ export function parseToolCalls(response: string): ParsedResponse {
|
||||
content = content.replace(fullMatch, "")
|
||||
} catch (error) {
|
||||
const errorMsg = error instanceof Error ? error.message : String(error)
|
||||
parseErrors.push(`Failed to parse tool call "${toolName}": ${errorMsg}`)
|
||||
parseErrors.push(`Failed to parse tool call "${rawToolName}": ${errorMsg}`)
|
||||
}
|
||||
}
|
||||
|
||||
// If no XML tool calls found, try JSON format as fallback
|
||||
if (toolCalls.length === 0) {
|
||||
const jsonResult = parseJsonToolCalls(response)
|
||||
toolCalls.push(...jsonResult.toolCalls)
|
||||
parseErrors.push(...jsonResult.parseErrors)
|
||||
|
||||
// Remove JSON tool calls from content
|
||||
for (const jsonMatch of jsonResult.matchedStrings) {
|
||||
content = content.replace(jsonMatch, "")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -66,6 +162,59 @@ export function parseToolCalls(response: string): ParsedResponse {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* JSON tool call format pattern.
|
||||
* Matches: {"name": "tool_name", "arguments": {...}}
|
||||
*/
|
||||
const JSON_TOOL_CALL_REGEX =
|
||||
/\{\s*"name"\s*:\s*"([^"]+)"\s*,\s*"arguments"\s*:\s*(\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\})\s*\}/g
|
||||
|
||||
/**
|
||||
* Parse tool calls from JSON format in response.
|
||||
* This is a fallback for LLMs that prefer JSON over XML.
|
||||
*/
|
||||
function parseJsonToolCalls(response: string): {
|
||||
toolCalls: ToolCall[]
|
||||
parseErrors: string[]
|
||||
matchedStrings: string[]
|
||||
} {
|
||||
const toolCalls: ToolCall[] = []
|
||||
const parseErrors: string[] = []
|
||||
const matchedStrings: string[] = []
|
||||
|
||||
const matches = [...response.matchAll(JSON_TOOL_CALL_REGEX)]
|
||||
|
||||
for (const match of matches) {
|
||||
const [fullMatch, rawToolName, argsJson] = match
|
||||
matchedStrings.push(fullMatch)
|
||||
|
||||
// Normalize tool name
|
||||
const toolName = normalizeToolName(rawToolName)
|
||||
|
||||
if (!VALID_TOOL_NAMES.has(toolName)) {
|
||||
parseErrors.push(
|
||||
`Unknown tool "${rawToolName}". Valid tools: ${[...VALID_TOOL_NAMES].join(", ")}`,
|
||||
)
|
||||
continue
|
||||
}
|
||||
|
||||
try {
|
||||
const args = JSON.parse(argsJson) as Record<string, unknown>
|
||||
const toolCall = createToolCall(
|
||||
`json_${String(Date.now())}_${String(toolCalls.length)}`,
|
||||
toolName,
|
||||
args,
|
||||
)
|
||||
toolCalls.push(toolCall)
|
||||
} catch (error) {
|
||||
const errorMsg = error instanceof Error ? error.message : String(error)
|
||||
parseErrors.push(`Failed to parse JSON tool call "${rawToolName}": ${errorMsg}`)
|
||||
}
|
||||
}
|
||||
|
||||
return { toolCalls, parseErrors, matchedStrings }
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse parameters from XML content.
|
||||
*/
|
||||
@@ -91,10 +240,16 @@ function parseParameters(xml: string): Record<string, unknown> {
|
||||
|
||||
/**
|
||||
* Parse a value string to appropriate type.
|
||||
* Supports CDATA sections for multiline content.
|
||||
*/
|
||||
function parseValue(value: string): unknown {
|
||||
const trimmed = value.trim()
|
||||
|
||||
const cdataMatches = [...trimmed.matchAll(CDATA_REGEX)]
|
||||
if (cdataMatches.length > 0 && cdataMatches[0][1] !== undefined) {
|
||||
return cdataMatches[0][1]
|
||||
}
|
||||
|
||||
if (trimmed === "true") {
|
||||
return true
|
||||
}
|
||||
|
||||
@@ -11,72 +11,129 @@ export interface ProjectStructure {
|
||||
directories: string[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for building initial context.
|
||||
*/
|
||||
export interface BuildContextOptions {
|
||||
includeSignatures?: boolean
|
||||
includeDepsGraph?: boolean
|
||||
includeCircularDeps?: boolean
|
||||
includeHighImpactFiles?: boolean
|
||||
circularDeps?: string[][]
|
||||
}
|
||||
|
||||
/**
|
||||
* System prompt for the ipuaro AI agent.
|
||||
*/
|
||||
export const SYSTEM_PROMPT = `You are ipuaro, a local AI code assistant specialized in helping developers understand and modify their codebase. You operate within a single project directory and have access to powerful tools for reading, searching, analyzing, and editing code.
|
||||
export const SYSTEM_PROMPT = `You are ipuaro, a local AI code assistant with tools for reading, searching, analyzing, and editing code.
|
||||
|
||||
## Core Principles
|
||||
## When to Use Tools
|
||||
|
||||
1. **Lazy Loading**: You don't have the full code in context. Use tools to fetch exactly what you need.
|
||||
2. **Precision**: Always verify file paths and line numbers before making changes.
|
||||
3. **Safety**: Confirm destructive operations. Never execute dangerous commands.
|
||||
4. **Efficiency**: Minimize context usage. Request only necessary code sections.
|
||||
**Use tools** when the user asks about:
|
||||
- Code content (files, functions, classes)
|
||||
- Project structure
|
||||
- TODOs, complexity, dependencies
|
||||
- Git status, diffs, commits
|
||||
- Running commands or tests
|
||||
|
||||
**Do NOT use tools** for:
|
||||
- Greetings ("Hello", "Hi", "Thanks")
|
||||
- General questions not about this codebase
|
||||
- Clarifying questions back to the user
|
||||
|
||||
## MANDATORY: Tools for Code Questions
|
||||
|
||||
**CRITICAL:** You have ZERO code in your context. To answer ANY question about code, you MUST first call a tool.
|
||||
|
||||
**WRONG:**
|
||||
User: "What's in src/index.ts?"
|
||||
Assistant: "The file likely contains..." ← WRONG! Call a tool!
|
||||
|
||||
**CORRECT:**
|
||||
User: "What's in src/index.ts?"
|
||||
<tool_call name="get_lines">
|
||||
<path>src/index.ts</path>
|
||||
</tool_call>
|
||||
|
||||
## Tool Call Format
|
||||
|
||||
Output this XML format. Do NOT explain before calling - just output the XML:
|
||||
|
||||
<tool_call name="TOOL_NAME">
|
||||
<param1>value1</param1>
|
||||
<param2>value2</param2>
|
||||
</tool_call>
|
||||
|
||||
## Example Interactions
|
||||
|
||||
**Example 1 - Reading a file:**
|
||||
User: "Show me the main function in src/app.ts"
|
||||
<tool_call name="get_function">
|
||||
<path>src/app.ts</path>
|
||||
<name>main</name>
|
||||
</tool_call>
|
||||
|
||||
**Example 2 - Finding TODOs:**
|
||||
User: "Are there any TODO comments?"
|
||||
<tool_call name="get_todos">
|
||||
</tool_call>
|
||||
|
||||
**Example 3 - Project structure:**
|
||||
User: "What files are in this project?"
|
||||
<tool_call name="get_structure">
|
||||
<path>.</path>
|
||||
</tool_call>
|
||||
|
||||
## Available Tools
|
||||
|
||||
### Reading Tools
|
||||
- \`get_lines\`: Get specific lines from a file
|
||||
- \`get_function\`: Get a function by name
|
||||
- \`get_class\`: Get a class by name
|
||||
- \`get_structure\`: Get project directory structure
|
||||
### Reading
|
||||
- get_lines(path, start?, end?) - Read file lines
|
||||
- get_function(path, name) - Get function by name
|
||||
- get_class(path, name) - Get class by name
|
||||
- get_structure(path?, depth?) - List project files
|
||||
|
||||
### Editing Tools (require confirmation)
|
||||
- \`edit_lines\`: Replace specific lines in a file
|
||||
- \`create_file\`: Create a new file
|
||||
- \`delete_file\`: Delete a file
|
||||
### Analysis
|
||||
- get_todos(path?, type?) - Find TODO/FIXME comments
|
||||
- get_dependencies(path) - What this file imports
|
||||
- get_dependents(path) - What imports this file
|
||||
- get_complexity(path?) - Code complexity metrics
|
||||
- find_references(symbol) - Find all usages of a symbol
|
||||
- find_definition(symbol) - Find where symbol is defined
|
||||
|
||||
### Search Tools
|
||||
- \`find_references\`: Find all usages of a symbol
|
||||
- \`find_definition\`: Find where a symbol is defined
|
||||
### Editing (requires confirmation)
|
||||
- edit_lines(path, start, end, content) - Modify file lines
|
||||
- create_file(path, content) - Create new file
|
||||
- delete_file(path) - Delete a file
|
||||
|
||||
### Analysis Tools
|
||||
- \`get_dependencies\`: Get files this file imports
|
||||
- \`get_dependents\`: Get files that import this file
|
||||
- \`get_complexity\`: Get complexity metrics
|
||||
- \`get_todos\`: Find TODO/FIXME comments
|
||||
### Git
|
||||
- git_status() - Repository status
|
||||
- git_diff(path?, staged?) - Show changes
|
||||
- git_commit(message, files?) - Create commit
|
||||
|
||||
### Git Tools
|
||||
- \`git_status\`: Get repository status
|
||||
- \`git_diff\`: Get uncommitted changes
|
||||
- \`git_commit\`: Create a commit (requires confirmation)
|
||||
### Commands
|
||||
- run_command(command, timeout?) - Execute shell command
|
||||
- run_tests(path?, filter?) - Run test suite
|
||||
|
||||
### Run Tools
|
||||
- \`run_command\`: Execute a shell command (security checked)
|
||||
- \`run_tests\`: Run the test suite
|
||||
## Rules
|
||||
|
||||
## Response Guidelines
|
||||
1. **ALWAYS call a tool first** when asked about code - you cannot see any files
|
||||
2. **Output XML directly** - don't say "I will use..." just output the tool call
|
||||
3. **Wait for results** before making conclusions
|
||||
4. **Be concise** in your responses
|
||||
5. **Verify before editing** - always read code before modifying it
|
||||
6. **Stay safe** - never execute destructive commands without user confirmation`
|
||||
|
||||
1. **Be concise**: Don't repeat information already in context.
|
||||
2. **Show your work**: Explain what tools you're using and why.
|
||||
3. **Verify before editing**: Always read the target code before modifying it.
|
||||
4. **Handle errors gracefully**: If a tool fails, explain what went wrong and suggest alternatives.
|
||||
/**
|
||||
* Tool usage reminder - appended to messages to reinforce tool usage.
|
||||
* This is added as the last system message before LLM call.
|
||||
*/
|
||||
export const TOOL_REMINDER = `⚠️ REMINDER: To answer this question, you MUST use a tool first.
|
||||
Output the <tool_call> XML directly. Do NOT describe what you will do - just call the tool.
|
||||
|
||||
## Code Editing Rules
|
||||
|
||||
1. Always use \`get_lines\` or \`get_function\` before \`edit_lines\`.
|
||||
2. Provide exact line numbers for edits.
|
||||
3. For large changes, break into multiple small edits.
|
||||
4. After editing, suggest running tests if available.
|
||||
|
||||
## Safety Rules
|
||||
|
||||
1. Never execute commands that could harm the system.
|
||||
2. Never expose sensitive data (API keys, passwords).
|
||||
3. Always confirm file deletions and destructive git operations.
|
||||
4. Stay within the project directory.
|
||||
|
||||
When you need to perform an action, use the appropriate tool. Think step by step about what information you need and which tools will provide it most efficiently.`
|
||||
Example - if asked about a file, output:
|
||||
<tool_call name="get_lines">
|
||||
<path>the/file/path.ts</path>
|
||||
</tool_call>`
|
||||
|
||||
/**
|
||||
* Build initial context from project structure and AST metadata.
|
||||
@@ -86,12 +143,38 @@ export function buildInitialContext(
|
||||
structure: ProjectStructure,
|
||||
asts: Map<string, FileAST>,
|
||||
metas?: Map<string, FileMeta>,
|
||||
options?: BuildContextOptions,
|
||||
): string {
|
||||
const sections: string[] = []
|
||||
const includeSignatures = options?.includeSignatures ?? true
|
||||
const includeDepsGraph = options?.includeDepsGraph ?? true
|
||||
const includeCircularDeps = options?.includeCircularDeps ?? true
|
||||
const includeHighImpactFiles = options?.includeHighImpactFiles ?? true
|
||||
|
||||
sections.push(formatProjectHeader(structure))
|
||||
sections.push(formatDirectoryTree(structure))
|
||||
sections.push(formatFileOverview(asts, metas))
|
||||
sections.push(formatFileOverview(asts, metas, includeSignatures))
|
||||
|
||||
if (includeDepsGraph && metas && metas.size > 0) {
|
||||
const depsGraph = formatDependencyGraph(metas)
|
||||
if (depsGraph) {
|
||||
sections.push(depsGraph)
|
||||
}
|
||||
}
|
||||
|
||||
if (includeHighImpactFiles && metas && metas.size > 0) {
|
||||
const highImpactSection = formatHighImpactFiles(metas)
|
||||
if (highImpactSection) {
|
||||
sections.push(highImpactSection)
|
||||
}
|
||||
}
|
||||
|
||||
if (includeCircularDeps && options?.circularDeps && options.circularDeps.length > 0) {
|
||||
const circularDepsSection = formatCircularDeps(options.circularDeps)
|
||||
if (circularDepsSection) {
|
||||
sections.push(circularDepsSection)
|
||||
}
|
||||
}
|
||||
|
||||
return sections.join("\n\n")
|
||||
}
|
||||
@@ -127,7 +210,11 @@ function formatDirectoryTree(structure: ProjectStructure): string {
|
||||
/**
|
||||
* Format file overview with AST summaries.
|
||||
*/
|
||||
function formatFileOverview(asts: Map<string, FileAST>, metas?: Map<string, FileMeta>): string {
|
||||
function formatFileOverview(
|
||||
asts: Map<string, FileAST>,
|
||||
metas?: Map<string, FileMeta>,
|
||||
includeSignatures = true,
|
||||
): string {
|
||||
const lines: string[] = ["## Files", ""]
|
||||
|
||||
const sortedPaths = [...asts.keys()].sort()
|
||||
@@ -138,16 +225,183 @@ function formatFileOverview(asts: Map<string, FileAST>, metas?: Map<string, File
|
||||
}
|
||||
|
||||
const meta = metas?.get(path)
|
||||
lines.push(formatFileSummary(path, ast, meta))
|
||||
lines.push(formatFileSummary(path, ast, meta, includeSignatures))
|
||||
}
|
||||
|
||||
return lines.join("\n")
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a single file's AST summary.
|
||||
* Format decorators as a prefix string.
|
||||
* Example: "@Get(':id') @Auth() "
|
||||
*/
|
||||
function formatFileSummary(path: string, ast: FileAST, meta?: FileMeta): string {
|
||||
function formatDecoratorsPrefix(decorators: string[] | undefined): string {
|
||||
if (!decorators || decorators.length === 0) {
|
||||
return ""
|
||||
}
|
||||
return `${decorators.join(" ")} `
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a function signature.
|
||||
*/
|
||||
function formatFunctionSignature(fn: FileAST["functions"][0]): string {
|
||||
const decoratorsPrefix = formatDecoratorsPrefix(fn.decorators)
|
||||
const asyncPrefix = fn.isAsync ? "async " : ""
|
||||
const params = fn.params
|
||||
.map((p) => {
|
||||
const optional = p.optional ? "?" : ""
|
||||
const type = p.type ? `: ${p.type}` : ""
|
||||
return `${p.name}${optional}${type}`
|
||||
})
|
||||
.join(", ")
|
||||
const returnType = fn.returnType ? `: ${fn.returnType}` : ""
|
||||
return `${decoratorsPrefix}${asyncPrefix}${fn.name}(${params})${returnType}`
|
||||
}
|
||||
|
||||
/**
|
||||
* Format an interface signature with fields.
|
||||
* Example: "interface User extends Base { id: string, name: string, email?: string }"
|
||||
*/
|
||||
function formatInterfaceSignature(iface: FileAST["interfaces"][0]): string {
|
||||
const extList = iface.extends ?? []
|
||||
const ext = extList.length > 0 ? ` extends ${extList.join(", ")}` : ""
|
||||
|
||||
if (iface.properties.length === 0) {
|
||||
return `interface ${iface.name}${ext}`
|
||||
}
|
||||
|
||||
const fields = iface.properties
|
||||
.map((p) => {
|
||||
const readonly = p.isReadonly ? "readonly " : ""
|
||||
const optional = p.name.endsWith("?") ? "" : ""
|
||||
const type = p.type ? `: ${p.type}` : ""
|
||||
return `${readonly}${p.name}${optional}${type}`
|
||||
})
|
||||
.join(", ")
|
||||
|
||||
return `interface ${iface.name}${ext} { ${fields} }`
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a type alias signature with definition.
|
||||
* Example: "type UserId = string" or "type Handler = (event: Event) => void"
|
||||
*/
|
||||
function formatTypeAliasSignature(type: FileAST["typeAliases"][0]): string {
|
||||
if (!type.definition) {
|
||||
return `type ${type.name}`
|
||||
}
|
||||
|
||||
const definition = truncateDefinition(type.definition, 80)
|
||||
return `type ${type.name} = ${definition}`
|
||||
}
|
||||
|
||||
/**
|
||||
* Format an enum signature with members and values.
|
||||
* Example: "enum Status { Active=1, Inactive=0, Pending=2 }"
|
||||
* Example: "const enum Role { Admin="admin", User="user" }"
|
||||
*/
|
||||
function formatEnumSignature(enumInfo: FileAST["enums"][0]): string {
|
||||
const constPrefix = enumInfo.isConst ? "const " : ""
|
||||
|
||||
if (enumInfo.members.length === 0) {
|
||||
return `${constPrefix}enum ${enumInfo.name}`
|
||||
}
|
||||
|
||||
const membersStr = enumInfo.members
|
||||
.map((m) => {
|
||||
if (m.value === undefined) {
|
||||
return m.name
|
||||
}
|
||||
const valueStr = typeof m.value === "string" ? `"${m.value}"` : String(m.value)
|
||||
return `${m.name}=${valueStr}`
|
||||
})
|
||||
.join(", ")
|
||||
|
||||
const result = `${constPrefix}enum ${enumInfo.name} { ${membersStr} }`
|
||||
|
||||
if (result.length > 100) {
|
||||
return truncateDefinition(result, 100)
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
/**
|
||||
* Truncate long type definitions for display.
|
||||
*/
|
||||
function truncateDefinition(definition: string, maxLength: number): string {
|
||||
const normalized = definition.replace(/\s+/g, " ").trim()
|
||||
if (normalized.length <= maxLength) {
|
||||
return normalized
|
||||
}
|
||||
return `${normalized.slice(0, maxLength - 3)}...`
|
||||
}

/**
 * Format a single file's AST summary.
 * When includeSignatures is true, shows full function signatures.
 * When false, shows compact format with just names.
 */
function formatFileSummary(
    path: string,
    ast: FileAST,
    meta?: FileMeta,
    includeSignatures = true,
): string {
    const flags = formatFileFlags(meta)

    if (!includeSignatures) {
        return formatFileSummaryCompact(path, ast, flags)
    }

    const lines: string[] = []
    lines.push(`### ${path}${flags}`)

    if (ast.functions.length > 0) {
        for (const fn of ast.functions) {
            lines.push(`- ${formatFunctionSignature(fn)}`)
        }
    }

    if (ast.classes.length > 0) {
        for (const cls of ast.classes) {
            const decoratorsPrefix = formatDecoratorsPrefix(cls.decorators)
            const ext = cls.extends ? ` extends ${cls.extends}` : ""
            const impl = cls.implements.length > 0 ? ` implements ${cls.implements.join(", ")}` : ""
            lines.push(`- ${decoratorsPrefix}class ${cls.name}${ext}${impl}`)
        }
    }

    if (ast.interfaces.length > 0) {
        for (const iface of ast.interfaces) {
            lines.push(`- ${formatInterfaceSignature(iface)}`)
        }
    }

    if (ast.typeAliases.length > 0) {
        for (const type of ast.typeAliases) {
            lines.push(`- ${formatTypeAliasSignature(type)}`)
        }
    }

    if (ast.enums && ast.enums.length > 0) {
        for (const enumInfo of ast.enums) {
            lines.push(`- ${formatEnumSignature(enumInfo)}`)
        }
    }

    if (lines.length === 1) {
        return `- ${path}${flags}`
    }

    return lines.join("\n")
}
|
||||
|
||||
/**
|
||||
* Format file summary in compact mode (just names, no signatures).
|
||||
*/
|
||||
function formatFileSummaryCompact(path: string, ast: FileAST, flags: string): string {
|
||||
const parts: string[] = []
|
||||
|
||||
if (ast.functions.length > 0) {
|
||||
@@ -170,9 +424,12 @@ function formatFileSummary(path: string, ast: FileAST, meta?: FileMeta): string
|
||||
parts.push(`type: ${names}`)
|
||||
}
|
||||
|
||||
const summary = parts.length > 0 ? ` [${parts.join(" | ")}]` : ""
|
||||
const flags = formatFileFlags(meta)
|
||||
if (ast.enums && ast.enums.length > 0) {
|
||||
const names = ast.enums.map((e) => e.name).join(", ")
|
||||
parts.push(`enum: ${names}`)
|
||||
}
|
||||
|
||||
const summary = parts.length > 0 ? ` [${parts.join(" | ")}]` : ""
|
||||
return `- ${path}${summary}${flags}`
|
||||
}
|
||||
|
||||
@@ -201,6 +458,220 @@ function formatFileFlags(meta?: FileMeta): string {
|
||||
return flags.length > 0 ? ` (${flags.join(", ")})` : ""
|
||||
}
|
||||
|
||||
/**
 * Shorten a file path for display in dependency graph.
 * Removes common prefixes like "src/" and file extensions.
 */
function shortenPath(path: string): string {
    let short = path
    if (short.startsWith("src/")) {
        short = short.slice(4)
    }
    // Remove common extensions
    short = short.replace(/\.(ts|tsx|js|jsx)$/, "")
    // Remove /index suffix
    short = short.replace(/\/index$/, "")
    return short
}

/**
 * Format a single dependency graph entry.
 * Format: "path: → dep1, dep2 ← dependent1, dependent2"
 */
function formatDepsEntry(path: string, dependencies: string[], dependents: string[]): string {
    const parts: string[] = []
    const shortPath = shortenPath(path)

    if (dependencies.length > 0) {
        const deps = dependencies.map(shortenPath).join(", ")
        parts.push(`→ ${deps}`)
    }

    if (dependents.length > 0) {
        const deps = dependents.map(shortenPath).join(", ")
        parts.push(`← ${deps}`)
    }

    if (parts.length === 0) {
        return ""
    }

    return `${shortPath}: ${parts.join(" ")}`
}

/**
 * Format dependency graph for all files.
 * Shows hub files first, then files with dependencies/dependents.
 *
 * Format:
 * ## Dependency Graph
 * services/user: → types/user, utils/validation ← controllers/user
 * services/auth: → services/user, utils/jwt ← controllers/auth
 */
export function formatDependencyGraph(metas: Map<string, FileMeta>): string | null {
    if (metas.size === 0) {
        return null
    }

    const entries: { path: string; deps: string[]; dependents: string[]; isHub: boolean }[] = []

    for (const [path, meta] of metas) {
        // Only include files that have connections
        if (meta.dependencies.length > 0 || meta.dependents.length > 0) {
            entries.push({
                path,
                deps: meta.dependencies,
                dependents: meta.dependents,
                isHub: meta.isHub,
            })
        }
    }

    if (entries.length === 0) {
        return null
    }

    // Sort: hubs first, then by total connections (desc), then by path
    entries.sort((a, b) => {
        if (a.isHub !== b.isHub) {
            return a.isHub ? -1 : 1
        }
        const aTotal = a.deps.length + a.dependents.length
        const bTotal = b.deps.length + b.dependents.length
        if (aTotal !== bTotal) {
            return bTotal - aTotal
        }
        return a.path.localeCompare(b.path)
    })

    const lines: string[] = ["## Dependency Graph", ""]

    for (const entry of entries) {
        const line = formatDepsEntry(entry.path, entry.deps, entry.dependents)
        if (line) {
            lines.push(line)
        }
    }

    // Return null if only header (no actual entries)
    if (lines.length <= 2) {
        return null
    }

    return lines.join("\n")
}
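A minimal usage sketch. The records carry only the fields this function reads (dependencies, dependents, isHub); the import paths and the cast are assumptions, since the full FileMeta shape lives elsewhere in the package.

```typescript
// Import paths are assumptions about the package layout.
import { formatDependencyGraph } from "../infrastructure/llm/context-formatter.js"
import type { FileMeta } from "../domain/value-objects/FileMeta.js"

const metas = new Map([
    ["src/services/user.ts", {
        dependencies: ["src/types/user.ts"],
        dependents: ["src/controllers/user.ts"],
        isHub: true,
    }],
    ["src/types/user.ts", { dependencies: [], dependents: ["src/services/user.ts"], isHub: false }],
])

// Cast only because the sketch omits the remaining FileMeta fields.
console.log(formatDependencyGraph(metas as unknown as Map<string, FileMeta>))
// ## Dependency Graph
//
// services/user: → types/user ← controllers/user
// types/user: ← services/user
```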

/**
 * Format circular dependencies for display in context.
 * Shows warning section with cycle chains.
 *
 * Format:
 * ## ⚠️ Circular Dependencies
 * - services/user → services/auth → services/user
 * - utils/a → utils/b → utils/c → utils/a
 */
export function formatCircularDeps(cycles: string[][]): string | null {
    if (!cycles || cycles.length === 0) {
        return null
    }

    const lines: string[] = ["## ⚠️ Circular Dependencies", ""]

    for (const cycle of cycles) {
        if (cycle.length === 0) {
            continue
        }
        const formattedCycle = cycle.map(shortenPath).join(" → ")
        lines.push(`- ${formattedCycle}`)
    }

    // Return null if only header (no actual cycles)
    if (lines.length <= 2) {
        return null
    }

    return lines.join("\n")
}
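Continuing the sketch above, cycle chains are plain arrays of file paths; each path is shortened and the chain joined with arrows:

```typescript
const cycles = [["src/services/user.ts", "src/services/auth.ts", "src/services/user.ts"]]

console.log(formatCircularDeps(cycles))
// ## ⚠️ Circular Dependencies
//
// - services/user → services/auth → services/user
```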
|
||||
|
||||
/**
|
||||
* Format high impact files table for display in context.
|
||||
* Shows files with highest impact scores (most dependents).
|
||||
* Includes both direct and transitive dependent counts.
|
||||
*
|
||||
* Format:
|
||||
* ## High Impact Files
|
||||
* | File | Impact | Direct | Transitive |
|
||||
* |------|--------|--------|------------|
|
||||
* | src/utils/validation.ts | 67% | 12 | 24 |
|
||||
*
|
||||
* @param metas - Map of file paths to their metadata
|
||||
* @param limit - Maximum number of files to show (default: 10)
|
||||
* @param minImpact - Minimum impact score to include (default: 5)
|
||||
*/
|
||||
export function formatHighImpactFiles(
|
||||
metas: Map<string, FileMeta>,
|
||||
limit = 10,
|
||||
minImpact = 5,
|
||||
): string | null {
|
||||
if (metas.size === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
// Collect files with impact score >= minImpact
|
||||
const impactFiles: {
|
||||
path: string
|
||||
impact: number
|
||||
dependents: number
|
||||
transitive: number
|
||||
}[] = []
|
||||
|
||||
for (const [path, meta] of metas) {
|
||||
if (meta.impactScore >= minImpact) {
|
||||
impactFiles.push({
|
||||
path,
|
||||
impact: meta.impactScore,
|
||||
dependents: meta.dependents.length,
|
||||
transitive: meta.transitiveDepCount,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
if (impactFiles.length === 0) {
|
||||
return null
|
||||
}
|
||||
|
||||
// Sort by transitive count descending, then by impact, then by path
|
||||
impactFiles.sort((a, b) => {
|
||||
if (a.transitive !== b.transitive) {
|
||||
return b.transitive - a.transitive
|
||||
}
|
||||
if (a.impact !== b.impact) {
|
||||
return b.impact - a.impact
|
||||
}
|
||||
return a.path.localeCompare(b.path)
|
||||
})
|
||||
|
||||
// Take top N files
|
||||
const topFiles = impactFiles.slice(0, limit)
|
||||
|
||||
const lines: string[] = [
|
||||
"## High Impact Files",
|
||||
"",
|
||||
"| File | Impact | Direct | Transitive |",
|
||||
"|------|--------|--------|------------|",
|
||||
]
|
||||
|
||||
for (const file of topFiles) {
|
||||
const shortPath = shortenPath(file.path)
|
||||
const impact = `${String(file.impact)}%`
|
||||
const direct = String(file.dependents)
|
||||
const transitive = String(file.transitive)
|
||||
lines.push(`| ${shortPath} | ${impact} | ${direct} | ${transitive} |`)
|
||||
}
|
||||
|
||||
return lines.join("\n")
|
||||
}
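A usage sketch matching the docstring above; only the fields read here (dependents, impactScore, transitiveDepCount) are filled in, and FileMeta plus the formatter import are assumed as in the dependency-graph example.

```typescript
const impactMetas = new Map([
    ["src/utils/validation.ts", {
        dependents: ["src/a.ts", "src/b.ts"], // direct dependents
        impactScore: 67,                      // value shown in the Impact column
        transitiveDepCount: 24,               // transitive dependents
    }],
])

console.log(formatHighImpactFiles(impactMetas as unknown as Map<string, FileMeta>))
// ## High Impact Files
//
// | File | Impact | Direct | Transitive |
// |------|--------|--------|------------|
// | utils/validation | 67% | 2 | 24 |
```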
|
||||
|
||||
/**
|
||||
* Format line range for display.
|
||||
*/
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
import type { ToolDef } from "../../domain/services/ILLMClient.js"
|
||||
import type { ToolDef } from "../../shared/types/tool-definitions.js"
|
||||
|
||||
/**
|
||||
* Tool definitions for ipuaro LLM.
|
||||
@@ -509,3 +509,87 @@ export function getToolsByCategory(category: string): ToolDef[] {
|
||||
return []
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* =============================================================================
|
||||
* Native Ollama Tools Format
|
||||
* =============================================================================
|
||||
*/
|
||||
|
||||
/**
|
||||
* Ollama native tool definition format.
|
||||
*/
|
||||
export interface OllamaTool {
|
||||
type: "function"
|
||||
function: {
|
||||
name: string
|
||||
description: string
|
||||
parameters: {
|
||||
type: "object"
|
||||
properties: Record<string, OllamaToolProperty>
|
||||
required: string[]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
interface OllamaToolProperty {
|
||||
type: string
|
||||
description: string
|
||||
enum?: string[]
|
||||
items?: { type: string }
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert ToolDef to Ollama native format.
|
||||
*/
|
||||
function convertToOllamaTool(tool: ToolDef): OllamaTool {
|
||||
const properties: Record<string, OllamaToolProperty> = {}
|
||||
const required: string[] = []
|
||||
|
||||
for (const param of tool.parameters) {
|
||||
const prop: OllamaToolProperty = {
|
||||
type: param.type === "array" ? "array" : param.type,
|
||||
description: param.description,
|
||||
}
|
||||
|
||||
if (param.enum) {
|
||||
prop.enum = param.enum
|
||||
}
|
||||
|
||||
if (param.type === "array") {
|
||||
prop.items = { type: "string" }
|
||||
}
|
||||
|
||||
properties[param.name] = prop
|
||||
|
||||
if (param.required) {
|
||||
required.push(param.name)
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
type: "function",
|
||||
function: {
|
||||
name: tool.name,
|
||||
description: tool.description,
|
||||
parameters: {
|
||||
type: "object",
|
||||
properties,
|
||||
required,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* All tools in Ollama native format.
|
||||
* Used when useNativeTools is enabled.
|
||||
*/
|
||||
export const OLLAMA_NATIVE_TOOLS: OllamaTool[] = ALL_TOOLS.map(convertToOllamaTool)
|
||||
|
||||
/**
|
||||
* Get native tool definitions for Ollama.
|
||||
*/
|
||||
export function getOllamaNativeTools(): OllamaTool[] {
|
||||
return OLLAMA_NATIVE_TOOLS
|
||||
}
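To make the mapping concrete, here is a hypothetical ToolDef and the Ollama-format object convertToOllamaTool would produce for it. The tool itself is invented for illustration; the import path is the one used at the top of this file.

```typescript
import type { ToolDef } from "../../shared/types/tool-definitions.js"

const searchTool: ToolDef = {
    name: "search_code",
    description: "Search the indexed project for a pattern",
    parameters: [
        { name: "query", type: "string", description: "Search pattern", required: true },
        { name: "paths", type: "array", description: "Restrict search to these paths", required: false },
    ],
}

// convertToOllamaTool(searchTool) yields:
// {
//   type: "function",
//   function: {
//     name: "search_code",
//     description: "Search the indexed project for a pattern",
//     parameters: {
//       type: "object",
//       properties: {
//         query: { type: "string", description: "Search pattern" },
//         paths: { type: "array", description: "Restrict search to these paths", items: { type: "string" } },
//       },
//       required: ["query"],
//     },
//   },
// }
```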
|
||||
|
||||
293 packages/ipuaro/src/infrastructure/security/PathValidator.ts Normal file
@@ -0,0 +1,293 @@
|
||||
import * as path from "node:path"
|
||||
import { promises as fs } from "node:fs"
|
||||
|
||||
/**
|
||||
* Path validation result classification.
|
||||
*/
|
||||
export type PathValidationStatus = "valid" | "invalid" | "outside_project"
|
||||
|
||||
/**
|
||||
* Result of path validation.
|
||||
*/
|
||||
export interface PathValidationResult {
|
||||
/** Validation status */
|
||||
status: PathValidationStatus
|
||||
/** Reason for the status */
|
||||
reason: string
|
||||
/** Normalized absolute path (only if valid) */
|
||||
absolutePath?: string
|
||||
/** Normalized relative path (only if valid) */
|
||||
relativePath?: string
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for path validation.
|
||||
*/
|
||||
export interface PathValidatorOptions {
|
||||
/** Allow paths that don't exist yet (for create operations) */
|
||||
allowNonExistent?: boolean
|
||||
/** Check if path is a directory */
|
||||
requireDirectory?: boolean
|
||||
/** Check if path is a file */
|
||||
requireFile?: boolean
|
||||
/** Follow symlinks when checking existence */
|
||||
followSymlinks?: boolean
|
||||
}
|
||||
|
||||
/**
|
||||
* Path validator for ensuring file operations stay within project boundaries.
|
||||
* Prevents path traversal attacks and unauthorized file access.
|
||||
*/
|
||||
export class PathValidator {
|
||||
private readonly projectRoot: string
|
||||
|
||||
constructor(projectRoot: string) {
|
||||
this.projectRoot = path.resolve(projectRoot)
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate a path and return detailed result.
|
||||
* @param inputPath - Path to validate (relative or absolute)
|
||||
* @param options - Validation options
|
||||
*/
|
||||
async validate(
|
||||
inputPath: string,
|
||||
options: PathValidatorOptions = {},
|
||||
): Promise<PathValidationResult> {
|
||||
if (!inputPath || inputPath.trim() === "") {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path is empty",
|
||||
}
|
||||
}
|
||||
|
||||
const normalizedInput = inputPath.trim()
|
||||
|
||||
if (this.containsTraversalPatterns(normalizedInput)) {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path contains traversal patterns",
|
||||
}
|
||||
}
|
||||
|
||||
const absolutePath = path.resolve(this.projectRoot, normalizedInput)
|
||||
|
||||
if (!this.isWithinProject(absolutePath)) {
|
||||
return {
|
||||
status: "outside_project",
|
||||
reason: "Path is outside project root",
|
||||
}
|
||||
}
|
||||
|
||||
const relativePath = path.relative(this.projectRoot, absolutePath)
|
||||
|
||||
if (!options.allowNonExistent) {
|
||||
const existsResult = await this.checkExists(absolutePath, options)
|
||||
if (existsResult) {
|
||||
return existsResult
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
status: "valid",
|
||||
reason: "Path is valid",
|
||||
absolutePath,
|
||||
relativePath,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Synchronous validation for simple checks.
|
||||
* Does not check file existence or type.
|
||||
* @param inputPath - Path to validate (relative or absolute)
|
||||
*/
|
||||
validateSync(inputPath: string): PathValidationResult {
|
||||
if (!inputPath || inputPath.trim() === "") {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path is empty",
|
||||
}
|
||||
}
|
||||
|
||||
const normalizedInput = inputPath.trim()
|
||||
|
||||
if (this.containsTraversalPatterns(normalizedInput)) {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path contains traversal patterns",
|
||||
}
|
||||
}
|
||||
|
||||
const absolutePath = path.resolve(this.projectRoot, normalizedInput)
|
||||
|
||||
if (!this.isWithinProject(absolutePath)) {
|
||||
return {
|
||||
status: "outside_project",
|
||||
reason: "Path is outside project root",
|
||||
}
|
||||
}
|
||||
|
||||
const relativePath = path.relative(this.projectRoot, absolutePath)
|
||||
|
||||
return {
|
||||
status: "valid",
|
||||
reason: "Path is valid",
|
||||
absolutePath,
|
||||
relativePath,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Quick check if path is within project.
|
||||
* @param inputPath - Path to check (relative or absolute)
|
||||
*/
|
||||
isWithin(inputPath: string): boolean {
|
||||
if (!inputPath || inputPath.trim() === "") {
|
||||
return false
|
||||
}
|
||||
|
||||
const normalizedInput = inputPath.trim()
|
||||
|
||||
if (this.containsTraversalPatterns(normalizedInput)) {
|
||||
return false
|
||||
}
|
||||
|
||||
const absolutePath = path.resolve(this.projectRoot, normalizedInput)
|
||||
return this.isWithinProject(absolutePath)
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve a path relative to project root.
|
||||
* Returns null if path would be outside project.
|
||||
* @param inputPath - Path to resolve
|
||||
*/
|
||||
resolve(inputPath: string): string | null {
|
||||
const result = this.validateSync(inputPath)
|
||||
return result.status === "valid" ? (result.absolutePath ?? null) : null
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve a path or throw an error if invalid.
|
||||
* @param inputPath - Path to resolve
|
||||
* @returns Tuple of [absolutePath, relativePath]
|
||||
* @throws Error if path is invalid
|
||||
*/
|
||||
resolveOrThrow(inputPath: string): [absolutePath: string, relativePath: string] {
|
||||
const result = this.validateSync(inputPath)
|
||||
if (result.status !== "valid" || result.absolutePath === undefined) {
|
||||
throw new Error(result.reason)
|
||||
}
|
||||
return [result.absolutePath, result.relativePath ?? ""]
|
||||
}
|
||||
|
||||
/**
|
||||
* Get relative path from project root.
|
||||
* Returns null if path would be outside project.
|
||||
* @param inputPath - Path to make relative
|
||||
*/
|
||||
relativize(inputPath: string): string | null {
|
||||
const result = this.validateSync(inputPath)
|
||||
return result.status === "valid" ? (result.relativePath ?? null) : null
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the project root path.
|
||||
*/
|
||||
getProjectRoot(): string {
|
||||
return this.projectRoot
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if path contains directory traversal patterns.
|
||||
*/
|
||||
private containsTraversalPatterns(inputPath: string): boolean {
|
||||
const normalized = inputPath.replace(/\\/g, "/")
|
||||
|
||||
if (normalized.includes("..")) {
|
||||
return true
|
||||
}
|
||||
|
||||
if (normalized.startsWith("~")) {
|
||||
return true
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if absolute path is within project root.
|
||||
*/
|
||||
private isWithinProject(absolutePath: string): boolean {
|
||||
const normalizedProject = this.projectRoot.replace(/\\/g, "/")
|
||||
const normalizedPath = absolutePath.replace(/\\/g, "/")
|
||||
|
||||
if (normalizedPath === normalizedProject) {
|
||||
return true
|
||||
}
|
||||
|
||||
const projectWithSep = normalizedProject.endsWith("/")
|
||||
? normalizedProject
|
||||
: `${normalizedProject}/`
|
||||
|
||||
return normalizedPath.startsWith(projectWithSep)
|
||||
}
|
||||
|
||||
/**
|
||||
* Check file existence and type.
|
||||
*/
|
||||
private async checkExists(
|
||||
absolutePath: string,
|
||||
options: PathValidatorOptions,
|
||||
): Promise<PathValidationResult | null> {
|
||||
try {
|
||||
const statFn = options.followSymlinks ? fs.stat : fs.lstat
|
||||
const stats = await statFn(absolutePath)
|
||||
|
||||
if (options.requireDirectory && !stats.isDirectory()) {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path is not a directory",
|
||||
}
|
||||
}
|
||||
|
||||
if (options.requireFile && !stats.isFile()) {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path is not a file",
|
||||
}
|
||||
}
|
||||
|
||||
return null
|
||||
} catch (error) {
|
||||
if ((error as NodeJS.ErrnoException).code === "ENOENT") {
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: "Path does not exist",
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
status: "invalid",
|
||||
reason: `Cannot access path: ${(error as Error).message}`,
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a path validator for a project.
|
||||
* @param projectRoot - Root directory of the project
|
||||
*/
|
||||
export function createPathValidator(projectRoot: string): PathValidator {
|
||||
return new PathValidator(projectRoot)
|
||||
}
|
||||
|
||||
/**
|
||||
* Standalone function for quick path validation.
|
||||
* @param inputPath - Path to validate
|
||||
* @param projectRoot - Project root directory
|
||||
*/
|
||||
export function validatePath(inputPath: string, projectRoot: string): boolean {
|
||||
const validator = new PathValidator(projectRoot)
|
||||
return validator.isWithin(inputPath)
|
||||
}
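A usage sketch; the project root and file names are arbitrary, and the import path assumes a sibling module.

```typescript
import { createPathValidator } from "./PathValidator.js"

const validator = createPathValidator("/home/user/project")

validator.isWithin("src/index.ts")   // true
validator.isWithin("../etc/passwd")  // false – ".." is rejected before resolution
validator.isWithin("~/secrets")      // false – home-relative paths are rejected

validator.validateSync("src/index.ts")
// { status: "valid", reason: "Path is valid",
//   absolutePath: "/home/user/project/src/index.ts", relativePath: "src/index.ts" }

// resolveOrThrow() is the variant the file tools use:
// it returns [absolutePath, relativePath] or throws with the validation reason.
const [abs, rel] = validator.resolveOrThrow("src/index.ts")
```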
|
||||
9 packages/ipuaro/src/infrastructure/security/index.ts Normal file
@@ -0,0 +1,9 @@
// Security module exports
export {
    PathValidator,
    createPathValidator,
    validatePath,
    type PathValidationResult,
    type PathValidationStatus,
    type PathValidatorOptions,
} from "./PathValidator.js"
|
||||
@@ -8,6 +8,7 @@ import {
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { hashLines } from "../../../shared/utils/hash.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Result data from create_file tool.
|
||||
@@ -62,17 +63,18 @@ export class CreateFileTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = params.path as string
|
||||
const inputPath = params.path as string
|
||||
const content = params.content as string
|
||||
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -1,11 +1,11 @@
|
||||
import { promises as fs } from "node:fs"
|
||||
import * as path from "node:path"
|
||||
import type { ITool, ToolContext, ToolParameterSchema } from "../../../domain/services/ITool.js"
|
||||
import {
|
||||
createErrorResult,
|
||||
createSuccessResult,
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Result data from delete_file tool.
|
||||
@@ -49,15 +49,16 @@ export class DeleteFileTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = params.path as string
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const inputPath = params.path as string
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -1,5 +1,4 @@
|
||||
import { promises as fs } from "node:fs"
|
||||
import * as path from "node:path"
|
||||
import type { ITool, ToolContext, ToolParameterSchema } from "../../../domain/services/ITool.js"
|
||||
import { createFileData } from "../../../domain/value-objects/FileData.js"
|
||||
import {
|
||||
@@ -8,6 +7,7 @@ import {
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { hashLines } from "../../../shared/utils/hash.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Result data from edit_lines tool.
|
||||
@@ -94,19 +94,20 @@ export class EditLinesTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = params.path as string
|
||||
const inputPath = params.path as string
|
||||
const startLine = params.start as number
|
||||
const endLine = params.end as number
|
||||
const newContent = params.content as string
|
||||
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -1,5 +1,4 @@
|
||||
import { promises as fs } from "node:fs"
|
||||
import * as path from "node:path"
|
||||
import type { ITool, ToolContext, ToolParameterSchema } from "../../../domain/services/ITool.js"
|
||||
import type { ClassInfo } from "../../../domain/value-objects/FileAST.js"
|
||||
import {
|
||||
@@ -7,6 +6,7 @@ import {
|
||||
createSuccessResult,
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Result data from get_class tool.
|
||||
@@ -67,16 +67,17 @@ export class GetClassTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = params.path as string
|
||||
const inputPath = params.path as string
|
||||
const className = params.name as string
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -1,5 +1,4 @@
|
||||
import { promises as fs } from "node:fs"
|
||||
import * as path from "node:path"
|
||||
import type { ITool, ToolContext, ToolParameterSchema } from "../../../domain/services/ITool.js"
|
||||
import type { FunctionInfo } from "../../../domain/value-objects/FileAST.js"
|
||||
import {
|
||||
@@ -7,6 +6,7 @@ import {
|
||||
createSuccessResult,
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Result data from get_function tool.
|
||||
@@ -65,16 +65,17 @@ export class GetFunctionTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = params.path as string
|
||||
const inputPath = params.path as string
|
||||
const functionName = params.name as string
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -1,11 +1,11 @@
|
||||
import { promises as fs } from "node:fs"
|
||||
import * as path from "node:path"
|
||||
import type { ITool, ToolContext, ToolParameterSchema } from "../../../domain/services/ITool.js"
|
||||
import {
|
||||
createErrorResult,
|
||||
createSuccessResult,
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Result data from get_lines tool.
|
||||
@@ -84,15 +84,16 @@ export class GetLinesTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = params.path as string
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const inputPath = params.path as string
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -7,6 +7,7 @@ import {
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import { DEFAULT_IGNORE_PATTERNS } from "../../../domain/constants/index.js"
|
||||
import { PathValidator } from "../../security/PathValidator.js"
|
||||
|
||||
/**
|
||||
* Tree node representing a file or directory.
|
||||
@@ -89,16 +90,17 @@ export class GetStructureTool implements ITool {
|
||||
const startTime = Date.now()
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const relativePath = (params.path as string | undefined) ?? ""
|
||||
const inputPath = (params.path as string | undefined) ?? "."
|
||||
const maxDepth = params.depth as number | undefined
|
||||
const absolutePath = path.resolve(ctx.projectRoot, relativePath)
|
||||
const pathValidator = new PathValidator(ctx.projectRoot)
|
||||
|
||||
if (!absolutePath.startsWith(ctx.projectRoot)) {
|
||||
return createErrorResult(
|
||||
callId,
|
||||
"Path must be within project root",
|
||||
Date.now() - startTime,
|
||||
)
|
||||
let absolutePath: string
|
||||
let relativePath: string
|
||||
try {
|
||||
;[absolutePath, relativePath] = pathValidator.resolveOrThrow(inputPath)
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error)
|
||||
return createErrorResult(callId, message, Date.now() - startTime)
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
@@ -16,12 +16,7 @@ export class ToolRegistry implements IToolRegistry {
|
||||
*/
|
||||
register(tool: ITool): void {
|
||||
if (this.tools.has(tool.name)) {
|
||||
throw new IpuaroError(
|
||||
"validation",
|
||||
`Tool "${tool.name}" is already registered`,
|
||||
true,
|
||||
"Use a different tool name or unregister the existing tool first",
|
||||
)
|
||||
throw IpuaroError.validation(`Tool "${tool.name}" is already registered`)
|
||||
}
|
||||
this.tools.set(tool.name, tool)
|
||||
}
|
||||
|
||||
@@ -6,6 +6,7 @@ import {
|
||||
createSuccessResult,
|
||||
type ToolResult,
|
||||
} from "../../../domain/value-objects/ToolResult.js"
|
||||
import type { CommandsConfig } from "../../../shared/constants/config.js"
|
||||
import { CommandSecurity } from "./CommandSecurity.js"
|
||||
|
||||
const execAsync = promisify(exec)
|
||||
@@ -60,7 +61,7 @@ export class RunCommandTool implements ITool {
|
||||
{
|
||||
name: "timeout",
|
||||
type: "number",
|
||||
description: "Timeout in milliseconds (default: 30000)",
|
||||
description: "Timeout in milliseconds (default: from config or 30000, max: 600000)",
|
||||
required: false,
|
||||
},
|
||||
]
|
||||
@@ -69,10 +70,12 @@ export class RunCommandTool implements ITool {
|
||||
|
||||
private readonly security: CommandSecurity
|
||||
private readonly execFn: typeof execAsync
|
||||
private readonly configTimeout: number | null
|
||||
|
||||
constructor(security?: CommandSecurity, execFn?: typeof execAsync) {
|
||||
constructor(security?: CommandSecurity, execFn?: typeof execAsync, config?: CommandsConfig) {
|
||||
this.security = security ?? new CommandSecurity()
|
||||
this.execFn = execFn ?? execAsync
|
||||
this.configTimeout = config?.timeout ?? null
|
||||
}
|
||||
|
||||
validateParams(params: Record<string, unknown>): string | null {
|
||||
@@ -104,7 +107,7 @@ export class RunCommandTool implements ITool {
|
||||
const callId = `${this.name}-${String(startTime)}`
|
||||
|
||||
const command = params.command as string
|
||||
const timeout = (params.timeout as number) ?? DEFAULT_TIMEOUT
|
||||
const timeout = (params.timeout as number) ?? this.configTimeout ?? DEFAULT_TIMEOUT
|
||||
|
||||
const securityCheck = this.security.check(command)
|
||||
|
||||
|
||||
@@ -20,6 +20,7 @@ export const LLMConfigSchema = z.object({
|
||||
temperature: z.number().min(0).max(2).default(0.1),
|
||||
host: z.string().default("http://localhost:11434"),
|
||||
timeout: z.number().int().positive().default(120_000),
|
||||
useNativeTools: z.boolean().default(false),
|
||||
})
|
||||
|
||||
/**
|
||||
@@ -76,6 +77,64 @@ export const UndoConfigSchema = z.object({
|
||||
*/
|
||||
export const EditConfigSchema = z.object({
|
||||
autoApply: z.boolean().default(false),
|
||||
syntaxHighlight: z.boolean().default(true),
|
||||
})
|
||||
|
||||
/**
|
||||
* Input configuration schema.
|
||||
*/
|
||||
export const InputConfigSchema = z.object({
|
||||
multiline: z.union([z.boolean(), z.literal("auto")]).default(false),
|
||||
})
|
||||
|
||||
/**
|
||||
* Display configuration schema.
|
||||
*/
|
||||
export const DisplayConfigSchema = z.object({
|
||||
showStats: z.boolean().default(true),
|
||||
showToolCalls: z.boolean().default(true),
|
||||
theme: z.enum(["dark", "light"]).default("dark"),
|
||||
bellOnComplete: z.boolean().default(false),
|
||||
progressBar: z.boolean().default(true),
|
||||
})
|
||||
|
||||
/**
|
||||
* Session configuration schema.
|
||||
*/
|
||||
export const SessionConfigSchema = z.object({
|
||||
persistIndefinitely: z.boolean().default(true),
|
||||
maxHistoryMessages: z.number().int().positive().default(100),
|
||||
saveInputHistory: z.boolean().default(true),
|
||||
})
|
||||
|
||||
/**
|
||||
* Context configuration schema.
|
||||
*/
|
||||
export const ContextConfigSchema = z.object({
|
||||
systemPromptTokens: z.number().int().positive().default(2000),
|
||||
maxContextUsage: z.number().min(0).max(1).default(0.8),
|
||||
autoCompressAt: z.number().min(0).max(1).default(0.8),
|
||||
compressionMethod: z.enum(["llm-summary", "truncate"]).default("llm-summary"),
|
||||
includeSignatures: z.boolean().default(true),
|
||||
includeDepsGraph: z.boolean().default(true),
|
||||
includeCircularDeps: z.boolean().default(true),
|
||||
includeHighImpactFiles: z.boolean().default(true),
|
||||
})
|
||||
|
||||
/**
|
||||
* Autocomplete configuration schema.
|
||||
*/
|
||||
export const AutocompleteConfigSchema = z.object({
|
||||
enabled: z.boolean().default(true),
|
||||
source: z.enum(["redis-index", "filesystem", "both"]).default("redis-index"),
|
||||
maxSuggestions: z.number().int().positive().default(10),
|
||||
})
|
||||
|
||||
/**
|
||||
* Commands configuration schema.
|
||||
*/
|
||||
export const CommandsConfigSchema = z.object({
|
||||
timeout: z.number().int().positive().nullable().default(null),
|
||||
})
|
||||
|
||||
/**
|
||||
@@ -88,6 +147,12 @@ export const ConfigSchema = z.object({
|
||||
watchdog: WatchdogConfigSchema.default({}),
|
||||
undo: UndoConfigSchema.default({}),
|
||||
edit: EditConfigSchema.default({}),
|
||||
input: InputConfigSchema.default({}),
|
||||
display: DisplayConfigSchema.default({}),
|
||||
session: SessionConfigSchema.default({}),
|
||||
context: ContextConfigSchema.default({}),
|
||||
autocomplete: AutocompleteConfigSchema.default({}),
|
||||
commands: CommandsConfigSchema.default({}),
|
||||
})
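Since every section schema defaults all of its fields, parsing an empty object yields a fully populated section. A sketch, assuming the schemas are imported from this config module:

```typescript
ContextConfigSchema.parse({})
// { systemPromptTokens: 2000, maxContextUsage: 0.8, autoCompressAt: 0.8,
//   compressionMethod: "llm-summary", includeSignatures: true, includeDepsGraph: true,
//   includeCircularDeps: true, includeHighImpactFiles: true }

CommandsConfigSchema.parse({})                // { timeout: null }
DisplayConfigSchema.parse({ theme: "light" }) // other display fields keep their defaults

// Out-of-range values are rejected by zod:
ContextConfigSchema.parse({ maxContextUsage: 1.5 }) // throws ZodError
```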
|
||||
|
||||
/**
|
||||
@@ -100,6 +165,12 @@ export type ProjectConfig = z.infer<typeof ProjectConfigSchema>
|
||||
export type WatchdogConfig = z.infer<typeof WatchdogConfigSchema>
|
||||
export type UndoConfig = z.infer<typeof UndoConfigSchema>
|
||||
export type EditConfig = z.infer<typeof EditConfigSchema>
|
||||
export type InputConfig = z.infer<typeof InputConfigSchema>
|
||||
export type DisplayConfig = z.infer<typeof DisplayConfigSchema>
|
||||
export type SessionConfig = z.infer<typeof SessionConfigSchema>
|
||||
export type ContextConfig = z.infer<typeof ContextConfigSchema>
|
||||
export type AutocompleteConfig = z.infer<typeof AutocompleteConfigSchema>
|
||||
export type CommandsConfig = z.infer<typeof CommandsConfigSchema>
|
||||
|
||||
/**
|
||||
* Default configuration.
|
||||
|
||||
295 packages/ipuaro/src/shared/errors/ErrorHandler.ts Normal file
@@ -0,0 +1,295 @@
|
||||
/**
|
||||
* ErrorHandler service for handling errors with user interaction.
|
||||
* Implements the error handling matrix from ROADMAP.md.
|
||||
*/
|
||||
|
||||
import { ERROR_MATRIX, type ErrorOption, type ErrorType, IpuaroError } from "./IpuaroError.js"
|
||||
|
||||
/**
|
||||
* Result of error handling.
|
||||
*/
|
||||
export interface ErrorHandlingResult {
|
||||
action: ErrorOption
|
||||
shouldContinue: boolean
|
||||
retryCount?: number
|
||||
}
|
||||
|
||||
/**
|
||||
* Callback for requesting user choice on error.
|
||||
*/
|
||||
export type ErrorChoiceCallback = (
|
||||
error: IpuaroError,
|
||||
availableOptions: ErrorOption[],
|
||||
defaultOption: ErrorOption,
|
||||
) => Promise<ErrorOption>
|
||||
|
||||
/**
|
||||
* Options for ErrorHandler.
|
||||
*/
|
||||
export interface ErrorHandlerOptions {
|
||||
maxRetries?: number
|
||||
autoSkipParseErrors?: boolean
|
||||
autoRetryLLMErrors?: boolean
|
||||
onError?: ErrorChoiceCallback
|
||||
}
|
||||
|
||||
const DEFAULT_MAX_RETRIES = 3
|
||||
|
||||
/**
|
||||
* Error handler service with matrix-based logic.
|
||||
*/
|
||||
export class ErrorHandler {
|
||||
private readonly maxRetries: number
|
||||
private readonly autoSkipParseErrors: boolean
|
||||
private readonly autoRetryLLMErrors: boolean
|
||||
private readonly onError?: ErrorChoiceCallback
|
||||
|
||||
private readonly retryCounters = new Map<string, number>()
|
||||
|
||||
constructor(options: ErrorHandlerOptions = {}) {
|
||||
this.maxRetries = options.maxRetries ?? DEFAULT_MAX_RETRIES
|
||||
this.autoSkipParseErrors = options.autoSkipParseErrors ?? true
|
||||
this.autoRetryLLMErrors = options.autoRetryLLMErrors ?? false
|
||||
this.onError = options.onError
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle an error and determine the action to take.
|
||||
*/
|
||||
async handle(error: IpuaroError, contextKey?: string): Promise<ErrorHandlingResult> {
|
||||
const key = contextKey ?? error.message
|
||||
const currentRetries = this.retryCounters.get(key) ?? 0
|
||||
|
||||
if (this.shouldAutoHandle(error)) {
|
||||
const autoAction = this.getAutoAction(error, currentRetries)
|
||||
if (autoAction) {
|
||||
return this.createResult(autoAction, key, currentRetries)
|
||||
}
|
||||
}
|
||||
|
||||
if (!error.recoverable) {
|
||||
return {
|
||||
action: "abort",
|
||||
shouldContinue: false,
|
||||
}
|
||||
}
|
||||
|
||||
if (this.onError) {
|
||||
const choice = await this.onError(error, error.options, error.defaultOption)
|
||||
return this.createResult(choice, key, currentRetries)
|
||||
}
|
||||
|
||||
return this.createResult(error.defaultOption, key, currentRetries)
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle an error synchronously with default behavior.
|
||||
*/
|
||||
handleSync(error: IpuaroError, contextKey?: string): ErrorHandlingResult {
|
||||
const key = contextKey ?? error.message
|
||||
const currentRetries = this.retryCounters.get(key) ?? 0
|
||||
|
||||
if (this.shouldAutoHandle(error)) {
|
||||
const autoAction = this.getAutoAction(error, currentRetries)
|
||||
if (autoAction) {
|
||||
return this.createResult(autoAction, key, currentRetries)
|
||||
}
|
||||
}
|
||||
|
||||
if (!error.recoverable) {
|
||||
return {
|
||||
action: "abort",
|
||||
shouldContinue: false,
|
||||
}
|
||||
}
|
||||
|
||||
return this.createResult(error.defaultOption, key, currentRetries)
|
||||
}
|
||||
|
||||
/**
|
||||
* Reset retry counters.
|
||||
*/
|
||||
resetRetries(contextKey?: string): void {
|
||||
if (contextKey) {
|
||||
this.retryCounters.delete(contextKey)
|
||||
} else {
|
||||
this.retryCounters.clear()
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get retry count for a context.
|
||||
*/
|
||||
getRetryCount(contextKey: string): number {
|
||||
return this.retryCounters.get(contextKey) ?? 0
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if max retries exceeded for a context.
|
||||
*/
|
||||
isMaxRetriesExceeded(contextKey: string): boolean {
|
||||
return this.getRetryCount(contextKey) >= this.maxRetries
|
||||
}
|
||||
|
||||
/**
|
||||
* Wrap a function with error handling.
|
||||
*/
|
||||
async wrap<T>(
|
||||
fn: () => Promise<T>,
|
||||
errorType: ErrorType,
|
||||
contextKey?: string,
|
||||
): Promise<{ success: true; data: T } | { success: false; result: ErrorHandlingResult }> {
|
||||
try {
|
||||
const data = await fn()
|
||||
if (contextKey) {
|
||||
this.resetRetries(contextKey)
|
||||
}
|
||||
return { success: true, data }
|
||||
} catch (err) {
|
||||
const error =
|
||||
err instanceof IpuaroError
|
||||
? err
|
||||
: new IpuaroError(errorType, err instanceof Error ? err.message : String(err))
|
||||
|
||||
const result = await this.handle(error, contextKey)
|
||||
return { success: false, result }
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Wrap a function with retry logic.
|
||||
*/
|
||||
async withRetry<T>(fn: () => Promise<T>, errorType: ErrorType, contextKey: string): Promise<T> {
|
||||
const key = contextKey
|
||||
|
||||
while (!this.isMaxRetriesExceeded(key)) {
|
||||
try {
|
||||
const result = await fn()
|
||||
this.resetRetries(key)
|
||||
return result
|
||||
} catch (err) {
|
||||
const error =
|
||||
err instanceof IpuaroError
|
||||
? err
|
||||
: new IpuaroError(
|
||||
errorType,
|
||||
err instanceof Error ? err.message : String(err),
|
||||
)
|
||||
|
||||
const handlingResult = await this.handle(error, key)
|
||||
|
||||
if (handlingResult.action !== "retry" || !handlingResult.shouldContinue) {
|
||||
throw error
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
throw new IpuaroError(
|
||||
errorType,
|
||||
`Max retries (${String(this.maxRetries)}) exceeded for: ${key}`,
|
||||
)
|
||||
}
|
||||
|
||||
private shouldAutoHandle(error: IpuaroError): boolean {
|
||||
if (error.type === "parse" && this.autoSkipParseErrors) {
|
||||
return true
|
||||
}
|
||||
if ((error.type === "llm" || error.type === "timeout") && this.autoRetryLLMErrors) {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
private getAutoAction(error: IpuaroError, currentRetries: number): ErrorOption | null {
|
||||
if (error.type === "parse" && this.autoSkipParseErrors) {
|
||||
return "skip"
|
||||
}
|
||||
|
||||
if ((error.type === "llm" || error.type === "timeout") && this.autoRetryLLMErrors) {
|
||||
if (currentRetries < this.maxRetries) {
|
||||
return "retry"
|
||||
}
|
||||
return "abort"
|
||||
}
|
||||
|
||||
return null
|
||||
}
|
||||
|
||||
private createResult(
|
||||
action: ErrorOption,
|
||||
key: string,
|
||||
currentRetries: number,
|
||||
): ErrorHandlingResult {
|
||||
if (action === "retry") {
|
||||
this.retryCounters.set(key, currentRetries + 1)
|
||||
const newRetryCount = currentRetries + 1
|
||||
|
||||
if (newRetryCount > this.maxRetries) {
|
||||
return {
|
||||
action: "abort",
|
||||
shouldContinue: false,
|
||||
retryCount: newRetryCount,
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
action: "retry",
|
||||
shouldContinue: true,
|
||||
retryCount: newRetryCount,
|
||||
}
|
||||
}
|
||||
|
||||
this.retryCounters.delete(key)
|
||||
|
||||
return {
|
||||
action,
|
||||
shouldContinue: action === "skip" || action === "confirm" || action === "regenerate",
|
||||
retryCount: currentRetries,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get available options for an error type.
|
||||
*/
|
||||
export function getErrorOptions(errorType: ErrorType): ErrorOption[] {
|
||||
return ERROR_MATRIX[errorType].options
|
||||
}
|
||||
|
||||
/**
|
||||
* Get default option for an error type.
|
||||
*/
|
||||
export function getDefaultErrorOption(errorType: ErrorType): ErrorOption {
|
||||
return ERROR_MATRIX[errorType].defaultOption
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if an error type is recoverable by default.
|
||||
*/
|
||||
export function isRecoverableError(errorType: ErrorType): boolean {
|
||||
return ERROR_MATRIX[errorType].recoverable
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert any error to IpuaroError.
|
||||
*/
|
||||
export function toIpuaroError(error: unknown, defaultType: ErrorType = "unknown"): IpuaroError {
|
||||
if (error instanceof IpuaroError) {
|
||||
return error
|
||||
}
|
||||
|
||||
if (error instanceof Error) {
|
||||
return new IpuaroError(defaultType, error.message, {
|
||||
context: { originalError: error.name },
|
||||
})
|
||||
}
|
||||
|
||||
return new IpuaroError(defaultType, String(error))
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a default ErrorHandler instance.
|
||||
*/
|
||||
export function createErrorHandler(options?: ErrorHandlerOptions): ErrorHandler {
|
||||
return new ErrorHandler(options)
|
||||
}
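A usage sketch under stated assumptions: the import path is a guess, the callback bodies are stand-ins for real work, and this is not how the application itself wires the handler.

```typescript
import { createErrorHandler } from "./ErrorHandler.js" // path is an assumption

const handler = createErrorHandler({ autoRetryLLMErrors: true, maxRetries: 3 })

// withRetry() re-runs the callback until it succeeds, the chosen action is not
// "retry", or the retry budget is exhausted (then an IpuaroError is thrown).
const reply = await handler.withRetry(
    async () => {
        if (Math.random() < 0.3) throw new Error("connection reset") // stand-in failure
        return "ok"
    },
    "llm",
    "chat-request",
)

// wrap() never throws; it reports how the error was resolved instead.
const res = await handler.wrap(async () => "data", "file", "read-config")
if (!res.success) {
    console.log(res.result.action) // "skip" | "abort" | ...
}
```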
|
||||
@@ -12,6 +12,72 @@ export type ErrorType =
|
||||
| "timeout"
|
||||
| "unknown"
|
||||
|
||||
/**
|
||||
* Available options for error recovery.
|
||||
*/
|
||||
export type ErrorOption = "retry" | "skip" | "abort" | "confirm" | "regenerate"
|
||||
|
||||
/**
|
||||
* Error metadata with available options.
|
||||
*/
|
||||
export interface ErrorMeta {
|
||||
type: ErrorType
|
||||
recoverable: boolean
|
||||
options: ErrorOption[]
|
||||
defaultOption: ErrorOption
|
||||
}
|
||||
|
||||
/**
|
||||
* Error handling matrix - defines behavior for each error type.
|
||||
*/
|
||||
export const ERROR_MATRIX: Record<ErrorType, Omit<ErrorMeta, "type">> = {
|
||||
redis: {
|
||||
recoverable: false,
|
||||
options: ["retry", "abort"],
|
||||
defaultOption: "abort",
|
||||
},
|
||||
parse: {
|
||||
recoverable: true,
|
||||
options: ["skip", "abort"],
|
||||
defaultOption: "skip",
|
||||
},
|
||||
llm: {
|
||||
recoverable: true,
|
||||
options: ["retry", "skip", "abort"],
|
||||
defaultOption: "retry",
|
||||
},
|
||||
file: {
|
||||
recoverable: true,
|
||||
options: ["skip", "abort"],
|
||||
defaultOption: "skip",
|
||||
},
|
||||
command: {
|
||||
recoverable: true,
|
||||
options: ["confirm", "skip", "abort"],
|
||||
defaultOption: "confirm",
|
||||
},
|
||||
conflict: {
|
||||
recoverable: true,
|
||||
options: ["skip", "regenerate", "abort"],
|
||||
defaultOption: "skip",
|
||||
},
|
||||
validation: {
|
||||
recoverable: true,
|
||||
options: ["skip", "abort"],
|
||||
defaultOption: "skip",
|
||||
},
|
||||
timeout: {
|
||||
recoverable: true,
|
||||
options: ["retry", "skip", "abort"],
|
||||
defaultOption: "retry",
|
||||
},
|
||||
unknown: {
|
||||
recoverable: false,
|
||||
options: ["abort"],
|
||||
defaultOption: "abort",
|
||||
},
|
||||
}
|
||||
|
||||
/**
|
||||
* Base error class for ipuaro.
|
||||
*/
|
||||
@@ -19,60 +85,142 @@ export class IpuaroError extends Error {
|
||||
readonly type: ErrorType
|
||||
readonly recoverable: boolean
|
||||
readonly suggestion?: string
|
||||
readonly options: ErrorOption[]
|
||||
readonly defaultOption: ErrorOption
|
||||
readonly context?: Record<string, unknown>
|
||||
|
||||
constructor(type: ErrorType, message: string, recoverable = true, suggestion?: string) {
|
||||
constructor(
|
||||
type: ErrorType,
|
||||
message: string,
|
||||
options?: {
|
||||
recoverable?: boolean
|
||||
suggestion?: string
|
||||
context?: Record<string, unknown>
|
||||
},
|
||||
) {
|
||||
super(message)
|
||||
this.name = "IpuaroError"
|
||||
this.type = type
|
||||
this.recoverable = recoverable
|
||||
this.suggestion = suggestion
|
||||
|
||||
const meta = ERROR_MATRIX[type]
|
||||
this.recoverable = options?.recoverable ?? meta.recoverable
|
||||
this.options = meta.options
|
||||
this.defaultOption = meta.defaultOption
|
||||
this.suggestion = options?.suggestion
|
||||
this.context = options?.context
|
||||
}
|
||||
|
||||
static redis(message: string): IpuaroError {
|
||||
return new IpuaroError(
|
||||
"redis",
|
||||
message,
|
||||
false,
|
||||
"Please ensure Redis is running: redis-server",
|
||||
)
|
||||
/**
|
||||
* Get error metadata.
|
||||
*/
|
||||
getMeta(): ErrorMeta {
|
||||
return {
|
||||
type: this.type,
|
||||
recoverable: this.recoverable,
|
||||
options: this.options,
|
||||
defaultOption: this.defaultOption,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if an option is available for this error.
|
||||
*/
|
||||
hasOption(option: ErrorOption): boolean {
|
||||
return this.options.includes(option)
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a formatted error message with suggestion.
|
||||
*/
|
||||
toDisplayString(): string {
|
||||
let result = `[${this.type}] ${this.message}`
|
||||
if (this.suggestion) {
|
||||
result += `\n Suggestion: ${this.suggestion}`
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
static redis(message: string, context?: Record<string, unknown>): IpuaroError {
|
||||
return new IpuaroError("redis", message, {
|
||||
suggestion: "Please ensure Redis is running: redis-server",
|
||||
context,
|
||||
})
|
||||
}
|
||||
|
||||
static parse(message: string, filePath?: string): IpuaroError {
|
||||
const msg = filePath ? `${message} in ${filePath}` : message
|
||||
return new IpuaroError("parse", msg, true, "File will be skipped")
|
||||
return new IpuaroError("parse", msg, {
|
||||
suggestion: "File will be skipped during indexing",
|
||||
context: filePath ? { filePath } : undefined,
|
||||
})
|
||||
}
|
||||
|
||||
static llm(message: string): IpuaroError {
|
||||
return new IpuaroError(
|
||||
"llm",
|
||||
message,
|
||||
true,
|
||||
"Please ensure Ollama is running and model is available",
|
||||
)
|
||||
static llm(message: string, context?: Record<string, unknown>): IpuaroError {
|
||||
return new IpuaroError("llm", message, {
|
||||
suggestion: "Please ensure Ollama is running and model is available",
|
||||
context,
|
||||
})
|
||||
}
|
||||
|
||||
static file(message: string): IpuaroError {
|
||||
return new IpuaroError("file", message, true)
|
||||
static llmTimeout(message: string): IpuaroError {
|
||||
return new IpuaroError("timeout", message, {
|
||||
suggestion: "The LLM request timed out. Try again or check Ollama status.",
|
||||
})
|
||||
}
|
||||
|
||||
static command(message: string): IpuaroError {
|
||||
return new IpuaroError("command", message, true)
|
||||
static file(message: string, filePath?: string): IpuaroError {
|
||||
return new IpuaroError("file", message, {
|
||||
suggestion: "Check if the file exists and you have permission to access it",
|
||||
context: filePath ? { filePath } : undefined,
|
||||
})
|
||||
}
|
||||
|
||||
static conflict(message: string): IpuaroError {
|
||||
return new IpuaroError(
|
||||
"conflict",
|
||||
message,
|
||||
true,
|
||||
"File was modified externally. Regenerate or skip.",
|
||||
)
|
||||
static fileNotFound(filePath: string): IpuaroError {
|
||||
return new IpuaroError("file", `File not found: ${filePath}`, {
|
||||
suggestion: "Check the file path and try again",
|
||||
context: { filePath },
|
||||
})
|
||||
}
|
||||
|
||||
static validation(message: string): IpuaroError {
|
||||
return new IpuaroError("validation", message, true)
|
||||
static command(message: string, command?: string): IpuaroError {
|
||||
return new IpuaroError("command", message, {
|
||||
suggestion: "Command requires confirmation or is not in whitelist",
|
||||
context: command ? { command } : undefined,
|
||||
})
|
||||
}
|
||||
|
||||
static timeout(message: string): IpuaroError {
|
||||
return new IpuaroError("timeout", message, true, "Try again or increase timeout")
|
||||
static commandBlacklisted(command: string): IpuaroError {
|
||||
return new IpuaroError("command", `Command is blacklisted: ${command}`, {
|
||||
recoverable: false,
|
||||
suggestion: "This command is not allowed for security reasons",
|
||||
context: { command },
|
||||
})
|
||||
}
|
||||
|
||||
static conflict(message: string, filePath?: string): IpuaroError {
|
||||
return new IpuaroError("conflict", message, {
|
||||
suggestion: "File was modified externally. Regenerate or skip the change.",
|
||||
context: filePath ? { filePath } : undefined,
|
||||
})
|
||||
}
|
||||
|
||||
static validation(message: string, field?: string): IpuaroError {
|
||||
return new IpuaroError("validation", message, {
|
||||
suggestion: "Please check the input and try again",
|
||||
context: field ? { field } : undefined,
|
||||
})
|
||||
}
|
||||
|
||||
static timeout(message: string, timeoutMs?: number): IpuaroError {
|
||||
return new IpuaroError("timeout", message, {
|
||||
suggestion: "Try again or increase the timeout value",
|
||||
context: timeoutMs ? { timeoutMs } : undefined,
|
||||
})
|
||||
}
|
||||
|
||||
static unknown(message: string, originalError?: Error): IpuaroError {
|
||||
return new IpuaroError("unknown", message, {
|
||||
context: originalError ? { originalError: originalError.message } : undefined,
|
||||
})
|
||||
}
|
||||
}
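The static factories bind each error to the matrix above; for example (import assumed from this module):

```typescript
import { IpuaroError } from "./IpuaroError.js"

const err = IpuaroError.commandBlacklisted("rm -rf /")

err.type                 // "command"
err.recoverable          // false – the factory overrides the matrix default
err.options              // ["confirm", "skip", "abort"] – taken from ERROR_MATRIX
err.hasOption("retry")   // false

console.log(err.toDisplayString())
// [command] Command is blacklisted: rm -rf /
//   Suggestion: This command is not allowed for security reasons
```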
|
||||
|
||||
@@ -1,2 +1,3 @@
|
||||
// Shared errors
|
||||
export * from "./IpuaroError.js"
|
||||
export * from "./ErrorHandler.js"
|
||||
|
||||
@@ -19,9 +19,16 @@ export type ConfirmChoice = "apply" | "cancel" | "edit"
|
||||
|
||||
/**
|
||||
* User choice for errors.
|
||||
* @deprecated Use ErrorOption from shared/errors instead
|
||||
*/
|
||||
export type ErrorChoice = "retry" | "skip" | "abort"
|
||||
|
||||
// Re-export ErrorOption for convenience
|
||||
export type { ErrorOption } from "../errors/IpuaroError.js"
|
||||
|
||||
// Re-export tool definition types
|
||||
export type { ToolDef, ToolParameter } from "./tool-definitions.js"
|
||||
|
||||
/**
|
||||
* Project structure node.
|
||||
*/
|
||||
|
||||
21 packages/ipuaro/src/shared/types/tool-definitions.ts Normal file
@@ -0,0 +1,21 @@
/**
 * Tool parameter definition for LLM prompts.
 * Used to describe tools in system prompts.
 */
export interface ToolParameter {
    name: string
    type: "string" | "number" | "boolean" | "array" | "object"
    description: string
    required: boolean
    enum?: string[]
}

/**
 * Tool definition for LLM prompts.
 * Used to describe available tools in the system prompt.
 */
export interface ToolDef {
    name: string
    description: string
    parameters: ToolParameter[]
}
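A sketch of how a tool would be described with these types; it is loosely modelled on the get_lines tool earlier in this diff, but the exact descriptions and optional parameters are illustrative.

```typescript
const getLinesDef: ToolDef = {
    name: "get_lines",
    description: "Read a range of lines from a project file",
    parameters: [
        { name: "path", type: "string", description: "Project-relative file path", required: true },
        { name: "start", type: "number", description: "First line (1-based)", required: false },
        { name: "end", type: "number", description: "Last line (inclusive)", required: false },
    ],
}
```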
|
||||
300 packages/ipuaro/src/tui/App.tsx Normal file
@@ -0,0 +1,300 @@
|
||||
/**
|
||||
* Main TUI App component.
|
||||
* Orchestrates the terminal user interface.
|
||||
*/
|
||||
|
||||
import { Box, Text, useApp } from "ink"
|
||||
import React, { useCallback, useEffect, useState } from "react"
|
||||
import type { ILLMClient } from "../domain/services/ILLMClient.js"
|
||||
import type { ISessionStorage } from "../domain/services/ISessionStorage.js"
|
||||
import type { IStorage } from "../domain/services/IStorage.js"
|
||||
import type { DiffInfo } from "../domain/services/ITool.js"
|
||||
import type { ErrorOption } from "../shared/errors/IpuaroError.js"
|
||||
import type { Config } from "../shared/constants/config.js"
|
||||
import type { IToolRegistry } from "../application/interfaces/IToolRegistry.js"
|
||||
import type { ConfirmationResult } from "../application/use-cases/ExecuteTool.js"
|
||||
import type { ProjectStructure } from "../infrastructure/llm/prompts.js"
|
||||
import { Chat, ConfirmDialog, Input, StatusBar } from "./components/index.js"
|
||||
import { type CommandResult, useCommands, useHotkeys, useSession } from "./hooks/index.js"
|
||||
import type { AppProps, BranchInfo } from "./types.js"
|
||||
import type { ConfirmChoice } from "../shared/types/index.js"
|
||||
import { ringBell } from "./utils/bell.js"
|
||||
|
||||
export interface AppDependencies {
|
||||
storage: IStorage
|
||||
sessionStorage: ISessionStorage
|
||||
llm: ILLMClient
|
||||
tools: IToolRegistry
|
||||
projectStructure?: ProjectStructure
|
||||
config?: Config
|
||||
}
|
||||
|
||||
export interface ExtendedAppProps extends AppProps {
|
||||
deps: AppDependencies
|
||||
onExit?: () => void
|
||||
multiline?: boolean | "auto"
|
||||
syntaxHighlight?: boolean
|
||||
theme?: "dark" | "light"
|
||||
showStats?: boolean
|
||||
showToolCalls?: boolean
|
||||
bellOnComplete?: boolean
|
||||
}
|
||||
|
||||
function LoadingScreen(): React.JSX.Element {
|
||||
return (
|
||||
<Box flexDirection="column" padding={1}>
|
||||
<Text color="cyan">Loading session...</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function ErrorScreen({ error }: { error: Error }): React.JSX.Element {
|
||||
return (
|
||||
<Box flexDirection="column" padding={1}>
|
||||
<Text color="red" bold>
|
||||
Error
|
||||
</Text>
|
||||
<Text color="red">{error.message}</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
async function handleErrorDefault(_error: Error): Promise<ErrorOption> {
|
||||
return Promise.resolve("skip")
|
||||
}
|
||||
|
||||
interface PendingConfirmation {
|
||||
message: string
|
||||
diff?: DiffInfo
|
||||
resolve: (result: boolean | ConfirmationResult) => void
|
||||
}
|
||||
|
||||
export function App({
|
||||
projectPath,
|
||||
autoApply: initialAutoApply = false,
|
||||
deps,
|
||||
onExit,
|
||||
multiline = false,
|
||||
syntaxHighlight = true,
|
||||
theme = "dark",
|
||||
showStats = true,
|
||||
showToolCalls = true,
|
||||
bellOnComplete = false,
|
||||
}: ExtendedAppProps): React.JSX.Element {
|
||||
const { exit } = useApp()
|
||||
|
||||
const [branch] = useState<BranchInfo>({ name: "main", isDetached: false })
|
||||
const [sessionTime, setSessionTime] = useState("0m")
|
||||
const [autoApply, setAutoApply] = useState(initialAutoApply)
|
||||
const [commandResult, setCommandResult] = useState<CommandResult | null>(null)
|
||||
const [pendingConfirmation, setPendingConfirmation] = useState<PendingConfirmation | null>(null)
|
||||
|
||||
const projectName = projectPath.split("/").pop() ?? "unknown"
|
||||
|
||||
const handleConfirmation = useCallback(
|
||||
async (message: string, diff?: DiffInfo): Promise<boolean | ConfirmationResult> => {
|
||||
return new Promise((resolve) => {
|
||||
setPendingConfirmation({ message, diff, resolve })
|
||||
})
|
||||
},
|
||||
[],
|
||||
)
|
||||
|
||||
const handleConfirmSelect = useCallback(
|
||||
(choice: ConfirmChoice, editedContent?: string[]) => {
|
||||
if (!pendingConfirmation) {
|
||||
return
|
||||
}
|
||||
|
||||
if (choice === "apply") {
|
||||
if (editedContent) {
|
||||
pendingConfirmation.resolve({ confirmed: true, editedContent })
|
||||
} else {
|
||||
pendingConfirmation.resolve(true)
|
||||
}
|
||||
} else {
|
||||
pendingConfirmation.resolve(false)
|
||||
}
|
||||
|
||||
setPendingConfirmation(null)
|
||||
},
|
||||
[pendingConfirmation],
|
||||
)
|
||||
|
||||
const { session, messages, status, isLoading, error, sendMessage, undo, clearHistory, abort } =
|
||||
useSession(
|
||||
{
|
||||
storage: deps.storage,
|
||||
sessionStorage: deps.sessionStorage,
|
||||
llm: deps.llm,
|
||||
tools: deps.tools,
|
||||
projectRoot: projectPath,
|
||||
projectName,
|
||||
projectStructure: deps.projectStructure,
|
||||
config: deps.config,
|
||||
},
|
||||
{
|
||||
autoApply,
|
||||
onConfirmation: handleConfirmation,
|
||||
onError: handleErrorDefault,
|
||||
},
|
||||
)
|
||||
|
||||
const reindex = useCallback(async (): Promise<void> => {
|
||||
const { IndexProject } = await import("../application/use-cases/IndexProject.js")
|
||||
const indexProject = new IndexProject(deps.storage, projectPath)
|
||||
await indexProject.execute(projectPath)
|
||||
}, [deps.storage, projectPath])
|
||||
|
||||
const { executeCommand, isCommand } = useCommands(
|
||||
{
|
||||
session,
|
||||
sessionStorage: deps.sessionStorage,
|
||||
storage: deps.storage,
|
||||
llm: deps.llm,
|
||||
tools: deps.tools,
|
||||
projectRoot: projectPath,
|
||||
projectName,
|
||||
},
|
||||
{
|
||||
clearHistory,
|
||||
undo,
|
||||
setAutoApply,
|
||||
reindex,
|
||||
},
|
||||
{ autoApply },
|
||||
)
|
||||
|
||||
const handleExit = useCallback((): void => {
|
||||
onExit?.()
|
||||
exit()
|
||||
}, [exit, onExit])
|
||||
|
||||
const handleInterrupt = useCallback((): void => {
|
||||
if (status === "thinking" || status === "tool_call") {
|
||||
abort()
|
||||
}
|
||||
}, [status, abort])
|
||||
|
||||
const handleUndo = useCallback((): void => {
|
||||
void undo()
|
||||
}, [undo])
|
||||
|
||||
useHotkeys(
|
||||
{
|
||||
onInterrupt: handleInterrupt,
|
||||
onExit: handleExit,
|
||||
onUndo: handleUndo,
|
||||
},
|
||||
{ enabled: !isLoading },
|
||||
)
|
||||
|
||||
useEffect(() => {
|
||||
if (!session) {
|
||||
return
|
||||
}
|
||||
|
||||
const interval = setInterval(() => {
|
||||
setSessionTime(session.getSessionDurationFormatted())
|
||||
}, 60_000)
|
||||
|
||||
setSessionTime(session.getSessionDurationFormatted())
|
||||
|
||||
return (): void => {
|
||||
clearInterval(interval)
|
||||
}
|
||||
}, [session])
|
||||
|
||||
useEffect(() => {
|
||||
if (bellOnComplete && status === "ready") {
|
||||
ringBell()
|
||||
}
|
||||
}, [bellOnComplete, status])
|
||||
|
||||
const handleSubmit = useCallback(
|
||||
(text: string): void => {
|
||||
if (isCommand(text)) {
|
||||
void executeCommand(text).then((result) => {
|
||||
setCommandResult(result)
|
||||
// Auto-clear command result after 5 seconds
|
||||
setTimeout(() => {
|
||||
setCommandResult(null)
|
||||
}, 5000)
|
||||
})
|
||||
return
|
||||
}
|
||||
void sendMessage(text)
|
||||
},
|
||||
[sendMessage, isCommand, executeCommand],
|
||||
)
|
||||
|
||||
if (isLoading) {
|
||||
return <LoadingScreen />
|
||||
}
|
||||
|
||||
if (error) {
|
||||
return <ErrorScreen error={error} />
|
||||
}
|
||||
|
||||
const isInputDisabled = status === "thinking" || status === "tool_call" || !!pendingConfirmation
|
||||
|
||||
return (
|
||||
<Box flexDirection="column" height="100%">
|
||||
<StatusBar
|
||||
contextUsage={session?.context.tokenUsage ?? 0}
|
||||
projectName={projectName}
|
||||
branch={branch}
|
||||
sessionTime={sessionTime}
|
||||
status={status}
|
||||
theme={theme}
|
||||
/>
|
||||
<Chat
|
||||
messages={messages}
|
||||
isThinking={status === "thinking"}
|
||||
theme={theme}
|
||||
showStats={showStats}
|
||||
showToolCalls={showToolCalls}
|
||||
/>
|
||||
{commandResult && (
|
||||
<Box
|
||||
borderStyle="round"
|
||||
borderColor={commandResult.success ? "green" : "red"}
|
||||
paddingX={1}
|
||||
marginY={1}
|
||||
>
|
||||
<Text color={commandResult.success ? "green" : "red"} wrap="wrap">
|
||||
{commandResult.message}
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
{pendingConfirmation && (
|
||||
<ConfirmDialog
|
||||
message={pendingConfirmation.message}
|
||||
diff={
|
||||
pendingConfirmation.diff
|
||||
? {
|
||||
filePath: pendingConfirmation.diff.filePath,
|
||||
oldLines: pendingConfirmation.diff.oldLines,
|
||||
newLines: pendingConfirmation.diff.newLines,
|
||||
startLine: pendingConfirmation.diff.startLine,
|
||||
}
|
||||
: undefined
|
||||
}
|
||||
onSelect={handleConfirmSelect}
|
||||
editableContent={pendingConfirmation.diff?.newLines}
|
||||
syntaxHighlight={syntaxHighlight}
|
||||
/>
|
||||
)}
|
||||
<Input
|
||||
onSubmit={handleSubmit}
|
||||
history={session?.inputHistory ?? []}
|
||||
disabled={isInputDisabled}
|
||||
placeholder={isInputDisabled ? "Processing..." : "Type a message..."}
|
||||
storage={deps.storage}
|
||||
projectRoot={projectPath}
|
||||
autocompleteEnabled={true}
|
||||
multiline={multiline}
|
||||
/>
|
||||
</Box>
|
||||
)
|
||||
}
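
The confirmation flow in App.tsx hinges on one pattern: `handleConfirmation` creates a Promise and stores its `resolve` callback in component state, and `handleConfirmSelect` resolves that Promise later, when the user answers the `ConfirmDialog`. A minimal, framework-free sketch of the same idea (illustrative names, not part of the diff):

```typescript
// Minimal sketch of the deferred-confirmation pattern used in App.tsx (illustrative only).
type ConfirmationResolver = (confirmed: boolean) => void

let pending: { message: string; resolve: ConfirmationResolver } | null = null

// Called by the tool-execution path: returns a Promise that stays unresolved
// until the user answers the dialog.
function requestConfirmation(message: string): Promise<boolean> {
    return new Promise<boolean>((resolve) => {
        pending = { message, resolve }
    })
}

// Called by the dialog when the user presses [Y] or [N]: resolves the stored Promise.
function answerConfirmation(choice: "apply" | "cancel"): void {
    pending?.resolve(choice === "apply")
    pending = null
}

// Usage: the awaiting side suspends until answerConfirmation() runs.
void requestConfirmation("Apply change to src/index.ts?").then((ok) => {
    console.log(ok ? "applying" : "cancelled")
})
answerConfirmation("apply")
```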
packages/ipuaro/src/tui/components/Chat.tsx (new file, 208 lines)
@@ -0,0 +1,208 @@
|
||||
/**
|
||||
* Chat component for TUI.
|
||||
* Displays message history with tool calls and stats.
|
||||
*/
|
||||
|
||||
import { Box, Text } from "ink"
|
||||
import type React from "react"
|
||||
import type { ChatMessage } from "../../domain/value-objects/ChatMessage.js"
|
||||
import type { ToolCall } from "../../domain/value-objects/ToolCall.js"
|
||||
import { getRoleColor, type Theme } from "../utils/theme.js"
|
||||
|
||||
export interface ChatProps {
|
||||
messages: ChatMessage[]
|
||||
isThinking: boolean
|
||||
theme?: Theme
|
||||
showStats?: boolean
|
||||
showToolCalls?: boolean
|
||||
}
|
||||
|
||||
function formatTimestamp(timestamp: number): string {
|
||||
const date = new Date(timestamp)
|
||||
const hours = String(date.getHours()).padStart(2, "0")
|
||||
const minutes = String(date.getMinutes()).padStart(2, "0")
|
||||
return `${hours}:${minutes}`
|
||||
}
|
||||
|
||||
function formatStats(stats: ChatMessage["stats"]): string {
|
||||
if (!stats) {
|
||||
return ""
|
||||
}
|
||||
const time = (stats.timeMs / 1000).toFixed(1)
|
||||
const tokens = stats.tokens.toLocaleString()
|
||||
const tools = stats.toolCalls
|
||||
|
||||
const parts = [`${time}s`, `${tokens} tokens`]
|
||||
if (tools > 0) {
|
||||
parts.push(`${String(tools)} tool${tools > 1 ? "s" : ""}`)
|
||||
}
|
||||
return parts.join(" | ")
|
||||
}
|
||||
|
||||
function formatToolCall(call: ToolCall): string {
|
||||
const params = Object.entries(call.params)
|
||||
.map(([k, v]) => `${k}=${JSON.stringify(v)}`)
|
||||
.join(" ")
|
||||
return `[${call.name} ${params}]`
|
||||
}
|
||||
|
||||
interface MessageComponentProps {
|
||||
message: ChatMessage
|
||||
theme: Theme
|
||||
showStats: boolean
|
||||
showToolCalls: boolean
|
||||
}
|
||||
|
||||
function UserMessage({ message, theme }: MessageComponentProps): React.JSX.Element {
|
||||
const roleColor = getRoleColor("user", theme)
|
||||
|
||||
return (
|
||||
<Box flexDirection="column" marginBottom={1}>
|
||||
<Box gap={1}>
|
||||
<Text color={roleColor} bold>
|
||||
You
|
||||
</Text>
|
||||
<Text color="gray" dimColor>
|
||||
{formatTimestamp(message.timestamp)}
|
||||
</Text>
|
||||
</Box>
|
||||
<Box marginLeft={2}>
|
||||
<Text>{message.content}</Text>
|
||||
</Box>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function AssistantMessage({
|
||||
message,
|
||||
theme,
|
||||
showStats,
|
||||
showToolCalls,
|
||||
}: MessageComponentProps): React.JSX.Element {
|
||||
const stats = formatStats(message.stats)
|
||||
const roleColor = getRoleColor("assistant", theme)
|
||||
|
||||
return (
|
||||
<Box flexDirection="column" marginBottom={1}>
|
||||
<Box gap={1}>
|
||||
<Text color={roleColor} bold>
|
||||
Assistant
|
||||
</Text>
|
||||
<Text color="gray" dimColor>
|
||||
{formatTimestamp(message.timestamp)}
|
||||
</Text>
|
||||
</Box>
|
||||
|
||||
{showToolCalls && message.toolCalls && message.toolCalls.length > 0 && (
|
||||
<Box flexDirection="column" marginLeft={2} marginBottom={1}>
|
||||
{message.toolCalls.map((call) => (
|
||||
<Text key={call.id} color="yellow">
|
||||
{formatToolCall(call)}
|
||||
</Text>
|
||||
))}
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{message.content && (
|
||||
<Box marginLeft={2}>
|
||||
<Text>{message.content}</Text>
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{showStats && stats && (
|
||||
<Box marginLeft={2} marginTop={1}>
|
||||
<Text color="gray" dimColor>
|
||||
{stats}
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function ToolMessage({ message }: MessageComponentProps): React.JSX.Element {
|
||||
return (
|
||||
<Box flexDirection="column" marginBottom={1} marginLeft={2}>
|
||||
{message.toolResults?.map((result) => (
|
||||
<Box key={result.callId} flexDirection="column">
|
||||
<Text color={result.success ? "green" : "red"}>
|
||||
{result.success ? "+" : "x"} {result.callId.slice(0, 8)}
|
||||
</Text>
|
||||
</Box>
|
||||
))}
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function SystemMessage({ message, theme }: MessageComponentProps): React.JSX.Element {
|
||||
const isError = message.content.toLowerCase().startsWith("error")
|
||||
const roleColor = getRoleColor("system", theme)
|
||||
|
||||
return (
|
||||
<Box marginBottom={1} marginLeft={2}>
|
||||
<Text color={isError ? "red" : roleColor} dimColor={!isError}>
|
||||
{message.content}
|
||||
</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function MessageComponent({
|
||||
message,
|
||||
theme,
|
||||
showStats,
|
||||
showToolCalls,
|
||||
}: MessageComponentProps): React.JSX.Element {
|
||||
const props = { message, theme, showStats, showToolCalls }
|
||||
|
||||
switch (message.role) {
|
||||
case "user": {
|
||||
return <UserMessage {...props} />
|
||||
}
|
||||
case "assistant": {
|
||||
return <AssistantMessage {...props} />
|
||||
}
|
||||
case "tool": {
|
||||
return <ToolMessage {...props} />
|
||||
}
|
||||
case "system": {
|
||||
return <SystemMessage {...props} />
|
||||
}
|
||||
default: {
|
||||
return <></>
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function ThinkingIndicator({ theme }: { theme: Theme }): React.JSX.Element {
|
||||
const color = getRoleColor("assistant", theme)
|
||||
|
||||
return (
|
||||
<Box marginBottom={1}>
|
||||
<Text color={color}>Thinking...</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
export function Chat({
|
||||
messages,
|
||||
isThinking,
|
||||
theme = "dark",
|
||||
showStats = true,
|
||||
showToolCalls = true,
|
||||
}: ChatProps): React.JSX.Element {
|
||||
return (
|
||||
<Box flexDirection="column" flexGrow={1} paddingX={1}>
|
||||
{messages.map((message, index) => (
|
||||
<MessageComponent
|
||||
key={`${String(message.timestamp)}-${String(index)}`}
|
||||
message={message}
|
||||
theme={theme}
|
||||
showStats={showStats}
|
||||
showToolCalls={showToolCalls}
|
||||
/>
|
||||
))}
|
||||
{isThinking && <ThinkingIndicator theme={theme} />}
|
||||
</Box>
|
||||
)
|
||||
}
packages/ipuaro/src/tui/components/ConfirmDialog.tsx (new file, 137 lines)
@@ -0,0 +1,137 @@
|
||||
/**
|
||||
* ConfirmDialog component for TUI.
|
||||
* Displays a confirmation dialog with [Y] Apply / [N] Cancel / [E] Edit options.
|
||||
* Supports inline editing when user selects Edit.
|
||||
*/
|
||||
|
||||
import { Box, Text, useInput } from "ink"
|
||||
import React, { useCallback, useState } from "react"
|
||||
import type { ConfirmChoice } from "../../shared/types/index.js"
|
||||
import { DiffView, type DiffViewProps } from "./DiffView.js"
|
||||
import { EditableContent } from "./EditableContent.js"
|
||||
|
||||
export interface ConfirmDialogProps {
|
||||
message: string
|
||||
diff?: DiffViewProps
|
||||
onSelect: (choice: ConfirmChoice, editedContent?: string[]) => void
|
||||
editableContent?: string[]
|
||||
syntaxHighlight?: boolean
|
||||
}
|
||||
|
||||
type DialogMode = "confirm" | "edit"
|
||||
|
||||
function ChoiceButton({
|
||||
hotkey,
|
||||
label,
|
||||
isSelected,
|
||||
}: {
|
||||
hotkey: string
|
||||
label: string
|
||||
isSelected: boolean
|
||||
}): React.JSX.Element {
|
||||
return (
|
||||
<Box>
|
||||
<Text color={isSelected ? "cyan" : "gray"}>
|
||||
[<Text bold>{hotkey}</Text>] {label}
|
||||
</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
export function ConfirmDialog({
|
||||
message,
|
||||
diff,
|
||||
onSelect,
|
||||
editableContent,
|
||||
syntaxHighlight = false,
|
||||
}: ConfirmDialogProps): React.JSX.Element {
|
||||
const [mode, setMode] = useState<DialogMode>("confirm")
|
||||
const [selected, setSelected] = useState<ConfirmChoice | null>(null)
|
||||
|
||||
const linesToEdit = editableContent ?? diff?.newLines ?? []
|
||||
const canEdit = linesToEdit.length > 0
|
||||
|
||||
const handleEditSubmit = useCallback(
|
||||
(editedLines: string[]) => {
|
||||
setSelected("apply")
|
||||
onSelect("apply", editedLines)
|
||||
},
|
||||
[onSelect],
|
||||
)
|
||||
|
||||
const handleEditCancel = useCallback(() => {
|
||||
setMode("confirm")
|
||||
setSelected(null)
|
||||
}, [])
|
||||
|
||||
useInput(
|
||||
(input, key) => {
|
||||
if (mode === "edit") {
|
||||
return
|
||||
}
|
||||
|
||||
const lowerInput = input.toLowerCase()
|
||||
|
||||
if (lowerInput === "y") {
|
||||
setSelected("apply")
|
||||
onSelect("apply")
|
||||
} else if (lowerInput === "n") {
|
||||
setSelected("cancel")
|
||||
onSelect("cancel")
|
||||
} else if (lowerInput === "e" && canEdit) {
|
||||
setSelected("edit")
|
||||
setMode("edit")
|
||||
} else if (key.escape) {
|
||||
setSelected("cancel")
|
||||
onSelect("cancel")
|
||||
}
|
||||
},
|
||||
{ isActive: mode === "confirm" },
|
||||
)
|
||||
|
||||
if (mode === "edit") {
|
||||
return (
|
||||
<EditableContent
|
||||
lines={linesToEdit}
|
||||
onSubmit={handleEditSubmit}
|
||||
onCancel={handleEditCancel}
|
||||
/>
|
||||
)
|
||||
}
|
||||
|
||||
return (
|
||||
<Box
|
||||
flexDirection="column"
|
||||
borderStyle="round"
|
||||
borderColor="yellow"
|
||||
paddingX={1}
|
||||
paddingY={1}
|
||||
>
|
||||
<Box marginBottom={1}>
|
||||
<Text color="yellow" bold>
|
||||
⚠ {message}
|
||||
</Text>
|
||||
</Box>
|
||||
|
||||
{diff && (
|
||||
<Box marginBottom={1}>
|
||||
<DiffView {...diff} syntaxHighlight={syntaxHighlight} />
|
||||
</Box>
|
||||
)}
|
||||
|
||||
<Box gap={2}>
|
||||
<ChoiceButton hotkey="Y" label="Apply" isSelected={selected === "apply"} />
|
||||
<ChoiceButton hotkey="N" label="Cancel" isSelected={selected === "cancel"} />
|
||||
{canEdit ? (
|
||||
<ChoiceButton hotkey="E" label="Edit" isSelected={selected === "edit"} />
|
||||
) : (
|
||||
<Box>
|
||||
<Text color="gray" dimColor>
|
||||
[E] Edit (disabled)
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
</Box>
|
||||
)
|
||||
}
packages/ipuaro/src/tui/components/DiffView.tsx (new file, 219 lines)
@@ -0,0 +1,219 @@
|
||||
/**
|
||||
* DiffView component for TUI.
|
||||
* Displays inline diff with green (added) and red (removed) highlighting.
|
||||
*/
|
||||
|
||||
import { Box, Text } from "ink"
|
||||
import type React from "react"
|
||||
import { detectLanguage, highlightLine, type Language } from "../utils/syntax-highlighter.js"
|
||||
|
||||
export interface DiffViewProps {
|
||||
filePath: string
|
||||
oldLines: string[]
|
||||
newLines: string[]
|
||||
startLine: number
|
||||
language?: Language
|
||||
syntaxHighlight?: boolean
|
||||
}
|
||||
|
||||
interface DiffLine {
|
||||
type: "add" | "remove" | "context"
|
||||
content: string
|
||||
lineNumber?: number
|
||||
}
|
||||
|
||||
function computeDiff(oldLines: string[], newLines: string[], startLine: number): DiffLine[] {
|
||||
const result: DiffLine[] = []
|
||||
|
||||
let oldIdx = 0
|
||||
let newIdx = 0
|
||||
|
||||
while (oldIdx < oldLines.length || newIdx < newLines.length) {
|
||||
const oldLine = oldIdx < oldLines.length ? oldLines[oldIdx] : undefined
|
||||
const newLine = newIdx < newLines.length ? newLines[newIdx] : undefined
|
||||
|
||||
if (oldLine === newLine) {
|
||||
result.push({
|
||||
type: "context",
|
||||
content: oldLine ?? "",
|
||||
lineNumber: startLine + newIdx,
|
||||
})
|
||||
oldIdx++
|
||||
newIdx++
|
||||
} else {
|
||||
if (oldLine !== undefined) {
|
||||
result.push({
|
||||
type: "remove",
|
||||
content: oldLine,
|
||||
})
|
||||
oldIdx++
|
||||
}
|
||||
if (newLine !== undefined) {
|
||||
result.push({
|
||||
type: "add",
|
||||
content: newLine,
|
||||
lineNumber: startLine + newIdx,
|
||||
})
|
||||
newIdx++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
function getLinePrefix(line: DiffLine): string {
|
||||
switch (line.type) {
|
||||
case "add": {
|
||||
return "+"
|
||||
}
|
||||
case "remove": {
|
||||
return "-"
|
||||
}
|
||||
case "context": {
|
||||
return " "
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function getLineColor(line: DiffLine): string {
|
||||
switch (line.type) {
|
||||
case "add": {
|
||||
return "green"
|
||||
}
|
||||
case "remove": {
|
||||
return "red"
|
||||
}
|
||||
case "context": {
|
||||
return "gray"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function formatLineNumber(num: number | undefined, width: number): string {
|
||||
if (num === undefined) {
|
||||
return " ".repeat(width)
|
||||
}
|
||||
return String(num).padStart(width, " ")
|
||||
}
|
||||
|
||||
function DiffLine({
|
||||
line,
|
||||
lineNumberWidth,
|
||||
language,
|
||||
syntaxHighlight,
|
||||
}: {
|
||||
line: DiffLine
|
||||
lineNumberWidth: number
|
||||
language?: Language
|
||||
syntaxHighlight?: boolean
|
||||
}): React.JSX.Element {
|
||||
const prefix = getLinePrefix(line)
|
||||
const color = getLineColor(line)
|
||||
const lineNum = formatLineNumber(line.lineNumber, lineNumberWidth)
|
||||
|
||||
const shouldHighlight = syntaxHighlight && language && line.type === "add"
|
||||
|
||||
return (
|
||||
<Box>
|
||||
<Text color="gray">{lineNum} </Text>
|
||||
{shouldHighlight ? (
|
||||
<Box>
|
||||
<Text color={color}>{prefix} </Text>
|
||||
{highlightLine(line.content, language).map((token, idx) => (
|
||||
<Text key={idx} color={token.color}>
|
||||
{token.text}
|
||||
</Text>
|
||||
))}
|
||||
</Box>
|
||||
) : (
|
||||
<Text color={color}>
|
||||
{prefix} {line.content}
|
||||
</Text>
|
||||
)}
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function DiffHeader({
|
||||
filePath,
|
||||
startLine,
|
||||
endLine,
|
||||
}: {
|
||||
filePath: string
|
||||
startLine: number
|
||||
endLine: number
|
||||
}): React.JSX.Element {
|
||||
const lineRange =
|
||||
startLine === endLine
|
||||
? `line ${String(startLine)}`
|
||||
: `lines ${String(startLine)}-${String(endLine)}`
|
||||
|
||||
return (
|
||||
<Box>
|
||||
<Text color="gray">┌─── </Text>
|
||||
<Text color="cyan">{filePath}</Text>
|
||||
<Text color="gray"> ({lineRange}) ───┐</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function DiffFooter(): React.JSX.Element {
|
||||
return (
|
||||
<Box>
|
||||
<Text color="gray">└───────────────────────────────────────┘</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
function DiffStats({
|
||||
additions,
|
||||
deletions,
|
||||
}: {
|
||||
additions: number
|
||||
deletions: number
|
||||
}): React.JSX.Element {
|
||||
return (
|
||||
<Box gap={1} marginTop={1}>
|
||||
<Text color="green">+{String(additions)}</Text>
|
||||
<Text color="red">-{String(deletions)}</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
export function DiffView({
|
||||
filePath,
|
||||
oldLines,
|
||||
newLines,
|
||||
startLine,
|
||||
language,
|
||||
syntaxHighlight = false,
|
||||
}: DiffViewProps): React.JSX.Element {
|
||||
const diffLines = computeDiff(oldLines, newLines, startLine)
|
||||
const endLine = startLine + newLines.length - 1
|
||||
const lineNumberWidth = String(endLine).length
|
||||
|
||||
const additions = diffLines.filter((l) => l.type === "add").length
|
||||
const deletions = diffLines.filter((l) => l.type === "remove").length
|
||||
|
||||
const detectedLanguage = language ?? detectLanguage(filePath)
|
||||
|
||||
return (
|
||||
<Box flexDirection="column" paddingX={1}>
|
||||
<DiffHeader filePath={filePath} startLine={startLine} endLine={endLine} />
|
||||
<Box flexDirection="column" paddingX={1}>
|
||||
{diffLines.map((line, index) => (
|
||||
<DiffLine
|
||||
key={`${line.type}-${String(index)}`}
|
||||
line={line}
|
||||
lineNumberWidth={lineNumberWidth}
|
||||
language={detectedLanguage}
|
||||
syntaxHighlight={syntaxHighlight}
|
||||
/>
|
||||
))}
|
||||
</Box>
|
||||
<DiffFooter />
|
||||
<DiffStats additions={additions} deletions={deletions} />
|
||||
</Box>
|
||||
)
|
||||
}
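
`computeDiff` above walks the old and new arrays in lockstep and emits a remove/add pair whenever the lines at the same position differ; it is a positional comparison, not an LCS-based diff. A small worked example of what it produces, re-derived from the logic shown (illustrative input):

```typescript
// Worked example for computeDiff(oldLines, newLines, startLine) as defined above.
const oldLines = ["const a = 1", "console.log(a)"]
const newLines = ["const a = 2", "console.log(a)"]
// computeDiff(oldLines, newLines, 10) yields, in order:
//   { type: "remove",  content: "const a = 1" }                      // differing old line
//   { type: "add",     content: "const a = 2",    lineNumber: 10 }   // differing new line
//   { type: "context", content: "console.log(a)", lineNumber: 11 }   // identical line
```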
packages/ipuaro/src/tui/components/EditableContent.tsx (new file, 146 lines)
@@ -0,0 +1,146 @@
|
||||
/**
|
||||
* EditableContent component for TUI.
|
||||
* Displays editable multi-line text with line-by-line navigation.
|
||||
*/
|
||||
|
||||
import { Box, Text, useInput } from "ink"
|
||||
import TextInput from "ink-text-input"
|
||||
import React, { useCallback, useState } from "react"
|
||||
|
||||
export interface EditableContentProps {
|
||||
/** Initial lines to edit */
|
||||
lines: string[]
|
||||
/** Called when user finishes editing (Enter key) */
|
||||
onSubmit: (editedLines: string[]) => void
|
||||
/** Called when user cancels editing (Escape key) */
|
||||
onCancel: () => void
|
||||
/** Maximum visible lines before scrolling */
|
||||
maxVisibleLines?: number
|
||||
}
|
||||
|
||||
/**
|
||||
* EditableContent component.
|
||||
* Allows line-by-line editing of multi-line text.
|
||||
* - Up/Down: Navigate between lines
|
||||
* - Enter (on last line): Submit changes
|
||||
* - Ctrl+Enter: Submit changes from any line
|
||||
* - Escape: Cancel editing
|
||||
*/
|
||||
export function EditableContent({
|
||||
lines: initialLines,
|
||||
onSubmit,
|
||||
onCancel,
|
||||
maxVisibleLines = 20,
|
||||
}: EditableContentProps): React.JSX.Element {
|
||||
const [lines, setLines] = useState<string[]>(initialLines.length > 0 ? initialLines : [""])
|
||||
const [currentLineIndex, setCurrentLineIndex] = useState(0)
|
||||
const [currentLineValue, setCurrentLineValue] = useState(lines[0] ?? "")
|
||||
|
||||
const updateCurrentLine = useCallback(
|
||||
(value: string) => {
|
||||
const newLines = [...lines]
|
||||
newLines[currentLineIndex] = value
|
||||
setLines(newLines)
|
||||
setCurrentLineValue(value)
|
||||
},
|
||||
[lines, currentLineIndex],
|
||||
)
|
||||
|
||||
const handleLineSubmit = useCallback(() => {
|
||||
updateCurrentLine(currentLineValue)
|
||||
|
||||
if (currentLineIndex === lines.length - 1) {
|
||||
onSubmit(lines)
|
||||
} else {
|
||||
const nextIndex = currentLineIndex + 1
|
||||
setCurrentLineIndex(nextIndex)
|
||||
setCurrentLineValue(lines[nextIndex] ?? "")
|
||||
}
|
||||
}, [currentLineValue, currentLineIndex, lines, updateCurrentLine, onSubmit])
|
||||
|
||||
const handleMoveUp = useCallback(() => {
|
||||
if (currentLineIndex > 0) {
|
||||
updateCurrentLine(currentLineValue)
|
||||
const prevIndex = currentLineIndex - 1
|
||||
setCurrentLineIndex(prevIndex)
|
||||
setCurrentLineValue(lines[prevIndex] ?? "")
|
||||
}
|
||||
}, [currentLineIndex, currentLineValue, lines, updateCurrentLine])
|
||||
|
||||
const handleMoveDown = useCallback(() => {
|
||||
if (currentLineIndex < lines.length - 1) {
|
||||
updateCurrentLine(currentLineValue)
|
||||
const nextIndex = currentLineIndex + 1
|
||||
setCurrentLineIndex(nextIndex)
|
||||
setCurrentLineValue(lines[nextIndex] ?? "")
|
||||
}
|
||||
}, [currentLineIndex, currentLineValue, lines, updateCurrentLine])
|
||||
|
||||
const handleCtrlEnter = useCallback(() => {
|
||||
updateCurrentLine(currentLineValue)
|
||||
onSubmit(lines)
|
||||
}, [currentLineValue, lines, updateCurrentLine, onSubmit])
|
||||
|
||||
useInput(
|
||||
(input, key) => {
|
||||
if (key.escape) {
|
||||
onCancel()
|
||||
} else if (key.upArrow) {
|
||||
handleMoveUp()
|
||||
} else if (key.downArrow) {
|
||||
handleMoveDown()
|
||||
} else if (key.ctrl && key.return) {
|
||||
handleCtrlEnter()
|
||||
}
|
||||
},
|
||||
{ isActive: true },
|
||||
)
|
||||
|
||||
const startLine = Math.max(0, currentLineIndex - Math.floor(maxVisibleLines / 2))
|
||||
const endLine = Math.min(lines.length, startLine + maxVisibleLines)
|
||||
const visibleLines = lines.slice(startLine, endLine)
|
||||
|
||||
return (
|
||||
<Box flexDirection="column" borderStyle="round" borderColor="cyan" paddingX={1}>
|
||||
<Box marginBottom={1}>
|
||||
<Text color="cyan" bold>
|
||||
Edit Content (Line {currentLineIndex + 1}/{lines.length})
|
||||
</Text>
|
||||
</Box>
|
||||
|
||||
<Box flexDirection="column" marginBottom={1}>
|
||||
{visibleLines.map((line, idx) => {
|
||||
const actualIndex = startLine + idx
|
||||
const isCurrentLine = actualIndex === currentLineIndex
|
||||
|
||||
return (
|
||||
<Box key={actualIndex}>
|
||||
<Text color="gray" dimColor>
|
||||
{String(actualIndex + 1).padStart(3, " ")}:{" "}
|
||||
</Text>
|
||||
{isCurrentLine ? (
|
||||
<Box>
|
||||
<Text color="cyan">▶ </Text>
|
||||
<TextInput
|
||||
value={currentLineValue}
|
||||
onChange={setCurrentLineValue}
|
||||
onSubmit={handleLineSubmit}
|
||||
/>
|
||||
</Box>
|
||||
) : (
|
||||
<Text color={isCurrentLine ? "cyan" : "white"}>{line}</Text>
|
||||
)}
|
||||
</Box>
|
||||
)
|
||||
})}
|
||||
</Box>
|
||||
|
||||
<Box flexDirection="column" borderStyle="single" borderColor="gray" paddingX={1}>
|
||||
<Text dimColor>↑/↓: Navigate lines</Text>
|
||||
<Text dimColor>Enter: Next line / Submit (last line)</Text>
|
||||
<Text dimColor>Ctrl+Enter: Submit from any line</Text>
|
||||
<Text dimColor>Escape: Cancel</Text>
|
||||
</Box>
|
||||
</Box>
|
||||
)
|
||||
}
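
The visible window in `EditableContent` is derived from `maxVisibleLines` and the current line: the cursor is centered where possible and the window is clamped at the ends. A quick numeric check of that windowing math (a standalone re-statement, not code from the file):

```typescript
// Windowing used by EditableContent: center the current line, clamp to bounds.
function visibleRange(currentLineIndex: number, totalLines: number, maxVisibleLines: number) {
    const startLine = Math.max(0, currentLineIndex - Math.floor(maxVisibleLines / 2))
    const endLine = Math.min(totalLines, startLine + maxVisibleLines)
    return { startLine, endLine }
}

// With 100 lines and the default maxVisibleLines = 20:
console.log(visibleRange(0, 100, 20))  // { startLine: 0,  endLine: 20 }  - clamped at the top
console.log(visibleRange(50, 100, 20)) // { startLine: 40, endLine: 60 }  - cursor centered
console.log(visibleRange(95, 100, 20)) // { startLine: 85, endLine: 100 } - near the bottom
```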
packages/ipuaro/src/tui/components/ErrorDialog.tsx (new file, 105 lines)
@@ -0,0 +1,105 @@
|
||||
/**
|
||||
* ErrorDialog component for TUI.
|
||||
* Displays an error with [R] Retry / [S] Skip / [A] Abort options.
|
||||
*/
|
||||
|
||||
import { Box, Text, useInput } from "ink"
|
||||
import React, { useState } from "react"
|
||||
import type { ErrorOption } from "../../shared/errors/IpuaroError.js"
|
||||
|
||||
export interface ErrorInfo {
|
||||
type: string
|
||||
message: string
|
||||
recoverable: boolean
|
||||
}
|
||||
|
||||
export interface ErrorDialogProps {
|
||||
error: ErrorInfo
|
||||
onChoice: (choice: ErrorOption) => void
|
||||
}
|
||||
|
||||
function ChoiceButton({
|
||||
hotkey,
|
||||
label,
|
||||
isSelected,
|
||||
disabled,
|
||||
}: {
|
||||
hotkey: string
|
||||
label: string
|
||||
isSelected: boolean
|
||||
disabled?: boolean
|
||||
}): React.JSX.Element {
|
||||
if (disabled) {
|
||||
return (
|
||||
<Box>
|
||||
<Text color="gray" dimColor>
|
||||
[{hotkey}] {label}
|
||||
</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
return (
|
||||
<Box>
|
||||
<Text color={isSelected ? "cyan" : "gray"}>
|
||||
[<Text bold>{hotkey}</Text>] {label}
|
||||
</Text>
|
||||
</Box>
|
||||
)
|
||||
}
|
||||
|
||||
export function ErrorDialog({ error, onChoice }: ErrorDialogProps): React.JSX.Element {
|
||||
const [selected, setSelected] = useState<ErrorOption | null>(null)
|
||||
|
||||
useInput((input, key) => {
|
||||
const lowerInput = input.toLowerCase()
|
||||
|
||||
if (lowerInput === "r" && error.recoverable) {
|
||||
setSelected("retry")
|
||||
onChoice("retry")
|
||||
} else if (lowerInput === "s" && error.recoverable) {
|
||||
setSelected("skip")
|
||||
onChoice("skip")
|
||||
} else if (lowerInput === "a") {
|
||||
setSelected("abort")
|
||||
onChoice("abort")
|
||||
} else if (key.escape) {
|
||||
setSelected("abort")
|
||||
onChoice("abort")
|
||||
}
|
||||
})
|
||||
|
||||
return (
|
||||
<Box flexDirection="column" borderStyle="round" borderColor="red" paddingX={1} paddingY={1}>
|
||||
<Box marginBottom={1}>
|
||||
<Text color="red" bold>
|
||||
x {error.type}: {error.message}
|
||||
</Text>
|
||||
</Box>
|
||||
|
||||
<Box gap={2}>
|
||||
<ChoiceButton
|
||||
hotkey="R"
|
||||
label="Retry"
|
||||
isSelected={selected === "retry"}
|
||||
disabled={!error.recoverable}
|
||||
/>
|
||||
<ChoiceButton
|
||||
hotkey="S"
|
||||
label="Skip"
|
||||
isSelected={selected === "skip"}
|
||||
disabled={!error.recoverable}
|
||||
/>
|
||||
<ChoiceButton hotkey="A" label="Abort" isSelected={selected === "abort"} />
|
||||
</Box>
|
||||
|
||||
{!error.recoverable && (
|
||||
<Box marginTop={1}>
|
||||
<Text color="gray" dimColor>
|
||||
This error is not recoverable. Press [A] to abort.
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
)
|
||||
}
packages/ipuaro/src/tui/components/Input.tsx (new file, 250 lines)
@@ -0,0 +1,250 @@
|
||||
/**
|
||||
* Input component for TUI.
|
||||
* Prompt with history navigation (up/down) and path autocomplete (tab).
|
||||
*/
|
||||
|
||||
import { Box, Text, useInput } from "ink"
|
||||
import TextInput from "ink-text-input"
|
||||
import React, { useCallback, useState } from "react"
|
||||
import type { IStorage } from "../../domain/services/IStorage.js"
|
||||
import { useAutocomplete } from "../hooks/useAutocomplete.js"
|
||||
|
||||
export interface InputProps {
|
||||
onSubmit: (text: string) => void
|
||||
history: string[]
|
||||
disabled: boolean
|
||||
placeholder?: string
|
||||
storage?: IStorage
|
||||
projectRoot?: string
|
||||
autocompleteEnabled?: boolean
|
||||
multiline?: boolean | "auto"
|
||||
}
|
||||
|
||||
export function Input({
|
||||
onSubmit,
|
||||
history,
|
||||
disabled,
|
||||
placeholder = "Type a message...",
|
||||
storage,
|
||||
projectRoot = "",
|
||||
autocompleteEnabled = true,
|
||||
multiline = false,
|
||||
}: InputProps): React.JSX.Element {
|
||||
const [value, setValue] = useState("")
|
||||
const [historyIndex, setHistoryIndex] = useState(-1)
|
||||
const [savedInput, setSavedInput] = useState("")
|
||||
const [lines, setLines] = useState<string[]>([""])
|
||||
const [currentLineIndex, setCurrentLineIndex] = useState(0)
|
||||
|
||||
const isMultilineActive = multiline === true || (multiline === "auto" && lines.length > 1)
|
||||
|
||||
/*
|
||||
* Initialize autocomplete hook if storage is provided
|
||||
* Create a dummy storage object if storage is not provided (autocomplete will be disabled)
|
||||
*/
|
||||
const dummyStorage = {} as IStorage
|
||||
const autocomplete = useAutocomplete({
|
||||
storage: storage ?? dummyStorage,
|
||||
projectRoot,
|
||||
enabled: autocompleteEnabled && !!storage,
|
||||
})
|
||||
|
||||
const handleChange = useCallback(
|
||||
(newValue: string) => {
|
||||
setValue(newValue)
|
||||
setHistoryIndex(-1)
|
||||
// Update autocomplete suggestions as user types
|
||||
if (storage && autocompleteEnabled) {
|
||||
autocomplete.complete(newValue)
|
||||
}
|
||||
},
|
||||
[storage, autocompleteEnabled, autocomplete],
|
||||
)
|
||||
|
||||
const handleSubmit = useCallback(
|
||||
(text: string) => {
|
||||
if (disabled || !text.trim()) {
|
||||
return
|
||||
}
|
||||
onSubmit(text)
|
||||
setValue("")
|
||||
setLines([""])
|
||||
setCurrentLineIndex(0)
|
||||
setHistoryIndex(-1)
|
||||
setSavedInput("")
|
||||
autocomplete.reset()
|
||||
},
|
||||
[disabled, onSubmit, autocomplete],
|
||||
)
|
||||
|
||||
const handleLineChange = useCallback(
|
||||
(newValue: string) => {
|
||||
const newLines = [...lines]
|
||||
newLines[currentLineIndex] = newValue
|
||||
setLines(newLines)
|
||||
setValue(newLines.join("\n"))
|
||||
},
|
||||
[lines, currentLineIndex],
|
||||
)
|
||||
|
||||
const handleAddLine = useCallback(() => {
|
||||
const newLines = [...lines]
|
||||
newLines.splice(currentLineIndex + 1, 0, "")
|
||||
setLines(newLines)
|
||||
setCurrentLineIndex(currentLineIndex + 1)
|
||||
setValue(newLines.join("\n"))
|
||||
}, [lines, currentLineIndex])
|
||||
|
||||
const handleMultilineSubmit = useCallback(() => {
|
||||
const fullText = lines.join("\n").trim()
|
||||
if (fullText) {
|
||||
handleSubmit(fullText)
|
||||
}
|
||||
}, [lines, handleSubmit])
|
||||
|
||||
const handleTabKey = useCallback(() => {
|
||||
if (storage && autocompleteEnabled && value.trim()) {
|
||||
const suggestions = autocomplete.suggestions
|
||||
if (suggestions.length > 0) {
|
||||
const completed = autocomplete.accept(value)
|
||||
setValue(completed)
|
||||
autocomplete.complete(completed)
|
||||
}
|
||||
}
|
||||
}, [storage, autocompleteEnabled, value, autocomplete])
|
||||
|
||||
const handleUpArrow = useCallback(() => {
|
||||
if (history.length > 0) {
|
||||
if (historyIndex === -1) {
|
||||
setSavedInput(value)
|
||||
}
|
||||
const newIndex =
|
||||
historyIndex === -1 ? history.length - 1 : Math.max(0, historyIndex - 1)
|
||||
setHistoryIndex(newIndex)
|
||||
setValue(history[newIndex] ?? "")
|
||||
autocomplete.reset()
|
||||
}
|
||||
}, [history, historyIndex, value, autocomplete])
|
||||
|
||||
const handleDownArrow = useCallback(() => {
|
||||
if (historyIndex === -1) {
|
||||
return
|
||||
}
|
||||
if (historyIndex >= history.length - 1) {
|
||||
setHistoryIndex(-1)
|
||||
setValue(savedInput)
|
||||
} else {
|
||||
const newIndex = historyIndex + 1
|
||||
setHistoryIndex(newIndex)
|
||||
setValue(history[newIndex] ?? "")
|
||||
}
|
||||
autocomplete.reset()
|
||||
}, [historyIndex, history, savedInput, autocomplete])
|
||||
|
||||
useInput(
|
||||
(input, key) => {
|
||||
if (disabled) {
|
||||
return
|
||||
}
|
||||
if (key.tab) {
|
||||
handleTabKey()
|
||||
}
|
||||
if (key.return && key.shift && isMultilineActive) {
|
||||
handleAddLine()
|
||||
}
|
||||
if (key.upArrow) {
|
||||
if (isMultilineActive && currentLineIndex > 0) {
|
||||
setCurrentLineIndex(currentLineIndex - 1)
|
||||
} else if (!isMultilineActive) {
|
||||
handleUpArrow()
|
||||
}
|
||||
}
|
||||
if (key.downArrow) {
|
||||
if (isMultilineActive && currentLineIndex < lines.length - 1) {
|
||||
setCurrentLineIndex(currentLineIndex + 1)
|
||||
} else if (!isMultilineActive) {
|
||||
handleDownArrow()
|
||||
}
|
||||
}
|
||||
},
|
||||
{ isActive: !disabled },
|
||||
)
|
||||
|
||||
const hasSuggestions = autocomplete.suggestions.length > 0
|
||||
|
||||
return (
|
||||
<Box flexDirection="column">
|
||||
<Box
|
||||
borderStyle="single"
|
||||
borderColor={disabled ? "gray" : "cyan"}
|
||||
paddingX={1}
|
||||
flexDirection="column"
|
||||
>
|
||||
{disabled ? (
|
||||
<Box>
|
||||
<Text color="gray" bold>
|
||||
{">"}{" "}
|
||||
</Text>
|
||||
<Text color="gray" dimColor>
|
||||
{placeholder}
|
||||
</Text>
|
||||
</Box>
|
||||
) : isMultilineActive ? (
|
||||
<Box flexDirection="column">
|
||||
{lines.map((line, index) => (
|
||||
<Box key={index}>
|
||||
<Text color="green" bold>
|
||||
{index === currentLineIndex ? ">" : " "}{" "}
|
||||
</Text>
|
||||
{index === currentLineIndex ? (
|
||||
<TextInput
|
||||
value={line}
|
||||
onChange={handleLineChange}
|
||||
onSubmit={handleMultilineSubmit}
|
||||
placeholder={index === 0 ? placeholder : ""}
|
||||
/>
|
||||
) : (
|
||||
<Text>{line}</Text>
|
||||
)}
|
||||
</Box>
|
||||
))}
|
||||
<Box marginTop={1}>
|
||||
<Text dimColor>Shift+Enter: new line | Enter: submit</Text>
|
||||
</Box>
|
||||
</Box>
|
||||
) : (
|
||||
<Box>
|
||||
<Text color="green" bold>
|
||||
{">"}{" "}
|
||||
</Text>
|
||||
<TextInput
|
||||
value={value}
|
||||
onChange={handleChange}
|
||||
onSubmit={handleSubmit}
|
||||
placeholder={placeholder}
|
||||
/>
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
{hasSuggestions && !disabled && (
|
||||
<Box paddingLeft={2} flexDirection="column">
|
||||
<Text dimColor>
|
||||
{autocomplete.suggestions.length === 1
|
||||
? "Press Tab to complete"
|
||||
: `${String(autocomplete.suggestions.length)} suggestions (Tab to complete)`}
|
||||
</Text>
|
||||
{autocomplete.suggestions.slice(0, 5).map((suggestion, i) => (
|
||||
<Text key={i} dimColor color="cyan">
|
||||
{" "}• {suggestion}
|
||||
</Text>
|
||||
))}
|
||||
{autocomplete.suggestions.length > 5 && (
|
||||
<Text dimColor>
|
||||
{" "}... and {String(autocomplete.suggestions.length - 5)} more
|
||||
</Text>
|
||||
)}
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
)
|
||||
}
packages/ipuaro/src/tui/components/Progress.tsx (new file, 62 lines)
@@ -0,0 +1,62 @@
|
||||
/**
|
||||
* Progress component for TUI.
|
||||
* Displays a progress bar: [=====> ] 45% (120/267 files)
|
||||
*/
|
||||
|
||||
import { Box, Text } from "ink"
|
||||
import type React from "react"
|
||||
|
||||
export interface ProgressProps {
|
||||
current: number
|
||||
total: number
|
||||
label: string
|
||||
width?: number
|
||||
}
|
||||
|
||||
function calculatePercentage(current: number, total: number): number {
|
||||
if (total === 0) {
|
||||
return 0
|
||||
}
|
||||
return Math.min(100, Math.round((current / total) * 100))
|
||||
}
|
||||
|
||||
function createProgressBar(percentage: number, width: number): { filled: string; empty: string } {
|
||||
const filledWidth = Math.round((percentage / 100) * width)
|
||||
const emptyWidth = width - filledWidth
|
||||
|
||||
const filled = "=".repeat(Math.max(0, filledWidth - 1)) + (filledWidth > 0 ? ">" : "")
|
||||
const empty = " ".repeat(Math.max(0, emptyWidth))
|
||||
|
||||
return { filled, empty }
|
||||
}
|
||||
|
||||
function getProgressColor(percentage: number): string {
|
||||
if (percentage >= 100) {
|
||||
return "green"
|
||||
}
|
||||
if (percentage >= 50) {
|
||||
return "yellow"
|
||||
}
|
||||
return "cyan"
|
||||
}
|
||||
|
||||
export function Progress({ current, total, label, width = 30 }: ProgressProps): React.JSX.Element {
|
||||
const percentage = calculatePercentage(current, total)
|
||||
const { filled, empty } = createProgressBar(percentage, width)
|
||||
const color = getProgressColor(percentage)
|
||||
|
||||
return (
|
||||
<Box gap={1}>
|
||||
<Text color="gray">[</Text>
|
||||
<Text color={color}>{filled}</Text>
|
||||
<Text color="gray">{empty}</Text>
|
||||
<Text color="gray">]</Text>
|
||||
<Text color={color} bold>
|
||||
{String(percentage)}%
|
||||
</Text>
|
||||
<Text color="gray">
|
||||
({String(current)}/{String(total)} {label})
|
||||
</Text>
|
||||
</Box>
|
||||
)
|
||||
}
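
The doc comment above gives the target rendering. A quick check of the math in `calculatePercentage` and `createProgressBar`, using the same example values (illustrative, computed by hand from the code shown):

```typescript
// Percentage and bar math used by Progress (width 30, values from the doc comment above).
const current = 120
const total = 267
const percentage = Math.min(100, Math.round((current / total) * 100)) // 45
const filledWidth = Math.round((percentage / 100) * 30)               // 14 cells, drawn as "=============>"
console.log(
    `[${"=".repeat(filledWidth - 1)}>${" ".repeat(30 - filledWidth)}] ${percentage}% (${current}/${total} files)`,
)
// -> [=============>                ] 45% (120/267 files)
```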
packages/ipuaro/src/tui/components/StatusBar.tsx (new file, 83 lines)
@@ -0,0 +1,83 @@
|
||||
/**
|
||||
* StatusBar component for TUI.
|
||||
* Displays: [ipuaro] [ctx: 12%] [project: myapp] [main] [47m] status
|
||||
*/
|
||||
|
||||
import { Box, Text } from "ink"
|
||||
import type React from "react"
|
||||
import type { BranchInfo, TuiStatus } from "../types.js"
|
||||
import { getContextColor, getStatusColor, type Theme } from "../utils/theme.js"
|
||||
|
||||
export interface StatusBarProps {
|
||||
contextUsage: number
|
||||
projectName: string
|
||||
branch: BranchInfo
|
||||
sessionTime: string
|
||||
status: TuiStatus
|
||||
theme?: Theme
|
||||
}
|
||||
|
||||
function getStatusIndicator(status: TuiStatus, theme: Theme): { text: string; color: string } {
|
||||
const color = getStatusColor(status, theme)
|
||||
|
||||
switch (status) {
|
||||
case "ready": {
|
||||
return { text: "ready", color }
|
||||
}
|
||||
case "thinking": {
|
||||
return { text: "thinking...", color }
|
||||
}
|
||||
case "tool_call": {
|
||||
return { text: "executing...", color }
|
||||
}
|
||||
case "awaiting_confirmation": {
|
||||
return { text: "confirm?", color }
|
||||
}
|
||||
case "error": {
|
||||
return { text: "error", color }
|
||||
}
|
||||
default: {
|
||||
return { text: "ready", color }
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function formatContextUsage(usage: number): string {
|
||||
return `${String(Math.round(usage * 100))}%`
|
||||
}
|
||||
|
||||
export function StatusBar({
|
||||
contextUsage,
|
||||
projectName,
|
||||
branch,
|
||||
sessionTime,
|
||||
status,
|
||||
theme = "dark",
|
||||
}: StatusBarProps): React.JSX.Element {
|
||||
const statusIndicator = getStatusIndicator(status, theme)
|
||||
const branchDisplay = branch.isDetached ? `HEAD@${branch.name.slice(0, 7)}` : branch.name
|
||||
const contextColor = getContextColor(contextUsage, theme)
|
||||
|
||||
return (
|
||||
<Box borderStyle="single" borderColor="gray" paddingX={1} justifyContent="space-between">
|
||||
<Box gap={1}>
|
||||
<Text color="cyan" bold>
|
||||
[ipuaro]
|
||||
</Text>
|
||||
<Text color="gray">
|
||||
[ctx: <Text color={contextColor}>{formatContextUsage(contextUsage)}</Text>]
|
||||
</Text>
|
||||
<Text color="gray">
|
||||
[<Text color="blue">{projectName}</Text>]
|
||||
</Text>
|
||||
<Text color="gray">
|
||||
[<Text color="green">{branchDisplay}</Text>]
|
||||
</Text>
|
||||
<Text color="gray">
|
||||
[<Text color="white">{sessionTime}</Text>]
|
||||
</Text>
|
||||
</Box>
|
||||
<Text color={statusIndicator.color}>{statusIndicator.text}</Text>
|
||||
</Box>
|
||||
)
|
||||
}
packages/ipuaro/src/tui/components/index.ts (new file, 12 lines)
@@ -0,0 +1,12 @@
/**
 * TUI components.
 */

export { StatusBar, type StatusBarProps } from "./StatusBar.js"
export { Chat, type ChatProps } from "./Chat.js"
export { Input, type InputProps } from "./Input.js"
export { DiffView, type DiffViewProps } from "./DiffView.js"
export { ConfirmDialog, type ConfirmDialogProps } from "./ConfirmDialog.js"
export { ErrorDialog, type ErrorDialogProps, type ErrorInfo } from "./ErrorDialog.js"
export { Progress, type ProgressProps } from "./Progress.js"
export { EditableContent, type EditableContentProps } from "./EditableContent.js"
packages/ipuaro/src/tui/hooks/index.ts (new file, 26 lines)
@@ -0,0 +1,26 @@
/**
 * TUI hooks.
 */

export {
    useSession,
    type UseSessionDependencies,
    type UseSessionOptions,
    type UseSessionReturn,
} from "./useSession.js"
export { useHotkeys, type HotkeyHandlers, type UseHotkeysOptions } from "./useHotkeys.js"
export {
    useCommands,
    parseCommand,
    type UseCommandsDependencies,
    type UseCommandsActions,
    type UseCommandsOptions,
    type UseCommandsReturn,
    type CommandResult,
    type CommandDefinition,
} from "./useCommands.js"
export {
    useAutocomplete,
    type UseAutocompleteOptions,
    type UseAutocompleteReturn,
} from "./useAutocomplete.js"
packages/ipuaro/src/tui/hooks/useAutocomplete.ts (new file, 204 lines)
@@ -0,0 +1,204 @@
|
||||
/**
|
||||
* useAutocomplete hook for file path autocomplete.
|
||||
* Provides Tab completion for file paths using Redis index.
|
||||
*/
|
||||
|
||||
import { useCallback, useEffect, useState } from "react"
|
||||
import type { IStorage } from "../../domain/services/IStorage.js"
|
||||
import type { AutocompleteConfig } from "../../shared/constants/config.js"
|
||||
import path from "node:path"
|
||||
|
||||
export interface UseAutocompleteOptions {
|
||||
storage: IStorage
|
||||
projectRoot: string
|
||||
enabled?: boolean
|
||||
maxSuggestions?: number
|
||||
config?: AutocompleteConfig
|
||||
}
|
||||
|
||||
export interface UseAutocompleteReturn {
|
||||
suggestions: string[]
|
||||
complete: (partial: string) => string[]
|
||||
accept: (suggestion: string) => string
|
||||
reset: () => void
|
||||
}
|
||||
|
||||
/**
|
||||
* Normalizes a path by removing leading ./ and trailing /
|
||||
*/
|
||||
function normalizePath(p: string): string {
|
||||
let normalized = p.trim()
|
||||
if (normalized.startsWith("./")) {
|
||||
normalized = normalized.slice(2)
|
||||
}
|
||||
if (normalized.endsWith("/") && normalized.length > 1) {
|
||||
normalized = normalized.slice(0, -1)
|
||||
}
|
||||
return normalized
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculates fuzzy match score between partial and candidate.
|
||||
* Returns 0 if no match, higher score for better matches.
|
||||
*/
|
||||
function fuzzyScore(partial: string, candidate: string): number {
|
||||
const partialLower = partial.toLowerCase()
|
||||
const candidateLower = candidate.toLowerCase()
|
||||
|
||||
// Exact prefix match gets highest score
|
||||
if (candidateLower.startsWith(partialLower)) {
|
||||
return 1000 + (1000 - partial.length)
|
||||
}
|
||||
|
||||
// Check if all characters from partial appear in order in candidate
|
||||
let partialIndex = 0
|
||||
let candidateIndex = 0
|
||||
let lastMatchIndex = -1
|
||||
let consecutiveMatches = 0
|
||||
|
||||
while (partialIndex < partialLower.length && candidateIndex < candidateLower.length) {
|
||||
if (partialLower[partialIndex] === candidateLower[candidateIndex]) {
|
||||
// Bonus for consecutive matches
|
||||
if (candidateIndex === lastMatchIndex + 1) {
|
||||
consecutiveMatches++
|
||||
} else {
|
||||
consecutiveMatches = 0
|
||||
}
|
||||
lastMatchIndex = candidateIndex
|
||||
partialIndex++
|
||||
}
|
||||
candidateIndex++
|
||||
}
|
||||
|
||||
// If we didn't match all characters, no match
|
||||
if (partialIndex < partialLower.length) {
|
||||
return 0
|
||||
}
|
||||
|
||||
// Score based on how tight the match is
|
||||
const matchSpread = lastMatchIndex - (partialLower.length - 1)
|
||||
const score = 100 + consecutiveMatches * 10 - matchSpread
|
||||
|
||||
return Math.max(0, score)
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the common prefix of all suggestions
|
||||
*/
|
||||
function getCommonPrefix(suggestions: string[]): string {
|
||||
if (suggestions.length === 0) {
|
||||
return ""
|
||||
}
|
||||
if (suggestions.length === 1) {
|
||||
return suggestions[0] ?? ""
|
||||
}
|
||||
|
||||
let prefix = suggestions[0] ?? ""
|
||||
for (let i = 1; i < suggestions.length; i++) {
|
||||
const current = suggestions[i] ?? ""
|
||||
let j = 0
|
||||
while (j < prefix.length && j < current.length && prefix[j] === current[j]) {
|
||||
j++
|
||||
}
|
||||
prefix = prefix.slice(0, j)
|
||||
if (prefix.length === 0) {
|
||||
break
|
||||
}
|
||||
}
|
||||
return prefix
|
||||
}
|
||||
|
||||
export function useAutocomplete(options: UseAutocompleteOptions): UseAutocompleteReturn {
|
||||
const { storage, projectRoot, enabled, maxSuggestions, config } = options
|
||||
|
||||
// Read from config if provided, otherwise use options, otherwise use defaults
|
||||
const isEnabled = config?.enabled ?? enabled ?? true
|
||||
const maxSuggestionsCount = config?.maxSuggestions ?? maxSuggestions ?? 10
|
||||
|
||||
const [filePaths, setFilePaths] = useState<string[]>([])
|
||||
const [suggestions, setSuggestions] = useState<string[]>([])
|
||||
|
||||
// Load file paths from storage
|
||||
useEffect(() => {
|
||||
if (!isEnabled) {
|
||||
return
|
||||
}
|
||||
|
||||
const loadPaths = async (): Promise<void> => {
|
||||
try {
|
||||
const files = await storage.getAllFiles()
|
||||
const paths = Array.from(files.keys()).map((p) => {
|
||||
// Make paths relative to project root
|
||||
const relative = path.relative(projectRoot, p)
|
||||
return normalizePath(relative)
|
||||
})
|
||||
setFilePaths(paths.sort())
|
||||
} catch {
|
||||
// Silently fail - autocomplete is non-critical
|
||||
setFilePaths([])
|
||||
}
|
||||
}
|
||||
|
||||
loadPaths().catch(() => {
|
||||
// Ignore errors
|
||||
})
|
||||
}, [storage, projectRoot, isEnabled])
|
||||
|
||||
const complete = useCallback(
|
||||
(partial: string): string[] => {
|
||||
if (!isEnabled || !partial.trim()) {
|
||||
setSuggestions([])
|
||||
return []
|
||||
}
|
||||
|
||||
const normalized = normalizePath(partial)
|
||||
|
||||
// Score and filter matches
|
||||
const scored = filePaths
|
||||
.map((p) => ({
|
||||
path: p,
|
||||
score: fuzzyScore(normalized, p),
|
||||
}))
|
||||
.filter((item) => item.score > 0)
|
||||
.sort((a, b) => b.score - a.score)
|
||||
.slice(0, maxSuggestionsCount)
|
||||
.map((item) => item.path)
|
||||
|
||||
setSuggestions(scored)
|
||||
return scored
|
||||
},
|
||||
[isEnabled, filePaths, maxSuggestionsCount],
|
||||
)
|
||||
|
||||
const accept = useCallback(
|
||||
(suggestion: string): string => {
|
||||
// If there's only one suggestion, complete with it
|
||||
if (suggestions.length === 1) {
|
||||
setSuggestions([])
|
||||
return suggestions[0] ?? ""
|
||||
}
|
||||
|
||||
// If there are multiple suggestions, complete with common prefix
|
||||
if (suggestions.length > 1) {
|
||||
const prefix = getCommonPrefix(suggestions)
|
||||
if (prefix.length > suggestion.length) {
|
||||
return prefix
|
||||
}
|
||||
}
|
||||
|
||||
return suggestion
|
||||
},
|
||||
[suggestions],
|
||||
)
|
||||
|
||||
const reset = useCallback(() => {
|
||||
setSuggestions([])
|
||||
}, [])
|
||||
|
||||
return {
|
||||
suggestions,
|
||||
complete,
|
||||
accept,
|
||||
reset,
|
||||
}
|
||||
}
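
`fuzzyScore` strongly prefers exact prefixes (base score 1000) over loose subsequence matches (base score 100, adjusted for consecutive hits and spread). The expected values below are re-derived by hand from the implementation above, so treat them as an illustrative check rather than repository test data:

```typescript
// Expected fuzzyScore results (partial, candidate, score), re-derived from the code above.
const expectedScores: Array<[string, string, number]> = [
    ["src", "src/index.ts", 1997], // prefix match: 1000 + (1000 - partial.length)
    ["sidx", "src/index.ts", 115], // subsequence: 100 + 2 consecutive matches * 10 - spread 5
    ["zzz", "src/index.ts", 0],    // characters do not appear in order -> no match
]
console.table(expectedScores)
```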
packages/ipuaro/src/tui/hooks/useCommands.ts (new file, 444 lines)
@@ -0,0 +1,444 @@
|
||||
/**
|
||||
* useCommands hook for TUI.
|
||||
* Handles slash commands (/help, /clear, /undo, etc.)
|
||||
*/
|
||||
|
||||
import { useCallback, useMemo } from "react"
|
||||
import type { Session } from "../../domain/entities/Session.js"
|
||||
import type { ILLMClient } from "../../domain/services/ILLMClient.js"
|
||||
import type { ISessionStorage } from "../../domain/services/ISessionStorage.js"
|
||||
import type { IStorage } from "../../domain/services/IStorage.js"
|
||||
import type { IToolRegistry } from "../../application/interfaces/IToolRegistry.js"
|
||||
|
||||
/**
|
||||
* Command result returned after execution.
|
||||
*/
|
||||
export interface CommandResult {
|
||||
success: boolean
|
||||
message: string
|
||||
data?: unknown
|
||||
}
|
||||
|
||||
/**
|
||||
* Command definition.
|
||||
*/
|
||||
export interface CommandDefinition {
|
||||
name: string
|
||||
description: string
|
||||
usage: string
|
||||
execute: (args: string[]) => Promise<CommandResult>
|
||||
}
|
||||
|
||||
/**
|
||||
* Dependencies for useCommands hook.
|
||||
*/
|
||||
export interface UseCommandsDependencies {
|
||||
session: Session | null
|
||||
sessionStorage: ISessionStorage
|
||||
storage: IStorage
|
||||
llm: ILLMClient
|
||||
tools: IToolRegistry
|
||||
projectRoot: string
|
||||
projectName: string
|
||||
}
|
||||
|
||||
/**
|
||||
* Actions provided by the parent component.
|
||||
*/
|
||||
export interface UseCommandsActions {
|
||||
clearHistory: () => void
|
||||
undo: () => Promise<boolean>
|
||||
setAutoApply: (value: boolean) => void
|
||||
reindex: () => Promise<void>
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for useCommands hook.
|
||||
*/
|
||||
export interface UseCommandsOptions {
|
||||
autoApply: boolean
|
||||
}
|
||||
|
||||
/**
|
||||
* Return type for useCommands hook.
|
||||
*/
|
||||
export interface UseCommandsReturn {
|
||||
executeCommand: (input: string) => Promise<CommandResult | null>
|
||||
isCommand: (input: string) => boolean
|
||||
getCommands: () => CommandDefinition[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Parses command input into command name and arguments.
|
||||
*/
|
||||
export function parseCommand(input: string): { command: string; args: string[] } | null {
|
||||
const trimmed = input.trim()
|
||||
if (!trimmed.startsWith("/")) {
|
||||
return null
|
||||
}
|
||||
|
||||
const parts = trimmed.slice(1).split(/\s+/)
|
||||
const command = parts[0]?.toLowerCase() ?? ""
|
||||
const args = parts.slice(1)
|
||||
|
||||
return { command, args }
|
||||
}
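// Illustrative examples for parseCommand (annotation added for clarity, not part of the original file):
//   parseCommand("/sessions load abc12345") -> { command: "sessions", args: ["load", "abc12345"] }
//   parseCommand("/HELP")                   -> { command: "help", args: [] }
//   parseCommand("hello world")             -> null (not a slash command)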
|
||||
|
||||
// Command factory functions to keep the hook clean and under line limits
|
||||
|
||||
function createHelpCommand(map: Map<string, CommandDefinition>): CommandDefinition {
|
||||
return {
|
||||
name: "help",
|
||||
description: "Shows all commands and hotkeys",
|
||||
usage: "/help",
|
||||
execute: async (): Promise<CommandResult> => {
|
||||
const commandList = Array.from(map.values())
|
||||
.map((cmd) => ` ${cmd.usage.padEnd(25)} ${cmd.description}`)
|
||||
.join("\n")
|
||||
|
||||
const hotkeys = [
|
||||
" Ctrl+C (1x) Interrupt current operation",
|
||||
" Ctrl+C (2x) Exit ipuaro",
|
||||
" Ctrl+D Exit with session save",
|
||||
" Ctrl+Z Undo last change",
|
||||
" ↑/↓ Navigate input history",
|
||||
].join("\n")
|
||||
|
||||
const message = ["Available commands:", commandList, "", "Hotkeys:", hotkeys].join("\n")
|
||||
|
||||
return Promise.resolve({ success: true, message })
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function createClearCommand(actions: UseCommandsActions): CommandDefinition {
|
||||
return {
|
||||
name: "clear",
|
||||
description: "Clears chat history (keeps session)",
|
||||
usage: "/clear",
|
||||
execute: async (): Promise<CommandResult> => {
|
||||
actions.clearHistory()
|
||||
return Promise.resolve({ success: true, message: "Chat history cleared." })
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function createUndoCommand(
|
||||
deps: UseCommandsDependencies,
|
||||
actions: UseCommandsActions,
|
||||
): CommandDefinition {
|
||||
return {
|
||||
name: "undo",
|
||||
description: "Reverts last file change",
|
||||
usage: "/undo",
|
||||
execute: async (): Promise<CommandResult> => {
|
||||
if (!deps.session) {
|
||||
return { success: false, message: "No active session." }
|
||||
}
|
||||
|
||||
const undoStack = deps.session.undoStack
|
||||
if (undoStack.length === 0) {
|
||||
return { success: false, message: "Nothing to undo." }
|
||||
}
|
||||
|
||||
const result = await actions.undo()
|
||||
if (result) {
|
||||
return { success: true, message: "Last change reverted." }
|
||||
}
|
||||
return { success: false, message: "Failed to undo. File may have been modified." }
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function createSessionsCommand(deps: UseCommandsDependencies): CommandDefinition {
|
||||
return {
|
||||
name: "sessions",
|
||||
description: "Manage sessions (list, load <id>, delete <id>)",
|
||||
usage: "/sessions [list|load|delete] [id]",
|
||||
execute: async (args: string[]): Promise<CommandResult> => {
|
||||
const subCommand = args[0]?.toLowerCase() ?? "list"
|
||||
|
||||
if (subCommand === "list") {
|
||||
return handleSessionsList(deps)
|
||||
}
|
||||
|
||||
if (subCommand === "load") {
|
||||
return handleSessionsLoad(deps, args[1])
|
||||
}
|
||||
|
||||
if (subCommand === "delete") {
|
||||
return handleSessionsDelete(deps, args[1])
|
||||
}
|
||||
|
||||
return { success: false, message: "Usage: /sessions [list|load|delete] [id]" }
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
async function handleSessionsList(deps: UseCommandsDependencies): Promise<CommandResult> {
|
||||
const sessions = await deps.sessionStorage.listSessions(deps.projectName)
|
||||
if (sessions.length === 0) {
|
||||
return { success: true, message: "No sessions found." }
|
||||
}
|
||||
|
||||
const currentId = deps.session?.id
|
||||
const sessionList = sessions
|
||||
.map((s) => {
|
||||
const current = s.id === currentId ? " (current)" : ""
|
||||
const date = new Date(s.createdAt).toLocaleString()
|
||||
return ` ${s.id.slice(0, 8)}${current} - ${date} - ${String(s.messageCount)} messages`
|
||||
})
|
||||
.join("\n")
|
||||
|
||||
return {
|
||||
success: true,
|
||||
message: `Sessions for ${deps.projectName}:\n${sessionList}`,
|
||||
data: sessions,
|
||||
}
|
||||
}
|
||||
|
||||
async function handleSessionsLoad(
|
||||
deps: UseCommandsDependencies,
|
||||
sessionId: string | undefined,
|
||||
): Promise<CommandResult> {
|
||||
if (!sessionId) {
|
||||
return { success: false, message: "Usage: /sessions load <id>" }
|
||||
}
|
||||
|
||||
const exists = await deps.sessionStorage.sessionExists(sessionId)
|
||||
if (!exists) {
|
||||
return { success: false, message: `Session ${sessionId} not found.` }
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
message: `To load session ${sessionId}, restart ipuaro with --session ${sessionId}`,
|
||||
data: { sessionId },
|
||||
}
|
||||
}
|
||||
|
||||
async function handleSessionsDelete(
|
||||
deps: UseCommandsDependencies,
|
||||
sessionId: string | undefined,
|
||||
): Promise<CommandResult> {
|
||||
if (!sessionId) {
|
||||
return { success: false, message: "Usage: /sessions delete <id>" }
|
||||
}
|
||||
|
||||
if (deps.session?.id === sessionId) {
|
||||
return { success: false, message: "Cannot delete current session." }
|
||||
}
|
||||
|
||||
const exists = await deps.sessionStorage.sessionExists(sessionId)
|
||||
if (!exists) {
|
||||
return { success: false, message: `Session ${sessionId} not found.` }
|
||||
}
|
||||
|
||||
await deps.sessionStorage.deleteSession(sessionId)
|
||||
return { success: true, message: `Session ${sessionId} deleted.` }
|
||||
}
|
||||
|
||||
function createStatusCommand(
|
||||
deps: UseCommandsDependencies,
|
||||
options: UseCommandsOptions,
|
||||
): CommandDefinition {
|
||||
return {
|
||||
name: "status",
|
||||
description: "Shows system and session status",
|
||||
usage: "/status",
|
||||
execute: async (): Promise<CommandResult> => {
|
||||
const llmAvailable = await deps.llm.isAvailable()
|
||||
const llmStatus = llmAvailable ? "connected" : "unavailable"
|
||||
|
||||
const contextUsage = deps.session?.context.tokenUsage ?? 0
|
||||
const contextPercent = Math.round(contextUsage * 100)
|
||||
|
||||
const sessionStats = deps.session?.stats ?? {
|
||||
totalTokens: 0,
|
||||
totalTime: 0,
|
||||
toolCalls: 0,
|
||||
editsApplied: 0,
|
||||
editsRejected: 0,
|
||||
}
|
||||
|
||||
const undoCount = deps.session?.undoStack.length ?? 0
|
||||
|
||||
const message = [
|
||||
"System Status:",
|
||||
` LLM: ${llmStatus}`,
|
||||
` Context: ${String(contextPercent)}% used`,
|
||||
` Auto-apply: ${options.autoApply ? "on" : "off"}`,
|
||||
"",
|
||||
"Session Stats:",
|
||||
` Tokens: ${sessionStats.totalTokens.toLocaleString()}`,
|
||||
` Tool calls: ${String(sessionStats.toolCalls)}`,
|
||||
` Edits: ${String(sessionStats.editsApplied)} applied, ${String(sessionStats.editsRejected)} rejected`,
|
||||
` Undo stack: ${String(undoCount)} entries`,
|
||||
"",
|
||||
"Project:",
|
||||
` Name: ${deps.projectName}`,
|
||||
` Root: ${deps.projectRoot}`,
|
||||
].join("\n")
|
||||
|
||||
return { success: true, message }
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function createReindexCommand(actions: UseCommandsActions): CommandDefinition {
|
||||
return {
|
||||
name: "reindex",
|
||||
description: "Forces full project reindexation",
|
||||
usage: "/reindex",
|
||||
execute: async (): Promise<CommandResult> => {
|
||||
try {
|
||||
await actions.reindex()
|
||||
return { success: true, message: "Project reindexed successfully." }
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : String(err)
|
||||
return { success: false, message: `Reindex failed: ${errorMessage}` }
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function createEvalCommand(deps: UseCommandsDependencies): CommandDefinition {
|
||||
return {
|
||||
name: "eval",
|
||||
description: "LLM self-check for hallucinations",
|
||||
usage: "/eval",
|
||||
execute: async (): Promise<CommandResult> => {
|
||||
if (!deps.session || deps.session.history.length === 0) {
|
||||
return { success: false, message: "No conversation to evaluate." }
|
||||
}
|
||||
|
||||
const lastAssistantMessage = [...deps.session.history]
|
||||
.reverse()
|
||||
.find((m) => m.role === "assistant")
|
||||
|
||||
if (!lastAssistantMessage) {
|
||||
return { success: false, message: "No assistant response to evaluate." }
|
||||
}
|
||||
|
||||
const evalPrompt = [
|
||||
"Review your last response for potential issues:",
|
||||
"1. Are there any factual errors or hallucinations?",
|
||||
"2. Did you reference files or code that might not exist?",
|
||||
"3. Are there any assumptions that should be verified?",
|
||||
"",
|
||||
"Last response to evaluate:",
|
||||
lastAssistantMessage.content.slice(0, 2000),
|
||||
].join("\n")
|
||||
|
||||
try {
|
||||
const response = await deps.llm.chat([
|
||||
{ role: "user", content: evalPrompt, timestamp: Date.now() },
|
||||
])
|
||||
|
||||
return {
|
||||
success: true,
|
||||
message: `Self-evaluation:\n${response.content}`,
|
||||
data: { evaluated: lastAssistantMessage.content.slice(0, 100) },
|
||||
}
|
||||
} catch (err) {
|
||||
const errorMessage = err instanceof Error ? err.message : String(err)
|
||||
return { success: false, message: `Evaluation failed: ${errorMessage}` }
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function createAutoApplyCommand(
|
||||
actions: UseCommandsActions,
|
||||
options: UseCommandsOptions,
|
||||
): CommandDefinition {
|
||||
return {
|
||||
name: "auto-apply",
|
||||
description: "Toggle auto-apply mode (on/off)",
|
||||
usage: "/auto-apply [on|off]",
|
||||
execute: async (args: string[]): Promise<CommandResult> => {
|
||||
const arg = args[0]?.toLowerCase()
|
||||
|
||||
if (arg === "on") {
|
||||
actions.setAutoApply(true)
|
||||
return Promise.resolve({ success: true, message: "Auto-apply enabled." })
|
||||
}
|
||||
|
||||
if (arg === "off") {
|
||||
actions.setAutoApply(false)
|
||||
return Promise.resolve({ success: true, message: "Auto-apply disabled." })
|
||||
}
|
||||
|
||||
if (!arg) {
|
||||
const current = options.autoApply ? "on" : "off"
|
||||
return Promise.resolve({
|
||||
success: true,
|
||||
message: `Auto-apply is currently: ${current}`,
|
||||
})
|
||||
}
|
||||
|
||||
return Promise.resolve({ success: false, message: "Usage: /auto-apply [on|off]" })
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Hook for handling slash commands in TUI.
|
||||
*/
|
||||
export function useCommands(
|
||||
deps: UseCommandsDependencies,
|
||||
actions: UseCommandsActions,
|
||||
options: UseCommandsOptions,
|
||||
): UseCommandsReturn {
|
||||
const commands = useMemo((): Map<string, CommandDefinition> => {
|
||||
const map = new Map<string, CommandDefinition>()
|
||||
|
||||
// Register all commands
|
||||
const helpCmd = createHelpCommand(map)
|
||||
map.set("help", helpCmd)
|
||||
map.set("clear", createClearCommand(actions))
|
||||
map.set("undo", createUndoCommand(deps, actions))
|
||||
map.set("sessions", createSessionsCommand(deps))
|
||||
map.set("status", createStatusCommand(deps, options))
|
||||
map.set("reindex", createReindexCommand(actions))
|
||||
map.set("eval", createEvalCommand(deps))
|
||||
map.set("auto-apply", createAutoApplyCommand(actions, options))
|
||||
|
||||
return map
|
||||
}, [deps, actions, options])
|
||||
|
||||
const isCommand = useCallback((input: string): boolean => {
|
||||
return input.trim().startsWith("/")
|
||||
}, [])
|
||||
|
||||
const executeCommand = useCallback(
|
||||
async (input: string): Promise<CommandResult | null> => {
|
||||
const parsed = parseCommand(input)
|
||||
if (!parsed) {
|
||||
return null
|
||||
}
|
||||
|
||||
const command = commands.get(parsed.command)
|
||||
if (!command) {
|
||||
const available = Array.from(commands.keys()).join(", ")
|
||||
return {
|
||||
success: false,
|
||||
message: `Unknown command: /${parsed.command}\nAvailable: ${available}`,
|
||||
}
|
||||
}
|
||||
|
||||
return command.execute(parsed.args)
|
||||
},
|
||||
[commands],
|
||||
)
|
||||
|
||||
const getCommands = useCallback((): CommandDefinition[] => {
|
||||
return Array.from(commands.values())
|
||||
}, [commands])
|
||||
|
||||
return {
|
||||
executeCommand,
|
||||
isCommand,
|
||||
getCommands,
|
||||
}
|
||||
}
|
||||
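For reference, a minimal usage sketch of parseCommand from the hook above (the input strings are hypothetical, not part of the diff):

    const parsed = parseCommand("/sessions load abc123")
    // parsed => { command: "sessions", args: ["load", "abc123"] }
    const plain = parseCommand("hello")
    // plain => null, because the input does not start with "/"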
59 packages/ipuaro/src/tui/hooks/useHotkeys.ts (Normal file)
@@ -0,0 +1,59 @@
/**
 * useHotkeys hook for TUI.
 * Handles global keyboard shortcuts.
 */

import { useInput } from "ink"
import { useCallback, useRef } from "react"

export interface HotkeyHandlers {
    onInterrupt?: () => void
    onExit?: () => void
    onUndo?: () => void
}

export interface UseHotkeysOptions {
    enabled?: boolean
}

export function useHotkeys(handlers: HotkeyHandlers, options: UseHotkeysOptions = {}): void {
    const { enabled = true } = options
    const interruptCount = useRef(0)
    const interruptTimer = useRef<ReturnType<typeof setTimeout> | null>(null)

    const resetInterruptCount = useCallback((): void => {
        interruptCount.current = 0
        if (interruptTimer.current) {
            clearTimeout(interruptTimer.current)
            interruptTimer.current = null
        }
    }, [])

    useInput(
        (_input, key) => {
            if (key.ctrl && _input === "c") {
                interruptCount.current++

                if (interruptCount.current === 1) {
                    handlers.onInterrupt?.()

                    interruptTimer.current = setTimeout(() => {
                        resetInterruptCount()
                    }, 1000)
                } else if (interruptCount.current >= 2) {
                    resetInterruptCount()
                    handlers.onExit?.()
                }
            }

            if (key.ctrl && _input === "d") {
                handlers.onExit?.()
            }

            if (key.ctrl && _input === "z") {
                handlers.onUndo?.()
            }
        },
        { isActive: enabled },
    )
}
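A hedged usage sketch of the useHotkeys hook above, wired inside an Ink component; the abort and undo callbacks are assumed to come from the useSession hook shown below:

    useHotkeys(
        {
            onInterrupt: () => abort(),        // first Ctrl+C: stop the current operation
            onExit: () => process.exit(0),     // second Ctrl+C or Ctrl+D: leave the app
            onUndo: () => void undo(),         // Ctrl+Z: revert the last change
        },
        { enabled: true },
    )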
214 packages/ipuaro/src/tui/hooks/useSession.ts (Normal file)
@@ -0,0 +1,214 @@
/**
 * useSession hook for TUI.
 * Manages session state and message handling.
 */

import { useCallback, useEffect, useRef, useState } from "react"
import type { Session } from "../../domain/entities/Session.js"
import type { ILLMClient } from "../../domain/services/ILLMClient.js"
import type { ISessionStorage } from "../../domain/services/ISessionStorage.js"
import type { IStorage } from "../../domain/services/IStorage.js"
import type { DiffInfo } from "../../domain/services/ITool.js"
import type { ChatMessage } from "../../domain/value-objects/ChatMessage.js"
import type { ErrorOption } from "../../shared/errors/IpuaroError.js"
import type { Config } from "../../shared/constants/config.js"
import type { IToolRegistry } from "../../application/interfaces/IToolRegistry.js"
import {
    HandleMessage,
    type HandleMessageStatus,
} from "../../application/use-cases/HandleMessage.js"
import { StartSession } from "../../application/use-cases/StartSession.js"
import { UndoChange } from "../../application/use-cases/UndoChange.js"
import type { ConfirmationResult } from "../../application/use-cases/ExecuteTool.js"
import type { ProjectStructure } from "../../infrastructure/llm/prompts.js"
import type { TuiStatus } from "../types.js"

export interface UseSessionDependencies {
    storage: IStorage
    sessionStorage: ISessionStorage
    llm: ILLMClient
    tools: IToolRegistry
    projectRoot: string
    projectName: string
    projectStructure?: ProjectStructure
    config?: Config
}

export interface UseSessionOptions {
    autoApply?: boolean
    onConfirmation?: (message: string, diff?: DiffInfo) => Promise<boolean | ConfirmationResult>
    onError?: (error: Error) => Promise<ErrorOption>
}

export interface UseSessionReturn {
    session: Session | null
    messages: ChatMessage[]
    status: TuiStatus
    isLoading: boolean
    error: Error | null
    sendMessage: (message: string) => Promise<void>
    undo: () => Promise<boolean>
    clearHistory: () => void
    abort: () => void
}

interface SessionRefs {
    session: Session | null
    handleMessage: HandleMessage | null
    undoChange: UndoChange | null
}

type SetStatus = React.Dispatch<React.SetStateAction<TuiStatus>>
type SetMessages = React.Dispatch<React.SetStateAction<ChatMessage[]>>

interface StateSetters {
    setMessages: SetMessages
    setStatus: SetStatus
    forceUpdate: () => void
}

function createEventHandlers(
    setters: StateSetters,
    options: UseSessionOptions,
): Parameters<HandleMessage["setEvents"]>[0] {
    return {
        onMessage: (msg) => {
            setters.setMessages((prev) => [...prev, msg])
        },
        onToolCall: () => {
            setters.setStatus("tool_call")
        },
        onToolResult: () => {
            setters.setStatus("thinking")
        },
        onConfirmation: options.onConfirmation,
        onError: options.onError,
        onStatusChange: (s: HandleMessageStatus) => {
            setters.setStatus(s)
        },
        onUndoEntry: () => {
            setters.forceUpdate()
        },
    }
}

async function initializeSession(
    deps: UseSessionDependencies,
    options: UseSessionOptions,
    refs: React.MutableRefObject<SessionRefs>,
    setters: StateSetters,
): Promise<void> {
    const startSession = new StartSession(deps.sessionStorage)
    const result = await startSession.execute(deps.projectName)
    refs.current.session = result.session
    setters.setMessages([...result.session.history])

    const handleMessage = new HandleMessage(
        deps.storage,
        deps.sessionStorage,
        deps.llm,
        deps.tools,
        deps.projectRoot,
        deps.config?.context,
    )
    if (deps.projectStructure) {
        handleMessage.setProjectStructure(deps.projectStructure)
    }
    handleMessage.setOptions({
        autoApply: options.autoApply,
        maxHistoryMessages: deps.config?.session.maxHistoryMessages,
        saveInputHistory: deps.config?.session.saveInputHistory,
        contextConfig: deps.config?.context,
    })
    handleMessage.setEvents(createEventHandlers(setters, options))
    refs.current.handleMessage = handleMessage
    refs.current.undoChange = new UndoChange(deps.sessionStorage, deps.storage)
    setters.forceUpdate()
}

export function useSession(
    deps: UseSessionDependencies,
    options: UseSessionOptions = {},
): UseSessionReturn {
    const [messages, setMessages] = useState<ChatMessage[]>([])
    const [status, setStatus] = useState<TuiStatus>("ready")
    const [isLoading, setIsLoading] = useState(true)
    const [error, setError] = useState<Error | null>(null)
    const [, setTrigger] = useState(0)
    const refs = useRef<SessionRefs>({ session: null, handleMessage: null, undoChange: null })
    const forceUpdate = useCallback(() => {
        setTrigger((v) => v + 1)
    }, [])

    useEffect(() => {
        setIsLoading(true)
        const setters: StateSetters = { setMessages, setStatus, forceUpdate }
        initializeSession(deps, options, refs, setters)
            .then(() => {
                setError(null)
            })
            .catch((err: unknown) => {
                setError(err instanceof Error ? err : new Error(String(err)))
            })
            .finally(() => {
                setIsLoading(false)
            })
    }, [deps.projectName, forceUpdate])

    const sendMessage = useCallback(async (message: string): Promise<void> => {
        const { session, handleMessage } = refs.current
        if (!session || !handleMessage) {
            return
        }
        try {
            setStatus("thinking")
            await handleMessage.execute(session, message)
        } catch (err) {
            setError(err instanceof Error ? err : new Error(String(err)))
            setStatus("error")
        }
    }, [])

    const undo = useCallback(async (): Promise<boolean> => {
        const { session, undoChange } = refs.current
        if (!session || !undoChange) {
            return false
        }
        try {
            const result = await undoChange.execute(session)
            if (result.success) {
                forceUpdate()
                return true
            }
            return false
        } catch {
            return false
        }
    }, [forceUpdate])

    const clearHistory = useCallback(() => {
        if (!refs.current.session) {
            return
        }
        refs.current.session.clearHistory()
        setMessages([])
        forceUpdate()
    }, [forceUpdate])

    const abort = useCallback(() => {
        refs.current.handleMessage?.abort()
        setStatus("ready")
    }, [])

    return {
        session: refs.current.session,
        messages,
        status,
        isLoading,
        error,
        sendMessage,
        undo,
        clearHistory,
        abort,
    }
}
8 packages/ipuaro/src/tui/index.ts (Normal file)
@@ -0,0 +1,8 @@
/**
 * TUI module - Terminal User Interface.
 */

export { App, type AppDependencies, type ExtendedAppProps } from "./App.js"
export * from "./components/index.js"
export * from "./hooks/index.js"
export * from "./types.js"
38 packages/ipuaro/src/tui/types.ts (Normal file)
@@ -0,0 +1,38 @@
/**
 * TUI types and interfaces.
 */

import type { HandleMessageStatus } from "../application/use-cases/HandleMessage.js"

/**
 * TUI status - maps to HandleMessageStatus.
 */
export type TuiStatus = HandleMessageStatus

/**
 * Git branch information.
 */
export interface BranchInfo {
    name: string
    isDetached: boolean
}

/**
 * Props for the main App component.
 */
export interface AppProps {
    projectPath: string
    autoApply?: boolean
    model?: string
}

/**
 * Status bar display data.
 */
export interface StatusBarData {
    contextUsage: number
    projectName: string
    branch: BranchInfo
    sessionTime: string
    status: TuiStatus
}
11 packages/ipuaro/src/tui/utils/bell.ts (Normal file)
@@ -0,0 +1,11 @@
/**
 * Bell notification utility for terminal.
 */

/**
 * Ring the terminal bell.
 * Works by outputting the ASCII bell character (\u0007).
 */
export function ringBell(): void {
    process.stdout.write("\u0007")
}
167 packages/ipuaro/src/tui/utils/syntax-highlighter.ts (Normal file)
@@ -0,0 +1,167 @@
/**
 * Simple syntax highlighter for terminal UI.
 * Highlights keywords, strings, comments, numbers, and operators.
 */

export type Language = "typescript" | "javascript" | "tsx" | "jsx" | "json" | "yaml" | "unknown"

export interface HighlightedToken {
    text: string
    color: string
}

const KEYWORDS = new Set([
    "abstract",
    "any",
    "as",
    "async",
    "await",
    "boolean",
    "break",
    "case",
    "catch",
    "class",
    "const",
    "constructor",
    "continue",
    "debugger",
    "declare",
    "default",
    "delete",
    "do",
    "else",
    "enum",
    "export",
    "extends",
    "false",
    "finally",
    "for",
    "from",
    "function",
    "get",
    "if",
    "implements",
    "import",
    "in",
    "instanceof",
    "interface",
    "let",
    "module",
    "namespace",
    "new",
    "null",
    "number",
    "of",
    "package",
    "private",
    "protected",
    "public",
    "readonly",
    "require",
    "return",
    "set",
    "static",
    "string",
    "super",
    "switch",
    "this",
    "throw",
    "true",
    "try",
    "type",
    "typeof",
    "undefined",
    "var",
    "void",
    "while",
    "with",
    "yield",
])

export function detectLanguage(filePath: string): Language {
    const ext = filePath.split(".").pop()?.toLowerCase()
    switch (ext) {
        case "ts":
            return "typescript"
        case "tsx":
            return "tsx"
        case "js":
            return "javascript"
        case "jsx":
            return "jsx"
        case "json":
            return "json"
        case "yaml":
        case "yml":
            return "yaml"
        default:
            return "unknown"
    }
}

const COMMENT_REGEX = /^(\/\/.*|\/\*[\s\S]*?\*\/)/
const STRING_REGEX = /^("(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*'|`(?:[^`\\]|\\.)*`)/
const NUMBER_REGEX = /^(\b\d+\.?\d*\b)/
const WORD_REGEX = /^([a-zA-Z_$][a-zA-Z0-9_$]*)/
const OPERATOR_REGEX = /^([+\-*/%=<>!&|^~?:;,.()[\]{}])/
const WHITESPACE_REGEX = /^(\s+)/

export function highlightLine(line: string, language: Language): HighlightedToken[] {
    if (language === "unknown" || language === "json" || language === "yaml") {
        return [{ text: line, color: "white" }]
    }

    const tokens: HighlightedToken[] = []
    let remaining = line

    while (remaining.length > 0) {
        const commentMatch = COMMENT_REGEX.exec(remaining)
        if (commentMatch) {
            tokens.push({ text: commentMatch[0], color: "gray" })
            remaining = remaining.slice(commentMatch[0].length)
            continue
        }

        const stringMatch = STRING_REGEX.exec(remaining)
        if (stringMatch) {
            tokens.push({ text: stringMatch[0], color: "green" })
            remaining = remaining.slice(stringMatch[0].length)
            continue
        }

        const numberMatch = NUMBER_REGEX.exec(remaining)
        if (numberMatch) {
            tokens.push({ text: numberMatch[0], color: "cyan" })
            remaining = remaining.slice(numberMatch[0].length)
            continue
        }

        const wordMatch = WORD_REGEX.exec(remaining)
        if (wordMatch) {
            const word = wordMatch[0]
            const color = KEYWORDS.has(word) ? "magenta" : "white"
            tokens.push({ text: word, color })
            remaining = remaining.slice(word.length)
            continue
        }

        const operatorMatch = OPERATOR_REGEX.exec(remaining)
        if (operatorMatch) {
            tokens.push({ text: operatorMatch[0], color: "yellow" })
            remaining = remaining.slice(operatorMatch[0].length)
            continue
        }

        const whitespaceMatch = WHITESPACE_REGEX.exec(remaining)
        if (whitespaceMatch) {
            tokens.push({ text: whitespaceMatch[0], color: "white" })
            remaining = remaining.slice(whitespaceMatch[0].length)
            continue
        }

        tokens.push({ text: remaining[0] ?? "", color: "white" })
        remaining = remaining.slice(1)
    }

    return tokens
}
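A rough sketch of how highlightLine tokenizes a line (the sample line is made up; the token list is abbreviated):

    const tokens = highlightLine('const n = 42 // answer', detectLanguage("example.ts"))
    // roughly: { text: "const", color: "magenta" }, { text: " ", color: "white" },
    //          { text: "n", color: "white" }, { text: "=", color: "yellow" },
    //          { text: "42", color: "cyan" }, ..., { text: "// answer", color: "gray" }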
115 packages/ipuaro/src/tui/utils/theme.ts (Normal file)
@@ -0,0 +1,115 @@
/**
 * Theme color utilities for TUI.
 */

export type Theme = "dark" | "light"

/**
 * Color scheme for a theme.
 */
export interface ColorScheme {
    primary: string
    secondary: string
    success: string
    warning: string
    error: string
    info: string
    muted: string
    background: string
    foreground: string
}

/**
 * Dark theme color scheme (default).
 */
const DARK_THEME: ColorScheme = {
    primary: "cyan",
    secondary: "blue",
    success: "green",
    warning: "yellow",
    error: "red",
    info: "cyan",
    muted: "gray",
    background: "black",
    foreground: "white",
}

/**
 * Light theme color scheme.
 */
const LIGHT_THEME: ColorScheme = {
    primary: "blue",
    secondary: "cyan",
    success: "green",
    warning: "yellow",
    error: "red",
    info: "blue",
    muted: "gray",
    background: "white",
    foreground: "black",
}

/**
 * Get color scheme for a theme.
 */
export function getColorScheme(theme: Theme): ColorScheme {
    return theme === "dark" ? DARK_THEME : LIGHT_THEME
}

/**
 * Get color for a status.
 */
export function getStatusColor(
    status: "ready" | "thinking" | "error" | "tool_call" | "awaiting_confirmation",
    theme: Theme = "dark",
): string {
    const scheme = getColorScheme(theme)

    switch (status) {
        case "ready":
            return scheme.success
        case "thinking":
        case "tool_call":
            return scheme.warning
        case "awaiting_confirmation":
            return scheme.info
        case "error":
            return scheme.error
    }
}

/**
 * Get color for a message role.
 */
export function getRoleColor(
    role: "user" | "assistant" | "system" | "tool",
    theme: Theme = "dark",
): string {
    const scheme = getColorScheme(theme)

    switch (role) {
        case "user":
            return scheme.success
        case "assistant":
            return scheme.primary
        case "system":
            return scheme.muted
        case "tool":
            return scheme.secondary
    }
}

/**
 * Get color for context usage percentage.
 */
export function getContextColor(usage: number, theme: Theme = "dark"): string {
    const scheme = getColorScheme(theme)

    if (usage >= 0.8) {
        return scheme.error
    }
    if (usage >= 0.6) {
        return scheme.warning
    }
    return scheme.success
}
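A small sketch of the theme helpers above, with the values that follow from the DARK_THEME defaults:

    getStatusColor("thinking")   // "yellow" (warning)
    getRoleColor("assistant")    // "cyan" (primary)
    getContextColor(0.65)        // "yellow": between the 0.6 and 0.8 thresholds
    getContextColor(0.85)        // "red": at or above 0.8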
1506 packages/ipuaro/tests/e2e/full-workflow.test.ts (Normal file)
File diff suppressed because it is too large
351 packages/ipuaro/tests/e2e/test-helpers.ts (Normal file)
@@ -0,0 +1,351 @@
/**
 * E2E Test Helpers
 * Provides dependencies for testing the full flow with REAL LLM.
 */

import { vi } from "vitest"
import * as fs from "node:fs/promises"
import * as path from "node:path"
import * as os from "node:os"
import type { IStorage, SymbolIndex, DepsGraph } from "../../src/domain/services/IStorage.js"
import type { ISessionStorage, SessionListItem } from "../../src/domain/services/ISessionStorage.js"
import type { FileData } from "../../src/domain/value-objects/FileData.js"
import type { FileAST } from "../../src/domain/value-objects/FileAST.js"
import type { FileMeta } from "../../src/domain/value-objects/FileMeta.js"
import type { UndoEntry } from "../../src/domain/value-objects/UndoEntry.js"
import { Session } from "../../src/domain/entities/Session.js"
import { ToolRegistry } from "../../src/infrastructure/tools/registry.js"
import { OllamaClient } from "../../src/infrastructure/llm/OllamaClient.js"
import { registerAllTools } from "../../src/cli/commands/tools-setup.js"
import type { LLMConfig } from "../../src/shared/constants/config.js"

/**
 * Default LLM config for tests.
 */
export const DEFAULT_TEST_LLM_CONFIG: LLMConfig = {
    model: "qwen2.5-coder:14b-instruct-q4_K_M",
    contextWindow: 128_000,
    temperature: 0.1,
    host: "http://localhost:11434",
    timeout: 180_000,
    useNativeTools: true,
}

/**
 * In-memory storage implementation for testing.
 * Stores all data in Maps, no Redis required.
 */
export function createInMemoryStorage(): IStorage {
    const files = new Map<string, FileData>()
    const asts = new Map<string, FileAST>()
    const metas = new Map<string, FileMeta>()
    let symbolIndex: SymbolIndex = new Map()
    let depsGraph: DepsGraph = { imports: new Map(), importedBy: new Map() }
    const projectConfig = new Map<string, unknown>()
    let connected = false

    return {
        getFile: vi.fn(async (filePath: string) => files.get(filePath) ?? null),
        setFile: vi.fn(async (filePath: string, data: FileData) => {
            files.set(filePath, data)
        }),
        deleteFile: vi.fn(async (filePath: string) => {
            files.delete(filePath)
        }),
        getAllFiles: vi.fn(async () => new Map(files)),
        getFileCount: vi.fn(async () => files.size),

        getAST: vi.fn(async (filePath: string) => asts.get(filePath) ?? null),
        setAST: vi.fn(async (filePath: string, ast: FileAST) => {
            asts.set(filePath, ast)
        }),
        deleteAST: vi.fn(async (filePath: string) => {
            asts.delete(filePath)
        }),
        getAllASTs: vi.fn(async () => new Map(asts)),

        getMeta: vi.fn(async (filePath: string) => metas.get(filePath) ?? null),
        setMeta: vi.fn(async (filePath: string, meta: FileMeta) => {
            metas.set(filePath, meta)
        }),
        deleteMeta: vi.fn(async (filePath: string) => {
            metas.delete(filePath)
        }),
        getAllMetas: vi.fn(async () => new Map(metas)),

        getSymbolIndex: vi.fn(async () => symbolIndex),
        setSymbolIndex: vi.fn(async (index: SymbolIndex) => {
            symbolIndex = index
        }),
        getDepsGraph: vi.fn(async () => depsGraph),
        setDepsGraph: vi.fn(async (graph: DepsGraph) => {
            depsGraph = graph
        }),

        getProjectConfig: vi.fn(async (key: string) => projectConfig.get(key) ?? null),
        setProjectConfig: vi.fn(async (key: string, value: unknown) => {
            projectConfig.set(key, value)
        }),

        connect: vi.fn(async () => {
            connected = true
        }),
        disconnect: vi.fn(async () => {
            connected = false
        }),
        isConnected: vi.fn(() => connected),
        clear: vi.fn(async () => {
            files.clear()
            asts.clear()
            metas.clear()
            symbolIndex = new Map()
            depsGraph = { imports: new Map(), importedBy: new Map() }
            projectConfig.clear()
        }),
    }
}

/**
 * In-memory session storage for testing.
 */
export function createInMemorySessionStorage(): ISessionStorage {
    const sessions = new Map<string, Session>()
    const undoStacks = new Map<string, UndoEntry[]>()

    return {
        saveSession: vi.fn(async (session: Session) => {
            sessions.set(session.id, session)
        }),
        loadSession: vi.fn(async (sessionId: string) => sessions.get(sessionId) ?? null),
        deleteSession: vi.fn(async (sessionId: string) => {
            sessions.delete(sessionId)
            undoStacks.delete(sessionId)
        }),
        listSessions: vi.fn(async (projectName?: string): Promise<SessionListItem[]> => {
            const items: SessionListItem[] = []
            for (const session of sessions.values()) {
                if (!projectName || session.projectName === projectName) {
                    items.push({
                        id: session.id,
                        projectName: session.projectName,
                        createdAt: session.createdAt,
                        lastActivityAt: session.lastActivityAt,
                        messageCount: session.history.length,
                    })
                }
            }
            return items
        }),
        getLatestSession: vi.fn(async (projectName: string) => {
            let latest: Session | null = null
            for (const session of sessions.values()) {
                if (session.projectName === projectName) {
                    if (!latest || session.lastActivityAt > latest.lastActivityAt) {
                        latest = session
                    }
                }
            }
            return latest
        }),
        sessionExists: vi.fn(async (sessionId: string) => sessions.has(sessionId)),
        pushUndoEntry: vi.fn(async (sessionId: string, entry: UndoEntry) => {
            const stack = undoStacks.get(sessionId) ?? []
            stack.push(entry)
            undoStacks.set(sessionId, stack)
        }),
        popUndoEntry: vi.fn(async (sessionId: string) => {
            const stack = undoStacks.get(sessionId) ?? []
            return stack.pop() ?? null
        }),
        getUndoStack: vi.fn(async (sessionId: string) => undoStacks.get(sessionId) ?? []),
        touchSession: vi.fn(async (sessionId: string) => {
            const session = sessions.get(sessionId)
            if (session) {
                session.lastActivityAt = Date.now()
            }
        }),
        clearAllSessions: vi.fn(async () => {
            sessions.clear()
            undoStacks.clear()
        }),
    }
}

/**
 * Create REAL Ollama client for E2E tests.
 */
export function createRealOllamaClient(config?: Partial<LLMConfig>): OllamaClient {
    return new OllamaClient({
        ...DEFAULT_TEST_LLM_CONFIG,
        ...config,
    })
}

/**
 * Create a tool registry with all 18 tools registered.
 */
export function createRealToolRegistry(): ToolRegistry {
    const registry = new ToolRegistry()
    registerAllTools(registry)
    return registry
}

/**
 * Create a new test session.
 */
export function createTestSession(projectName = "test-project"): Session {
    return new Session(`test-${Date.now()}`, projectName)
}

/**
 * Create a temporary test project directory with sample files.
 */
export async function createTestProject(): Promise<string> {
    const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "ipuaro-e2e-"))

    await fs.mkdir(path.join(tempDir, "src"), { recursive: true })

    await fs.writeFile(
        path.join(tempDir, "src", "index.ts"),
        `/**
 * Main entry point
 */
export function main(): void {
    console.log("Hello, world!")
}

export function add(a: number, b: number): number {
    return a + b
}

export function multiply(a: number, b: number): number {
    return a * b
}

// TODO: Add more math functions
main()
`,
    )

    await fs.writeFile(
        path.join(tempDir, "src", "utils.ts"),
        `/**
 * Utility functions
 */
import { add } from "./index.js"

export function sum(numbers: number[]): number {
    return numbers.reduce((acc, n) => add(acc, n), 0)
}

export class Calculator {
    private result: number = 0

    add(n: number): this {
        this.result += n
        return this
    }

    subtract(n: number): this {
        this.result -= n
        return this
    }

    getResult(): number {
        return this.result
    }

    reset(): void {
        this.result = 0
    }
}

// FIXME: Handle edge cases for negative numbers
`,
    )

    await fs.writeFile(
        path.join(tempDir, "package.json"),
        JSON.stringify(
            {
                name: "test-project",
                version: "1.0.0",
                type: "module",
                scripts: {
                    test: "echo 'Tests passed!'",
                },
            },
            null,
            4,
        ),
    )

    await fs.writeFile(
        path.join(tempDir, "README.md"),
        `# Test Project

A sample project for E2E testing.

## Features
- Basic math functions
- Calculator class
`,
    )

    return tempDir
}

/**
 * Clean up test project directory.
 */
export async function cleanupTestProject(projectDir: string): Promise<void> {
    await fs.rm(projectDir, { recursive: true, force: true })
}

/**
 * All test dependencies bundled together.
 */
export interface E2ETestDependencies {
    storage: IStorage
    sessionStorage: ISessionStorage
    llm: OllamaClient
    tools: ToolRegistry
    session: Session
    projectRoot: string
}

/**
 * Create all dependencies for E2E testing with REAL Ollama.
 */
export async function createE2ETestDependencies(
    llmConfig?: Partial<LLMConfig>,
): Promise<E2ETestDependencies> {
    const projectRoot = await createTestProject()

    return {
        storage: createInMemoryStorage(),
        sessionStorage: createInMemorySessionStorage(),
        llm: createRealOllamaClient(llmConfig),
        tools: createRealToolRegistry(),
        session: createTestSession(),
        projectRoot,
    }
}

/**
 * Check if Ollama is available.
 */
export async function isOllamaAvailable(): Promise<boolean> {
    const client = createRealOllamaClient()
    return client.isAvailable()
}

/**
 * Check if required model is available.
 */
export async function isModelAvailable(
    model = "qwen2.5-coder:14b-instruct-q4_K_M",
): Promise<boolean> {
    const client = createRealOllamaClient()
    return client.hasModel(model)
}
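A hedged sketch of how these helpers compose in an E2E test; the surrounding describe/it wrapper and the skip logic are assumed, not part of the diff:

    const deps = await createE2ETestDependencies()
    if (await isOllamaAvailable()) {
        // drive HandleMessage or the TUI against deps.llm, deps.tools, deps.storage ...
    }
    await cleanupTestProject(deps.projectRoot)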
@@ -245,4 +245,65 @@ describe("ContextManager", () => {
            expect(state.needsCompression).toBe(false)
        })
    })

    describe("configuration", () => {
        it("should use default compression threshold when no config provided", () => {
            const manager = new ContextManager(CONTEXT_SIZE)
            manager.addToContext("test.ts", CONTEXT_SIZE * 0.85)

            expect(manager.needsCompression()).toBe(true)
        })

        it("should use custom compression threshold from config", () => {
            const manager = new ContextManager(CONTEXT_SIZE, { autoCompressAt: 0.9 })
            manager.addToContext("test.ts", CONTEXT_SIZE * 0.85)

            expect(manager.needsCompression()).toBe(false)
        })

        it("should trigger compression at custom threshold", () => {
            const manager = new ContextManager(CONTEXT_SIZE, { autoCompressAt: 0.9 })
            manager.addToContext("test.ts", CONTEXT_SIZE * 0.95)

            expect(manager.needsCompression()).toBe(true)
        })

        it("should accept compression method in config", () => {
            const manager = new ContextManager(CONTEXT_SIZE, { compressionMethod: "truncate" })

            expect(manager).toBeDefined()
        })

        it("should use default compression method when not specified", () => {
            const manager = new ContextManager(CONTEXT_SIZE, {})

            expect(manager).toBeDefined()
        })

        it("should accept full context config", () => {
            const manager = new ContextManager(CONTEXT_SIZE, {
                systemPromptTokens: 3000,
                maxContextUsage: 0.9,
                autoCompressAt: 0.85,
                compressionMethod: "llm-summary",
            })

            manager.addToContext("test.ts", CONTEXT_SIZE * 0.87)
            expect(manager.needsCompression()).toBe(true)
        })

        it("should handle edge case: autoCompressAt = 0", () => {
            const manager = new ContextManager(CONTEXT_SIZE, { autoCompressAt: 0 })
            manager.addToContext("test.ts", 1)

            expect(manager.needsCompression()).toBe(true)
        })

        it("should handle edge case: autoCompressAt = 1", () => {
            const manager = new ContextManager(CONTEXT_SIZE, { autoCompressAt: 1 })
            manager.addToContext("test.ts", CONTEXT_SIZE * 0.99)

            expect(manager.needsCompression()).toBe(false)
        })
    })
})
@@ -198,12 +198,12 @@ describe("HandleMessage", () => {
        expect(toolMessages.length).toBeGreaterThan(0)
    })

-   it("should return error for unknown tools", async () => {
+   it("should return error for unregistered tools", async () => {
        vi.mocked(mockTools.get).mockReturnValue(undefined)
        vi.mocked(mockLLM.chat)
            .mockResolvedValueOnce(
                createMockLLMResponse(
-                   '<tool_call name="unknown_tool"><param>value</param></tool_call>',
+                   '<tool_call name="get_complexity"><path>src</path></tool_call>',
                    true,
                ),
            )
@@ -0,0 +1,318 @@
import { describe, it, expect, vi, beforeEach } from "vitest"
import { IndexProject } from "../../../../src/application/use-cases/IndexProject.js"
import type { IStorage, SymbolIndex, DepsGraph } from "../../../../src/domain/services/IStorage.js"
import type { IndexProgress } from "../../../../src/domain/services/IIndexer.js"
import { createFileData } from "../../../../src/domain/value-objects/FileData.js"
import { createEmptyFileAST } from "../../../../src/domain/value-objects/FileAST.js"
import { createFileMeta } from "../../../../src/domain/value-objects/FileMeta.js"

vi.mock("../../../../src/infrastructure/indexer/FileScanner.js", () => ({
    FileScanner: class {
        async scanAll() {
            return [
                { path: "src/index.ts", type: "file", size: 100, lastModified: Date.now() },
                { path: "src/utils.ts", type: "file", size: 200, lastModified: Date.now() },
            ]
        }
        static async readFileContent(path: string) {
            if (path.includes("index.ts")) {
                return 'export function main() { return "hello" }'
            }
            if (path.includes("utils.ts")) {
                return "export const add = (a: number, b: number) => a + b"
            }
            return null
        }
    },
}))

vi.mock("../../../../src/infrastructure/indexer/ASTParser.js", () => ({
    ASTParser: class {
        parse() {
            return {
                ...createEmptyFileAST(),
                functions: [
                    {
                        name: "test",
                        lineStart: 1,
                        lineEnd: 5,
                        params: [],
                        isAsync: false,
                        isExported: true,
                    },
                ],
            }
        }
    },
}))

vi.mock("../../../../src/infrastructure/indexer/MetaAnalyzer.js", () => ({
    MetaAnalyzer: class {
        constructor() {}
        analyze() {
            return createFileMeta()
        }
    },
}))

vi.mock("../../../../src/infrastructure/indexer/IndexBuilder.js", () => ({
    IndexBuilder: class {
        constructor() {}
        buildSymbolIndex() {
            return new Map([
                ["test", [{ path: "src/index.ts", line: 1, type: "function" }]],
            ]) as SymbolIndex
        }
        buildDepsGraph() {
            return {
                imports: new Map(),
                importedBy: new Map(),
            } as DepsGraph
        }
    },
}))

describe("IndexProject", () => {
    let useCase: IndexProject
    let mockStorage: IStorage

    beforeEach(() => {
        mockStorage = {
            getFile: vi.fn().mockResolvedValue(null),
            setFile: vi.fn().mockResolvedValue(undefined),
            deleteFile: vi.fn().mockResolvedValue(undefined),
            getAllFiles: vi.fn().mockResolvedValue(new Map()),
            getFileCount: vi.fn().mockResolvedValue(0),
            getAST: vi.fn().mockResolvedValue(null),
            setAST: vi.fn().mockResolvedValue(undefined),
            deleteAST: vi.fn().mockResolvedValue(undefined),
            getAllASTs: vi.fn().mockResolvedValue(new Map()),
            getMeta: vi.fn().mockResolvedValue(null),
            setMeta: vi.fn().mockResolvedValue(undefined),
            deleteMeta: vi.fn().mockResolvedValue(undefined),
            getAllMetas: vi.fn().mockResolvedValue(new Map()),
            getSymbolIndex: vi.fn().mockResolvedValue(new Map()),
            setSymbolIndex: vi.fn().mockResolvedValue(undefined),
            getDepsGraph: vi.fn().mockResolvedValue({ imports: new Map(), importedBy: new Map() }),
            setDepsGraph: vi.fn().mockResolvedValue(undefined),
            getProjectConfig: vi.fn().mockResolvedValue(null),
            setProjectConfig: vi.fn().mockResolvedValue(undefined),
            connect: vi.fn().mockResolvedValue(undefined),
            disconnect: vi.fn().mockResolvedValue(undefined),
            isConnected: vi.fn().mockReturnValue(true),
            clear: vi.fn().mockResolvedValue(undefined),
        }

        useCase = new IndexProject(mockStorage, "/test/project")
    })

    describe("execute", () => {
        it("should index project and return stats", async () => {
            const stats = await useCase.execute("/test/project")

            expect(stats.filesScanned).toBe(2)
            expect(stats.filesParsed).toBe(2)
            expect(stats.parseErrors).toBe(0)
            expect(stats.timeMs).toBeGreaterThanOrEqual(0)
        })

        it("should store file data for all scanned files", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setFile).toHaveBeenCalledTimes(2)
            expect(mockStorage.setFile).toHaveBeenCalledWith(
                "src/index.ts",
                expect.objectContaining({
                    hash: expect.any(String),
                    lines: expect.any(Array),
                }),
            )
        })

        it("should store AST for all parsed files", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setAST).toHaveBeenCalledTimes(2)
            expect(mockStorage.setAST).toHaveBeenCalledWith(
                "src/index.ts",
                expect.objectContaining({
                    functions: expect.any(Array),
                }),
            )
        })

        it("should store metadata for all files", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setMeta).toHaveBeenCalledTimes(2)
            expect(mockStorage.setMeta).toHaveBeenCalledWith("src/index.ts", expect.any(Object))
        })

        it("should build and store symbol index", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setSymbolIndex).toHaveBeenCalledTimes(1)
            expect(mockStorage.setSymbolIndex).toHaveBeenCalledWith(expect.any(Map))
        })

        it("should build and store dependency graph", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setDepsGraph).toHaveBeenCalledTimes(1)
            expect(mockStorage.setDepsGraph).toHaveBeenCalledWith(
                expect.objectContaining({
                    imports: expect.any(Map),
                    importedBy: expect.any(Map),
                }),
            )
        })

        it("should store last indexed timestamp", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setProjectConfig).toHaveBeenCalledWith(
                "last_indexed",
                expect.any(Number),
            )
        })

        it("should call progress callback during indexing", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            expect(progressCallback).toHaveBeenCalled()
            expect(progressCallback).toHaveBeenCalledWith(
                expect.objectContaining({
                    current: expect.any(Number),
                    total: expect.any(Number),
                    currentFile: expect.any(String),
                    phase: expect.stringMatching(/scanning|parsing|analyzing|indexing/),
                }),
            )
        })

        it("should report scanning phase", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            const scanningCalls = progressCallback.mock.calls.filter(
                (call) => call[0].phase === "scanning",
            )
            expect(scanningCalls.length).toBeGreaterThan(0)
        })

        it("should report parsing phase", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            const parsingCalls = progressCallback.mock.calls.filter(
                (call) => call[0].phase === "parsing",
            )
            expect(parsingCalls.length).toBeGreaterThan(0)
        })

        it("should report analyzing phase", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            const analyzingCalls = progressCallback.mock.calls.filter(
                (call) => call[0].phase === "analyzing",
            )
            expect(analyzingCalls.length).toBeGreaterThan(0)
        })

        it("should report indexing phase", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            const indexingCalls = progressCallback.mock.calls.filter(
                (call) => call[0].phase === "indexing",
            )
            expect(indexingCalls.length).toBeGreaterThan(0)
        })

        it("should detect TypeScript files", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setAST).toHaveBeenCalledWith("src/index.ts", expect.any(Object))
        })

        it("should handle files without parseable language", async () => {
            vi.mocked(mockStorage.setFile).mockClear()

            await useCase.execute("/test/project")

            const stats = await useCase.execute("/test/project")
            expect(stats.filesScanned).toBeGreaterThanOrEqual(0)
        })

        it("should calculate indexing duration", async () => {
            const startTime = Date.now()
            const stats = await useCase.execute("/test/project")
            const endTime = Date.now()

            expect(stats.timeMs).toBeGreaterThanOrEqual(0)
            expect(stats.timeMs).toBeLessThanOrEqual(endTime - startTime + 10)
        })
    })

    describe("language detection", () => {
        it("should detect .ts files", async () => {
            await useCase.execute("/test/project")

            expect(mockStorage.setAST).toHaveBeenCalledWith(
                expect.stringContaining(".ts"),
                expect.any(Object),
            )
        })
    })

    describe("progress reporting", () => {
        it("should not fail if progress callback is not provided", async () => {
            await expect(useCase.execute("/test/project")).resolves.toBeDefined()
        })

        it("should include current file in progress updates", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            const callsWithFiles = progressCallback.mock.calls.filter(
                (call) => call[0].currentFile && call[0].currentFile.length > 0,
            )
            expect(callsWithFiles.length).toBeGreaterThan(0)
        })

        it("should report correct total count", async () => {
            const progressCallback = vi.fn()

            await useCase.execute("/test/project", {
                onProgress: progressCallback,
            })

            const parsingCalls = progressCallback.mock.calls.filter(
                (call) => call[0].phase === "parsing",
            )
            if (parsingCalls.length > 0) {
                expect(parsingCalls[0][0].total).toBe(2)
            }
        })
    })
})
117 packages/ipuaro/tests/unit/cli/commands/init.test.ts (Normal file)
@@ -0,0 +1,117 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
|
||||
import * as fs from "node:fs/promises"
|
||||
import * as path from "node:path"
|
||||
import { executeInit } from "../../../../src/cli/commands/init.js"
|
||||
|
||||
vi.mock("node:fs/promises")
|
||||
|
||||
describe("executeInit", () => {
|
||||
const testPath = "/test/project"
|
||||
const configPath = path.join(testPath, ".ipuaro.json")
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks()
|
||||
vi.spyOn(console, "warn").mockImplementation(() => {})
|
||||
vi.spyOn(console, "error").mockImplementation(() => {})
|
||||
})
|
||||
|
||||
afterEach(() => {
|
||||
vi.restoreAllMocks()
|
||||
})
|
||||
|
||||
it("should create .ipuaro.json file successfully", async () => {
|
||||
vi.mocked(fs.access).mockRejectedValue(new Error("ENOENT"))
|
||||
vi.mocked(fs.mkdir).mockResolvedValue(undefined)
|
||||
vi.mocked(fs.writeFile).mockResolvedValue(undefined)
|
||||
|
||||
const result = await executeInit(testPath)
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(result.filePath).toBe(configPath)
|
||||
expect(fs.writeFile).toHaveBeenCalledWith(
|
||||
configPath,
|
||||
expect.stringContaining('"redis"'),
|
||||
"utf-8",
|
||||
)
|
||||
})
|
||||
|
||||
it("should skip existing file without force option", async () => {
|
||||
vi.mocked(fs.access).mockResolvedValue(undefined)
|
||||
|
||||
const result = await executeInit(testPath)
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(result.skipped).toBe(true)
|
||||
expect(fs.writeFile).not.toHaveBeenCalled()
|
||||
})
|
||||
|
||||
it("should overwrite existing file with force option", async () => {
|
||||
vi.mocked(fs.access).mockResolvedValue(undefined)
|
||||
vi.mocked(fs.writeFile).mockResolvedValue(undefined)
|
||||
|
||||
const result = await executeInit(testPath, { force: true })
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(result.skipped).toBeUndefined()
|
||||
expect(fs.writeFile).toHaveBeenCalled()
|
||||
})
|
||||
|
||||
it("should handle write errors", async () => {
|
||||
vi.mocked(fs.access).mockRejectedValue(new Error("ENOENT"))
|
||||
vi.mocked(fs.mkdir).mockResolvedValue(undefined)
|
||||
vi.mocked(fs.writeFile).mockRejectedValue(new Error("Permission denied"))
|
||||
|
||||
const result = await executeInit(testPath)
|
||||
|
||||
expect(result.success).toBe(false)
|
||||
expect(result.error).toContain("Permission denied")
|
||||
})
|
||||
|
||||
it("should create parent directories if needed", async () => {
|
||||
vi.mocked(fs.access)
|
||||
.mockRejectedValueOnce(new Error("ENOENT"))
|
||||
.mockRejectedValueOnce(new Error("ENOENT"))
|
||||
vi.mocked(fs.mkdir).mockResolvedValue(undefined)
|
||||
vi.mocked(fs.writeFile).mockResolvedValue(undefined)
|
||||
|
||||
const result = await executeInit(testPath)
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(fs.mkdir).toHaveBeenCalledWith(expect.any(String), { recursive: true })
|
||||
})
|
||||
|
||||
it("should use current directory as default", async () => {
|
||||
vi.mocked(fs.access).mockRejectedValue(new Error("ENOENT"))
|
||||
vi.mocked(fs.mkdir).mockResolvedValue(undefined)
|
||||
vi.mocked(fs.writeFile).mockResolvedValue(undefined)
|
||||
|
||||
const result = await executeInit()
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(result.filePath).toContain(".ipuaro.json")
|
||||
})
|
||||
|
||||
it("should include expected config sections", async () => {
|
||||
vi.mocked(fs.access).mockRejectedValue(new Error("ENOENT"))
|
||||
vi.mocked(fs.mkdir).mockResolvedValue(undefined)
|
||||
vi.mocked(fs.writeFile).mockResolvedValue(undefined)
|
||||
|
||||
await executeInit(testPath)
|
||||
|
||||
const writeCall = vi.mocked(fs.writeFile).mock.calls[0]
|
||||
const content = writeCall[1] as string
|
||||
const config = JSON.parse(content) as {
|
||||
redis: unknown
|
||||
llm: unknown
|
||||
edit: unknown
|
||||
}
|
||||
|
||||
expect(config).toHaveProperty("redis")
|
||||
expect(config).toHaveProperty("llm")
|
||||
expect(config).toHaveProperty("edit")
|
||||
expect(config.redis).toHaveProperty("host", "localhost")
|
||||
expect(config.redis).toHaveProperty("port", 6379)
|
||||
expect(config.llm).toHaveProperty("model", "qwen2.5-coder:7b-instruct")
|
||||
expect(config.edit).toHaveProperty("autoApply", false)
|
||||
})
|
||||
})
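The assertions above pin down the shape of the generated `.ipuaro.json`. A minimal sketch of a default config that would satisfy them; field names beyond `redis`, `llm`, and `edit`, and any defaults not asserted in the tests, are assumptions rather than the project's actual `init.ts` constants:

```typescript
// Hypothetical default config consistent with init.test.ts; the real
// DEFAULT_CONFIG may carry additional fields and different defaults.
const defaultConfig = {
    redis: {
        host: "localhost",
        port: 6379,
        db: 0,
        keyPrefix: "ipuaro:",
    },
    llm: {
        model: "qwen2.5-coder:7b-instruct",
        host: "http://localhost:11434",
        contextWindow: 128_000,
        temperature: 0.1,
        timeout: 120_000,
    },
    edit: {
        autoApply: false,
    },
}

// executeInit would then serialize this (e.g. JSON.stringify(defaultConfig, null, 2))
// and write it to <project>/.ipuaro.json with the "utf-8" encoding asserted above.
```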
packages/ipuaro/tests/unit/cli/commands/onboarding.test.ts (new file, 353 lines)
@@ -0,0 +1,353 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
|
||||
import {
|
||||
checkRedis,
|
||||
checkOllama,
|
||||
checkModel,
|
||||
checkProjectSize,
|
||||
runOnboarding,
|
||||
} from "../../../../src/cli/commands/onboarding.js"
|
||||
import { RedisClient } from "../../../../src/infrastructure/storage/RedisClient.js"
|
||||
import { OllamaClient } from "../../../../src/infrastructure/llm/OllamaClient.js"
|
||||
import { FileScanner } from "../../../../src/infrastructure/indexer/FileScanner.js"
|
||||
|
||||
vi.mock("../../../../src/infrastructure/storage/RedisClient.js")
|
||||
vi.mock("../../../../src/infrastructure/llm/OllamaClient.js")
|
||||
vi.mock("../../../../src/infrastructure/indexer/FileScanner.js")
|
||||
|
||||
describe("onboarding", () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks()
|
||||
})
|
||||
|
||||
afterEach(() => {
|
||||
vi.restoreAllMocks()
|
||||
})
|
||||
|
||||
describe("checkRedis", () => {
|
||||
it("should return ok when Redis connects and pings successfully", async () => {
|
||||
const mockConnect = vi.fn().mockResolvedValue(undefined)
|
||||
const mockPing = vi.fn().mockResolvedValue(true)
|
||||
const mockDisconnect = vi.fn().mockResolvedValue(undefined)
|
||||
|
||||
vi.mocked(RedisClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
connect: mockConnect,
|
||||
ping: mockPing,
|
||||
disconnect: mockDisconnect,
|
||||
}) as unknown as RedisClient,
|
||||
)
|
||||
|
||||
const result = await checkRedis({
|
||||
host: "localhost",
|
||||
port: 6379,
|
||||
db: 0,
|
||||
keyPrefix: "ipuaro:",
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(true)
|
||||
expect(result.error).toBeUndefined()
|
||||
expect(mockConnect).toHaveBeenCalled()
|
||||
expect(mockPing).toHaveBeenCalled()
|
||||
expect(mockDisconnect).toHaveBeenCalled()
|
||||
})
|
||||
|
||||
it("should return error when Redis connection fails", async () => {
|
||||
vi.mocked(RedisClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
connect: vi.fn().mockRejectedValue(new Error("Connection refused")),
|
||||
}) as unknown as RedisClient,
|
||||
)
|
||||
|
||||
const result = await checkRedis({
|
||||
host: "localhost",
|
||||
port: 6379,
|
||||
db: 0,
|
||||
keyPrefix: "ipuaro:",
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(false)
|
||||
expect(result.error).toContain("Cannot connect to Redis")
|
||||
})
|
||||
|
||||
it("should return error when ping fails", async () => {
|
||||
vi.mocked(RedisClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
connect: vi.fn().mockResolvedValue(undefined),
|
||||
ping: vi.fn().mockResolvedValue(false),
|
||||
disconnect: vi.fn().mockResolvedValue(undefined),
|
||||
}) as unknown as RedisClient,
|
||||
)
|
||||
|
||||
const result = await checkRedis({
|
||||
host: "localhost",
|
||||
port: 6379,
|
||||
db: 0,
|
||||
keyPrefix: "ipuaro:",
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(false)
|
||||
expect(result.error).toContain("Redis ping failed")
|
||||
})
|
||||
})
|
||||
|
||||
describe("checkOllama", () => {
|
||||
it("should return ok when Ollama is available", async () => {
|
||||
vi.mocked(OllamaClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
isAvailable: vi.fn().mockResolvedValue(true),
|
||||
}) as unknown as OllamaClient,
|
||||
)
|
||||
|
||||
const result = await checkOllama({
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(true)
|
||||
expect(result.error).toBeUndefined()
|
||||
})
|
||||
|
||||
it("should return error when Ollama is not available", async () => {
|
||||
vi.mocked(OllamaClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
isAvailable: vi.fn().mockResolvedValue(false),
|
||||
}) as unknown as OllamaClient,
|
||||
)
|
||||
|
||||
const result = await checkOllama({
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(false)
|
||||
expect(result.error).toContain("Cannot connect to Ollama")
|
||||
})
|
||||
})
|
||||
|
||||
describe("checkModel", () => {
|
||||
it("should return ok when model is available", async () => {
|
||||
vi.mocked(OllamaClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
hasModel: vi.fn().mockResolvedValue(true),
|
||||
}) as unknown as OllamaClient,
|
||||
)
|
||||
|
||||
const result = await checkModel({
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(true)
|
||||
expect(result.needsPull).toBe(false)
|
||||
})
|
||||
|
||||
it("should return needsPull when model is not available", async () => {
|
||||
vi.mocked(OllamaClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
hasModel: vi.fn().mockResolvedValue(false),
|
||||
}) as unknown as OllamaClient,
|
||||
)
|
||||
|
||||
const result = await checkModel({
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
})
|
||||
|
||||
expect(result.ok).toBe(false)
|
||||
expect(result.needsPull).toBe(true)
|
||||
expect(result.error).toContain("not installed")
|
||||
})
|
||||
})
|
||||
|
||||
describe("checkProjectSize", () => {
|
||||
it("should return ok when file count is within limits", async () => {
|
||||
vi.mocked(FileScanner).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
scanAll: vi.fn().mockResolvedValue(
|
||||
Array.from({ length: 100 }, (_, i) => ({
|
||||
path: `file${String(i)}.ts`,
|
||||
type: "file" as const,
|
||||
size: 1000,
|
||||
lastModified: Date.now(),
|
||||
})),
|
||||
),
|
||||
}) as unknown as FileScanner,
|
||||
)
|
||||
|
||||
const result = await checkProjectSize("/test/path")
|
||||
|
||||
expect(result.ok).toBe(true)
|
||||
expect(result.fileCount).toBe(100)
|
||||
expect(result.warning).toBeUndefined()
|
||||
})
|
||||
|
||||
it("should return warning when file count exceeds limit", async () => {
|
||||
vi.mocked(FileScanner).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
scanAll: vi.fn().mockResolvedValue(
|
||||
Array.from({ length: 15000 }, (_, i) => ({
|
||||
path: `file${String(i)}.ts`,
|
||||
type: "file" as const,
|
||||
size: 1000,
|
||||
lastModified: Date.now(),
|
||||
})),
|
||||
),
|
||||
}) as unknown as FileScanner,
|
||||
)
|
||||
|
||||
const result = await checkProjectSize("/test/path", 10_000)
|
||||
|
||||
expect(result.ok).toBe(true)
|
||||
expect(result.fileCount).toBe(15000)
|
||||
expect(result.warning).toContain("15")
|
||||
expect(result.warning).toContain("000 files")
|
||||
})
|
||||
|
||||
it("should return error when no files found", async () => {
|
||||
vi.mocked(FileScanner).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
scanAll: vi.fn().mockResolvedValue([]),
|
||||
}) as unknown as FileScanner,
|
||||
)
|
||||
|
||||
const result = await checkProjectSize("/test/path")
|
||||
|
||||
expect(result.ok).toBe(false)
|
||||
expect(result.fileCount).toBe(0)
|
||||
expect(result.warning).toContain("No supported files found")
|
||||
})
|
||||
})
|
||||
|
||||
describe("runOnboarding", () => {
|
||||
it("should return success when all checks pass", async () => {
|
||||
vi.mocked(RedisClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
connect: vi.fn().mockResolvedValue(undefined),
|
||||
ping: vi.fn().mockResolvedValue(true),
|
||||
disconnect: vi.fn().mockResolvedValue(undefined),
|
||||
}) as unknown as RedisClient,
|
||||
)
|
||||
|
||||
vi.mocked(OllamaClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
isAvailable: vi.fn().mockResolvedValue(true),
|
||||
hasModel: vi.fn().mockResolvedValue(true),
|
||||
}) as unknown as OllamaClient,
|
||||
)
|
||||
|
||||
vi.mocked(FileScanner).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
scanAll: vi.fn().mockResolvedValue([{ path: "file.ts" }]),
|
||||
}) as unknown as FileScanner,
|
||||
)
|
||||
|
||||
const result = await runOnboarding({
|
||||
redisConfig: { host: "localhost", port: 6379, db: 0, keyPrefix: "ipuaro:" },
|
||||
llmConfig: {
|
||||
model: "test",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
},
|
||||
projectPath: "/test/path",
|
||||
})
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(result.redisOk).toBe(true)
|
||||
expect(result.ollamaOk).toBe(true)
|
||||
expect(result.modelOk).toBe(true)
|
||||
expect(result.projectOk).toBe(true)
|
||||
expect(result.errors).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should return failure when Redis fails", async () => {
|
||||
vi.mocked(RedisClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
connect: vi.fn().mockRejectedValue(new Error("Connection refused")),
|
||||
}) as unknown as RedisClient,
|
||||
)
|
||||
|
||||
vi.mocked(OllamaClient).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
isAvailable: vi.fn().mockResolvedValue(true),
|
||||
hasModel: vi.fn().mockResolvedValue(true),
|
||||
}) as unknown as OllamaClient,
|
||||
)
|
||||
|
||||
vi.mocked(FileScanner).mockImplementation(
|
||||
() =>
|
||||
({
|
||||
scanAll: vi.fn().mockResolvedValue([{ path: "file.ts" }]),
|
||||
}) as unknown as FileScanner,
|
||||
)
|
||||
|
||||
const result = await runOnboarding({
|
||||
redisConfig: { host: "localhost", port: 6379, db: 0, keyPrefix: "ipuaro:" },
|
||||
llmConfig: {
|
||||
model: "test",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
},
|
||||
projectPath: "/test/path",
|
||||
})
|
||||
|
||||
expect(result.success).toBe(false)
|
||||
expect(result.redisOk).toBe(false)
|
||||
expect(result.errors.length).toBeGreaterThan(0)
|
||||
})
|
||||
|
||||
it("should skip checks when skip options are set", async () => {
|
||||
const result = await runOnboarding({
|
||||
redisConfig: { host: "localhost", port: 6379, db: 0, keyPrefix: "ipuaro:" },
|
||||
llmConfig: {
|
||||
model: "test",
|
||||
contextWindow: 128_000,
|
||||
temperature: 0.1,
|
||||
host: "http://localhost:11434",
|
||||
timeout: 120_000,
|
||||
},
|
||||
projectPath: "/test/path",
|
||||
skipRedis: true,
|
||||
skipOllama: true,
|
||||
skipModel: true,
|
||||
skipProject: true,
|
||||
})
|
||||
|
||||
expect(result.success).toBe(true)
|
||||
expect(result.redisOk).toBe(true)
|
||||
expect(result.ollamaOk).toBe(true)
|
||||
expect(result.modelOk).toBe(true)
|
||||
expect(result.projectOk).toBe(true)
|
||||
})
|
||||
})
|
||||
})
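The onboarding tests above only read a handful of fields from each check result. A sketch of the result types they imply; the names come straight from the assertions, while the optionality of each field is an assumption:

```typescript
// Result shapes implied by checkRedis/checkOllama/checkModel/checkProjectSize
// and runOnboarding in onboarding.test.ts; the real types may differ.
interface CheckResultSketch {
    ok: boolean
    error?: string
}

interface ModelCheckResultSketch extends CheckResultSketch {
    needsPull: boolean
}

interface ProjectSizeResultSketch {
    ok: boolean
    fileCount: number
    warning?: string
}

interface OnboardingResultSketch {
    success: boolean
    redisOk: boolean
    ollamaOk: boolean
    modelOk: boolean
    projectOk: boolean
    errors: string[]
}
```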
packages/ipuaro/tests/unit/cli/commands/tools-setup.test.ts (new file, 111 lines)
@@ -0,0 +1,111 @@
import { describe, it, expect } from "vitest"
import { registerAllTools } from "../../../../src/cli/commands/tools-setup.js"
import { ToolRegistry } from "../../../../src/infrastructure/tools/registry.js"

describe("registerAllTools", () => {
    it("should register all 18 tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.size).toBe(18)
    })

    it("should register all read tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.has("get_lines")).toBe(true)
        expect(registry.has("get_function")).toBe(true)
        expect(registry.has("get_class")).toBe(true)
        expect(registry.has("get_structure")).toBe(true)
    })

    it("should register all edit tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.has("edit_lines")).toBe(true)
        expect(registry.has("create_file")).toBe(true)
        expect(registry.has("delete_file")).toBe(true)
    })

    it("should register all search tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.has("find_references")).toBe(true)
        expect(registry.has("find_definition")).toBe(true)
    })

    it("should register all analysis tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.has("get_dependencies")).toBe(true)
        expect(registry.has("get_dependents")).toBe(true)
        expect(registry.has("get_complexity")).toBe(true)
        expect(registry.has("get_todos")).toBe(true)
    })

    it("should register all git tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.has("git_status")).toBe(true)
        expect(registry.has("git_diff")).toBe(true)
        expect(registry.has("git_commit")).toBe(true)
    })

    it("should register all run tools", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        expect(registry.has("run_command")).toBe(true)
        expect(registry.has("run_tests")).toBe(true)
    })

    it("should register tools with correct categories", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        const readTools = registry.getByCategory("read")
        const editTools = registry.getByCategory("edit")
        const searchTools = registry.getByCategory("search")
        const analysisTools = registry.getByCategory("analysis")
        const gitTools = registry.getByCategory("git")
        const runTools = registry.getByCategory("run")

        expect(readTools.length).toBe(4)
        expect(editTools.length).toBe(3)
        expect(searchTools.length).toBe(2)
        expect(analysisTools.length).toBe(4)
        expect(gitTools.length).toBe(3)
        expect(runTools.length).toBe(2)
    })

    it("should register tools with requiresConfirmation flag", () => {
        const registry = new ToolRegistry()

        registerAllTools(registry)

        const confirmationTools = registry.getConfirmationTools()
        const safeTools = registry.getSafeTools()

        expect(confirmationTools.length).toBeGreaterThan(0)
        expect(safeTools.length).toBeGreaterThan(0)

        const confirmNames = confirmationTools.map((t) => t.name)
        expect(confirmNames).toContain("edit_lines")
        expect(confirmNames).toContain("create_file")
        expect(confirmNames).toContain("delete_file")
        expect(confirmNames).toContain("git_commit")
    })
})
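These tests exercise only the registry's public surface (`size`, `has`, `getByCategory`, `getConfirmationTools`, `getSafeTools`). A minimal sketch of a registry that would satisfy them, assuming a simple tool shape; the real `ToolRegistry` and `ToolDefinition` in `infrastructure/tools/` almost certainly carry more (description, parameters, handler):

```typescript
// Assumed minimal tool shape, for illustration only.
interface ToolDefinitionSketch {
    name: string
    category: "read" | "edit" | "search" | "analysis" | "git" | "run"
    requiresConfirmation: boolean
}

class MinimalToolRegistry {
    private readonly tools = new Map<string, ToolDefinitionSketch>()

    register(tool: ToolDefinitionSketch): void {
        this.tools.set(tool.name, tool)
    }

    get size(): number {
        return this.tools.size
    }

    has(name: string): boolean {
        return this.tools.has(name)
    }

    getByCategory(category: ToolDefinitionSketch["category"]): ToolDefinitionSketch[] {
        return [...this.tools.values()].filter((t) => t.category === category)
    }

    getConfirmationTools(): ToolDefinitionSketch[] {
        return [...this.tools.values()].filter((t) => t.requiresConfirmation)
    }

    getSafeTools(): ToolDefinitionSketch[] {
        return [...this.tools.values()].filter((t) => !t.requiresConfirmation)
    }
}
```

Under this sketch, `registerAllTools` would simply call `registry.register(...)` once per tool, with the mutating tools (`edit_lines`, `create_file`, `delete_file`, `git_commit`) flagged `requiresConfirmation: true`.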
@@ -1,5 +1,9 @@
import { describe, it, expect } from "vitest"
import { createFileMeta, isHubFile } from "../../../../src/domain/value-objects/FileMeta.js"
import {
    calculateImpactScore,
    createFileMeta,
    isHubFile,
} from "../../../../src/domain/value-objects/FileMeta.js"

describe("FileMeta", () => {
    describe("createFileMeta", () => {
@@ -15,6 +19,7 @@ describe("FileMeta", () => {
            expect(meta.isHub).toBe(false)
            expect(meta.isEntryPoint).toBe(false)
            expect(meta.fileType).toBe("unknown")
            expect(meta.impactScore).toBe(0)
        })

        it("should merge partial values", () => {
@@ -42,4 +47,51 @@ describe("FileMeta", () => {
            expect(isHubFile(0)).toBe(false)
        })
    })

    describe("calculateImpactScore", () => {
        it("should return 0 for file with 0 dependents", () => {
            expect(calculateImpactScore(0, 10)).toBe(0)
        })

        it("should return 0 when totalFiles is 0", () => {
            expect(calculateImpactScore(5, 0)).toBe(0)
        })

        it("should return 0 when totalFiles is 1", () => {
            expect(calculateImpactScore(0, 1)).toBe(0)
        })

        it("should calculate correct percentage", () => {
            // 5 dependents out of 10 files (excluding itself = 9 possible)
            // 5/9 * 100 = 55.56 → rounded to 56
            expect(calculateImpactScore(5, 10)).toBe(56)
        })

        it("should return 100 when all other files depend on it", () => {
            // 9 dependents out of 10 files (9 possible dependents)
            expect(calculateImpactScore(9, 10)).toBe(100)
        })

        it("should cap at 100", () => {
            // Edge case: more dependents than possible (shouldn't happen normally)
            expect(calculateImpactScore(20, 10)).toBe(100)
        })

        it("should round the percentage", () => {
            // 1 dependent out of 3 files (2 possible)
            // 1/2 * 100 = 50
            expect(calculateImpactScore(1, 3)).toBe(50)
        })

        it("should calculate impact for small projects", () => {
            // 1 dependent out of 2 files (1 possible)
            expect(calculateImpactScore(1, 2)).toBe(100)
        })

        it("should calculate impact for larger projects", () => {
            // 50 dependents out of 100 files (99 possible)
            // 50/99 * 100 = 50.51 → rounded to 51
            expect(calculateImpactScore(50, 100)).toBe(51)
        })
    })
})
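The `calculateImpactScore` cases above fully determine a simple formula: the share of other files that depend on the file, rounded and capped at 100. A sketch consistent with every assertion; the actual implementation in `FileMeta.ts` may be written differently:

```typescript
// Impact score as a percentage of possible dependents (totalFiles - 1),
// rounded to the nearest integer and capped at 100.
// Matches the cases above: (5, 10) -> 56, (50, 100) -> 51, (9, 10) -> 100,
// (20, 10) -> 100 (cap), and 0 whenever totalFiles <= 1 or dependents === 0.
export function calculateImpactScoreSketch(dependents: number, totalFiles: number): number {
    if (totalFiles <= 1 || dependents <= 0) {
        return 0
    }
    const possibleDependents = totalFiles - 1
    const percentage = Math.round((dependents / possibleDependents) * 100)
    return Math.min(100, percentage)
}
```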
@@ -224,6 +224,62 @@ describe("ASTParser", () => {
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases[0].isExported).toBe(true)
|
||||
})
|
||||
|
||||
it("should extract type alias definition (simple)", () => {
|
||||
const code = `type UserId = string`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("string")
|
||||
})
|
||||
|
||||
it("should extract type alias definition (union)", () => {
|
||||
const code = `type Status = "pending" | "active" | "done"`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe('"pending" | "active" | "done"')
|
||||
})
|
||||
|
||||
it("should extract type alias definition (intersection)", () => {
|
||||
const code = `type AdminUser = User & Admin`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("User & Admin")
|
||||
})
|
||||
|
||||
it("should extract type alias definition (object type)", () => {
|
||||
const code = `type Point = { x: number; y: number }`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("{ x: number; y: number }")
|
||||
})
|
||||
|
||||
it("should extract type alias definition (function type)", () => {
|
||||
const code = `type Handler = (event: Event) => void`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("(event: Event) => void")
|
||||
})
|
||||
|
||||
it("should extract type alias definition (generic)", () => {
|
||||
const code = `type Result<T> = { success: boolean; data: T }`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("{ success: boolean; data: T }")
|
||||
})
|
||||
|
||||
it("should extract type alias definition (array)", () => {
|
||||
const code = `type UserIds = string[]`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("string[]")
|
||||
})
|
||||
|
||||
it("should extract type alias definition (tuple)", () => {
|
||||
const code = `type Pair = [string, number]`
|
||||
const ast = parser.parse(code, "ts")
|
||||
expect(ast.typeAliases).toHaveLength(1)
|
||||
expect(ast.typeAliases[0].definition).toBe("[string, number]")
|
||||
})
|
||||
})
|
||||
|
||||
describe("exports", () => {
|
||||
@@ -404,4 +460,376 @@ function mix(
|
||||
expect(ast.exports.length).toBeGreaterThanOrEqual(4)
|
||||
})
|
||||
})
|
||||
|
||||
describe("JSON parsing", () => {
|
||||
it("should extract top-level keys from JSON object", () => {
|
||||
const json = `{
|
||||
"name": "test",
|
||||
"version": "1.0.0",
|
||||
"dependencies": {},
|
||||
"scripts": {}
|
||||
}`
|
||||
const ast = parser.parse(json, "json")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports).toHaveLength(4)
|
||||
expect(ast.exports.map((e) => e.name)).toEqual([
|
||||
"name",
|
||||
"version",
|
||||
"dependencies",
|
||||
"scripts",
|
||||
])
|
||||
expect(ast.exports.every((e) => e.kind === "variable")).toBe(true)
|
||||
})
|
||||
|
||||
it("should handle empty JSON object", () => {
|
||||
const json = `{}`
|
||||
const ast = parser.parse(json, "json")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports).toHaveLength(0)
|
||||
})
|
||||
})
|
||||
|
||||
describe("YAML parsing", () => {
|
||||
it("should extract top-level keys from YAML", () => {
|
||||
const yaml = `name: test
|
||||
version: 1.0.0
|
||||
dependencies:
|
||||
foo: ^1.0.0
|
||||
scripts:
|
||||
test: vitest`
|
||||
|
||||
const ast = parser.parse(yaml, "yaml")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports.length).toBeGreaterThanOrEqual(4)
|
||||
expect(ast.exports.map((e) => e.name)).toContain("name")
|
||||
expect(ast.exports.map((e) => e.name)).toContain("version")
|
||||
expect(ast.exports.every((e) => e.kind === "variable")).toBe(true)
|
||||
})
|
||||
|
||||
it("should handle YAML array at root", () => {
|
||||
const yaml = `- item1
|
||||
- item2
|
||||
- item3`
|
||||
|
||||
const ast = parser.parse(yaml, "yaml")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports).toHaveLength(1)
|
||||
expect(ast.exports[0].name).toBe("(array)")
|
||||
})
|
||||
|
||||
it("should handle empty YAML", () => {
|
||||
const yaml = ``
|
||||
|
||||
const ast = parser.parse(yaml, "yaml")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle YAML with null content", () => {
|
||||
const yaml = `null`
|
||||
|
||||
const ast = parser.parse(yaml, "yaml")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle invalid YAML with parse error", () => {
|
||||
const yaml = `{invalid: yaml: syntax: [}`
|
||||
|
||||
const ast = parser.parse(yaml, "yaml")
|
||||
|
||||
expect(ast.parseError).toBe(true)
|
||||
expect(ast.parseErrorMessage).toBeDefined()
|
||||
})
|
||||
|
||||
it("should track correct line numbers for YAML keys", () => {
|
||||
const yaml = `first: value1
|
||||
second: value2
|
||||
third: value3`
|
||||
|
||||
const ast = parser.parse(yaml, "yaml")
|
||||
|
||||
expect(ast.parseError).toBe(false)
|
||||
expect(ast.exports).toHaveLength(3)
|
||||
expect(ast.exports[0].line).toBe(1)
|
||||
expect(ast.exports[1].line).toBe(2)
|
||||
expect(ast.exports[2].line).toBe(3)
|
||||
})
|
||||
})
|
||||
|
||||
describe("enums (0.24.3)", () => {
|
||||
it("should extract enum with numeric values", () => {
|
||||
const code = `enum Status {
|
||||
Active = 1,
|
||||
Inactive = 0,
|
||||
Pending = 2
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0]).toMatchObject({
|
||||
name: "Status",
|
||||
isExported: false,
|
||||
isConst: false,
|
||||
})
|
||||
expect(ast.enums[0].members).toHaveLength(3)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Active", value: 1 })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "Inactive", value: 0 })
|
||||
expect(ast.enums[0].members[2]).toMatchObject({ name: "Pending", value: 2 })
|
||||
})
|
||||
|
||||
it("should extract enum with string values", () => {
|
||||
const code = `enum Role {
|
||||
Admin = "admin",
|
||||
User = "user",
|
||||
Guest = "guest"
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].members).toHaveLength(3)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Admin", value: "admin" })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "User", value: "user" })
|
||||
expect(ast.enums[0].members[2]).toMatchObject({ name: "Guest", value: "guest" })
|
||||
})
|
||||
|
||||
it("should extract enum without explicit values", () => {
|
||||
const code = `enum Direction {
|
||||
Up,
|
||||
Down,
|
||||
Left,
|
||||
Right
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].members).toHaveLength(4)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Up", value: undefined })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "Down", value: undefined })
|
||||
})
|
||||
|
||||
it("should extract exported enum", () => {
|
||||
const code = `export enum Color {
|
||||
Red = "#FF0000",
|
||||
Green = "#00FF00",
|
||||
Blue = "#0000FF"
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].isExported).toBe(true)
|
||||
expect(ast.exports).toHaveLength(1)
|
||||
expect(ast.exports[0].kind).toBe("type")
|
||||
})
|
||||
|
||||
it("should extract const enum", () => {
|
||||
const code = `const enum HttpStatus {
|
||||
OK = 200,
|
||||
NotFound = 404,
|
||||
InternalError = 500
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].isConst).toBe(true)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "OK", value: 200 })
|
||||
})
|
||||
|
||||
it("should extract exported const enum", () => {
|
||||
const code = `export const enum LogLevel {
|
||||
Debug = 0,
|
||||
Info = 1,
|
||||
Warn = 2,
|
||||
Error = 3
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].isExported).toBe(true)
|
||||
expect(ast.enums[0].isConst).toBe(true)
|
||||
})
|
||||
|
||||
it("should extract line range for enum", () => {
|
||||
const code = `enum Test {
|
||||
A = 1,
|
||||
B = 2
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums[0].lineStart).toBe(1)
|
||||
expect(ast.enums[0].lineEnd).toBe(4)
|
||||
})
|
||||
|
||||
it("should handle enum with negative values", () => {
|
||||
const code = `enum Temperature {
|
||||
Cold = -10,
|
||||
Freezing = -20,
|
||||
Hot = 40
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].members[0]).toMatchObject({ name: "Cold", value: -10 })
|
||||
expect(ast.enums[0].members[1]).toMatchObject({ name: "Freezing", value: -20 })
|
||||
expect(ast.enums[0].members[2]).toMatchObject({ name: "Hot", value: 40 })
|
||||
})
|
||||
|
||||
it("should handle empty enum", () => {
|
||||
const code = `enum Empty {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.enums).toHaveLength(1)
|
||||
expect(ast.enums[0].name).toBe("Empty")
|
||||
expect(ast.enums[0].members).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should not extract enum from JavaScript", () => {
|
||||
const code = `enum Status { Active = 1 }`
|
||||
const ast = parser.parse(code, "js")
|
||||
|
||||
expect(ast.enums).toHaveLength(0)
|
||||
})
|
||||
})
|
||||
|
||||
describe("decorators (0.24.4)", () => {
|
||||
it("should extract class decorator", () => {
|
||||
const code = `@Controller('users')
|
||||
class UserController {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators[0]).toBe("@Controller('users')")
|
||||
})
|
||||
|
||||
it("should extract multiple class decorators", () => {
|
||||
const code = `@Controller('api')
|
||||
@Injectable()
|
||||
@UseGuards(AuthGuard)
|
||||
class ApiController {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(3)
|
||||
expect(ast.classes[0].decorators[0]).toBe("@Controller('api')")
|
||||
expect(ast.classes[0].decorators[1]).toBe("@Injectable()")
|
||||
expect(ast.classes[0].decorators[2]).toBe("@UseGuards(AuthGuard)")
|
||||
})
|
||||
|
||||
it("should extract method decorators", () => {
|
||||
const code = `class UserController {
|
||||
@Get(':id')
|
||||
@Auth()
|
||||
async getUser() {}
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].methods).toHaveLength(1)
|
||||
expect(ast.classes[0].methods[0].decorators).toHaveLength(2)
|
||||
expect(ast.classes[0].methods[0].decorators[0]).toBe("@Get(':id')")
|
||||
expect(ast.classes[0].methods[0].decorators[1]).toBe("@Auth()")
|
||||
})
|
||||
|
||||
it("should extract exported decorated class", () => {
|
||||
const code = `@Injectable()
|
||||
export class UserService {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].isExported).toBe(true)
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators[0]).toBe("@Injectable()")
|
||||
})
|
||||
|
||||
it("should extract decorator with complex arguments", () => {
|
||||
const code = `@Module({
|
||||
imports: [UserModule],
|
||||
controllers: [AppController],
|
||||
providers: [AppService]
|
||||
})
|
||||
class AppModule {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators[0]).toContain("@Module")
|
||||
expect(ast.classes[0].decorators[0]).toContain("imports")
|
||||
})
|
||||
|
||||
it("should extract decorated class with extends", () => {
|
||||
const code = `@Entity()
|
||||
class User extends BaseEntity {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].extends).toBe("BaseEntity")
|
||||
expect(ast.classes[0].decorators).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators![0]).toBe("@Entity()")
|
||||
})
|
||||
|
||||
it("should handle class without decorators", () => {
|
||||
const code = `class SimpleClass {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle method without decorators", () => {
|
||||
const code = `class SimpleClass {
|
||||
simpleMethod() {}
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].methods).toHaveLength(1)
|
||||
expect(ast.classes[0].methods[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle function without decorators", () => {
|
||||
const code = `function simpleFunc() {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.functions).toHaveLength(1)
|
||||
expect(ast.functions[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should handle arrow function without decorators", () => {
|
||||
const code = `const arrowFn = () => {}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.functions).toHaveLength(1)
|
||||
expect(ast.functions[0].decorators).toHaveLength(0)
|
||||
})
|
||||
|
||||
it("should extract NestJS controller pattern", () => {
|
||||
const code = `@Controller('users')
|
||||
export class UserController {
|
||||
@Get()
|
||||
findAll() {}
|
||||
|
||||
@Get(':id')
|
||||
findOne() {}
|
||||
|
||||
@Post()
|
||||
@Body()
|
||||
create() {}
|
||||
}`
|
||||
const ast = parser.parse(code, "ts")
|
||||
|
||||
expect(ast.classes).toHaveLength(1)
|
||||
expect(ast.classes[0].decorators).toContain("@Controller('users')")
|
||||
expect(ast.classes[0].methods).toHaveLength(3)
|
||||
expect(ast.classes[0].methods[0].decorators).toContain("@Get()")
|
||||
expect(ast.classes[0].methods[1].decorators).toContain("@Get(':id')")
|
||||
expect(ast.classes[0].methods[2].decorators).toContain("@Post()")
|
||||
})
|
||||
})
|
||||
})
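The enum and decorator tests above also imply the shape of the AST slices that `ASTParser.parse` returns. A sketch of those types, with field names taken from the assertions and everything else assumed:

```typescript
// Shape implied by the enum/decorator assertions in ASTParser.test.ts;
// the real FileAST types in domain/value-objects/FileAST.ts may differ.
interface EnumMemberSketch {
    name: string
    value?: string | number // undefined when no explicit value is given
}

interface EnumInfoSketch {
    name: string
    isExported: boolean
    isConst: boolean
    members: EnumMemberSketch[]
    lineStart: number
    lineEnd: number
}

interface ClassInfoSketch {
    name: string
    isExported: boolean
    extends?: string
    decorators: string[] // full decorator text, e.g. "@Controller('users')"
    methods: { name: string; decorators: string[] }[]
}
```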
|
||||
|
||||
@@ -3,6 +3,7 @@ import { MetaAnalyzer } from "../../../../src/infrastructure/indexer/MetaAnalyze
|
||||
import { ASTParser } from "../../../../src/infrastructure/indexer/ASTParser.js"
|
||||
import type { FileAST } from "../../../../src/domain/value-objects/FileAST.js"
|
||||
import { createEmptyFileAST } from "../../../../src/domain/value-objects/FileAST.js"
|
||||
import { createFileMeta, type FileMeta } from "../../../../src/domain/value-objects/FileMeta.js"
|
||||
|
||||
describe("MetaAnalyzer", () => {
|
||||
let analyzer: MetaAnalyzer
|
||||
@@ -737,4 +738,368 @@ export function createComponent(): MyComponent {
|
||||
expect(meta.fileType).toBe("source")
|
||||
})
|
||||
})
|
||||
|
||||
describe("computeTransitiveCounts", () => {
|
||||
it("should compute transitive dependents for a simple chain", () => {
|
||||
// A -> B -> C (A depends on B, B depends on C)
|
||||
// So C has transitive dependents: B, A
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2) // B and A
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepCount).toBe(1) // A
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0) // none
|
||||
})
|
||||
|
||||
it("should compute transitive dependencies for a simple chain", () => {
|
||||
// A -> B -> C (A depends on B, B depends on C)
|
||||
// So A has transitive dependencies: B, C
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(2) // B and C
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepByCount).toBe(1) // C
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepByCount).toBe(0) // none
|
||||
})
|
||||
|
||||
it("should handle diamond dependency pattern", () => {
|
||||
// A
|
||||
// / \
|
||||
// B C
|
||||
// \ /
|
||||
// D
|
||||
// A depends on B and C, both depend on D
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts", "/project/c.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/d.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/d.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/d.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts", "/project/c.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
// D is depended on by B, C, and transitively by A
|
||||
expect(metas.get("/project/d.ts")!.transitiveDepCount).toBe(3)
|
||||
// A depends on B, C, and transitively on D
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(3)
|
||||
})
|
||||
|
||||
it("should handle circular dependencies gracefully", () => {
|
||||
// A -> B -> C -> A (circular)
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: ["/project/c.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/a.ts"],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
// Should not throw, should handle cycles
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
// Each file has the other 2 as transitive dependents
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(2)
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepCount).toBe(2)
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2)
|
||||
})
|
||||
|
||||
it("should return 0 for files with no dependencies", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0)
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(0)
|
||||
})
|
||||
|
||||
it("should handle empty metas map", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
// Should not throw
|
||||
expect(() => analyzer.computeTransitiveCounts(metas)).not.toThrow()
|
||||
})
|
||||
|
||||
it("should handle single file", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepCount).toBe(0)
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(0)
|
||||
})
|
||||
|
||||
it("should handle multiple roots depending on same leaf", () => {
|
||||
// A -> C, B -> C
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/a.ts", "/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
expect(metas.get("/project/c.ts")!.transitiveDepCount).toBe(2) // A and B
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(1) // C
|
||||
expect(metas.get("/project/b.ts")!.transitiveDepByCount).toBe(1) // C
|
||||
})
|
||||
|
||||
it("should handle deep dependency chains", () => {
|
||||
// A -> B -> C -> D -> E
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/c.ts"],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/c.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/d.ts"],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/d.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/e.ts"],
|
||||
dependents: ["/project/c.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/e.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/d.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
analyzer.computeTransitiveCounts(metas)
|
||||
|
||||
// E has transitive dependents: D, C, B, A
|
||||
expect(metas.get("/project/e.ts")!.transitiveDepCount).toBe(4)
|
||||
// A has transitive dependencies: B, C, D, E
|
||||
expect(metas.get("/project/a.ts")!.transitiveDepByCount).toBe(4)
|
||||
})
|
||||
})
|
||||
|
||||
describe("getTransitiveDependents", () => {
|
||||
it("should return empty set for file not in metas", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
const cache = new Map<string, Set<string>>()
|
||||
|
||||
const result = analyzer.getTransitiveDependents("/project/unknown.ts", metas, cache)
|
||||
|
||||
expect(result.size).toBe(0)
|
||||
})
|
||||
|
||||
it("should use cache for repeated calls", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/b.ts"],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/a.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
|
||||
const cache = new Map<string, Set<string>>()
|
||||
const result1 = analyzer.getTransitiveDependents("/project/a.ts", metas, cache)
|
||||
const result2 = analyzer.getTransitiveDependents("/project/a.ts", metas, cache)
|
||||
|
||||
// Should return same instance from cache
|
||||
expect(result1).toBe(result2)
|
||||
expect(result1.size).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("getTransitiveDependencies", () => {
|
||||
it("should return empty set for file not in metas", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
const cache = new Map<string, Set<string>>()
|
||||
|
||||
const result = analyzer.getTransitiveDependencies("/project/unknown.ts", metas, cache)
|
||||
|
||||
expect(result.size).toBe(0)
|
||||
})
|
||||
|
||||
it("should use cache for repeated calls", () => {
|
||||
const metas = new Map<string, FileMeta>()
|
||||
metas.set(
|
||||
"/project/a.ts",
|
||||
createFileMeta({
|
||||
dependencies: ["/project/b.ts"],
|
||||
dependents: [],
|
||||
}),
|
||||
)
|
||||
metas.set(
|
||||
"/project/b.ts",
|
||||
createFileMeta({
|
||||
dependencies: [],
|
||||
dependents: ["/project/a.ts"],
|
||||
}),
|
||||
)
|
||||
|
||||
const cache = new Map<string, Set<string>>()
|
||||
const result1 = analyzer.getTransitiveDependencies("/project/a.ts", metas, cache)
|
||||
const result2 = analyzer.getTransitiveDependencies("/project/a.ts", metas, cache)
|
||||
|
||||
// Should return same instance from cache
|
||||
expect(result1).toBe(result2)
|
||||
expect(result1.size).toBe(1)
|
||||
})
|
||||
})
|
||||
|
||||
describe("analyzeAll with transitive counts", () => {
|
||||
it("should compute transitive counts in analyzeAll", () => {
|
||||
const files = new Map<string, { ast: FileAST; content: string }>()
|
||||
|
||||
// A imports B, B imports C
|
||||
const aContent = `import { b } from "./b"`
|
||||
const aAST = parser.parse(aContent, "ts")
|
||||
files.set("/project/src/a.ts", { ast: aAST, content: aContent })
|
||||
|
||||
const bContent = `import { c } from "./c"\nexport const b = () => c()`
|
||||
const bAST = parser.parse(bContent, "ts")
|
||||
files.set("/project/src/b.ts", { ast: bAST, content: bContent })
|
||||
|
||||
const cContent = `export const c = () => 42`
|
||||
const cAST = parser.parse(cContent, "ts")
|
||||
files.set("/project/src/c.ts", { ast: cAST, content: cContent })
|
||||
|
||||
const results = analyzer.analyzeAll(files)
|
||||
|
||||
// C has transitive dependents: B and A
|
||||
expect(results.get("/project/src/c.ts")!.transitiveDepCount).toBe(2)
|
||||
// A has transitive dependencies: B and C
|
||||
expect(results.get("/project/src/a.ts")!.transitiveDepByCount).toBe(2)
|
||||
})
|
||||
})
|
||||
})
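The `computeTransitiveCounts` cases above (chains, diamonds, cycles) are consistent with a plain graph traversal over the `dependents`/`dependencies` edges guarded by a visited set, so circular graphs terminate. A rough sketch under those assumptions; the actual `MetaAnalyzer` method likely memoizes per-file results, as the `getTransitiveDependents`/`getTransitiveDependencies` cache tests suggest:

```typescript
// Minimal meta shape for the sketch; the real FileMeta has many more fields.
interface MetaSketch {
    dependents: string[]
    dependencies: string[]
    transitiveDepCount: number
    transitiveDepByCount: number
}

// Collect every file reachable from `start` via the chosen edge, excluding
// `start` itself; the visited set makes cycles like A -> B -> C -> A safe.
function collectTransitive(
    start: string,
    metas: Map<string, MetaSketch>,
    edge: "dependents" | "dependencies",
): Set<string> {
    const visited = new Set<string>()
    const stack = [...(metas.get(start)?.[edge] ?? [])]
    while (stack.length > 0) {
        const current = stack.pop() as string
        if (current === start || visited.has(current)) continue
        visited.add(current)
        stack.push(...(metas.get(current)?.[edge] ?? []))
    }
    return visited
}

// transitiveDepCount: how many files transitively depend on this file.
// transitiveDepByCount: how many files this file transitively depends on.
function computeTransitiveCountsSketch(metas: Map<string, MetaSketch>): void {
    for (const [path, meta] of metas) {
        meta.transitiveDepCount = collectTransitive(path, metas, "dependents").size
        meta.transitiveDepByCount = collectTransitive(path, metas, "dependencies").size
    }
}
```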
@@ -109,24 +109,80 @@ describe("Watchdog", () => {

    describe("flushAll", () => {
        it("should not throw when no pending changes", () => {
            watchdog.start(tempDir)
            expect(() => watchdog.flushAll()).not.toThrow()
        })

        it("should flush all pending changes", async () => {
        it("should handle flushAll with active timers", async () => {
            const slowWatchdog = new Watchdog({ debounceMs: 1000 })
            const events: FileChangeEvent[] = []
            watchdog.onFileChange((event) => events.push(event))
            watchdog.start(tempDir)
            slowWatchdog.onFileChange((event) => events.push(event))
            slowWatchdog.start(tempDir)

            await new Promise((resolve) => setTimeout(resolve, 200))

            const testFile = path.join(tempDir, "instant-flush.ts")
            await fs.writeFile(testFile, "const x = 1")

            await new Promise((resolve) => setTimeout(resolve, 150))

            const pendingCount = slowWatchdog.getPendingCount()
            if (pendingCount > 0) {
                slowWatchdog.flushAll()
                expect(slowWatchdog.getPendingCount()).toBe(0)
                expect(events.length).toBeGreaterThan(0)
            }

            await slowWatchdog.stop()
        })

        it("should flush all pending changes immediately", async () => {
            const slowWatchdog = new Watchdog({ debounceMs: 500 })
            const events: FileChangeEvent[] = []
            slowWatchdog.onFileChange((event) => events.push(event))
            slowWatchdog.start(tempDir)

            await new Promise((resolve) => setTimeout(resolve, 100))

            const testFile = path.join(tempDir, "flush-test.ts")
            const testFile1 = path.join(tempDir, "flush-test1.ts")
            const testFile2 = path.join(tempDir, "flush-test2.ts")
            await fs.writeFile(testFile1, "const x = 1")
            await fs.writeFile(testFile2, "const y = 2")

            await new Promise((resolve) => setTimeout(resolve, 100))

            const pendingCount = slowWatchdog.getPendingCount()
            if (pendingCount > 0) {
                slowWatchdog.flushAll()
                expect(slowWatchdog.getPendingCount()).toBe(0)
            }

            await slowWatchdog.stop()
        })

        it("should clear all timers when flushing", async () => {
            const slowWatchdog = new Watchdog({ debounceMs: 500 })
            const events: FileChangeEvent[] = []
            slowWatchdog.onFileChange((event) => events.push(event))
            slowWatchdog.start(tempDir)

            await new Promise((resolve) => setTimeout(resolve, 100))

            const testFile = path.join(tempDir, "timer-test.ts")
            await fs.writeFile(testFile, "const x = 1")

            await new Promise((resolve) => setTimeout(resolve, 20))
            await new Promise((resolve) => setTimeout(resolve, 100))

            watchdog.flushAll()
            const pendingBefore = slowWatchdog.getPendingCount()

            await new Promise((resolve) => setTimeout(resolve, 50))
            if (pendingBefore > 0) {
                const eventsBefore = events.length
                slowWatchdog.flushAll()
                expect(slowWatchdog.getPendingCount()).toBe(0)
                expect(events.length).toBeGreaterThan(eventsBefore)
            }

            await slowWatchdog.stop()
        })
    })

@@ -145,7 +201,7 @@ describe("Watchdog", () => {
|
||||
await customWatchdog.stop()
|
||||
})
|
||||
|
||||
it("should handle simple directory patterns", async () => {
|
||||
it("should handle simple directory patterns without wildcards", async () => {
|
||||
const customWatchdog = new Watchdog({
|
||||
debounceMs: 50,
|
||||
ignorePatterns: ["node_modules", "dist"],
|
||||
@@ -158,6 +214,48 @@ describe("Watchdog", () => {
|
||||
|
||||
await customWatchdog.stop()
|
||||
})
|
||||
|
||||
it("should handle mixed wildcard and non-wildcard patterns", async () => {
|
||||
const customWatchdog = new Watchdog({
|
||||
debounceMs: 50,
|
||||
ignorePatterns: ["node_modules", "*.log", "**/*.tmp", "dist", "build"],
|
||||
})
|
||||
|
||||
customWatchdog.start(tempDir)
|
||||
await new Promise((resolve) => setTimeout(resolve, 100))
|
||||
|
||||
expect(customWatchdog.isWatching()).toBe(true)
|
||||
|
||||
await customWatchdog.stop()
|
||||
})
|
||||
|
||||
it("should handle patterns with dots correctly", async () => {
|
||||
const customWatchdog = new Watchdog({
|
||||
debounceMs: 50,
|
||||
ignorePatterns: ["*.test.ts", "**/*.spec.js"],
|
||||
})
|
||||
|
||||
customWatchdog.start(tempDir)
|
||||
await new Promise((resolve) => setTimeout(resolve, 100))
|
||||
|
||||
expect(customWatchdog.isWatching()).toBe(true)
|
||||
|
||||
await customWatchdog.stop()
|
||||
})
|
||||
|
||||
it("should handle double wildcards correctly", async () => {
|
||||
const customWatchdog = new Watchdog({
|
||||
debounceMs: 50,
|
||||
ignorePatterns: ["**/node_modules/**", "**/.git/**"],
|
||||
})
|
||||
|
||||
customWatchdog.start(tempDir)
|
||||
await new Promise((resolve) => setTimeout(resolve, 100))
|
||||
|
||||
expect(customWatchdog.isWatching()).toBe(true)
|
||||
|
||||
await customWatchdog.stop()
|
||||
})
|
||||
})
|
||||
|
||||
describe("file change detection", () => {
|
||||
@@ -333,4 +431,94 @@ describe("Watchdog", () => {
|
||||
}
|
||||
})
|
||||
})
|
||||
|
||||
describe("error handling", () => {
|
||||
it("should handle watcher errors gracefully", async () => {
|
||||
const consoleErrorSpy = vi.spyOn(console, "error").mockImplementation(() => {})
|
||||
|
||||
watchdog.start(tempDir)
|
||||
|
||||
const watcher = (watchdog as any).watcher
|
||||
if (watcher) {
|
||||
watcher.emit("error", new Error("Test watcher error"))
|
||||
}
|
||||
|
||||
await new Promise((resolve) => setTimeout(resolve, 100))
|
||||
|
||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining("Test watcher error"),
|
||||
)
|
||||
|
||||
consoleErrorSpy.mockRestore()
|
||||
})
|
||||
})
|
||||
|
||||
describe("polling mode", () => {
|
||||
it("should support polling mode", () => {
|
||||
const pollingWatchdog = new Watchdog({
|
||||
debounceMs: 50,
|
||||
usePolling: true,
|
||||
pollInterval: 500,
|
||||
})
|
||||
|
||||
pollingWatchdog.start(tempDir)
|
||||
expect(pollingWatchdog.isWatching()).toBe(true)
|
||||
|
||||
pollingWatchdog.stop()
|
||||
})
|
||||
})
|
||||
|
||||
describe("edge cases", () => {
|
||||
it("should handle flushing non-existent change", () => {
|
||||
watchdog.start(tempDir)
|
||||
const flushChange = (watchdog as any).flushChange.bind(watchdog)
|
||||
expect(() => flushChange("/non/existent/path.ts")).not.toThrow()
|
||||
})
|
||||
|
||||
it("should handle clearing timer for same file multiple times", async () => {
|
||||
const events: FileChangeEvent[] = []
|
||||
watchdog.onFileChange((event) => events.push(event))
|
||||
watchdog.start(tempDir)
|
||||
|
||||
await new Promise((resolve) => setTimeout(resolve, 100))
|
||||
|
||||
const testFile = path.join(tempDir, "test.ts")
|
||||
await fs.writeFile(testFile, "const x = 1")
|
||||
await new Promise((resolve) => setTimeout(resolve, 10))
|
||||
await fs.writeFile(testFile, "const x = 2")
|
||||
await new Promise((resolve) => setTimeout(resolve, 10))
|
||||
await fs.writeFile(testFile, "const x = 3")
|
||||
|
||||
await new Promise((resolve) => setTimeout(resolve, 200))
|
||||
|
||||
expect(events.length).toBeGreaterThanOrEqual(0)
|
||||
})
|
||||
|
||||
it("should normalize file paths", async () => {
|
||||
const events: FileChangeEvent[] = []
|
||||
watchdog.onFileChange((event) => {
|
||||
events.push(event)
|
||||
expect(path.isAbsolute(event.path)).toBe(true)
|
||||
})
|
||||
watchdog.start(tempDir)
|
||||
|
||||
await new Promise((resolve) => setTimeout(resolve, 100))
|
||||
|
||||
const testFile = path.join(tempDir, "normalize-test.ts")
|
||||
await fs.writeFile(testFile, "const x = 1")
|
||||
|
||||
await new Promise((resolve) => setTimeout(resolve, 200))
|
||||
})
|
||||
|
||||
it("should handle empty directory", async () => {
|
||||
const emptyDir = await fs.mkdtemp(path.join(os.tmpdir(), "empty-"))
|
||||
const emptyWatchdog = new Watchdog({ debounceMs: 50 })
|
||||
|
||||
emptyWatchdog.start(emptyDir)
|
||||
expect(emptyWatchdog.isWatching()).toBe(true)
|
||||
|
||||
await emptyWatchdog.stop()
|
||||
await fs.rm(emptyDir, { recursive: true, force: true })
|
||||
})
|
||||
})
|
||||
})
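Taken together, the flushAll and edge-case tests imply a per-file debounce: each raw change re-arms a timer keyed by path, `getPendingCount()` reports the armed timers, and `flushAll()` fires and clears them immediately. A sketch of that mechanism under those assumptions; chokidar wiring, ignore patterns, and polling are left out, and none of the names below are the project's actual internals:

```typescript
interface FileChangeEventSketch {
    path: string
    type: "add" | "change" | "unlink"
}

class DebouncedChangeQueue {
    private readonly timers = new Map<string, ReturnType<typeof setTimeout>>()
    private readonly pending = new Map<string, FileChangeEventSketch>()

    constructor(
        private readonly debounceMs: number,
        private readonly emit: (event: FileChangeEventSketch) => void,
    ) {}

    // Each raw watcher event re-arms the timer for its path, so rapid writes
    // to the same file collapse into a single emitted change.
    push(event: FileChangeEventSketch): void {
        const existing = this.timers.get(event.path)
        if (existing) clearTimeout(existing)
        this.pending.set(event.path, event)
        this.timers.set(event.path, setTimeout(() => this.flush(event.path), this.debounceMs))
    }

    getPendingCount(): number {
        return this.pending.size
    }

    // flushAll emits every pending change immediately and clears the timers,
    // which is why the tests expect getPendingCount() to drop to 0 afterwards.
    flushAll(): void {
        for (const path of [...this.pending.keys()]) this.flush(path)
    }

    private flush(path: string): void {
        const timer = this.timers.get(path)
        if (timer) clearTimeout(timer)
        this.timers.delete(path)
        const event = this.pending.get(path)
        this.pending.delete(path)
        if (event) this.emit(event)
    }
}
```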
@@ -95,53 +95,36 @@ describe("OllamaClient", () => {
|
||||
)
|
||||
})
|
||||
|
||||
it("should pass tools when provided", async () => {
|
||||
it("should not pass tools parameter (tools are in system prompt)", async () => {
|
||||
const client = new OllamaClient(defaultConfig)
|
||||
const messages = [createUserMessage("Read file")]
|
||||
const tools = [
|
||||
{
|
||||
name: "get_lines",
|
||||
description: "Get lines from file",
|
||||
parameters: [
|
||||
{
|
||||
name: "path",
|
||||
type: "string" as const,
|
||||
description: "File path",
|
||||
required: true,
|
||||
},
|
||||
],
|
||||
},
|
||||
]
|
||||
|
||||
await client.chat(messages, tools)
|
||||
await client.chat(messages)
|
||||
|
||||
expect(mockOllamaInstance.chat).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
tools: expect.arrayContaining([
|
||||
model: "qwen2.5-coder:7b-instruct",
|
||||
messages: expect.arrayContaining([
|
||||
expect.objectContaining({
|
||||
type: "function",
|
||||
function: expect.objectContaining({
|
||||
name: "get_lines",
|
||||
}),
|
||||
role: "user",
|
||||
content: "Read file",
|
||||
}),
|
||||
]),
|
||||
}),
|
||||
)
|
||||
expect(mockOllamaInstance.chat).toHaveBeenCalledWith(
|
||||
expect.not.objectContaining({
|
||||
tools: expect.anything(),
|
||||
}),
|
||||
)
|
||||
})
|
||||
|
||||
it("should extract tool calls from response", async () => {
|
||||
it("should extract tool calls from XML in response content", async () => {
|
||||
mockOllamaInstance.chat.mockResolvedValue({
|
||||
message: {
|
||||
role: "assistant",
|
||||
content: "",
|
||||
tool_calls: [
|
||||
{
|
||||
function: {
|
||||
name: "get_lines",
|
||||
arguments: { path: "src/index.ts" },
|
||||
},
|
||||
},
|
||||
],
|
||||
content: '<tool_call name="get_lines"><path>src/index.ts</path></tool_call>',
|
||||
tool_calls: undefined,
|
||||
},
|
||||
eval_count: 30,
|
||||
})
|
||||
@@ -424,48 +407,6 @@ describe("OllamaClient", () => {
        })
    })

    describe("tool parameter conversion", () => {
        it("should include enum values when present", async () => {
            const client = new OllamaClient(defaultConfig)
            const messages = [createUserMessage("Get status")]
            const tools = [
                {
                    name: "get_status",
                    description: "Get status",
                    parameters: [
                        {
                            name: "type",
                            type: "string" as const,
                            description: "Status type",
                            required: true,
                            enum: ["active", "inactive", "pending"],
                        },
                    ],
                },
            ]

            await client.chat(messages, tools)

            expect(mockOllamaInstance.chat).toHaveBeenCalledWith(
                expect.objectContaining({
                    tools: expect.arrayContaining([
                        expect.objectContaining({
                            function: expect.objectContaining({
                                parameters: expect.objectContaining({
                                    properties: expect.objectContaining({
                                        type: expect.objectContaining({
                                            enum: ["active", "inactive", "pending"],
                                        }),
                                    }),
                                }),
                            }),
                        }),
                    ]),
                }),
            )
        })
    })

    describe("error handling", () => {
        it("should handle ECONNREFUSED errors", async () => {
            mockOllamaInstance.chat.mockRejectedValue(new Error("ECONNREFUSED"))
@@ -484,5 +425,27 @@ describe("OllamaClient", () => {

            await expect(client.pullModel("test")).rejects.toThrow(/Failed to pull model/)
        })

        it("should handle AbortError correctly", async () => {
            const abortError = new Error("aborted")
            abortError.name = "AbortError"
            mockOllamaInstance.chat.mockRejectedValue(abortError)

            const client = new OllamaClient(defaultConfig)

            await expect(client.chat([createUserMessage("Hello")])).rejects.toThrow(
                /Request was aborted/,
            )
        })

        it("should handle model not found errors", async () => {
            mockOllamaInstance.chat.mockRejectedValue(new Error("model 'unknown' not found"))

            const client = new OllamaClient(defaultConfig)

            await expect(client.chat([createUserMessage("Hello")])).rejects.toThrow(
                /Model.*not found/,
            )
        })
    })
})

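Read together, these error-handling tests fix the client's contract: low-level failures from the Ollama SDK are rethrown as descriptive errors ("Request was aborted", "Model ... not found", a pull-failure message, and so on). A minimal sketch of such a mapping is shown below; `normalizeChatError` is a hypothetical helper name used only for illustration, not the repository's actual implementation:

```typescript
// Hypothetical sketch of the error mapping the tests above imply.
// The real OllamaClient may structure this differently.
export function normalizeChatError(error: unknown, model: string): Error {
    const message = error instanceof Error ? error.message : String(error)

    if (error instanceof Error && error.name === "AbortError") {
        return new Error("Request was aborted")
    }
    if (message.includes("ECONNREFUSED")) {
        return new Error("Cannot connect to Ollama server (ECONNREFUSED)")
    }
    if (/not found/i.test(message)) {
        return new Error(`Model "${model}" not found`)
    }
    return new Error(`Chat request failed: ${message}`)
}
```
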
@@ -72,7 +72,7 @@ describe("ResponseParser", () => {
        })

        it("should parse null values", () => {
            const response = `<tool_call name="test">
            const response = `<tool_call name="get_lines">
<value>null</value>
</tool_call>`

@@ -92,7 +92,7 @@ describe("ResponseParser", () => {
        })

        it("should parse JSON objects", () => {
            const response = `<tool_call name="test">
            const response = `<tool_call name="get_lines">
<config>{"key": "value"}</config>
</tool_call>`

@@ -123,6 +123,161 @@ describe("ResponseParser", () => {
                start: 5,
            })
        })

        it("should reject unknown tool names", () => {
            const response = `<tool_call name="unknown_tool"><path>test.ts</path></tool_call>`

            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(0)
            expect(result.hasParseErrors).toBe(true)
            expect(result.parseErrors[0]).toContain("Unknown tool")
            expect(result.parseErrors[0]).toContain("unknown_tool")
        })

        it("should normalize tool name aliases", () => {
            // get_functions -> get_lines (common LLM typo)
            const response1 = `<tool_call name="get_functions"><path>src/index.ts</path></tool_call>`
            const result1 = parseToolCalls(response1)
            expect(result1.toolCalls).toHaveLength(1)
            expect(result1.toolCalls[0].name).toBe("get_lines")
            expect(result1.hasParseErrors).toBe(false)

            // read_file -> get_lines
            const response2 = `<tool_call name="read_file"><path>test.ts</path></tool_call>`
            const result2 = parseToolCalls(response2)
            expect(result2.toolCalls).toHaveLength(1)
            expect(result2.toolCalls[0].name).toBe("get_lines")

            // find_todos -> get_todos
            const response3 = `<tool_call name="find_todos"></tool_call>`
            const result3 = parseToolCalls(response3)
            expect(result3.toolCalls).toHaveLength(1)
            expect(result3.toolCalls[0].name).toBe("get_todos")

            // list_files -> get_structure
            const response4 = `<tool_call name="list_files"><path>.</path></tool_call>`
            const result4 = parseToolCalls(response4)
            expect(result4.toolCalls).toHaveLength(1)
            expect(result4.toolCalls[0].name).toBe("get_structure")
        })

        // JSON format tests
        it("should parse JSON format tool calls as fallback", () => {
            const response = `{"name": "get_lines", "arguments": {"path": "src/index.ts"}}`
            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].name).toBe("get_lines")
            expect(result.toolCalls[0].params).toEqual({ path: "src/index.ts" })
            expect(result.hasParseErrors).toBe(false)
        })

        it("should parse JSON format with numeric arguments", () => {
            const response = `{"name": "get_lines", "arguments": {"path": "src/index.ts", "start": 1, "end": 50}}`
            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].params).toEqual({
                path: "src/index.ts",
                start: 1,
                end: 50,
            })
        })

        it("should parse JSON format with surrounding text", () => {
            const response = `I'll read the file for you:
{"name": "get_lines", "arguments": {"path": "src/index.ts"}}
Let me know if you need more.`

            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].name).toBe("get_lines")
            expect(result.content).toContain("I'll read the file for you:")
            expect(result.content).toContain("Let me know if you need more.")
        })

        it("should normalize tool name aliases in JSON format", () => {
            // read_file -> get_lines
            const response = `{"name": "read_file", "arguments": {"path": "test.ts"}}`
            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].name).toBe("get_lines")
        })

        it("should reject unknown tool names in JSON format", () => {
            const response = `{"name": "unknown_tool", "arguments": {"path": "test.ts"}}`
            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(0)
            expect(result.hasParseErrors).toBe(true)
            expect(result.parseErrors[0]).toContain("unknown_tool")
        })

        it("should prefer XML over JSON when both present", () => {
            const response = `<tool_call name="get_lines"><path>xml.ts</path></tool_call>
{"name": "get_function", "arguments": {"path": "json.ts", "name": "foo"}}`

            const result = parseToolCalls(response)

            // Should only parse XML since it was found first
            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].name).toBe("get_lines")
            expect(result.toolCalls[0].params.path).toBe("xml.ts")
        })

        it("should parse JSON with empty arguments", () => {
            const response = `{"name": "git_status", "arguments": {}}`
            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].name).toBe("git_status")
            expect(result.toolCalls[0].params).toEqual({})
        })

        it("should support CDATA for multiline content", () => {
            const response = `<tool_call name="edit_lines">
<path>src/index.ts</path>
<content><![CDATA[const x = 1;
const y = 2;]]></content>
</tool_call>`

            const result = parseToolCalls(response)

            expect(result.toolCalls[0].params.content).toBe("const x = 1;\nconst y = 2;")
        })

        it("should handle multiple tool calls with mixed content", () => {
            const response = `Some text
<tool_call name="get_lines"><path>a.ts</path></tool_call>
More text
<tool_call name="get_function"><path>b.ts</path><name>foo</name></tool_call>`

            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(2)
            expect(result.toolCalls[0].name).toBe("get_lines")
            expect(result.toolCalls[1].name).toBe("get_function")
            expect(result.content).toContain("Some text")
            expect(result.content).toContain("More text")
        })

        it("should handle parse errors gracefully and continue", () => {
            const response = `<tool_call name="unknown_tool1"><path>test.ts</path></tool_call>
<tool_call name="get_lines"><path>valid.ts</path></tool_call>
<tool_call name="unknown_tool2"><path>test2.ts</path></tool_call>`

            const result = parseToolCalls(response)

            expect(result.toolCalls).toHaveLength(1)
            expect(result.toolCalls[0].name).toBe("get_lines")
            expect(result.hasParseErrors).toBe(true)
            expect(result.parseErrors).toHaveLength(2)
            expect(result.parseErrors[0]).toContain("unknown_tool1")
            expect(result.parseErrors[1]).toContain("unknown_tool2")
        })
    })

    describe("formatToolCallsAsXml", () => {

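Taken as a whole, the new parseToolCalls tests describe the parser's observable behaviour: XML `<tool_call>` blocks are preferred, a bare JSON object is accepted as a fallback, well-known alias names are normalized to canonical tools, and unknown tools end up in parseErrors rather than toolCalls. A rough sketch of the result shape and alias table these expectations imply is given below; the type and constant names are illustrative, not copied from the source:

```typescript
// Illustrative types inferred from the test expectations above;
// the actual ResponseParser implementation may differ.
export interface ParsedToolCall {
    name: string
    params: Record<string, unknown>
}

export interface ParseToolCallsResult {
    toolCalls: ParsedToolCall[]
    content: string // assistant text with the tool-call markup stripped out
    hasParseErrors: boolean
    parseErrors: string[]
}

// Alias table the normalization tests exercise (hypothetical constant name).
export const TOOL_NAME_ALIASES: Record<string, string> = {
    get_functions: "get_lines",
    read_file: "get_lines",
    find_todos: "get_todos",
    list_files: "get_structure",
}
```
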
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff