Compare commits


21 Commits

Author SHA1 Message Date
imfozilbek
225480c806 feat(ipuaro): implement Redis storage module (v0.2.0)
- Add RedisClient with connection management and AOF config
- Add RedisStorage implementing full IStorage interface
- Add Redis key schema for project and session data
- Add generateProjectName() utility
- Add 68 unit tests for Redis module (159 total)
- Update ESLint: no-unnecessary-type-parameters as warn
2025-11-30 00:22:49 +05:00
imfozilbek
fd8e97af0e chore(ipuaro): bump version to 0.1.1 2025-11-29 23:25:49 +05:00
imfozilbek
d36f9a6e21 chore(guardian): bump version to 0.9.3 2025-11-29 23:24:28 +05:00
imfozilbek
4267938dcd docs(guardian): remove fictional success stories and stats 2025-11-29 23:22:26 +05:00
imfozilbek
127c7e2185 docs(ipuaro): improve README with detailed documentation 2025-11-29 23:19:56 +05:00
imfozilbek
130a8c4f24 feat(ipuaro): implement v0.1.0 foundation
- Project setup with tsup, vitest, ESM support
- Domain entities: Session, Project
- Value objects: FileData, FileAST, FileMeta, ChatMessage, ToolCall, ToolResult, UndoEntry
- Service interfaces: IStorage, ILLMClient, ITool, IIndexer, IToolRegistry
- Shared: Config (zod), IpuaroError, utils (hash, tokens), Result type
- CLI with placeholder commands (start, init, index)
- 91 unit tests with 100% coverage
- Fix package scope @puaros -> @samiyev in CLAUDE.md
2025-11-29 23:08:38 +05:00
imfozilbek
7f6180df37 docs: add monorepo versioning strategy and release pipeline
- add Path Reference section with explicit paths
- add Monorepo Versioning Strategy with prefixed tags
- add 6-phase Release Pipeline documentation
- update Git Commit Format for monorepo (package scope)
- update .gitmessage with package scopes
- fix tsconfig.json references (remove non-existent, add ipuaro)
- fix guardian tsconfig formatting (4-space indent)
2025-11-29 22:41:03 +05:00
imfozilbek
daace23814 docs: move ipuaro CONCEPT.md to docs folder 2025-11-29 22:12:28 +05:00
imfozilbek
625e109c0a feat: add ipuaro package with concept and roadmap 2025-11-29 22:10:32 +05:00
imfozilbek
ec7adb1330 docs: add ipuaro package documentation to root files 2025-11-29 22:10:13 +05:00
imfozilbek
085e236c4a docs: move guardian analysis docs to docs folder 2025-11-29 22:09:42 +05:00
imfozilbek
ee6388f587 docs: add research on project structure detection approaches 2025-11-28 11:41:21 +05:00
imfozilbek
a75dbcf147 chore: bump version to 0.9.2 2025-11-27 19:32:07 +05:00
imfozilbek
42da5127cc docs: update CHANGELOG.md for v0.9.2 2025-11-27 19:28:32 +05:00
imfozilbek
0da6d9f3c2 test: update naming convention detector tests for AST-based analysis 2025-11-27 19:27:46 +05:00
imfozilbek
6b35679f09 refactor: update AST strategies to use centralized node type constants 2025-11-27 19:27:30 +05:00
imfozilbek
07e6535633 refactor: add context keywords and improve hardcoded value suggestions 2025-11-27 19:27:07 +05:00
imfozilbek
e8626dd03c refactor: migrate naming convention detector to AST-based analysis 2025-11-27 19:26:43 +05:00
imfozilbek
ce78183c6e refactor: create AST-based naming analyzers for enhanced detection 2025-11-27 19:26:24 +05:00
imfozilbek
1d6aebcd87 refactor: add AST node type constants for tree-sitter analysis 2025-11-27 19:26:01 +05:00
imfozilbek
ceb87f1b1f chore: bump version to 0.9.1 2025-11-26 18:10:36 +05:00
113 changed files with 11654 additions and 1262 deletions

.gitignore

@@ -86,3 +86,4 @@ Thumbs.db
# Yarn Integrity file
.yarn-integrity
packages/guardian/docs/STRATEGIC_ANALYSIS_2025-11.md

.gitmessage

@@ -1,9 +1,17 @@
# <type>(<package>): <subject>
#
# <body>
#
# <footer>
# Format:
# - Package changes: <type>(<package>): <subject>
#   Examples: feat(guardian): add detector
#             fix(ipuaro): resolve memory leak
# - Root changes: <type>: <subject>
#   Examples: chore: update eslint config
#             docs: update root README
# Type should be one of the following:
# * feat: A new feature
# * fix: A bug fix
@@ -16,6 +24,11 @@
# * ci: Changes to CI configuration files and scripts
# * chore: Other changes that don't modify src or test files
# * revert: Reverts a previous commit
# Package scopes:
# * guardian - @puaros/guardian package
# * ipuaro - @puaros/ipuaro package
# * (none) - root-level changes
#
# Subject line rules:
# - Use imperative mood ("add feature" not "added feature")


@@ -1,63 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.4.0] - 2025-11-24
### Added
- Dependency direction enforcement - validate that dependencies flow in the correct direction according to Clean Architecture principles
- Architecture layer violation detection for domain, application, and infrastructure layers
## [0.3.0] - 2025-11-24
### Added
- Entity exposure detection - identify when domain entities are exposed outside their module boundaries
- Enhanced architecture violation reporting
## [0.2.0] - 2025-11-24
### Added
- Framework leak detection - detect when domain layer imports framework code
- Framework leak reporting in CLI
- Framework leak examples and documentation
## [0.1.0] - 2025-11-24
### Added
- Initial monorepo setup with pnpm workspaces
- `@puaros/guardian` package - code quality guardian for vibe coders and enterprise teams
- TypeScript with strict type checking and Vitest configuration
- ESLint strict TypeScript rules with 4-space indentation
- Prettier code formatting (4 spaces, double quotes, no semicolons)
- LINTING.md documentation for code style guidelines
- CLAUDE.md for AI assistant guidance
- EditorConfig for consistent IDE settings
- Node.js version specification (.nvmrc: 22.18.0)
- Vitest testing framework with 80% coverage thresholds
- Guardian dependencies: commander, simple-git, tree-sitter, uuid
### Configuration
- TypeScript: nodenext modules, ES2023 target, strict null checks
- ESLint: Strict type checking, complexity limits, code quality rules
- Prettier: 100 char line length, double quotes, no semicolons, trailing commas
- Test coverage: 80% threshold for lines, functions, branches, statements
### Guardian Package
- Hardcode detection (magic numbers, strings)
- Circular dependency detection
- Naming convention enforcement
- Architecture violation detection
- CLI tool with `guardian` command
- 159 tests, all passing
- Clean Architecture implementation
## [0.0.1] - 2025-11-24
### Added
- Initial project structure
- Monorepo workspace configuration

CLAUDE.md

@@ -4,7 +4,53 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
Puaros is a TypeScript monorepo using pnpm workspaces. Contains two packages:
- **`@samiyev/guardian`** - Code quality guardian for detecting hardcoded values, circular dependencies, framework leaks, naming violations, and architecture violations.
- **`@samiyev/ipuaro`** - Local AI agent for codebase operations with an "infinite" context feeling. Uses lazy loading, Redis persistence, tree-sitter AST parsing, and Ollama LLM integration.

The project uses Node.js 22.18.0 (see `.nvmrc`).
## Path Reference
**Root:** `/Users/fozilbeksamiyev/projects/ailabs/puaros`
### Key Paths
| Description | Path |
|-------------|------|
| **Root** | `.` |
| **Guardian package** | `packages/guardian` |
| **Guardian src** | `packages/guardian/src` |
| **Guardian tests** | `packages/guardian/tests` |
| **Guardian CLI** | `packages/guardian/src/cli` |
| **Guardian domain** | `packages/guardian/src/domain` |
| **Guardian infrastructure** | `packages/guardian/src/infrastructure` |
| **ipuaro package** | `packages/ipuaro` |
| **ipuaro docs** | `packages/ipuaro/docs` |
### File Locations
| File | Location |
|------|----------|
| Root package.json | `./package.json` |
| Guardian package.json | `packages/guardian/package.json` |
| Guardian tsconfig | `packages/guardian/tsconfig.json` |
| Guardian TODO | `packages/guardian/TODO.md` |
| Guardian CHANGELOG | `packages/guardian/CHANGELOG.md` |
| ipuaro ROADMAP | `packages/ipuaro/ROADMAP.md` |
| ESLint config | `./eslint.config.mjs` |
| Prettier config | `./.prettierrc` |
| Base tsconfig | `./tsconfig.base.json` |
### Path Rules
1. **Always use relative paths from project root** (not absolute)
2. **Package paths start with** `packages/<name>/`
3. **Source code is in** `packages/<name>/src/`
4. **Tests are in** `packages/<name>/tests/`
5. **Docs are in** `packages/<name>/docs/` or `./docs/`
## Essential Commands
@@ -100,28 +146,51 @@ From `eslint.config.mjs` and detailed in `LINTING.md`:
Follow Conventional Commits format. See `.gitmessage` for full rules.
**Monorepo format:** `<type>(<package>): <subject>`

Examples:
- `feat(guardian): add circular dependency detector`
- `fix(ipuaro): resolve memory leak in indexer`
- `docs(guardian): update CLI usage examples`
- `refactor(ipuaro): extract tool registry`

**Root-level changes:** `<type>: <subject>` (no scope)
- `chore: update eslint config`
- `docs: update root README`

**Types:** feat, fix, docs, style, refactor, test, chore

**Rules:**
- Imperative mood, no caps, max 50 chars
- Do NOT add "Generated with Claude Code" footer
- Do NOT add "Co-Authored-By: Claude"
## Monorepo Structure
```
puaros/
├── packages/
│   ├── guardian/               # @samiyev/guardian - Code quality analyzer
│   │   ├── src/                # Source files (Clean Architecture)
│   │   │   ├── domain/         # Entities, value objects
│   │   │   ├── application/    # Use cases, DTOs
│   │   │   ├── infrastructure/ # Parsers, analyzers
│   │   │   ├── cli/            # CLI implementation
│   │   │   └── shared/         # Shared utilities
│   │   ├── bin/                # CLI entry point
│   │   ├── tests/              # Test files
│   │   └── examples/           # Usage examples
│   └── ipuaro/                 # @samiyev/ipuaro - Local AI agent
│       ├── src/                # Source files (Clean Architecture)
│       │   ├── domain/         # Entities, value objects, services
│       │   ├── application/    # Use cases, DTOs, mappers
│       │   ├── infrastructure/ # Storage, LLM, indexer, tools
│       │   ├── tui/            # Terminal UI (Ink/React)
│       │   ├── cli/            # CLI commands
│       │   └── shared/         # Types, constants, utils
│       ├── bin/                # CLI entry point
│       ├── tests/              # Unit and E2E tests
│       └── examples/           # Demo projects
├── pnpm-workspace.yaml         # Workspace configuration
└── tsconfig.base.json          # Shared TypeScript config
```
@@ -142,6 +211,34 @@ Key features:
- Architecture violation detection
- CLI tool with `guardian` command
### ipuaro Package Architecture
The ipuaro package follows Clean Architecture principles:
- **Domain Layer**: Entities (Session, Project), value objects (FileData, FileAST, ChatMessage), service interfaces
- **Application Layer**: Use cases (StartSession, HandleMessage, IndexProject, ExecuteTool), DTOs, mappers
- **Infrastructure Layer**: Redis storage, Ollama client, indexer, 18 tool implementations, security
- **TUI Layer**: Ink/React components (StatusBar, Chat, Input, DiffView, ConfirmDialog)
- **CLI Layer**: Commander.js entry point and commands
Key features:
- 18 LLM tools (read, edit, search, analysis, git, run)
- Redis persistence with AOF
- tree-sitter AST parsing (ts, tsx, js, jsx)
- Ollama LLM integration (qwen2.5-coder:7b-instruct)
- File watching via chokidar
- Session and undo management
- Security (blacklist/whitelist for commands)
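The Redis persistence noted above relies on a key schema for project and session data. The helpers below are a hypothetical sketch of such a schema; none of these key shapes come from the actual codebase, they only illustrate the idea of centralizing key construction:

```typescript
// Hypothetical key-schema helpers for Redis storage.
// All key shapes are assumptions for illustration, not ipuaro's
// actual schema.
const redisKeys = {
    project: (name: string): string => `ipuaro:project:${name}`,
    fileData: (name: string, path: string): string => `ipuaro:project:${name}:file:${path}`,
    session: (id: string): string => `ipuaro:session:${id}`,
    undoStack: (id: string): string => `ipuaro:session:${id}:undo`,
}
```

Centralizing key construction keeps the storage layer greppable and avoids typo-prone string literals scattered across the codebase.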
**Tools summary:**
| Category | Tools |
|----------|-------|
| Read | get_lines, get_function, get_class, get_structure |
| Edit | edit_lines, create_file, delete_file |
| Search | find_references, find_definition |
| Analysis | get_dependencies, get_dependents, get_complexity, get_todos |
| Git | git_status, git_diff, git_commit |
| Run | run_command, run_tests |
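These tools are dispatched through the `ITool` interface from the domain layer. The sketch below guesses at that contract: `ITool`, `ToolCall`, and `ToolResult` are names from this changeset, while every field and signature shown is an assumption.

```typescript
// Hedged sketch of a tool contract; field names and signatures are
// assumptions, not the actual ipuaro interfaces.
interface ToolCall {
    name: string
    args: Record<string, unknown>
}

interface ToolResult {
    success: boolean
    output: string
}

interface ITool {
    readonly name: string
    execute(call: ToolCall): Promise<ToolResult>
}

// Minimal example tool: extract TODO comments from a source string.
class GetTodosTool implements ITool {
    readonly name = "get_todos"

    async execute(call: ToolCall): Promise<ToolResult> {
        const source = String(call.args["source"] ?? "")
        const todos = source.split("\n").filter((line) => line.includes("TODO"))
        return { success: true, output: todos.join("\n") }
    }
}
```

A registry (the `IToolRegistry` interface mentioned earlier) would then map tool names reported by the LLM to implementations like this.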
### TypeScript Configuration
Base configuration (`tsconfig.base.json`) uses:
@@ -163,253 +260,254 @@ Guardian package (`packages/guardian/tsconfig.json`):
## Adding New Packages
1. Create `packages/new-package/` directory
2. Add `package.json` with name `@samiyev/new-package`
3. Create `tsconfig.json` extending `../../tsconfig.base.json`
4. Package auto-discovered via `pnpm-workspace.yaml` glob pattern
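As a rough sketch of step 2, a minimal `package.json` could look like this (the script names mirror the commands used elsewhere in this document; all field values are assumptions, and real packages would add entry points and dependencies):

```json
{
    "name": "@samiyev/new-package",
    "version": "0.1.0",
    "type": "module",
    "scripts": {
        "build": "tsup",
        "format": "prettier --write .",
        "test:run": "vitest run",
        "test:coverage": "vitest run --coverage"
    }
}
```

For step 3, the package's `tsconfig.json` would contain `"extends": "../../tsconfig.base.json"` plus any package-specific compiler options.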
## Dependencies
**Guardian package:**
- `commander` - CLI framework
- `simple-git` - Git operations
- `tree-sitter` - AST parsing
- `tree-sitter-javascript/typescript` - JS/TS parsers
- `uuid` - UUID generation

**ipuaro package:**
- `ink`, `ink-text-input`, `react` - Terminal UI
- `ioredis` - Redis client
- `tree-sitter` - AST parsing
- `tree-sitter-javascript/typescript` - JS/TS parsers
- `ollama` - LLM client
- `simple-git` - Git operations
- `chokidar` - File watching
- `commander` - CLI framework
- `zod` - Validation
- `ignore` - Gitignore parsing

**Development tools (shared):**
- Vitest for testing (80% coverage threshold)
- ESLint with TypeScript strict rules
- Prettier (4-space indentation)
- `@vitest/ui` - Interactive testing UI
- `@vitest/coverage-v8` - Coverage reporting
## Monorepo Versioning Strategy

### Git Tag Format

**Prefixed tags for each package:**
```
guardian-v0.5.0
ipuaro-v0.1.0
```

**Why prefixed tags:**
- Independent versioning per package
- Clear release history for each package
- Works with npm publish and CI/CD
- Easy to filter: `git tag -l "guardian-*"`

**Legacy tags:** Tags before monorepo (v0.1.0, v0.2.0, etc.) are kept as-is for historical reference.

### Semantic Versioning

All packages follow SemVer: `MAJOR.MINOR.PATCH`

- **MAJOR** (1.0.0) - Breaking API changes
- **MINOR** (0.1.0) - New features, backwards compatible
- **PATCH** (0.0.1) - Bug fixes, backwards compatible

**Pre-1.0 policy:** Minor bumps (0.x.0) may include breaking changes.

## Release Pipeline

**Quick reference:** Say "run pipeline for [package]" to execute the full release flow.

The pipeline has 6 phases. Each phase must pass before proceeding.

### Phase 1: Quality Gates

```bash
cd packages/<package>

# All must pass:
pnpm format                                         # 4-space indentation
pnpm build                                          # TypeScript compiles
cd ../.. && pnpm eslint "packages/**/*.ts" --fix    # 0 errors, 0 warnings
cd packages/<package>
pnpm test:run                                       # All tests pass
pnpm test:coverage                                  # Coverage ≥80%
```

**Quality Gates:**
- ✅ Format: No changes after `pnpm format`
- ✅ Build: TypeScript compiles without errors
- ✅ Lint: 0 errors, 0 warnings
- ✅ Tests: All tests pass (292/292)
- ✅ Coverage: ≥80% on all metrics

### Phase 2: Documentation

Update these files in `packages/<package>/`:

| File | Action |
|------|--------|
| `README.md` | Add feature docs, update CLI usage, update API |
| `TODO.md` | Mark completed tasks, add new tech debt if any |
| `CHANGELOG.md` | Add version entry with all changes |
| `ROADMAP.md` | Update if milestone completed |

**Tech debt rule:** If implementation leaves known issues, shortcuts, or future improvements needed — add them to TODO.md before committing.

### Phase 3: Manual Testing

```bash
cd packages/<package>

# Test CLI/API manually
node dist/cli/index.js <command> ./examples

# Verify output, edge cases, error handling
```

### Phase 4: Commit

```bash
git add .
git commit -m "<type>(<package>): <description>"

# Examples:
#   feat(guardian): add --limit option
#   fix(ipuaro): resolve memory leak in indexer
#   docs(guardian): update API examples
```

**Commit types:** feat, fix, docs, style, refactor, test, chore

### Phase 5: Version & Tag

```bash
cd packages/<package>

# Bump version
npm version patch   # 0.5.2 → 0.5.3 (bug fix)
npm version minor   # 0.5.2 → 0.6.0 (new feature)
npm version major   # 0.5.2 → 1.0.0 (breaking change)

# Create prefixed git tag
git tag <package>-v<version>
# Example: git tag guardian-v0.6.0

# Push
git push origin main
git push origin <package>-v<version>
```

### Phase 6: Publish (Maintainers Only)

```bash
cd packages/<package>

# Final verification
pnpm build && pnpm test:run && pnpm test:coverage

# Check package contents
npm pack --dry-run

# Publish
npm publish --access public

# Verify
npm info @samiyev/<package>
```
## Pipeline Checklist

Copy and use for each release:

```markdown
## Release: <package> v<version>

### Quality Gates
- [ ] `pnpm format` - no changes
- [ ] `pnpm build` - compiles
- [ ] `pnpm eslint` - 0 errors, 0 warnings
- [ ] `pnpm test:run` - all pass
- [ ] `pnpm test:coverage` - ≥80%

### Documentation
- [ ] README.md updated
- [ ] TODO.md - completed tasks marked, new debt added
- [ ] CHANGELOG.md - version entry added
- [ ] ROADMAP.md updated (if needed)

### Testing
- [ ] CLI/API tested manually
- [ ] Edge cases verified

### Release
- [ ] Commit with conventional format
- [ ] Version bumped in package.json
- [ ] Git tag created: <package>-v<version>
- [ ] Pushed to origin
- [ ] Published to npm (if public release)
```
## Common Workflows

### Adding a new CLI option

```bash
# 1. Add to cli/constants.ts (CLI_OPTIONS, CLI_DESCRIPTIONS)
# 2. Add option in cli/index.ts (.option() call)
# 3. Parse and use option in action handler
# 4. Test: node dist/cli/index.js <command> --your-option
# 5. Run pipeline
```

### Adding a new detector (guardian)

```bash
# 1. Create value object in domain/value-objects/
# 2. Create detector in infrastructure/analyzers/
# 3. Add interface to domain/services/
# 4. Integrate in application/use-cases/AnalyzeProject.ts
# 5. Add CLI output in cli/index.ts
# 6. Write tests (aim for >90% coverage)
# 7. Run pipeline
```

### Adding a new tool (ipuaro)

```bash
# 1. Define tool schema in infrastructure/tools/schemas/
# 2. Implement tool in infrastructure/tools/
# 3. Register in infrastructure/tools/index.ts
# 4. Add tests
# 5. Run pipeline
```

### Fixing technical debt

```bash
# 1. Find issue in TODO.md
# 2. Implement fix
# 3. Update TODO.md (mark as completed)
# 4. Run pipeline with type: "refactor:" or "fix:"
```
## Debugging Tips

**Build errors:**
```bash
# Check TypeScript errors in detail
pnpm tsc --noEmit

# Check specific file
pnpm tsc --noEmit packages/<package>/src/path/to/file.ts
```

**Test failures:**
```bash
# Run single test file
pnpm vitest tests/path/to/test.test.ts

# Run tests with UI
pnpm test:ui

# Run tests in watch mode for debugging
pnpm test
```

**Coverage issues:**
```bash
# Generate detailed coverage report
pnpm test:coverage

# View HTML report
open coverage/index.html

# Check specific file coverage
pnpm vitest --coverage --reporter=verbose
```
## Important Notes

README.md

@@ -6,6 +6,8 @@ A TypeScript monorepo for code quality and analysis tools.
- **[@puaros/guardian](./packages/guardian)** - Research-backed code quality guardian for vibe coders and enterprise teams. Detects hardcoded values, secrets, circular dependencies, architecture violations, and anemic domain models. Every rule is based on academic research, industry standards (OWASP, SonarQube), and authoritative books (Martin Fowler, Uncle Bob, Eric Evans). Perfect for AI-assisted development and enforcing Clean Architecture at scale.
- **[@puaros/ipuaro](./packages/ipuaro)** - Local AI agent for codebase operations with "infinite" context feeling. Uses lazy loading and smart context management to work with codebases of any size. Features 18 LLM tools for reading, editing, searching, and analyzing code. Built with Ink/React TUI, Redis persistence, tree-sitter AST parsing, and Ollama integration.
## Prerequisites
- Node.js 22.18.0 (use `nvm use` to automatically switch to the correct version)
@@ -75,18 +77,27 @@ pnpm eslint "packages/**/*.ts"
```
puaros/
├── packages/
│   ├── guardian/               # @puaros/guardian - Code quality analyzer
│   │   ├── src/                # Source files (Clean Architecture)
│   │   │   ├── domain/         # Domain layer
│   │   │   ├── application/    # Application layer
│   │   │   ├── infrastructure/ # Infrastructure layer
│   │   │   ├── cli/            # CLI implementation
│   │   │   └── shared/         # Shared utilities
│   │   ├── bin/                # CLI entry point
│   │   ├── tests/              # Unit and integration tests
│   │   └── examples/           # Usage examples
│   └── ipuaro/                 # @puaros/ipuaro - Local AI agent
│       ├── src/                # Source files (Clean Architecture)
│       │   ├── domain/         # Entities, value objects, services
│       │   ├── application/    # Use cases, DTOs, mappers
│       │   ├── infrastructure/ # Storage, LLM, indexer, tools
│       │   ├── tui/            # Terminal UI (Ink/React)
│       │   ├── cli/            # CLI commands
│       │   └── shared/         # Types, constants, utils
│       ├── bin/                # CLI entry point
│       ├── tests/              # Unit and E2E tests
│       └── examples/           # Demo projects
├── pnpm-workspace.yaml         # Workspace configuration
├── tsconfig.base.json          # Shared TypeScript config
├── eslint.config.mjs           # ESLint configuration
@@ -204,6 +215,79 @@ guardian check ./src --format json > report.json
guardian check ./src --fail-on hardcode --fail-on circular
```
## ipuaro Package
The `@puaros/ipuaro` package is a local AI agent for codebase operations:
### Features
- **Infinite Context Feeling**: Lazy loading and smart context management for any codebase size
- **18 LLM Tools**: Read, edit, search, analyze code through natural language
- **Terminal UI**: Full-featured TUI built with Ink/React
- **Redis Persistence**: Sessions, undo stack, and project index stored in Redis
- **AST Parsing**: tree-sitter for TypeScript/JavaScript analysis
- **File Watching**: Real-time index updates via chokidar
- **Security**: Blacklist/whitelist for command execution
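The security model above (blacklist/whitelist for command execution) can be pictured as a small gate in front of `run_command`. This is a hedged sketch rather than ipuaro's actual implementation; the rule lists and the three-way verdict are assumptions:

```typescript
// Hypothetical command gate: deny known-dangerous commands, auto-allow
// known-safe ones, and ask for confirmation on everything else.
const BLACKLIST: RegExp[] = [/\brm\s+-rf\b/, /\bgit\s+push\s+--force\b/]
const WHITELIST: RegExp[] = [/^pnpm\s+(test|build|format)\b/, /^git\s+(status|diff|log)\b/]

type Verdict = "allow" | "confirm" | "deny"

function checkCommand(cmd: string): Verdict {
    if (BLACKLIST.some((rule) => rule.test(cmd))) return "deny"
    if (WHITELIST.some((rule) => rule.test(cmd))) return "allow"
    return "confirm" // anything unrecognized requires explicit user approval
}
```

Checking the blacklist first means a command matching both lists is still denied, which is the safer default.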
### Tech Stack
| Component | Technology |
|-----------|------------|
| Runtime | Node.js + TypeScript |
| TUI | Ink (React for terminal) |
| Storage | Redis with AOF persistence |
| AST | tree-sitter (ts, tsx, js, jsx) |
| LLM | Ollama (qwen2.5-coder:7b-instruct) |
| Git | simple-git |
| File watching | chokidar |
### Tools (18 total)
| Category | Tools |
|----------|-------|
| **Read** | get_lines, get_function, get_class, get_structure |
| **Edit** | edit_lines, create_file, delete_file |
| **Search** | find_references, find_definition |
| **Analysis** | get_dependencies, get_dependents, get_complexity, get_todos |
| **Git** | git_status, git_diff, git_commit |
| **Run** | run_command, run_tests |
### Architecture
Built with Clean Architecture principles:
- **Domain Layer**: Entities, value objects, service interfaces
- **Application Layer**: Use cases, DTOs, mappers
- **Infrastructure Layer**: Redis storage, Ollama client, indexer, tools
- **TUI Layer**: Ink/React components and hooks
- **CLI Layer**: Commander.js entry point
### Usage
```bash
# Start TUI in current directory
ipuaro
# Start in specific directory
ipuaro /path/to/project
# Index only (no TUI)
ipuaro index
# With auto-apply mode
ipuaro --auto-apply
```
### Commands
| Command | Description |
|---------|-------------|
| `/help` | Show all commands |
| `/clear` | Clear chat history |
| `/undo` | Revert last file change |
| `/sessions` | Manage sessions |
| `/status` | System status |
| `/reindex` | Force reindexation |
## Dependencies
Guardian package uses:

eslint.config.mjs

@@ -74,6 +74,7 @@ export default tseslint.config(
'@typescript-eslint/require-await': 'warn',
'@typescript-eslint/no-unnecessary-condition': 'off', // Sometimes useful for defensive coding
'@typescript-eslint/no-non-null-assertion': 'warn',
'@typescript-eslint/no-unnecessary-type-parameters': 'warn', // Allow generic JSON parsers
// ========================================
// Code Quality & Best Practices

packages/guardian/CHANGELOG.md

@@ -5,6 +5,32 @@ All notable changes to @samiyev/guardian will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.9.2] - 2025-11-27
### Changed
- 🔄 **Refactored naming convention detector** - Migrated from regex-based to AST-based analysis:
- Replaced regex pattern matching with tree-sitter Abstract Syntax Tree traversal
- Improved accuracy with AST node context awareness (classes, interfaces, functions, variables)
- Reduced false positives with better naming pattern detection
- Added centralized AST node type constants (`ast-node-types.ts`) for maintainability
- New modular architecture with specialized analyzers:
- `AstClassNameAnalyzer` - Class naming validation
- `AstInterfaceNameAnalyzer` - Interface naming validation
- `AstFunctionNameAnalyzer` - Function naming validation
- `AstVariableNameAnalyzer` - Variable naming validation
- `AstNamingTraverser` - AST traversal for naming analysis
- Enhanced context-aware suggestions for hardcoded values:
- Added context keywords (EMAIL_CONTEXT_KEYWORDS, API_KEY_CONTEXT_KEYWORDS, URL_CONTEXT_KEYWORDS, etc.)
- Improved constant name generation based on context (ADMIN_EMAIL, API_SECRET_KEY, DATABASE_URL, etc.)
- Better file path suggestions (CONFIG_ENVIRONMENT, CONFIG_CONTACTS, CONFIG_PATHS, etc.)
### Quality
- **All tests pass** - Updated tests for AST-based naming detection
- **Code organization** - Centralized AST constants reduce code duplication
- **Maintainability** - Modular analyzers improve code separation and testability
## [0.9.1] - 2025-11-26
### Changed

packages/guardian/README.md

@@ -325,17 +325,6 @@ await reportMetrics({
| **AI Enablement** | Safely adopt AI coding tools at scale |
| **Technical Debt Visibility** | Metrics and trends for data-driven decisions |
### Enterprise Success Stories
**Fortune 500 Financial Services** 🏦
> "We have 200+ developers and were struggling with architectural consistency. Guardian reduced our code review cycle time by 35% and caught 12 hardcoded API keys before they hit production. ROI in first month." - VP Engineering
**Scale-up SaaS (Series B)** 📈
> "Guardian allowed us to confidently adopt GitHub Copilot across our team. AI writes code 3x faster, Guardian ensures quality. We ship more features without increasing tech debt." - CTO
**Consulting Firm** 💼
> "We use Guardian on every client project. It enforces our standards automatically, and clients love the quality metrics reports. Saved us from a major security incident when it caught hardcoded AWS credentials." - Lead Architect
## Installation
```bash
@@ -970,36 +959,6 @@ Guardian follows Clean Architecture principles:
- Node.js >= 18.0.0
- TypeScript >= 5.0.0 (for TypeScript projects)
## Real-World Vibe Coding Stats
Based on testing Guardian with AI-generated codebases:
| Metric | Typical AI Code | After Guardian |
|--------|----------------|----------------|
| Hardcoded values | 15-30 per 1000 LOC | 0-2 per 1000 LOC |
| Circular deps | 2-5 per project | 0 per project |
| Architecture violations | 10-20% of files | <1% of files |
| Time to fix issues | Manual review: 2-4 hours | Guardian + AI: 5-10 minutes |
**Common Issues Guardian Finds in AI Code:**
- 🔐 Hardcoded secrets and API keys (CRITICAL)
- ⏱️ Magic timeouts and retry counts
- 🌐 Hardcoded URLs and endpoints
- 🔄 Accidental circular imports
- 📁 Files in wrong architectural layers
- 🏷️ Inconsistent naming patterns
## Success Stories
**Prototype to Production**
> "Built a SaaS MVP with Claude in 3 days. Guardian caught 47 hardcoded values before first deploy. Saved us from production disasters." - Indie Hacker
**Learning Clean Architecture** 📚
> "Guardian taught me Clean Architecture better than any tutorial. Every violation is a mini lesson with suggestions." - Junior Dev
**AI-First Startup** 🚀
> "We ship 5+ features daily using Claude + Guardian. No human code reviews needed for AI-generated code anymore." - Tech Lead
## FAQ for Vibe Coders
**Q: Will Guardian slow down my AI workflow?**


@@ -0,0 +1,979 @@
# Research: Project Structure Detection for Architecture Analysis
This document provides comprehensive research on approaches to detecting and validating project architecture structure. It covers existing tools, academic research, algorithms, and industry best practices that inform Guardian's architecture detection strategy.
---
## Table of Contents
1. [Executive Summary](#1-executive-summary)
2. [Existing Tools Analysis](#2-existing-tools-analysis)
3. [Academic Approaches to Architecture Recovery](#3-academic-approaches-to-architecture-recovery)
4. [Graph Analysis Algorithms](#4-graph-analysis-algorithms)
5. [Configuration Patterns and Best Practices](#5-configuration-patterns-and-best-practices)
6. [Industry Consensus](#6-industry-consensus)
7. [Recommendations for Guardian](#7-recommendations-for-guardian)
8. [Additional Resources](#8-additional-resources)
---
## 1. Executive Summary
### Key Finding
**Industry consensus:** Automatic architecture detection is unreliable. All major tools (ArchUnit, eslint-plugin-boundaries, Nx, dependency-cruiser, SonarQube) require **explicit configuration** from users rather than attempting automatic detection.
### Why Automatic Detection Fails
1. **Too Many Variations**: Project structures vary wildly across teams, frameworks, and domains
2. **False Positives**: Algorithms may "detect" non-existent architectural patterns
3. **Performance**: Graph analysis is slow for large codebases (>2000 files)
4. **Ambiguity**: Same folder names can mean different things in different contexts
5. **Legacy Code**: Poorly structured code produces meaningless analysis results
### Recommended Approach
| Priority | Approach | Description |
|----------|----------|-------------|
| P0 | Pattern-based detection | Glob/regex patterns for layer identification |
| P0 | Configuration file | `.guardianrc.json` for explicit rules |
| P1 | Presets | Pre-configured patterns for common architectures |
| P1 | Generic mode | Fallback with minimal checks |
| P2 | Interactive setup | CLI wizard for configuration generation |
| P2 | Graph visualization | Visual dependency analysis (informational only) |
| ❌ | Auto-detection | NOT recommended as primary strategy |
---
## 2. Existing Tools Analysis
### 2.1 ArchUnit (Java)
**Approach:** Fully declarative - user defines all layers explicitly.
**Official Website:** https://www.archunit.org/
**User Guide:** https://www.archunit.org/userguide/html/000_Index.html
**GitHub Repository:** https://github.com/TNG/ArchUnit
**Key Characteristics:**
- Does NOT detect architecture automatically
- User explicitly defines layers via package patterns
- Fluent API for rule definition
- Supports Layered, Onion, and Hexagonal architectures out-of-box
- Integrates with JUnit/TestNG test frameworks
**Example Configuration:**
```java
layeredArchitecture()
.layer("Controller").definedBy("..controller..")
.layer("Service").definedBy("..service..")
.layer("Persistence").definedBy("..persistence..")
.whereLayer("Controller").mayNotBeAccessedByAnyLayer()
.whereLayer("Service").mayOnlyBeAccessedByLayers("Controller")
.whereLayer("Persistence").mayOnlyBeAccessedByLayers("Service")
```
**References:**
- Baeldung Tutorial: https://www.baeldung.com/java-archunit-intro
- InfoQ Article: https://www.infoq.com/news/2022/10/archunit/
- Examples Repository: https://github.com/TNG/ArchUnit-Examples
---
### 2.2 eslint-plugin-boundaries (TypeScript/JavaScript)
**Approach:** Pattern-based element definition with dependency rules.
**NPM Package:** https://www.npmjs.com/package/eslint-plugin-boundaries
**GitHub Repository:** https://github.com/javierbrea/eslint-plugin-boundaries
**Key Characteristics:**
- Does NOT detect architecture automatically
- Uses micromatch/glob patterns for element identification
- Supports capture groups for dynamic element naming
- TypeScript import type awareness (`value` vs `type` imports)
- Works with monorepos
**Example Configuration:**
```javascript
settings: {
"boundaries/elements": [
{
type: "domain",
pattern: "src/domain/*",
mode: "folder",
capture: ["elementName"]
},
{
type: "application",
pattern: "src/application/*",
mode: "folder"
},
{
type: "infrastructure",
pattern: "src/infrastructure/*",
mode: "folder"
}
]
},
rules: {
"boundaries/element-types": [2, {
default: "disallow",
rules: [
{ from: "infrastructure", allow: ["application", "domain"] },
{ from: "application", allow: ["domain"] },
{ from: "domain", disallow: ["*"] }
]
}]
}
```
**References:**
- TypeScript Example: https://github.com/javierbrea/epb-ts-example
- Element Types Documentation: https://github.com/javierbrea/eslint-plugin-boundaries/blob/master/docs/rules/element-types.md
- Medium Tutorial: https://medium.com/@taynan_duarte/ensuring-dependency-rules-in-a-nodejs-application-with-typescript-using-eslint-plugin-boundaries-68b70ce32437
---
### 2.3 SonarQube Architecture as Code
**Approach:** YAML/JSON configuration with automatic code structure analysis.
**Official Documentation:** https://docs.sonarsource.com/sonarqube-server/design-and-architecture/overview/
**Configuration Guide:** https://docs.sonarsource.com/sonarqube-server/design-and-architecture/configuring-the-architecture-analysis/
**Key Characteristics:**
- Introduced in SonarQube 2025 Release 2
- Automatic code structure analysis (basic)
- YAML/JSON configuration for custom rules
- Supports "Perspectives" (multiple views of architecture)
- Hierarchical "Groups" for organization
- Glob and regex pattern support
- Works without configuration for basic checks (cycle detection)
**Supported Languages:**
- Java (SonarQube Server)
- Java, JavaScript, TypeScript (SonarQube Cloud)
- Python, C# (coming soon)
- C++ (under consideration)
**Example Configuration:**
```yaml
# architecture.yaml
perspectives:
  - name: "Clean Architecture"
    groups:
      - name: "Domain"
        patterns:
          - "src/domain/**"
          - "src/core/**"
      - name: "Application"
        patterns:
          - "src/application/**"
          - "src/use-cases/**"
      - name: "Infrastructure"
        patterns:
          - "src/infrastructure/**"
          - "src/adapters/**"
    constraints:
      - from: "Domain"
        deny: ["Application", "Infrastructure"]
      - from: "Application"
        deny: ["Infrastructure"]
```
**References:**
- Blog Announcement: https://www.sonarsource.com/blog/introducing-architecture-as-code-in-sonarqube/
- Security Boulevard Coverage: https://securityboulevard.com/2025/04/introducing-architecture-as-code-in-sonarqube-7/
---
### 2.4 Nx Enforce Module Boundaries
**Approach:** Tag-based system with ESLint integration.
**Official Documentation:** https://nx.dev/docs/features/enforce-module-boundaries
**ESLint Rule Guide:** https://nx.dev/docs/technologies/eslint/eslint-plugin/guides/enforce-module-boundaries
**Key Characteristics:**
- Tag-based constraint system (scope, type)
- Projects tagged in project.json or package.json
- Supports regex patterns in tags
- Two-dimensional constraints (scope + type)
- External dependency blocking
- Integration with Nx project graph
**Example Configuration:**
```json
// project.json
{
"name": "user-domain",
"tags": ["scope:user", "type:domain"]
}
// ESLint config
{
"@nx/enforce-module-boundaries": ["error", {
"depConstraints": [
{
"sourceTag": "type:domain",
"onlyDependOnLibsWithTags": ["type:domain"]
},
{
"sourceTag": "type:application",
"onlyDependOnLibsWithTags": ["type:domain", "type:application"]
},
{
"sourceTag": "scope:user",
"onlyDependOnLibsWithTags": ["scope:user", "scope:shared"]
}
]
}]
}
```
**References:**
- Project Dependency Rules: https://nx.dev/docs/concepts/decisions/project-dependency-rules
- Blog Post on Module Boundaries: https://nx.dev/blog/mastering-the-project-boundaries-in-nx
- Medium Tutorial: https://medium.com/rupesh-tiwari/enforcing-dependency-constraints-within-service-in-nx-monorepo-workspace-56e87e792c98
---
### 2.5 dependency-cruiser
**Approach:** Rule-based validation with visualization capabilities.
**NPM Package:** https://www.npmjs.com/package/dependency-cruiser
**GitHub Repository:** https://github.com/sverweij/dependency-cruiser
**Key Characteristics:**
- Regex patterns for from/to rules
- Multiple output formats (SVG, DOT, Mermaid, JSON, HTML)
- CI/CD integration support
- TypeScript pre-compilation dependency support
- Does NOT detect architecture automatically
**Example Configuration:**
```javascript
// .dependency-cruiser.js
module.exports = {
forbidden: [
{
name: "no-domain-to-infrastructure",
severity: "error",
from: { path: "^src/domain" },
to: { path: "^src/infrastructure" }
},
{
name: "no-circular",
severity: "error",
from: {},
to: { circular: true }
}
],
options: {
doNotFollow: { path: "node_modules" },
tsPreCompilationDeps: true
}
}
```
**References:**
- Options Reference: https://github.com/sverweij/dependency-cruiser/blob/main/doc/options-reference.md
- Rules Reference: https://github.com/sverweij/dependency-cruiser/blob/main/doc/rules-reference.md
- Clean Architecture Tutorial: https://betterprogramming.pub/validate-dependencies-according-to-clean-architecture-743077ea084c
---
### 2.6 ts-arch / ArchUnitTS (TypeScript)
**Approach:** ArchUnit-like fluent API for TypeScript.
**ts-arch GitHub:** https://github.com/ts-arch/ts-arch
**ts-arch Documentation:** https://ts-arch.github.io/ts-arch/
**ArchUnitTS GitHub:** https://github.com/LukasNiessen/ArchUnitTS
**Key Characteristics:**
- Fluent API similar to ArchUnit
- PlantUML diagram validation support
- Jest/Vitest integration
- Nx monorepo support
- Does NOT detect architecture automatically
**Example Usage:**
```typescript
import { filesOfProject, slicesOfProject } from "tsarch"
// Folder-based dependency check
const rule = filesOfProject()
.inFolder("domain")
.shouldNot()
.dependOnFiles()
.inFolder("infrastructure")
await expect(rule).toPassAsync()
// PlantUML diagram validation
const diagramRule = await slicesOfProject()
.definedBy("src/(**/)")
.should()
.adhereToDiagramInFile("architecture.puml")
await expect(diagramRule).toPassAsync()
```
**References:**
- NPM Package: https://www.npmjs.com/package/tsarch
- ArchUnitTS Documentation: https://lukasniessen.github.io/ArchUnitTS/
- DeepWiki Analysis: https://deepwiki.com/ts-arch/ts-arch
---
### 2.7 Madge
**Approach:** Visualization and circular dependency detection.
**NPM Package:** https://www.npmjs.com/package/madge
**GitHub Repository:** https://github.com/pahen/madge
**Key Characteristics:**
- Dependency graph visualization
- Circular dependency detection
- Multiple layout algorithms (dot, neato, fdp, circo)
- Simple CLI interface
- Does NOT define or enforce layers
**Usage:**
```bash
# Find circular dependencies
npx madge --circular src/
# Generate dependency graph
npx madge src/ --image deps.svg
# TypeScript support
npx madge src/main.ts --ts-config tsconfig.json --image ./deps.png
```
**References:**
- NestJS Integration: https://manishbit97.medium.com/identifying-circular-dependencies-in-nestjs-using-madge-de137cd7f74f
- Angular Integration: https://www.angulartraining.com/daily-newsletter/visualizing-internal-dependencies-with-madge/
- React/TypeScript Tutorial: https://dev.to/greenroach/detecting-circular-dependencies-in-a-reacttypescript-app-using-madge-229
**Alternative: Skott**
- Claims to be 7x faster than Madge
- Reference: https://dev.to/antoinecoulon/introducing-skott-the-new-madge-1bfl
---
## 3. Academic Approaches to Architecture Recovery
### 3.1 Software Architecture Recovery Overview
**Wikipedia Definition:** https://en.wikipedia.org/wiki/Software_architecture_recovery
Software architecture recovery is a set of methods for extracting architectural information from lower-level representations of a software system, such as source code. The abstraction process frequently involves clustering source code entities (files, classes, functions) into subsystems according to application-dependent or independent criteria.
**Motivation:**
- Legacy systems often lack architectural documentation
- Existing documentation is frequently out of sync with implementation
- Understanding architecture is essential for maintenance and evolution
---
### 3.2 Machine Learning Approaches
**Research Paper:** "Automatic software architecture recovery: A machine learning approach"
**Source:** ResearchGate - https://www.researchgate.net/publication/261309157_Automatic_software_architecture_recovery_A_machine_learning_approach
**Key Points:**
- Current architecture recovery techniques require heavy human intervention or fail to recover quality components
- Machine learning techniques use multiple feature types:
- Structural features (dependencies, coupling)
- Runtime behavioral features
- Domain/textual features
- Contextual features (code authorship, line co-change)
- Automatically recovering functional architecture facilitates developer understanding
**Limitation:** Requires training data and may not generalize across project types.
---
### 3.3 Genetic Algorithms for Architecture Recovery
**Research Paper:** "Parallelization of genetic algorithms for software architecture recovery"
**Source:** Springer - https://link.springer.com/content/pdf/10.1007/s10515-024-00479-0.pdf
**Key Points:**
- Software Architecture Recovery (SAR) techniques analyze dependencies between modules
- Automatically cluster modules to achieve high modularity
- Many approaches employ Genetic Algorithms (GAs)
- Major drawback: lack of scalability
- Solution: parallel execution of GA subroutines
**Key Finding:** Computing an optimal software clustering is an NP-complete problem.
---
### 3.4 Clustering Algorithms Comparison
**Research Paper:** "A comparative analysis of software architecture recovery techniques"
**Source:** IEEE Xplore - https://ieeexplore.ieee.org/document/6693106/
**Algorithms Compared:**
| Algorithm | Description | Strengths | Weaknesses |
|-----------|-------------|-----------|------------|
| ACDC | Comprehension-Driven Clustering | Finds natural subsystems | Requires parameter tuning |
| LIMBO | Information-Theoretic Clustering | Scalable | May miss domain patterns |
| WCA | Weighted Combined Algorithm | Balances multiple factors | Complex configuration |
| K-means | Baseline clustering | Simple, fast | Poor for code structure |
**Key Finding:** Even the best techniques have surprisingly low accuracy when compared against verified ground truths.
---
### 3.5 ACDC Algorithm (Algorithm for Comprehension-Driven Clustering)
**Original Paper:** "ACDC: An Algorithm for Comprehension-Driven Clustering"
**Source:** ResearchGate - https://www.researchgate.net/publication/221200422_ACDC_An_Algorithm_for_Comprehension-Driven_Clustering
**York University Wiki:** https://wiki.eecs.yorku.ca/project/cluster/protected:acdc
**Algorithm Steps:**
1. Build dependency graph
2. Find "dominator" nodes (subsystem patterns)
3. Group nodes with common dominators
4. Apply orphan adoption for ungrouped nodes
5. Iteratively improve clusters
**Advantages:**
- Considers human comprehension patterns
- Finds natural subsystems
- Works without prior knowledge
**Disadvantages:**
- Requires parameter tuning
- Does not guarantee optimality
- May not work well on poorly structured code
---
### 3.6 LLM-Based Architecture Recovery (Recent Research)
**Research Paper:** "Automated Software Architecture Design Recovery from Source Code Using LLMs"
**Source:** Springer - https://link.springer.com/chapter/10.1007/978-3-032-02138-0_5
**Key Findings:**
- LLMs show promise for automating software architecture recovery
- Effective at identifying:
- ✅ Architectural styles
- ✅ Structural elements
- ✅ Basic design patterns
- Struggle with:
- ❌ Complex abstractions
- ❌ Class relationships
- ❌ Fine-grained design patterns
**Conclusion:** "LLMs can support SAR activities, particularly in identifying structural and stylistic elements, but they struggle with complex abstractions."
**Additional Reference:** arXiv paper on design principles - https://arxiv.org/html/2508.11717
---
## 4. Graph Analysis Algorithms
### 4.1 Louvain Algorithm for Community Detection
**Wikipedia:** https://en.wikipedia.org/wiki/Louvain_method
**Original Paper:** "Fast unfolding of communities in large networks" (2008)
- Authors: Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, Etienne Lefebvre
- Journal: Journal of Statistical Mechanics: Theory and Experiment
- Reference: https://perso.uclouvain.be/vincent.blondel/research/louvain.html
**Algorithm Description:**
1. Initialize each node as its own community
2. For each node, try moving to neighboring communities
3. Select move with maximum modularity gain
4. Merge communities into "super-nodes"
5. Repeat from step 2
**Modularity Formula:**
```
Q = (1/2m) * Σ[Aij - (ki*kj)/(2m)] * δ(ci, cj)
Where:
- Aij = weight of the edge between nodes i and j
- ki, kj = degrees of nodes i and j
- m = sum of all edge weights
- δ(ci, cj) = 1 if ci = cj (same community), else 0
```
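To make the formula concrete, here is a dependency-free TypeScript sketch (names are illustrative, not from any library) that computes Q for an undirected, unweighted graph. It uses the algebraically equivalent per-community form Q = Σc [ec/m − (dc/2m)²], where ec is the number of edges inside community c and dc its total degree:

```typescript
type Edge = [number, number]

// Modularity Q of a partition of an undirected, unweighted graph.
// community[i] is the community id assigned to node i.
function modularity(edges: Edge[], community: number[]): number {
    const m = edges.length
    const internalEdges = new Map<number, number>() // e_c: edges inside community c
    const totalDegree = new Map<number, number>()   // d_c: sum of degrees in community c

    for (const [a, b] of edges) {
        for (const node of [a, b]) {
            const c = community[node]
            totalDegree.set(c, (totalDegree.get(c) ?? 0) + 1)
        }
        if (community[a] === community[b]) {
            const c = community[a]
            internalEdges.set(c, (internalEdges.get(c) ?? 0) + 1)
        }
    }

    let q = 0
    for (const [c, degree] of totalDegree) {
        const ec = internalEdges.get(c) ?? 0
        q += ec / m - (degree / (2 * m)) ** 2
    }
    return q
}

// Two triangles joined by a single bridge edge: a clear 2-community graph.
const edges: Edge[] = [[0, 1], [1, 2], [0, 2], [3, 4], [4, 5], [3, 5], [2, 3]]
const community = [0, 0, 0, 1, 1, 1]
console.log(modularity(edges, community)) // ≈ 0.357, above the Q > 0.3 "good" threshold
```

Production implementations (NetworkX, Graphology) handle weighted edges and resolution parameters; this sketch only demonstrates the metric itself.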
**Characteristics:**
| Parameter | Value |
|-----------|-------|
| Time Complexity | O(n log n) |
| Modularity Range | -1 to 1 |
| Good Result | Q > 0.3 |
| Resolution Limit | Yes (may hide small communities) |
**Implementations:**
- NetworkX: https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.louvain.louvain_communities.html
- Neo4j: https://neo4j.com/docs/graph-data-science/current/algorithms/louvain/
- Graphology: https://graphology.github.io/standard-library/communities-louvain.html
- igraph: https://igraph.org/r/doc/cluster_louvain.html
**Application to Code Analysis:**
```
Dependency Graph:
User.ts → Email.ts, UserId.ts
Order.ts → OrderId.ts, Money.ts
UserController.ts → User.ts, CreateUser.ts
Louvain detects communities:
Community 1: [User.ts, Email.ts, UserId.ts] // User aggregate
Community 2: [Order.ts, OrderId.ts, Money.ts] // Order aggregate
Community 3: [UserController.ts, CreateUser.ts] // User feature
```
---
### 4.2 Modularity as Quality Metric
**Wikipedia:** https://en.wikipedia.org/wiki/Modularity_(networks)
**Definition:** Modularity measures the strength of division of a network into modules (groups, clusters, communities). Networks with high modularity have dense connections within modules but sparse connections between modules.
**Interpretation:**
| Modularity Value | Interpretation |
|------------------|----------------|
| Q < 0 | Non-modular (worse than random) |
| 0 < Q < 0.3 | Weak community structure |
| 0.3 < Q < 0.5 | Moderate community structure |
| Q > 0.5 | Strong community structure |
| Q → 1 | Perfect modularity |
**Research Reference:** "Fast Algorithm for Modularity-Based Graph Clustering" - https://cdn.aaai.org/ojs/8455/8455-13-11983-1-2-20201228.pdf
---
### 4.3 Graph-Based Software Modularization
**Research Paper:** "A graph-based clustering algorithm for software systems modularization"
**Source:** ScienceDirect - https://www.sciencedirect.com/science/article/abs/pii/S0950584920302147
**Key Points:**
- Clustering algorithms partition source code into manageable modules
- Resulting decomposition is called software system structure
- Due to NP-hardness, evolutionary approaches are commonly used
- Objectives:
- Minimize inter-cluster connections
- Maximize intra-cluster connections
- Maximize overall clustering quality
---
### 4.4 Topological Sorting for Layer Detection
**Algorithm Description:**
Layers can be inferred from dependency graph topology:
- **Layer 0 (Domain)**: Nodes with no outgoing dependencies to other layers
- **Layer 1 (Application)**: Nodes depending only on Layer 0
- **Layer 2+ (Infrastructure)**: Nodes depending on lower layers
**Pseudocode:**
```
function detectLayers(graph):
    layers = Map()
    visited = Set()

    function dfs(node):
        if layers.has(node): return layers.get(node)
        if visited.has(node): return 0 // Cycle detected
        visited.add(node)
        deps = graph.getDependencies(node)
        if deps.isEmpty():
            layers.set(node, 0) // Leaf node = Domain
            return 0
        maxDepth = max(deps.map(dfs))
        layers.set(node, maxDepth + 1)
        return maxDepth + 1

    graph.nodes.forEach(dfs)
    return layers
```
**Limitation:** Assumes acyclic graph; circular dependencies break this approach.
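A runnable TypeScript rendering of the pseudocode above (the graph representation and file names are assumptions for illustration; cycles fall back to layer 0 as in the pseudocode):

```typescript
// Dependency graph as an adjacency list: file -> files it imports.
type DepGraph = Map<string, string[]>

// Assign each node a layer equal to the longest dependency chain beneath it.
function detectLayers(graph: DepGraph): Map<string, number> {
    const layers = new Map<string, number>()
    const inProgress = new Set<string>()

    function dfs(node: string): number {
        const known = layers.get(node)
        if (known !== undefined) return known
        if (inProgress.has(node)) return 0 // revisited while in progress: cycle detected
        inProgress.add(node)
        const deps = graph.get(node) ?? []
        const layer = deps.length === 0 ? 0 : Math.max(...deps.map(dfs)) + 1
        inProgress.delete(node)
        layers.set(node, layer)
        return layer
    }

    for (const node of graph.keys()) dfs(node)
    return layers
}

const graph: DepGraph = new Map([
    ["Email.ts", []],
    ["User.ts", ["Email.ts"]],
    ["CreateUser.ts", ["User.ts"]],
    ["UserController.ts", ["CreateUser.ts", "User.ts"]],
])
console.log(detectLayers(graph))
// Email.ts: 0, User.ts: 1, CreateUser.ts: 2, UserController.ts: 3
```

Note that this yields dependency *depth*, not architectural intent: a deeply nested utility will land in a "high" layer even though it belongs to the domain, which is one more reason depth analysis can only suggest, not enforce, layering.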
---
### 4.5 Graph Metrics for Code Quality Assessment
**Useful Metrics:**
| Metric | Description | Good Value |
|--------|-------------|------------|
| Modularity | Clustering quality | > 0.3 |
| Density | Edge/node ratio | Low for good separation |
| Clustering Coefficient | Local clustering | Domain-dependent |
| Cyclic Rate | % of circular deps | < 0.1 (10%) |
| Average Path Length | Mean dependency distance | Domain-dependent (shorter paths suggest tighter coupling) |
**Code Quality Interpretation:**
```
if cyclicRate > 0.5:
    return "SPAGHETTI"        // Cannot determine architecture
if modularity < 0.2:
    return "MONOLITH"         // No clear separation
if modularity > 0.5:
    return "WELL_STRUCTURED"  // Can determine layers
return "MODERATE"
```
---
## 5. Configuration Patterns and Best Practices
### 5.1 Pattern Hierarchy
**Level 1: Minimal Configuration**
```json
{
"architecture": "clean-architecture"
}
```
**Level 2: Custom Paths**
```json
{
"architecture": "clean-architecture",
"layers": {
"domain": ["src/core", "src/domain"],
"application": ["src/app", "src/use-cases"],
"infrastructure": ["src/infra", "src/adapters"]
}
}
```
**Level 3: Full Control**
```json
{
"layers": [
{
"name": "domain",
"patterns": ["src/domain/**", "**/*.entity.ts"],
"allowDependOn": []
},
{
"name": "application",
"patterns": ["src/application/**", "**/*.use-case.ts"],
"allowDependOn": ["domain"]
},
{
"name": "infrastructure",
"patterns": ["src/infrastructure/**", "**/*.controller.ts"],
"allowDependOn": ["domain", "application"]
}
]
}
```
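As an illustration of how such layer patterns could be applied to file paths, here is a hand-rolled sketch (production tools use micromatch or picomatch; this simplified converter only handles `**/`, `**`, and `*`, and the function names are hypothetical):

```typescript
interface LayerRule {
    name: string
    patterns: string[]
}

// Convert a simplified glob to a RegExp: "**/" matches any directory prefix,
// "**" matches anything, "*" matches anything within a single path segment.
function globToRegExp(glob: string): RegExp {
    const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&")
    const pattern = escaped
        .replace(/\*\*\//g, "\u0001") // placeholders so later replacements
        .replace(/\*\*/g, "\u0002")   // don't touch the generated regex text
        .replace(/\*/g, "[^/]*")
        .replace(/\u0001/g, "(?:.*/)?")
        .replace(/\u0002/g, ".*")
    return new RegExp(`^${pattern}$`)
}

// First layer whose pattern list matches the path wins; null = unassigned.
function detectLayer(filePath: string, layers: LayerRule[]): string | null {
    for (const layer of layers) {
        if (layer.patterns.some((p) => globToRegExp(p).test(filePath))) {
            return layer.name
        }
    }
    return null
}

const layers: LayerRule[] = [
    { name: "domain", patterns: ["src/domain/**", "**/*.entity.ts"] },
    { name: "application", patterns: ["src/application/**", "**/*.use-case.ts"] },
    { name: "infrastructure", patterns: ["src/infrastructure/**", "**/*.controller.ts"] },
]

console.log(detectLayer("src/domain/user.ts", layers))         // "domain"
console.log(detectLayer("src/api/user.controller.ts", layers)) // "infrastructure"
```

The second lookup shows why file-name patterns (`**/*.controller.ts`) are a useful fallback when a project does not use the standard folder layout.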
---
### 5.2 Architecture Drift Detection in CI/CD
**Best Practices from Industry:**
**Source:** Firefly Academy - https://www.firefly.ai/academy/implementing-continuous-drift-detection-in-ci-cd-pipelines-with-github-actions-workflow
**Source:** Brainboard Blog - https://blog.brainboard.co/drift-detection-best-practices/
**Key Recommendations:**
1. **Integrate into Pipeline**: Validate architecture on every code update
2. **Continuous Monitoring**: Run automated scans daily minimum, hourly for active projects
3. **Enforce IaC-Only Changes**: All changes through automated workflows
4. **Automated Reconciliation**: Regular drift detection and correction
5. **Proper Alerting**: Slack for minor drift, PagerDuty for critical
6. **Least Privilege**: Limit who can bypass architecture checks
7. **Emergency Process**: Document process for urgent manual changes
8. **Environment Refresh**: Reset after each pipeline run
**Example GitHub Actions Integration:**
```yaml
name: Architecture Check
on: [push, pull_request]
jobs:
architecture:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check Architecture
run: npx guardian check --strict
- name: Generate Report
if: failure()
run: npx guardian report --format html
- name: Upload Report
if: failure()
uses: actions/upload-artifact@v3
with:
name: architecture-report
path: architecture-report.html
```
---
### 5.3 Presets for Common Architectures
**Clean Architecture Preset:**
```json
{
"preset": "clean-architecture",
"layers": {
"domain": {
"patterns": ["**/domain/**", "**/entities/**", "**/core/**"],
"allowDependOn": []
},
"application": {
"patterns": ["**/application/**", "**/use-cases/**", "**/services/**"],
"allowDependOn": ["domain"]
},
"infrastructure": {
"patterns": ["**/infrastructure/**", "**/adapters/**", "**/api/**"],
"allowDependOn": ["domain", "application"]
}
}
}
```
**Hexagonal Architecture Preset:**
```json
{
"preset": "hexagonal",
"layers": {
"core": {
"patterns": ["**/core/**", "**/domain/**"],
"allowDependOn": []
},
"ports": {
"patterns": ["**/ports/**"],
"allowDependOn": ["core"]
},
"adapters": {
"patterns": ["**/adapters/**", "**/infrastructure/**"],
"allowDependOn": ["core", "ports"]
}
}
}
```
**NestJS Preset:**
```json
{
"preset": "nestjs",
"layers": {
"domain": {
"patterns": ["**/*.entity.ts", "**/entities/**"],
"allowDependOn": []
},
"application": {
"patterns": ["**/*.service.ts", "**/*.use-case.ts"],
"allowDependOn": ["domain"]
},
"infrastructure": {
"patterns": ["**/*.controller.ts", "**/*.module.ts", "**/*.resolver.ts"],
"allowDependOn": ["domain", "application"]
}
}
}
```
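One way a preset system could work is shallow-merging user overrides on top of the preset's layer map, so a team can rename one layer's paths without restating the whole architecture. A sketch with an assumed config shape mirroring the examples above (the `PRESETS` registry and `resolveLayers` are hypothetical, not Guardian's actual API):

```typescript
interface LayerDef {
    patterns: string[]
    allowDependOn: string[]
}

type LayerMap = Record<string, LayerDef>

// Hypothetical built-in preset registry (only one preset shown).
const PRESETS: Record<string, LayerMap> = {
    "clean-architecture": {
        domain: { patterns: ["**/domain/**", "**/entities/**"], allowDependOn: [] },
        application: { patterns: ["**/application/**", "**/use-cases/**"], allowDependOn: ["domain"] },
        infrastructure: { patterns: ["**/infrastructure/**", "**/adapters/**"], allowDependOn: ["domain", "application"] },
    },
}

// A user-supplied layer entry replaces the preset's entry wholesale;
// layers the user does not mention are inherited from the preset.
function resolveLayers(preset: string, overrides: LayerMap = {}): LayerMap {
    const base = PRESETS[preset]
    if (!base) throw new Error(`Unknown preset: ${preset}`)
    return { ...base, ...overrides }
}

const layers = resolveLayers("clean-architecture", {
    domain: { patterns: ["src/core/**"], allowDependOn: [] },
})
console.log(layers.domain.patterns) // ["src/core/**"]
```

Whole-entry replacement (rather than deep merge) keeps the semantics predictable: what you see in your override is exactly what the layer becomes.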
---
## 6. Industry Consensus
### 6.1 Why Major Tools Don't Auto-Detect
| Tool | Auto-Detection | Reasoning |
|------|----------------|-----------|
| ArchUnit | ❌ No | "User knows their architecture best" |
| eslint-plugin-boundaries | ❌ No | "Too many structure variations" |
| Nx | ❌ No | "Tag-based approach is more flexible" |
| dependency-cruiser | ❌ No | "Regex patterns cover all cases" |
| SonarQube | ⚠️ Partial | "Basic analysis + config for accuracy" |
### 6.2 Common Themes Across Tools
1. **Explicit Configuration**: All tools require user-defined rules
2. **Pattern Matching**: Glob/regex patterns are universal
3. **Layered Rules**: Allow/deny dependencies between layers
4. **CI/CD Integration**: All support pipeline integration
5. **Visualization**: Optional but valuable for understanding
### 6.3 Graph Analysis Position
Graph analysis is used for:
- ✅ Circular dependency detection
- ✅ Visualization
- ✅ Metrics calculation
- ✅ Suggestion generation
Graph analysis is NOT used for:
- ❌ Primary layer detection
- ❌ Automatic architecture classification
- ❌ Rule enforcement
---
## 7. Recommendations for Guardian
### 7.1 Recommended Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Configuration Layer │
├─────────────────────────────────────────────────────────────┤
│ .guardianrc.json │ package.json │ CLI args │ Interactive │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Strategy Resolver │
├─────────────────────────────────────────────────────────────┤
│ 1. Explicit Config (if .guardianrc.json exists) │
│ 2. Preset Detection (if preset specified) │
│ 3. Smart Defaults (standard patterns) │
│ 4. Generic Mode (fallback - minimal checks) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Analysis Engine │
├─────────────────────────────────────────────────────────────┤
│ Pattern Matcher │ Layer Detector │ Dependency Analyzer │
└─────────────────────────────────────────────────────────────┘
```
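The Strategy Resolver above is essentially a priority chain: the first available source of architectural knowledge wins, and generic mode guarantees a fallback. A hedged TypeScript sketch (the `ProjectContext` shape is an assumption for illustration, not Guardian's actual API):

```typescript
interface ProjectContext {
    hasConfigFile: boolean          // .guardianrc.json present in the project root
    preset?: string                 // e.g. "clean-architecture", if specified
    matchesStandardLayout: boolean  // src/domain, src/application, ... detected
}

type Strategy = "explicit-config" | "preset" | "smart-defaults" | "generic"

// First matching source wins; generic mode is the guaranteed fallback.
function resolveStrategy(ctx: ProjectContext): Strategy {
    if (ctx.hasConfigFile) return "explicit-config"
    if (ctx.preset) return "preset"
    if (ctx.matchesStandardLayout) return "smart-defaults"
    return "generic"
}

console.log(resolveStrategy({ hasConfigFile: false, preset: "clean-architecture", matchesStandardLayout: true }))
// "preset" — an explicit preset outranks detected folder conventions
```

The ordering encodes the research finding directly: explicit user intent always outranks anything inferred from the file system.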
### 7.2 Implementation Priorities
**Phase 1: Configuration File Support**
- Add `.guardianrc.json` parser
- Support custom layer patterns
- Support custom DDD folder names
- Validate configuration on load
**Phase 2: Presets System**
- Clean Architecture preset
- Hexagonal Architecture preset
- NestJS preset
- Feature-based preset
**Phase 3: Smart Defaults**
- Try standard folder names first
- Fall back to file naming patterns
- Support common conventions
**Phase 4: Interactive Setup**
- `guardian init` command
- Project structure scanning
- Configuration file generation
- Preset recommendations
**Phase 5: Generic Mode**
- Minimal checks without layer knowledge
- Hardcode detection
- Secret detection
- Circular dependency detection
- Basic naming conventions
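Phase 1's "validate configuration on load" can start as simple structural checks, e.g. that every `allowDependOn` entry names a defined layer. A sketch with an assumed config shape (a real implementation might use a schema validator such as zod instead):

```typescript
interface LayerConfig {
    name: string
    patterns: string[]
    allowDependOn: string[]
}

// Returns human-readable problems; an empty array means the config is valid.
function validateConfig(layers: LayerConfig[]): string[] {
    const errors: string[] = []
    const names = new Set(layers.map((l) => l.name))
    if (names.size !== layers.length) errors.push("Duplicate layer names")
    for (const layer of layers) {
        if (layer.patterns.length === 0) {
            errors.push(`Layer "${layer.name}" has no patterns`)
        }
        for (const dep of layer.allowDependOn) {
            if (!names.has(dep)) {
                errors.push(`Layer "${layer.name}" allows unknown layer "${dep}"`)
            }
        }
    }
    return errors
}

const errors = validateConfig([
    { name: "domain", patterns: ["src/domain/**"], allowDependOn: [] },
    { name: "application", patterns: ["src/application/**"], allowDependOn: ["domain", "shared"] },
])
console.log(errors) // one error: "application" references undefined layer "shared"
```

Collecting all problems instead of failing on the first one gives users a complete fix list in a single run, which matters for the interactive setup planned in Phase 4.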
### 7.3 Graph Analysis - Optional Feature Only
Graph analysis should be:
- **Optional**: Not required for basic functionality
- **Informational**: For visualization and metrics
- **Suggestive**: Can propose configuration, not enforce it
**CLI Commands:**
```bash
guardian analyze --graph --output deps.svg # Visualization
guardian metrics # Quality metrics
guardian suggest # Configuration suggestions
```
---
## 8. Additional Resources
### Official Documentation
- ArchUnit: https://www.archunit.org/userguide/html/000_Index.html
- eslint-plugin-boundaries: https://github.com/javierbrea/eslint-plugin-boundaries
- SonarQube Architecture: https://docs.sonarsource.com/sonarqube-server/design-and-architecture/overview/
- Nx Module Boundaries: https://nx.dev/docs/features/enforce-module-boundaries
- dependency-cruiser: https://github.com/sverweij/dependency-cruiser
### Academic Papers
- Software Architecture Recovery (Wikipedia): https://en.wikipedia.org/wiki/Software_architecture_recovery
- ACDC Algorithm: https://www.researchgate.net/publication/221200422_ACDC_An_Algorithm_for_Comprehension-Driven_Clustering
- Louvain Method: https://en.wikipedia.org/wiki/Louvain_method
- Graph Modularity: https://en.wikipedia.org/wiki/Modularity_(networks)
- LLM-based SAR: https://link.springer.com/chapter/10.1007/978-3-032-02138-0_5
### Tutorials and Guides
- Clean Architecture Validation: https://betterprogramming.pub/validate-dependencies-according-to-clean-architecture-743077ea084c
- Drift Detection Best Practices: https://blog.brainboard.co/drift-detection-best-practices/
- Louvain Algorithm Tutorial: https://medium.com/data-science-in-your-pocket/community-detection-in-a-graph-using-louvain-algorithm-with-example-7a77e5e4b079
### Related Books
- **Clean Architecture** by Robert C. Martin (2017) - ISBN: 978-0134494166
- **Domain-Driven Design** by Eric Evans (2003) - ISBN: 978-0321125217
- **Implementing Domain-Driven Design** by Vaughn Vernon (2013) - ISBN: 978-0321834577
---
## Conclusion
The research conclusively shows that **automatic architecture detection is unreliable** and **not used by major industry tools**. The recommended approach for Guardian is:
1. **Configuration-first**: Support explicit layer definitions via `.guardianrc.json`
2. **Pattern-based**: Use glob/regex patterns for flexible matching
3. **Presets**: Provide pre-configured patterns for common architectures
4. **Smart defaults**: Try standard conventions when no config exists
5. **Generic fallback**: Provide useful checks even without architecture knowledge
6. **Graph analysis as optional**: Use for visualization and suggestions only
This approach aligns with industry best practices from ArchUnit, eslint-plugin-boundaries, SonarQube, Nx, and dependency-cruiser.
---
**Document Version**: 1.0
**Last Updated**: 2025-11-27
**Author**: Guardian Research Team
**Questions or contributions?**
- 📧 Email: fozilbek.samiyev@gmail.com
- 🐙 GitHub: https://github.com/samiyev/puaros/issues
**Based on research as of**: November 2025


@@ -1,6 +1,6 @@
{
"name": "@samiyev/guardian",
"version": "0.9.0",
"version": "0.9.3",
"description": "Research-backed code quality guardian for AI-assisted development. Detects hardcodes, secrets, circular deps, framework leaks, entity exposure, and 9 architecture violations. Enforces Clean Architecture/DDD principles. Works with GitHub Copilot, Cursor, Windsurf, Claude, ChatGPT, Cline, and any AI coding tool.",
"keywords": [
"puaros",


@@ -240,6 +240,7 @@ export class ExecuteDetection {
for (const file of sourceFiles) {
const namingViolations = this.namingConventionDetector.detectViolations(
file.content,
file.path.filename,
file.layer,
file.path.relative,


@@ -80,3 +80,12 @@ export const ANEMIC_MODEL_MESSAGES = {
ENCAPSULATE_BUSINESS_RULES: "3. Encapsulate business rules inside entity methods",
USE_DOMAIN_EVENTS: "4. Use domain events to communicate state changes",
}
/**
* Example values used in violation messages
*/
export const VIOLATION_EXAMPLE_VALUES = {
UNKNOWN: "unknown",
USER_REPOSITORY: "UserRepository",
FIND_ONE: "findOne",
}


@@ -24,6 +24,106 @@ export const SUGGESTION_KEYWORDS = {
CONSOLE_ERROR: "console.error",
} as const
/**
* Context keywords for email detection
*/
export const EMAIL_CONTEXT_KEYWORDS = {
ADMIN: "admin",
SUPPORT: "support",
NOREPLY: "noreply",
NO_REPLY: "no-reply",
} as const
/**
* Context keywords for API key detection
*/
export const API_KEY_CONTEXT_KEYWORDS = {
SECRET: "secret",
PUBLIC: "public",
} as const
/**
* Context keywords for URL detection
*/
export const URL_CONTEXT_KEYWORDS = {
API: "api",
DATABASE: "database",
DB: "db",
MONGO: "mongo",
POSTGRES: "postgres",
PG: "pg",
} as const
/**
* Context keywords for IP address detection
*/
export const IP_CONTEXT_KEYWORDS = {
SERVER: "server",
REDIS: "redis",
} as const
/**
* Context keywords for file path detection
*/
export const FILE_PATH_CONTEXT_KEYWORDS = {
LOG: "log",
DATA: "data",
TEMP: "temp",
} as const
/**
* Context keywords for date detection
*/
export const DATE_CONTEXT_KEYWORDS = {
DEADLINE: "deadline",
START: "start",
END: "end",
EXPIR: "expir",
} as const
/**
* Context keywords for UUID detection
*/
export const UUID_CONTEXT_KEYWORDS = {
ID: "id",
IDENTIFIER: "identifier",
REQUEST: "request",
SESSION: "session",
} as const
/**
* Context keywords for version detection
*/
export const VERSION_CONTEXT_KEYWORDS = {
APP: "app",
} as const
/**
* Context keywords for color detection
*/
export const COLOR_CONTEXT_KEYWORDS = {
PRIMARY: "primary",
SECONDARY: "secondary",
BACKGROUND: "background",
} as const
/**
* Context keywords for base64 detection
*/
export const BASE64_CONTEXT_KEYWORDS = {
TOKEN: "token",
KEY: "key",
} as const
/**
* Context keywords for config detection
*/
export const CONFIG_CONTEXT_KEYWORDS = {
ENDPOINT: "endpoint",
ROUTE: "route",
CONNECTION: "connection",
} as const
/**
* Constant name templates
*/
@@ -41,6 +141,50 @@ export const CONSTANT_NAMES = {
MAGIC_STRING: "MAGIC_STRING",
MAGIC_NUMBER: "MAGIC_NUMBER",
UNKNOWN_CONSTANT: "UNKNOWN_CONSTANT",
ADMIN_EMAIL: "ADMIN_EMAIL",
SUPPORT_EMAIL: "SUPPORT_EMAIL",
NOREPLY_EMAIL: "NOREPLY_EMAIL",
DEFAULT_EMAIL: "DEFAULT_EMAIL",
API_SECRET_KEY: "API_SECRET_KEY",
API_PUBLIC_KEY: "API_PUBLIC_KEY",
API_KEY: "API_KEY",
DATABASE_URL: "DATABASE_URL",
MONGODB_CONNECTION_STRING: "MONGODB_CONNECTION_STRING",
POSTGRES_URL: "POSTGRES_URL",
BASE_URL: "BASE_URL",
SERVER_IP: "SERVER_IP",
DATABASE_HOST: "DATABASE_HOST",
REDIS_HOST: "REDIS_HOST",
HOST_IP: "HOST_IP",
LOG_FILE_PATH: "LOG_FILE_PATH",
CONFIG_FILE_PATH: "CONFIG_FILE_PATH",
DATA_DIR_PATH: "DATA_DIR_PATH",
TEMP_DIR_PATH: "TEMP_DIR_PATH",
FILE_PATH: "FILE_PATH",
DEADLINE: "DEADLINE",
START_DATE: "START_DATE",
END_DATE: "END_DATE",
EXPIRATION_DATE: "EXPIRATION_DATE",
DEFAULT_DATE: "DEFAULT_DATE",
DEFAULT_ID: "DEFAULT_ID",
REQUEST_ID: "REQUEST_ID",
SESSION_ID: "SESSION_ID",
UUID_CONSTANT: "UUID_CONSTANT",
API_VERSION: "API_VERSION",
APP_VERSION: "APP_VERSION",
VERSION: "VERSION",
PRIMARY_COLOR: "PRIMARY_COLOR",
SECONDARY_COLOR: "SECONDARY_COLOR",
BACKGROUND_COLOR: "BACKGROUND_COLOR",
COLOR_CONSTANT: "COLOR_CONSTANT",
MAC_ADDRESS: "MAC_ADDRESS",
ENCODED_TOKEN: "ENCODED_TOKEN",
ENCODED_KEY: "ENCODED_KEY",
BASE64_VALUE: "BASE64_VALUE",
API_ENDPOINT: "API_ENDPOINT",
ROUTE_PATH: "ROUTE_PATH",
CONNECTION_STRING: "CONNECTION_STRING",
CONFIG_VALUE: "CONFIG_VALUE",
} as const
/**
@@ -50,4 +194,8 @@ export const LOCATIONS = {
SHARED_CONSTANTS: "shared/constants",
DOMAIN_CONSTANTS: "domain/constants",
INFRASTRUCTURE_CONFIG: "infrastructure/config",
CONFIG_ENVIRONMENT: "src/config/environment.ts",
CONFIG_CONTACTS: "src/config/contacts.ts",
CONFIG_PATHS: "src/config/paths.ts",
CONFIG_DATES: "src/config/dates.ts",
} as const


@@ -7,12 +7,14 @@ export interface INamingConventionDetector {
/**
* Detects naming convention violations for a given file
*
* @param content - Source code content to analyze
* @param fileName - Name of the file to check (e.g., "UserService.ts")
* @param layer - Architectural layer of the file (domain, application, infrastructure, shared)
* @param filePath - Relative file path for context
* @returns Array of naming convention violations
*/
detectViolations(
content: string,
fileName: string,
layer: string | undefined,
filePath: string,


@@ -1,6 +1,21 @@
import { ValueObject } from "./ValueObject"
import { DETECTION_PATTERNS, HARDCODE_TYPES } from "../../shared/constants/rules"
import { CONSTANT_NAMES, LOCATIONS, SUGGESTION_KEYWORDS } from "../constants/Suggestions"
import {
API_KEY_CONTEXT_KEYWORDS,
BASE64_CONTEXT_KEYWORDS,
COLOR_CONTEXT_KEYWORDS,
CONFIG_CONTEXT_KEYWORDS,
CONSTANT_NAMES,
DATE_CONTEXT_KEYWORDS,
EMAIL_CONTEXT_KEYWORDS,
FILE_PATH_CONTEXT_KEYWORDS,
IP_CONTEXT_KEYWORDS,
LOCATIONS,
SUGGESTION_KEYWORDS,
URL_CONTEXT_KEYWORDS,
UUID_CONTEXT_KEYWORDS,
VERSION_CONTEXT_KEYWORDS,
} from "../constants/Suggestions"
export type HardcodeType = (typeof HARDCODE_TYPES)[keyof typeof HARDCODE_TYPES]
@@ -162,150 +177,165 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {
const valueType = this.props.valueType
if (valueType === "email") {
if (context.includes("admin")) {
return "ADMIN_EMAIL"
if (context.includes(EMAIL_CONTEXT_KEYWORDS.ADMIN)) {
return CONSTANT_NAMES.ADMIN_EMAIL
}
if (context.includes("support")) {
return "SUPPORT_EMAIL"
if (context.includes(EMAIL_CONTEXT_KEYWORDS.SUPPORT)) {
return CONSTANT_NAMES.SUPPORT_EMAIL
}
if (context.includes("noreply") || context.includes("no-reply")) {
return "NOREPLY_EMAIL"
if (
context.includes(EMAIL_CONTEXT_KEYWORDS.NOREPLY) ||
context.includes(EMAIL_CONTEXT_KEYWORDS.NO_REPLY)
) {
return CONSTANT_NAMES.NOREPLY_EMAIL
}
return "DEFAULT_EMAIL"
return CONSTANT_NAMES.DEFAULT_EMAIL
}
if (valueType === "api_key") {
if (context.includes("secret")) {
return "API_SECRET_KEY"
if (context.includes(API_KEY_CONTEXT_KEYWORDS.SECRET)) {
return CONSTANT_NAMES.API_SECRET_KEY
}
if (context.includes("public")) {
return "API_PUBLIC_KEY"
if (context.includes(API_KEY_CONTEXT_KEYWORDS.PUBLIC)) {
return CONSTANT_NAMES.API_PUBLIC_KEY
}
return "API_KEY"
return CONSTANT_NAMES.API_KEY
}
if (valueType === "url") {
if (context.includes("api")) {
return "API_BASE_URL"
if (context.includes(URL_CONTEXT_KEYWORDS.API)) {
return CONSTANT_NAMES.API_BASE_URL
}
if (context.includes("database") || context.includes("db")) {
return "DATABASE_URL"
if (
context.includes(URL_CONTEXT_KEYWORDS.DATABASE) ||
context.includes(URL_CONTEXT_KEYWORDS.DB)
) {
return CONSTANT_NAMES.DATABASE_URL
}
if (context.includes("mongo")) {
return "MONGODB_CONNECTION_STRING"
if (context.includes(URL_CONTEXT_KEYWORDS.MONGO)) {
return CONSTANT_NAMES.MONGODB_CONNECTION_STRING
}
if (context.includes("postgres") || context.includes("pg")) {
return "POSTGRES_URL"
if (
context.includes(URL_CONTEXT_KEYWORDS.POSTGRES) ||
context.includes(URL_CONTEXT_KEYWORDS.PG)
) {
return CONSTANT_NAMES.POSTGRES_URL
}
return "BASE_URL"
return CONSTANT_NAMES.BASE_URL
}
if (valueType === "ip_address") {
if (context.includes("server")) {
return "SERVER_IP"
if (context.includes(IP_CONTEXT_KEYWORDS.SERVER)) {
return CONSTANT_NAMES.SERVER_IP
}
if (context.includes("database") || context.includes("db")) {
return "DATABASE_HOST"
if (
context.includes(URL_CONTEXT_KEYWORDS.DATABASE) ||
context.includes(URL_CONTEXT_KEYWORDS.DB)
) {
return CONSTANT_NAMES.DATABASE_HOST
}
if (context.includes("redis")) {
return "REDIS_HOST"
if (context.includes(IP_CONTEXT_KEYWORDS.REDIS)) {
return CONSTANT_NAMES.REDIS_HOST
}
return "HOST_IP"
return CONSTANT_NAMES.HOST_IP
}
if (valueType === "file_path") {
if (context.includes("log")) {
return "LOG_FILE_PATH"
if (context.includes(FILE_PATH_CONTEXT_KEYWORDS.LOG)) {
return CONSTANT_NAMES.LOG_FILE_PATH
}
if (context.includes("config")) {
return "CONFIG_FILE_PATH"
if (context.includes(SUGGESTION_KEYWORDS.CONFIG)) {
return CONSTANT_NAMES.CONFIG_FILE_PATH
}
if (context.includes("data")) {
return "DATA_DIR_PATH"
if (context.includes(FILE_PATH_CONTEXT_KEYWORDS.DATA)) {
return CONSTANT_NAMES.DATA_DIR_PATH
}
if (context.includes("temp")) {
return "TEMP_DIR_PATH"
if (context.includes(FILE_PATH_CONTEXT_KEYWORDS.TEMP)) {
return CONSTANT_NAMES.TEMP_DIR_PATH
}
return "FILE_PATH"
return CONSTANT_NAMES.FILE_PATH
}
if (valueType === "date") {
if (context.includes("deadline")) {
return "DEADLINE"
if (context.includes(DATE_CONTEXT_KEYWORDS.DEADLINE)) {
return CONSTANT_NAMES.DEADLINE
}
if (context.includes("start")) {
return "START_DATE"
if (context.includes(DATE_CONTEXT_KEYWORDS.START)) {
return CONSTANT_NAMES.START_DATE
}
if (context.includes("end")) {
return "END_DATE"
if (context.includes(DATE_CONTEXT_KEYWORDS.END)) {
return CONSTANT_NAMES.END_DATE
}
if (context.includes("expir")) {
return "EXPIRATION_DATE"
if (context.includes(DATE_CONTEXT_KEYWORDS.EXPIR)) {
return CONSTANT_NAMES.EXPIRATION_DATE
}
return "DEFAULT_DATE"
return CONSTANT_NAMES.DEFAULT_DATE
}
if (valueType === "uuid") {
if (context.includes("id") || context.includes("identifier")) {
return "DEFAULT_ID"
if (
context.includes(UUID_CONTEXT_KEYWORDS.ID) ||
context.includes(UUID_CONTEXT_KEYWORDS.IDENTIFIER)
) {
return CONSTANT_NAMES.DEFAULT_ID
}
if (context.includes("request")) {
return "REQUEST_ID"
if (context.includes(UUID_CONTEXT_KEYWORDS.REQUEST)) {
return CONSTANT_NAMES.REQUEST_ID
}
if (context.includes("session")) {
return "SESSION_ID"
if (context.includes(UUID_CONTEXT_KEYWORDS.SESSION)) {
return CONSTANT_NAMES.SESSION_ID
}
return "UUID_CONSTANT"
return CONSTANT_NAMES.UUID_CONSTANT
}
if (valueType === "version") {
if (context.includes("api")) {
return "API_VERSION"
if (context.includes(URL_CONTEXT_KEYWORDS.API)) {
return CONSTANT_NAMES.API_VERSION
}
if (context.includes("app")) {
return "APP_VERSION"
if (context.includes(VERSION_CONTEXT_KEYWORDS.APP)) {
return CONSTANT_NAMES.APP_VERSION
}
return "VERSION"
return CONSTANT_NAMES.VERSION
}
if (valueType === "color") {
if (context.includes("primary")) {
return "PRIMARY_COLOR"
if (context.includes(COLOR_CONTEXT_KEYWORDS.PRIMARY)) {
return CONSTANT_NAMES.PRIMARY_COLOR
}
if (context.includes("secondary")) {
return "SECONDARY_COLOR"
if (context.includes(COLOR_CONTEXT_KEYWORDS.SECONDARY)) {
return CONSTANT_NAMES.SECONDARY_COLOR
}
if (context.includes("background")) {
return "BACKGROUND_COLOR"
if (context.includes(COLOR_CONTEXT_KEYWORDS.BACKGROUND)) {
return CONSTANT_NAMES.BACKGROUND_COLOR
}
return "COLOR_CONSTANT"
return CONSTANT_NAMES.COLOR_CONSTANT
}
if (valueType === "mac_address") {
return "MAC_ADDRESS"
return CONSTANT_NAMES.MAC_ADDRESS
}
if (valueType === "base64") {
if (context.includes("token")) {
return "ENCODED_TOKEN"
if (context.includes(BASE64_CONTEXT_KEYWORDS.TOKEN)) {
return CONSTANT_NAMES.ENCODED_TOKEN
}
if (context.includes("key")) {
return "ENCODED_KEY"
if (context.includes(BASE64_CONTEXT_KEYWORDS.KEY)) {
return CONSTANT_NAMES.ENCODED_KEY
}
return "BASE64_VALUE"
return CONSTANT_NAMES.BASE64_VALUE
}
if (valueType === "config") {
if (context.includes("endpoint")) {
return "API_ENDPOINT"
if (context.includes(CONFIG_CONTEXT_KEYWORDS.ENDPOINT)) {
return CONSTANT_NAMES.API_ENDPOINT
}
if (context.includes("route")) {
return "ROUTE_PATH"
if (context.includes(CONFIG_CONTEXT_KEYWORDS.ROUTE)) {
return CONSTANT_NAMES.ROUTE_PATH
}
if (context.includes("connection")) {
return "CONNECTION_STRING"
if (context.includes(CONFIG_CONTEXT_KEYWORDS.CONNECTION)) {
return CONSTANT_NAMES.CONNECTION_STRING
}
return "CONFIG_VALUE"
return CONSTANT_NAMES.CONFIG_VALUE
}
if (value.includes(SUGGESTION_KEYWORDS.HTTP)) {
@@ -339,19 +369,19 @@ export class HardcodedValue extends ValueObject<HardcodedValueProps> {
const valueType = this.props.valueType
if (valueType === "api_key" || valueType === "url" || valueType === "ip_address") {
return "src/config/environment.ts"
return LOCATIONS.CONFIG_ENVIRONMENT
}
if (valueType === "email") {
return "src/config/contacts.ts"
return LOCATIONS.CONFIG_CONTACTS
}
if (valueType === "file_path") {
return "src/config/paths.ts"
return LOCATIONS.CONFIG_PATHS
}
if (valueType === "date") {
return "src/config/dates.ts"
return LOCATIONS.CONFIG_DATES
}
if (


@@ -1,6 +1,10 @@
import { ValueObject } from "./ValueObject"
import { REPOSITORY_VIOLATION_TYPES } from "../../shared/constants/rules"
import { REPOSITORY_FALLBACK_SUGGESTIONS, REPOSITORY_PATTERN_MESSAGES } from "../constants/Messages"
import {
REPOSITORY_FALLBACK_SUGGESTIONS,
REPOSITORY_PATTERN_MESSAGES,
VIOLATION_EXAMPLE_VALUES,
} from "../constants/Messages"
interface RepositoryViolationProps {
readonly violationType:
@@ -105,16 +109,16 @@ export class RepositoryViolation extends ValueObject<RepositoryViolationProps> {
public getMessage(): string {
switch (this.props.violationType) {
case REPOSITORY_VIOLATION_TYPES.ORM_TYPE_IN_INTERFACE:
return `Repository interface uses ORM-specific type '${this.props.ormType || "unknown"}'. Domain should not depend on infrastructure concerns.`
return `Repository interface uses ORM-specific type '${this.props.ormType || VIOLATION_EXAMPLE_VALUES.UNKNOWN}'. Domain should not depend on infrastructure concerns.`
case REPOSITORY_VIOLATION_TYPES.CONCRETE_REPOSITORY_IN_USE_CASE:
return `Use case depends on concrete repository '${this.props.repositoryName || "unknown"}' instead of interface. Use dependency inversion.`
return `Use case depends on concrete repository '${this.props.repositoryName || VIOLATION_EXAMPLE_VALUES.UNKNOWN}' instead of interface. Use dependency inversion.`
case REPOSITORY_VIOLATION_TYPES.NEW_REPOSITORY_IN_USE_CASE:
return `Use case creates repository with 'new ${this.props.repositoryName || "Repository"}()'. Use dependency injection instead.`
case REPOSITORY_VIOLATION_TYPES.NON_DOMAIN_METHOD_NAME:
return `Repository method '${this.props.methodName || "unknown"}' uses technical name. Use domain language instead.`
return `Repository method '${this.props.methodName || VIOLATION_EXAMPLE_VALUES.UNKNOWN}' uses technical name. Use domain language instead.`
default:
return `Repository pattern violation: ${this.props.details}`
@@ -159,8 +163,8 @@ export class RepositoryViolation extends ValueObject<RepositoryViolationProps> {
REPOSITORY_PATTERN_MESSAGES.STEP_USE_DI,
"",
REPOSITORY_PATTERN_MESSAGES.EXAMPLE_PREFIX,
`❌ Bad: constructor(private repo: ${this.props.repositoryName || "UserRepository"})`,
`✅ Good: constructor(private repo: I${this.props.repositoryName?.replace(/^.*?([A-Z]\w+)$/, "$1") || "UserRepository"})`,
`❌ Bad: constructor(private repo: ${this.props.repositoryName || VIOLATION_EXAMPLE_VALUES.USER_REPOSITORY})`,
`✅ Good: constructor(private repo: I${this.props.repositoryName?.replace(/^.*?([A-Z]\w+)$/, "$1") || VIOLATION_EXAMPLE_VALUES.USER_REPOSITORY})`,
].join("\n")
}
@@ -200,7 +204,7 @@ export class RepositoryViolation extends ValueObject<RepositoryViolationProps> {
REPOSITORY_PATTERN_MESSAGES.STEP_AVOID_TECHNICAL,
"",
REPOSITORY_PATTERN_MESSAGES.EXAMPLE_PREFIX,
`❌ Bad: ${this.props.methodName || "findOne"}()`,
`❌ Bad: ${this.props.methodName || VIOLATION_EXAMPLE_VALUES.FIND_ONE}()`,
`✅ Good: ${finalSuggestion}`,
].join("\n")
}


@@ -1,6 +1,7 @@
import Parser from "tree-sitter"
import { IHardcodeDetector } from "../../domain/services/IHardcodeDetector"
import { HardcodedValue } from "../../domain/value-objects/HardcodedValue"
import { FILE_EXTENSIONS } from "../../shared/constants"
import { CodeParser } from "../parsers/CodeParser"
import { AstBooleanAnalyzer } from "../strategies/AstBooleanAnalyzer"
import { AstConfigObjectAnalyzer } from "../strategies/AstConfigObjectAnalyzer"
@@ -112,9 +113,9 @@ export class HardcodeDetector implements IHardcodeDetector {
* Parses code based on file extension
*/
private parseCode(code: string, filePath: string): Parser.Tree {
if (filePath.endsWith(".tsx")) {
if (filePath.endsWith(FILE_EXTENSIONS.TYPESCRIPT_JSX)) {
return this.parser.parseTsx(code)
} else if (filePath.endsWith(".ts")) {
} else if (filePath.endsWith(FILE_EXTENSIONS.TYPESCRIPT)) {
return this.parser.parseTypeScript(code)
}
return this.parser.parseJavaScript(code)


@@ -1,37 +1,72 @@
import Parser from "tree-sitter"
import { INamingConventionDetector } from "../../domain/services/INamingConventionDetector"
import { NamingViolation } from "../../domain/value-objects/NamingViolation"
import {
LAYERS,
NAMING_PATTERNS,
NAMING_VIOLATION_TYPES,
USE_CASE_VERBS,
} from "../../shared/constants/rules"
import {
EXCLUDED_FILES,
FILE_SUFFIXES,
NAMING_ERROR_MESSAGES,
PATH_PATTERNS,
PATTERN_WORDS,
} from "../constants/detectorPatterns"
import { NAMING_SUGGESTION_DEFAULT } from "../constants/naming-patterns"
import { FILE_EXTENSIONS } from "../../shared/constants"
import { EXCLUDED_FILES } from "../constants/detectorPatterns"
import { CodeParser } from "../parsers/CodeParser"
import { AstClassNameAnalyzer } from "../strategies/naming/AstClassNameAnalyzer"
import { AstFunctionNameAnalyzer } from "../strategies/naming/AstFunctionNameAnalyzer"
import { AstInterfaceNameAnalyzer } from "../strategies/naming/AstInterfaceNameAnalyzer"
import { AstNamingTraverser } from "../strategies/naming/AstNamingTraverser"
import { AstVariableNameAnalyzer } from "../strategies/naming/AstVariableNameAnalyzer"
/**
* Detects naming convention violations based on Clean Architecture layers
* Detects naming convention violations using AST-based analysis.
*
* This detector ensures that files follow naming conventions appropriate to their layer:
* - Domain: Entities (nouns), Services (*Service), Value Objects, Repository interfaces (I*Repository)
* - Application: Use cases (verbs), DTOs (*Dto/*Request/*Response), Mappers (*Mapper)
* - Infrastructure: Controllers (*Controller), Repository implementations (*Repository), Services (*Service/*Adapter)
* This detector uses Abstract Syntax Tree (AST) analysis via tree-sitter to identify
* naming convention violations in classes, interfaces, functions, and variables
* according to Clean Architecture layer rules.
*
* The detector uses a modular architecture with specialized components:
* - AstClassNameAnalyzer: Analyzes class names
* - AstInterfaceNameAnalyzer: Analyzes interface names
* - AstFunctionNameAnalyzer: Analyzes function and method names
* - AstVariableNameAnalyzer: Analyzes variable and constant names
* - AstNamingTraverser: Traverses the AST and coordinates analyzers
*
* @example
* ```typescript
* const detector = new NamingConventionDetector()
* const violations = detector.detectViolations('UserDto.ts', 'domain', 'src/domain/UserDto.ts')
* // Returns violation: DTOs should not be in domain layer
* const code = `
* class userService { // Wrong: should be UserService
* GetUser() {} // Wrong: should be getUser
* }
* `
* const violations = detector.detectViolations(code, 'UserService.ts', 'domain', 'src/domain/UserService.ts')
* // Returns array of NamingViolation objects
* ```
*/
export class NamingConventionDetector implements INamingConventionDetector {
private readonly parser: CodeParser
private readonly traverser: AstNamingTraverser
constructor() {
this.parser = new CodeParser()
const classAnalyzer = new AstClassNameAnalyzer()
const interfaceAnalyzer = new AstInterfaceNameAnalyzer()
const functionAnalyzer = new AstFunctionNameAnalyzer()
const variableAnalyzer = new AstVariableNameAnalyzer()
this.traverser = new AstNamingTraverser(
classAnalyzer,
interfaceAnalyzer,
functionAnalyzer,
variableAnalyzer,
)
}
/**
* Detects naming convention violations in the given code
*
* @param content - Source code to analyze
* @param fileName - Name of the file being analyzed
* @param layer - Architectural layer (domain, application, infrastructure, shared)
* @param filePath - File path for context (used in violation reports)
* @returns Array of detected naming violations
*/
public detectViolations(
content: string,
fileName: string,
layer: string | undefined,
filePath: string,
@@ -44,235 +79,23 @@ export class NamingConventionDetector implements INamingConventionDetector {
return []
}
switch (layer) {
case LAYERS.DOMAIN:
return this.checkDomainLayer(fileName, filePath)
case LAYERS.APPLICATION:
return this.checkApplicationLayer(fileName, filePath)
case LAYERS.INFRASTRUCTURE:
return this.checkInfrastructureLayer(fileName, filePath)
case LAYERS.SHARED:
return []
default:
return []
if (!content || content.trim().length === 0) {
return []
}
const tree = this.parseCode(content, filePath)
return this.traverser.traverse(tree, content, layer, filePath)
}
private checkDomainLayer(fileName: string, filePath: string): NamingViolation[] {
const violations: NamingViolation[] = []
const forbiddenPatterns = NAMING_PATTERNS.DOMAIN.ENTITY.forbidden ?? []
for (const forbidden of forbiddenPatterns) {
if (fileName.includes(forbidden)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.FORBIDDEN_PATTERN,
LAYERS.DOMAIN,
filePath,
NAMING_ERROR_MESSAGES.DOMAIN_FORBIDDEN,
fileName,
NAMING_SUGGESTION_DEFAULT,
),
)
return violations
}
/**
* Parses code based on file extension
*/
private parseCode(code: string, filePath: string): Parser.Tree {
if (filePath.endsWith(FILE_EXTENSIONS.TYPESCRIPT_JSX)) {
return this.parser.parseTsx(code)
} else if (filePath.endsWith(FILE_EXTENSIONS.TYPESCRIPT)) {
return this.parser.parseTypeScript(code)
}
if (fileName.endsWith(FILE_SUFFIXES.SERVICE)) {
if (!NAMING_PATTERNS.DOMAIN.SERVICE.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
LAYERS.DOMAIN,
filePath,
NAMING_PATTERNS.DOMAIN.SERVICE.description,
fileName,
),
)
}
return violations
}
if (
fileName.startsWith(PATTERN_WORDS.I_PREFIX) &&
fileName.includes(PATTERN_WORDS.REPOSITORY)
) {
if (!NAMING_PATTERNS.DOMAIN.REPOSITORY_INTERFACE.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_PREFIX,
LAYERS.DOMAIN,
filePath,
NAMING_PATTERNS.DOMAIN.REPOSITORY_INTERFACE.description,
fileName,
),
)
}
return violations
}
if (!NAMING_PATTERNS.DOMAIN.ENTITY.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
LAYERS.DOMAIN,
filePath,
NAMING_PATTERNS.DOMAIN.ENTITY.description,
fileName,
NAMING_ERROR_MESSAGES.USE_PASCAL_CASE,
),
)
}
return violations
}
private checkApplicationLayer(fileName: string, filePath: string): NamingViolation[] {
const violations: NamingViolation[] = []
if (
fileName.endsWith(FILE_SUFFIXES.DTO) ||
fileName.endsWith(FILE_SUFFIXES.REQUEST) ||
fileName.endsWith(FILE_SUFFIXES.RESPONSE)
) {
if (!NAMING_PATTERNS.APPLICATION.DTO.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.APPLICATION,
filePath,
NAMING_PATTERNS.APPLICATION.DTO.description,
fileName,
NAMING_ERROR_MESSAGES.USE_DTO_SUFFIX,
),
)
}
return violations
}
if (fileName.endsWith(FILE_SUFFIXES.MAPPER)) {
if (!NAMING_PATTERNS.APPLICATION.MAPPER.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.APPLICATION,
filePath,
NAMING_PATTERNS.APPLICATION.MAPPER.description,
fileName,
),
)
}
return violations
}
const startsWithVerb = this.startsWithCommonVerb(fileName)
if (startsWithVerb) {
if (!NAMING_PATTERNS.APPLICATION.USE_CASE.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_VERB_NOUN,
LAYERS.APPLICATION,
filePath,
NAMING_PATTERNS.APPLICATION.USE_CASE.description,
fileName,
NAMING_ERROR_MESSAGES.USE_VERB_NOUN,
),
)
}
return violations
}
if (
filePath.includes(PATH_PATTERNS.USE_CASES) ||
filePath.includes(PATH_PATTERNS.USE_CASES_ALT)
) {
const hasVerb = this.startsWithCommonVerb(fileName)
if (!hasVerb) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_VERB_NOUN,
LAYERS.APPLICATION,
filePath,
NAMING_ERROR_MESSAGES.USE_CASE_START_VERB,
fileName,
`Start with a verb like: ${USE_CASE_VERBS.slice(0, 5).join(", ")}`,
),
)
}
}
return violations
}
private checkInfrastructureLayer(fileName: string, filePath: string): NamingViolation[] {
const violations: NamingViolation[] = []
if (fileName.endsWith(FILE_SUFFIXES.CONTROLLER)) {
if (!NAMING_PATTERNS.INFRASTRUCTURE.CONTROLLER.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.INFRASTRUCTURE,
filePath,
NAMING_PATTERNS.INFRASTRUCTURE.CONTROLLER.description,
fileName,
),
)
}
return violations
}
if (
fileName.endsWith(FILE_SUFFIXES.REPOSITORY) &&
!fileName.startsWith(PATTERN_WORDS.I_PREFIX)
) {
if (!NAMING_PATTERNS.INFRASTRUCTURE.REPOSITORY_IMPL.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.INFRASTRUCTURE,
filePath,
NAMING_PATTERNS.INFRASTRUCTURE.REPOSITORY_IMPL.description,
fileName,
),
)
}
return violations
}
if (fileName.endsWith(FILE_SUFFIXES.SERVICE) || fileName.endsWith(FILE_SUFFIXES.ADAPTER)) {
if (!NAMING_PATTERNS.INFRASTRUCTURE.SERVICE.pattern.test(fileName)) {
violations.push(
NamingViolation.create(
fileName,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.INFRASTRUCTURE,
filePath,
NAMING_PATTERNS.INFRASTRUCTURE.SERVICE.description,
fileName,
),
)
}
return violations
}
return violations
}
private startsWithCommonVerb(fileName: string): boolean {
const baseFileName = fileName.replace(/\.tsx?$/, "")
return USE_CASE_VERBS.some((verb) => baseFileName.startsWith(verb))
return this.parser.parseJavaScript(code)
}
}


@@ -63,6 +63,28 @@ export const NAMING_ERROR_MESSAGES = {
USE_DTO_SUFFIX: "Use *Dto, *Request, or *Response suffix (e.g., UserResponseDto.ts)",
USE_VERB_NOUN: "Use verb + noun in PascalCase (e.g., CreateUser.ts, UpdateProfile.ts)",
USE_CASE_START_VERB: "Use cases should start with a verb",
DOMAIN_SERVICE_PASCAL_CASE: "Domain services must be PascalCase ending with 'Service'",
DOMAIN_ENTITY_PASCAL_CASE: "Domain entities must be PascalCase nouns",
DTO_PASCAL_CASE: "DTOs must be PascalCase ending with 'Dto', 'Request', or 'Response'",
MAPPER_PASCAL_CASE: "Mappers must be PascalCase ending with 'Mapper'",
USE_CASE_VERB_NOUN: "Use cases must be PascalCase Verb+Noun (e.g., CreateUser)",
CONTROLLER_PASCAL_CASE: "Controllers must be PascalCase ending with 'Controller'",
REPOSITORY_IMPL_PASCAL_CASE:
"Repository implementations must be PascalCase ending with 'Repository'",
SERVICE_ADAPTER_PASCAL_CASE:
"Services/Adapters must be PascalCase ending with 'Service' or 'Adapter'",
FUNCTION_CAMEL_CASE: "Functions and methods must be camelCase",
USE_CAMEL_CASE_FUNCTION: "Use camelCase for function names (e.g., getUserById, createOrder)",
INTERFACE_PASCAL_CASE: "Interfaces must be PascalCase",
USE_PASCAL_CASE_INTERFACE: "Use PascalCase for interface names",
REPOSITORY_INTERFACE_I_PREFIX:
"Domain repository interfaces must start with 'I' (e.g., IUserRepository)",
REPOSITORY_INTERFACE_PATTERN: "Repository interfaces must be I + PascalCase + Repository",
CONSTANT_UPPER_SNAKE_CASE: "Exported constants must be UPPER_SNAKE_CASE",
USE_UPPER_SNAKE_CASE_CONSTANT:
"Use UPPER_SNAKE_CASE for constant names (e.g., MAX_RETRIES, API_URL)",
VARIABLE_CAMEL_CASE: "Variables must be camelCase",
USE_CAMEL_CASE_VARIABLE: "Use camelCase for variable names (e.g., userId, orderList)",
} as const
/**


@@ -1,6 +1,6 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { DETECTION_VALUES } from "../../shared/constants/rules"
import { DETECTION_VALUES, HARDCODE_TYPES } from "../../shared/constants/rules"
import { AstContextChecker } from "./AstContextChecker"
/**
@@ -83,7 +83,7 @@ export class AstBooleanAnalyzer {
return HardcodedValue.create(
value,
"MAGIC_BOOLEAN" as HardcodeType,
HARDCODE_TYPES.MAGIC_BOOLEAN as HardcodeType,
lineNumber,
column,
context,


@@ -1,6 +1,7 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { HARDCODE_TYPES } from "../../shared/constants/rules"
import { AST_STRING_TYPES } from "../../shared/constants/ast-node-types"
import { ALLOWED_NUMBERS } from "../constants/defaults"
import { AstContextChecker } from "./AstContextChecker"
@@ -71,7 +72,9 @@ export class AstConfigObjectAnalyzer {
}
if (node.type === "string") {
const stringFragment = node.children.find((c) => c.type === "string_fragment")
const stringFragment = node.children.find(
(c) => c.type === AST_STRING_TYPES.STRING_FRAGMENT,
)
return stringFragment !== undefined && stringFragment.text.length > 3
}


@@ -1,4 +1,10 @@
import Parser from "tree-sitter"
import {
AST_FIELD_NAMES,
AST_IDENTIFIER_TYPES,
AST_MODIFIER_TYPES,
AST_VARIABLE_TYPES,
} from "../../shared/constants/ast-node-types"
/**
* AST context checker for analyzing node contexts
@@ -29,22 +35,26 @@ export class AstContextChecker {
* Helper to check if export statement contains "as const"
*/
private checkExportedConstant(exportNode: Parser.SyntaxNode): boolean {
const declaration = exportNode.childForFieldName("declaration")
const declaration = exportNode.childForFieldName(AST_FIELD_NAMES.DECLARATION)
if (!declaration) {
return false
}
const declarator = this.findDescendant(declaration, "variable_declarator")
if (declaration.type !== "lexical_declaration") {
return false
}
const declarator = this.findDescendant(declaration, AST_VARIABLE_TYPES.VARIABLE_DECLARATOR)
if (!declarator) {
return false
}
const value = declarator.childForFieldName("value")
const value = declarator.childForFieldName(AST_FIELD_NAMES.VALUE)
if (value?.type !== "as_expression") {
return false
}
const asType = value.children.find((c) => c.type === "const")
const asType = value.children.find((c) => c.type === AST_MODIFIER_TYPES.CONST)
return asType !== undefined
}
@@ -83,12 +93,17 @@ export class AstContextChecker {
if (current.type === "call_expression") {
const functionNode =
current.childForFieldName("function") ||
current.children.find((c) => c.type === "identifier" || c.type === "import")
current.childForFieldName(AST_FIELD_NAMES.FUNCTION) ||
current.children.find(
(c) =>
c.type === AST_IDENTIFIER_TYPES.IDENTIFIER ||
c.type === AST_IDENTIFIER_TYPES.IMPORT,
)
if (
functionNode &&
(functionNode.text === "import" || functionNode.type === "import")
(functionNode.text === "import" ||
functionNode.type === AST_IDENTIFIER_TYPES.IMPORT)
) {
return true
}
@@ -229,7 +244,13 @@ export class AstContextChecker {
public getNodeContext(node: Parser.SyntaxNode): string {
let current: Parser.SyntaxNode | null = node
while (current && current.type !== "lexical_declaration" && current.type !== "pair") {
while (
current &&
current.type !== "lexical_declaration" &&
current.type !== "pair" &&
current.type !== "call_expression" &&
current.type !== "return_statement"
) {
current = current.parent
}


@@ -1,6 +1,7 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { HARDCODE_TYPES } from "../../shared/constants/rules"
import { TIMER_FUNCTIONS } from "../../shared/constants/ast-node-types"
import { ALLOWED_NUMBERS, DETECTION_KEYWORDS } from "../constants/defaults"
import { AstContextChecker } from "./AstContextChecker"
@@ -43,7 +44,12 @@ export class AstNumberAnalyzer {
return false
}
if (this.contextChecker.isInCallExpression(parent, ["setTimeout", "setInterval"])) {
if (
this.contextChecker.isInCallExpression(parent, [
TIMER_FUNCTIONS.SET_TIMEOUT,
TIMER_FUNCTIONS.SET_INTERVAL,
])
) {
return true
}


@@ -1,6 +1,7 @@
import Parser from "tree-sitter"
import { HardcodedValue, HardcodeType } from "../../domain/value-objects/HardcodedValue"
import { CONFIG_KEYWORDS, DETECTION_VALUES, HARDCODE_TYPES } from "../../shared/constants/rules"
import { AST_STRING_TYPES } from "../../shared/constants/ast-node-types"
import { AstContextChecker } from "./AstContextChecker"
import { ValuePatternMatcher } from "./ValuePatternMatcher"
@@ -29,7 +30,9 @@ export class AstStringAnalyzer {
* Analyzes a string node and returns a violation if it's a magic string
*/
public analyze(node: Parser.SyntaxNode, lines: string[]): HardcodedValue | null {
const stringFragment = node.children.find((child) => child.type === "string_fragment")
const stringFragment = node.children.find(
(child) => child.type === AST_STRING_TYPES.STRING_FRAGMENT,
)
if (!stringFragment) {
return null
}
@@ -108,6 +111,7 @@ export class AstStringAnalyzer {
"key",
...CONFIG_KEYWORDS.MESSAGES,
"label",
...CONFIG_KEYWORDS.TECHNICAL,
]
return configKeywords.some((keyword) => context.includes(keyword))


@@ -1,3 +1,5 @@
import { VALUE_PATTERN_TYPES } from "../../shared/constants/ast-node-types"
/**
* Pattern matcher for detecting specific value types
*
@@ -131,40 +133,40 @@ export class ValuePatternMatcher {
| "base64"
| null {
if (this.isEmail(value)) {
return "email"
return VALUE_PATTERN_TYPES.EMAIL
}
if (this.isJwt(value)) {
return "api_key"
return VALUE_PATTERN_TYPES.API_KEY
}
if (this.isApiKey(value)) {
return "api_key"
return VALUE_PATTERN_TYPES.API_KEY
}
if (this.isUrl(value)) {
return "url"
return VALUE_PATTERN_TYPES.URL
}
if (this.isIpAddress(value)) {
return "ip_address"
return VALUE_PATTERN_TYPES.IP_ADDRESS
}
if (this.isFilePath(value)) {
return "file_path"
return VALUE_PATTERN_TYPES.FILE_PATH
}
if (this.isDate(value)) {
return "date"
return VALUE_PATTERN_TYPES.DATE
}
if (this.isUuid(value)) {
return "uuid"
return VALUE_PATTERN_TYPES.UUID
}
if (this.isSemver(value)) {
return "version"
return VALUE_PATTERN_TYPES.VERSION
}
if (this.isHexColor(value)) {
return "color"
}
if (this.isMacAddress(value)) {
return "mac_address"
return VALUE_PATTERN_TYPES.MAC_ADDRESS
}
if (this.isBase64(value)) {
return "base64"
return VALUE_PATTERN_TYPES.BASE64
}
return null
}
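
As an aside, the order of checks in `detectType()` above matters: several of the patterns overlap (a JWT, for instance, is also valid base64url), so the more specific detectors must run first. A minimal standalone sketch of that precedence, with illustrative regexes that are not taken from the package:

```typescript
// Hypothetical mini-classifier showing why match order matters:
// a JWT also satisfies the base64 pattern, so the JWT check runs first.
const JWT_RE = /^[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+$/
const BASE64_RE = /^[A-Za-z0-9+/_-]+={0,2}$/

function classify(value: string): "api_key" | "base64" | null {
    if (JWT_RE.test(value)) {
        return "api_key" // JWTs are reported as api_key, as in detectType()
    }
    if (BASE64_RE.test(value)) {
        return "base64"
    }
    return null
}

console.log(classify("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.sig")) // "api_key"
console.log(classify("SGVsbG8gd29ybGQ=")) // "base64"
console.log(classify("hello world")) // null
```

Swapping the two checks would misreport every JWT as plain base64, which is the bug class the fixed ordering avoids.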


@@ -0,0 +1,230 @@
import Parser from "tree-sitter"
import { NamingViolation } from "../../../domain/value-objects/NamingViolation"
import { AST_CLASS_TYPES, AST_FIELD_NAMES } from "../../../shared/constants"
import { LAYERS, NAMING_VIOLATION_TYPES, USE_CASE_VERBS } from "../../../shared/constants/rules"
import {
FILE_SUFFIXES,
NAMING_ERROR_MESSAGES,
PATTERN_WORDS,
} from "../../constants/detectorPatterns"
/**
* AST-based analyzer for detecting class naming violations
*
* Analyzes class declaration nodes to ensure proper naming conventions:
* - Domain layer: PascalCase entities and services (*Service)
* - Application layer: PascalCase use cases (Verb+Noun), DTOs (*Dto/*Request/*Response)
* - Infrastructure layer: PascalCase controllers, repositories, services
*/
export class AstClassNameAnalyzer {
/**
* Analyzes a class declaration node
*/
public analyze(
node: Parser.SyntaxNode,
layer: string,
filePath: string,
_lines: string[],
): NamingViolation | null {
if (node.type !== AST_CLASS_TYPES.CLASS_DECLARATION) {
return null
}
const nameNode = node.childForFieldName(AST_FIELD_NAMES.NAME)
if (!nameNode) {
return null
}
const className = nameNode.text
const lineNumber = nameNode.startPosition.row + 1
switch (layer) {
case LAYERS.DOMAIN:
return this.checkDomainClass(className, filePath, lineNumber)
case LAYERS.APPLICATION:
return this.checkApplicationClass(className, filePath, lineNumber)
case LAYERS.INFRASTRUCTURE:
return this.checkInfrastructureClass(className, filePath, lineNumber)
default:
return null
}
}
/**
* Checks domain layer class naming
*/
private checkDomainClass(
className: string,
filePath: string,
lineNumber: number,
): NamingViolation | null {
if (className.endsWith(FILE_SUFFIXES.SERVICE.replace(".ts", ""))) {
if (!/^[A-Z][a-zA-Z0-9]*Service$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_CASE,
LAYERS.DOMAIN,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.DOMAIN_SERVICE_PASCAL_CASE,
className,
)
}
return null
}
if (!/^[A-Z][a-zA-Z0-9]*$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_CASE,
LAYERS.DOMAIN,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.DOMAIN_ENTITY_PASCAL_CASE,
className,
NAMING_ERROR_MESSAGES.USE_PASCAL_CASE,
)
}
return null
}
/**
* Checks application layer class naming
*/
private checkApplicationClass(
className: string,
filePath: string,
lineNumber: number,
): NamingViolation | null {
if (
className.endsWith("Dto") ||
className.endsWith("Request") ||
className.endsWith("Response")
) {
if (!/^[A-Z][a-zA-Z0-9]*(Dto|Request|Response)$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.APPLICATION,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.DTO_PASCAL_CASE,
className,
NAMING_ERROR_MESSAGES.USE_DTO_SUFFIX,
)
}
return null
}
if (className.endsWith("Mapper")) {
if (!/^[A-Z][a-zA-Z0-9]*Mapper$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.APPLICATION,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.MAPPER_PASCAL_CASE,
className,
)
}
return null
}
const startsWithVerb = this.startsWithCommonVerb(className)
const startsWithLowercaseVerb = this.startsWithLowercaseVerb(className)
if (startsWithVerb) {
if (!/^[A-Z][a-z]+[A-Z][a-zA-Z0-9]*$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_VERB_NOUN,
LAYERS.APPLICATION,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.USE_CASE_VERB_NOUN,
className,
NAMING_ERROR_MESSAGES.USE_VERB_NOUN,
)
}
} else if (startsWithLowercaseVerb) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_VERB_NOUN,
LAYERS.APPLICATION,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.USE_CASE_VERB_NOUN,
className,
NAMING_ERROR_MESSAGES.USE_VERB_NOUN,
)
}
return null
}
/**
* Checks infrastructure layer class naming
*/
private checkInfrastructureClass(
className: string,
filePath: string,
lineNumber: number,
): NamingViolation | null {
if (className.endsWith("Controller")) {
if (!/^[A-Z][a-zA-Z0-9]*Controller$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.INFRASTRUCTURE,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.CONTROLLER_PASCAL_CASE,
className,
)
}
return null
}
if (
className.endsWith(PATTERN_WORDS.REPOSITORY) &&
!className.startsWith(PATTERN_WORDS.I_PREFIX)
) {
if (!/^[A-Z][a-zA-Z0-9]*Repository$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.INFRASTRUCTURE,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.REPOSITORY_IMPL_PASCAL_CASE,
className,
)
}
return null
}
if (className.endsWith("Service") || className.endsWith("Adapter")) {
if (!/^[A-Z][a-zA-Z0-9]*(Service|Adapter)$/.test(className)) {
return NamingViolation.create(
className,
NAMING_VIOLATION_TYPES.WRONG_SUFFIX,
LAYERS.INFRASTRUCTURE,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.SERVICE_ADAPTER_PASCAL_CASE,
className,
)
}
return null
}
return null
}
/**
* Checks if class name starts with a common use case verb
*/
private startsWithCommonVerb(className: string): boolean {
return USE_CASE_VERBS.some((verb) => className.startsWith(verb))
}
/**
* Checks if class name starts with a lowercase verb (camelCase use case)
*/
private startsWithLowercaseVerb(className: string): boolean {
const lowercaseVerbs = USE_CASE_VERBS.map((verb) => verb.toLowerCase())
return lowercaseVerbs.some((verb) => className.startsWith(verb))
}
}


@@ -0,0 +1,65 @@
import Parser from "tree-sitter"
import { NamingViolation } from "../../../domain/value-objects/NamingViolation"
import { AST_FIELD_NAMES, AST_FUNCTION_TYPES, CLASS_KEYWORDS } from "../../../shared/constants"
import { NAMING_VIOLATION_TYPES } from "../../../shared/constants/rules"
import { NAMING_ERROR_MESSAGES } from "../../constants/detectorPatterns"
/**
* AST-based analyzer for detecting function and method naming violations
*
* Analyzes function declaration, method definition, and arrow function nodes
* to ensure proper naming conventions:
* - Functions and methods should be camelCase
* - Private methods with underscore prefix are allowed
*/
export class AstFunctionNameAnalyzer {
/**
* Analyzes a function or method declaration node
*/
public analyze(
node: Parser.SyntaxNode,
layer: string,
filePath: string,
_lines: string[],
): NamingViolation | null {
const functionNodeTypes = [
AST_FUNCTION_TYPES.FUNCTION_DECLARATION,
AST_FUNCTION_TYPES.METHOD_DEFINITION,
AST_FUNCTION_TYPES.FUNCTION_SIGNATURE,
] as const
if (!(functionNodeTypes as readonly string[]).includes(node.type)) {
return null
}
const nameNode = node.childForFieldName(AST_FIELD_NAMES.NAME)
if (!nameNode) {
return null
}
const functionName = nameNode.text
const lineNumber = nameNode.startPosition.row + 1
if (functionName.startsWith("_")) {
return null
}
if (functionName === CLASS_KEYWORDS.CONSTRUCTOR) {
return null
}
if (!/^[a-z][a-zA-Z0-9]*$/.test(functionName)) {
return NamingViolation.create(
functionName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
layer,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.FUNCTION_CAMEL_CASE,
functionName,
NAMING_ERROR_MESSAGES.USE_CAMEL_CASE_FUNCTION,
)
}
return null
}
}


@@ -0,0 +1,90 @@
import Parser from "tree-sitter"
import { NamingViolation } from "../../../domain/value-objects/NamingViolation"
import { AST_CLASS_TYPES, AST_FIELD_NAMES } from "../../../shared/constants"
import { LAYERS, NAMING_VIOLATION_TYPES } from "../../../shared/constants/rules"
import { NAMING_ERROR_MESSAGES, PATTERN_WORDS } from "../../constants/detectorPatterns"
/**
* AST-based analyzer for detecting interface naming violations
*
* Analyzes interface declaration nodes to ensure proper naming conventions:
* - Domain layer: Repository interfaces must start with 'I' (e.g., IUserRepository)
* - All layers: Interfaces should be PascalCase
*/
export class AstInterfaceNameAnalyzer {
/**
* Analyzes an interface declaration node
*/
public analyze(
node: Parser.SyntaxNode,
layer: string,
filePath: string,
_lines: string[],
): NamingViolation | null {
if (node.type !== AST_CLASS_TYPES.INTERFACE_DECLARATION) {
return null
}
const nameNode = node.childForFieldName(AST_FIELD_NAMES.NAME)
if (!nameNode) {
return null
}
const interfaceName = nameNode.text
const lineNumber = nameNode.startPosition.row + 1
if (!/^[A-Z][a-zA-Z0-9]*$/.test(interfaceName)) {
return NamingViolation.create(
interfaceName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
layer,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.INTERFACE_PASCAL_CASE,
interfaceName,
NAMING_ERROR_MESSAGES.USE_PASCAL_CASE_INTERFACE,
)
}
if (layer === LAYERS.DOMAIN) {
return this.checkDomainInterface(interfaceName, filePath, lineNumber)
}
return null
}
/**
* Checks domain layer interface naming
*/
private checkDomainInterface(
interfaceName: string,
filePath: string,
lineNumber: number,
): NamingViolation | null {
if (interfaceName.endsWith(PATTERN_WORDS.REPOSITORY)) {
if (!interfaceName.startsWith(PATTERN_WORDS.I_PREFIX)) {
return NamingViolation.create(
interfaceName,
NAMING_VIOLATION_TYPES.WRONG_PREFIX,
LAYERS.DOMAIN,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.REPOSITORY_INTERFACE_I_PREFIX,
interfaceName,
`Rename to I${interfaceName}`,
)
}
if (!/^I[A-Z][a-zA-Z0-9]*Repository$/.test(interfaceName)) {
return NamingViolation.create(
interfaceName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
LAYERS.DOMAIN,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.REPOSITORY_INTERFACE_PATTERN,
interfaceName,
)
}
}
return null
}
}


@@ -0,0 +1,92 @@
import Parser from "tree-sitter"
import { NamingViolation } from "../../../domain/value-objects/NamingViolation"
import { AST_CLASS_TYPES, AST_FUNCTION_TYPES, AST_VARIABLE_TYPES } from "../../../shared/constants"
import { AstClassNameAnalyzer } from "./AstClassNameAnalyzer"
import { AstFunctionNameAnalyzer } from "./AstFunctionNameAnalyzer"
import { AstInterfaceNameAnalyzer } from "./AstInterfaceNameAnalyzer"
import { AstVariableNameAnalyzer } from "./AstVariableNameAnalyzer"
/**
* AST tree traverser for detecting naming convention violations
*
* Walks through the Abstract Syntax Tree and uses analyzers
* to detect naming violations in classes, interfaces, functions, and variables.
*/
export class AstNamingTraverser {
constructor(
private readonly classAnalyzer: AstClassNameAnalyzer,
private readonly interfaceAnalyzer: AstInterfaceNameAnalyzer,
private readonly functionAnalyzer: AstFunctionNameAnalyzer,
private readonly variableAnalyzer: AstVariableNameAnalyzer,
) {}
/**
* Traverses the AST tree and collects naming violations
*/
public traverse(
tree: Parser.Tree,
sourceCode: string,
layer: string,
filePath: string,
): NamingViolation[] {
const results: NamingViolation[] = []
const lines = sourceCode.split("\n")
const cursor = tree.walk()
this.visit(cursor, lines, layer, filePath, results)
return results
}
/**
* Recursively visits AST nodes
*/
private visit(
cursor: Parser.TreeCursor,
lines: string[],
layer: string,
filePath: string,
results: NamingViolation[],
): void {
const node = cursor.currentNode
if (node.type === AST_CLASS_TYPES.CLASS_DECLARATION) {
const violation = this.classAnalyzer.analyze(node, layer, filePath, lines)
if (violation) {
results.push(violation)
}
} else if (node.type === AST_CLASS_TYPES.INTERFACE_DECLARATION) {
const violation = this.interfaceAnalyzer.analyze(node, layer, filePath, lines)
if (violation) {
results.push(violation)
}
} else if (
node.type === AST_FUNCTION_TYPES.FUNCTION_DECLARATION ||
node.type === AST_FUNCTION_TYPES.METHOD_DEFINITION ||
node.type === AST_FUNCTION_TYPES.FUNCTION_SIGNATURE
) {
const violation = this.functionAnalyzer.analyze(node, layer, filePath, lines)
if (violation) {
results.push(violation)
}
} else if (
node.type === AST_VARIABLE_TYPES.VARIABLE_DECLARATOR ||
node.type === AST_VARIABLE_TYPES.REQUIRED_PARAMETER ||
node.type === AST_VARIABLE_TYPES.OPTIONAL_PARAMETER ||
node.type === AST_VARIABLE_TYPES.PUBLIC_FIELD_DEFINITION ||
node.type === AST_VARIABLE_TYPES.PROPERTY_SIGNATURE
) {
const violation = this.variableAnalyzer.analyze(node, layer, filePath, lines)
if (violation) {
results.push(violation)
}
}
if (cursor.gotoFirstChild()) {
do {
this.visit(cursor, lines, layer, filePath, results)
} while (cursor.gotoNextSibling())
cursor.gotoParent()
}
}
}
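
The `visit()` loop above is a standard cursor-driven depth-first traversal (`gotoFirstChild` / `gotoNextSibling` / `gotoParent`). The same shape, sketched on a plain object tree instead of a tree-sitter `TreeCursor` (all names here are illustrative, not from the package):

```typescript
// Minimal sketch of the depth-first walk used by AstNamingTraverser.visit(),
// using a plain object tree in place of a tree-sitter cursor.
interface FakeNode {
    type: string
    children: FakeNode[]
}

function collectTypes(node: FakeNode, wanted: Set<string>, results: string[]): void {
    if (wanted.has(node.type)) {
        results.push(node.type) // analyzer dispatch happens here in the real code
    }
    // Equivalent of the gotoFirstChild / gotoNextSibling / gotoParent loop.
    for (const child of node.children) {
        collectTypes(child, wanted, results)
    }
}

const tree: FakeNode = {
    type: "program",
    children: [
        { type: "class_declaration", children: [] },
        { type: "function_declaration", children: [] },
    ],
}
const found: string[] = []
collectTypes(tree, new Set(["class_declaration", "function_declaration"]), found)
console.log(found) // ["class_declaration", "function_declaration"]
```

The cursor-based form in the real traverser avoids allocating child arrays and reuses a single cursor, but the visit order is the same.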


@@ -0,0 +1,159 @@
import Parser from "tree-sitter"
import { NamingViolation } from "../../../domain/value-objects/NamingViolation"
import {
AST_FIELD_NAMES,
AST_FIELD_TYPES,
AST_MODIFIER_TYPES,
AST_PATTERN_TYPES,
AST_STATEMENT_TYPES,
AST_VARIABLE_TYPES,
} from "../../../shared/constants"
import { NAMING_VIOLATION_TYPES } from "../../../shared/constants/rules"
import { NAMING_ERROR_MESSAGES } from "../../constants/detectorPatterns"
/**
* AST-based analyzer for detecting variable naming violations
*
* Analyzes variable declarations to ensure proper naming conventions:
* - Regular variables: camelCase
* - Constants (exported UPPER_CASE): UPPER_SNAKE_CASE
* - Class properties: camelCase
* - Private properties with underscore prefix are allowed
*/
export class AstVariableNameAnalyzer {
/**
* Analyzes a variable declaration node
*/
public analyze(
node: Parser.SyntaxNode,
layer: string,
filePath: string,
_lines: string[],
): NamingViolation | null {
const variableNodeTypes = [
AST_VARIABLE_TYPES.VARIABLE_DECLARATOR,
AST_VARIABLE_TYPES.REQUIRED_PARAMETER,
AST_VARIABLE_TYPES.OPTIONAL_PARAMETER,
AST_VARIABLE_TYPES.PUBLIC_FIELD_DEFINITION,
AST_VARIABLE_TYPES.PROPERTY_SIGNATURE,
] as const
if (!(variableNodeTypes as readonly string[]).includes(node.type)) {
return null
}
const nameNode = node.childForFieldName(AST_FIELD_NAMES.NAME)
if (!nameNode) {
return null
}
if (this.isDestructuringPattern(nameNode)) {
return null
}
const variableName = nameNode.text
const lineNumber = nameNode.startPosition.row + 1
if (variableName.startsWith("_")) {
return null
}
const isConstant = this.isConstantVariable(node)
if (isConstant) {
if (!/^[A-Z][A-Z0-9_]*$/.test(variableName)) {
return NamingViolation.create(
variableName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
layer,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.CONSTANT_UPPER_SNAKE_CASE,
variableName,
NAMING_ERROR_MESSAGES.USE_UPPER_SNAKE_CASE_CONSTANT,
)
}
} else {
if (!/^[a-z][a-zA-Z0-9]*$/.test(variableName)) {
return NamingViolation.create(
variableName,
NAMING_VIOLATION_TYPES.WRONG_CASE,
layer,
`${filePath}:${String(lineNumber)}`,
NAMING_ERROR_MESSAGES.VARIABLE_CAMEL_CASE,
variableName,
NAMING_ERROR_MESSAGES.USE_CAMEL_CASE_VARIABLE,
)
}
}
return null
}
/**
* Checks if node is a destructuring pattern (object or array)
*/
private isDestructuringPattern(node: Parser.SyntaxNode): boolean {
return (
node.type === AST_PATTERN_TYPES.OBJECT_PATTERN ||
node.type === AST_PATTERN_TYPES.ARRAY_PATTERN
)
}
/**
* Checks if a variable is a constant (exported UPPER_CASE)
*/
private isConstantVariable(node: Parser.SyntaxNode): boolean {
const variableName = node.childForFieldName(AST_FIELD_NAMES.NAME)?.text
if (!variableName || !/^[A-Z]/.test(variableName)) {
return false
}
if (
node.type === AST_VARIABLE_TYPES.PUBLIC_FIELD_DEFINITION ||
node.type === AST_FIELD_TYPES.FIELD_DEFINITION
) {
return this.hasConstModifiers(node)
}
let current: Parser.SyntaxNode | null = node.parent
while (current) {
if (current.type === AST_STATEMENT_TYPES.LEXICAL_DECLARATION) {
const firstChild = current.child(0)
if (firstChild?.type === AST_MODIFIER_TYPES.CONST) {
return true
}
}
if (
current.type === AST_VARIABLE_TYPES.PUBLIC_FIELD_DEFINITION ||
current.type === AST_FIELD_TYPES.FIELD_DEFINITION
) {
return this.hasConstModifiers(current)
}
current = current.parent
}
return false
}
/**
* Checks if field has readonly or static modifiers (indicating a constant)
*/
private hasConstModifiers(fieldNode: Parser.SyntaxNode): boolean {
for (let i = 0; i < fieldNode.childCount; i++) {
const child = fieldNode.child(i)
const childText = child?.text
if (
child?.type === AST_MODIFIER_TYPES.READONLY ||
child?.type === AST_MODIFIER_TYPES.STATIC ||
childText === AST_MODIFIER_TYPES.READONLY ||
childText === AST_MODIFIER_TYPES.STATIC
) {
return true
}
}
return false
}
}
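
The two naming regexes used above can be sanity-checked in isolation (a sketch; the sample identifiers are illustrative):

```typescript
// The constant and variable naming checks from AstVariableNameAnalyzer, isolated.
const UPPER_SNAKE_CASE = /^[A-Z][A-Z0-9_]*$/ // constants (exported UPPER_CASE)
const CAMEL_CASE = /^[a-z][a-zA-Z0-9]*$/     // regular variables and parameters

console.log(UPPER_SNAKE_CASE.test("MAX_RETRIES")) // true
console.log(CAMEL_CASE.test("retryCount"))        // true
console.log(CAMEL_CASE.test("retry_count"))       // false -> reported as wrong_case
```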


@@ -0,0 +1,139 @@
/**
* Abstract Syntax Tree (AST) node type constants
*
* These constants represent tree-sitter AST node types used for code analysis.
* Using constants instead of magic strings improves maintainability and prevents typos.
*
* @see https://tree-sitter.github.io/tree-sitter/
*/
/**
* Class and interface declaration node types
*/
export const AST_CLASS_TYPES = {
CLASS_DECLARATION: "class_declaration",
INTERFACE_DECLARATION: "interface_declaration",
} as const
/**
* Function and method node types
*/
export const AST_FUNCTION_TYPES = {
FUNCTION_DECLARATION: "function_declaration",
METHOD_DEFINITION: "method_definition",
FUNCTION_SIGNATURE: "function_signature",
} as const
/**
* Variable and parameter node types
*/
export const AST_VARIABLE_TYPES = {
VARIABLE_DECLARATOR: "variable_declarator",
REQUIRED_PARAMETER: "required_parameter",
OPTIONAL_PARAMETER: "optional_parameter",
PUBLIC_FIELD_DEFINITION: "public_field_definition",
PROPERTY_SIGNATURE: "property_signature",
} as const
/**
* Type system node types
*/
export const AST_TYPE_TYPES = {
TYPE_ALIAS_DECLARATION: "type_alias_declaration",
UNION_TYPE: "union_type",
LITERAL_TYPE: "literal_type",
TYPE_ANNOTATION: "type_annotation",
} as const
/**
* Statement node types
*/
export const AST_STATEMENT_TYPES = {
EXPORT_STATEMENT: "export_statement",
IMPORT_STATEMENT: "import_statement",
LEXICAL_DECLARATION: "lexical_declaration",
} as const
/**
* Expression node types
*/
export const AST_EXPRESSION_TYPES = {
CALL_EXPRESSION: "call_expression",
AS_EXPRESSION: "as_expression",
} as const
/**
* Field and property node types
*/
export const AST_FIELD_TYPES = {
FIELD_DEFINITION: "field_definition",
} as const
/**
* Pattern node types
*/
export const AST_PATTERN_TYPES = {
OBJECT_PATTERN: "object_pattern",
ARRAY_PATTERN: "array_pattern",
} as const
/**
* Modifier node types
*/
export const AST_MODIFIER_TYPES = {
READONLY: "readonly",
STATIC: "static",
CONST: "const",
} as const
/**
* Special identifier node types
*/
export const AST_IDENTIFIER_TYPES = {
IDENTIFIER: "identifier",
TYPE_IDENTIFIER: "type_identifier",
PROPERTY_IDENTIFIER: "property_identifier",
IMPORT: "import",
} as const
/**
* Node field names used with childForFieldName()
*/
export const AST_FIELD_NAMES = {
NAME: "name",
DECLARATION: "declaration",
VALUE: "value",
FUNCTION: "function",
} as const
/**
* String fragment node type
*/
export const AST_STRING_TYPES = {
STRING_FRAGMENT: "string_fragment",
} as const
/**
* Common JavaScript timer functions
*/
export const TIMER_FUNCTIONS = {
SET_TIMEOUT: "setTimeout",
SET_INTERVAL: "setInterval",
} as const
/**
* Value pattern types for pattern matching
*/
export const VALUE_PATTERN_TYPES = {
EMAIL: "email",
API_KEY: "api_key",
URL: "url",
IP_ADDRESS: "ip_address",
FILE_PATH: "file_path",
DATE: "date",
UUID: "uuid",
VERSION: "version",
JWT: "jwt",
MAC_ADDRESS: "mac_address",
BASE64: "base64",
} as const
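
These `as const` objects give each value a literal type, which is why call sites in the analyzers widen them with a `readonly string[]` cast before calling `.includes(node.type)`. A small sketch of that pattern (the type-guard helper is illustrative, not part of the package):

```typescript
// Why the `as const` + readonly-string-cast pattern appears at the call sites:
// with `as const`, Object.values() yields a union of string literals, so matching
// an arbitrary node.type requires widening to readonly string[].
const AST_FUNCTION_TYPES = {
    FUNCTION_DECLARATION: "function_declaration",
    METHOD_DEFINITION: "method_definition",
    FUNCTION_SIGNATURE: "function_signature",
} as const

type AstFunctionType = (typeof AST_FUNCTION_TYPES)[keyof typeof AST_FUNCTION_TYPES]

function isFunctionNodeType(nodeType: string): nodeType is AstFunctionType {
    return (Object.values(AST_FUNCTION_TYPES) as readonly string[]).includes(nodeType)
}

console.log(isFunctionNodeType("method_definition"))  // true
console.log(isFunctionNodeType("class_declaration"))  // false
```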


@@ -119,3 +119,4 @@ export const VIOLATION_SEVERITY_MAP = {
} as const
export * from "./rules"
export * from "./ast-node-types"


@@ -459,7 +459,27 @@ export const CONFIG_KEYWORDS = {
NETWORK: ["endpoint", "host", "domain", "path", "route"],
DATABASE: ["connection", "database"],
SECURITY: ["config", "secret", "token", "password", "credential"],
MESSAGES: ["message", "error", "warning", "text"],
MESSAGES: [
"message",
"error",
"warning",
"text",
"description",
"suggestion",
"violation",
"expected",
"actual",
],
TECHNICAL: [
"type",
"node",
"declaration",
"definition",
"signature",
"pattern",
"suffix",
"prefix",
],
} as const
/**


@@ -1,19 +1,26 @@
{
"compilerOptions": {
"outDir": "./dist",
"rootDir": "./src",
"target": "ES2023",
"module": "CommonJS",
"moduleResolution": "node",
"declaration": true,
"declarationMap": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"skipLibCheck": true,
"sourceMap": true,
"resolveJsonModule": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "**/*.spec.ts", "**/*.test.ts"]
"compilerOptions": {
"outDir": "./dist",
"rootDir": "./src",
"target": "ES2023",
"module": "CommonJS",
"moduleResolution": "node",
"declaration": true,
"declarationMap": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"skipLibCheck": true,
"sourceMap": true,
"resolveJsonModule": true
},
"include": [
"src/**/*"
],
"exclude": [
"node_modules",
"dist",
"**/*.spec.ts",
"**/*.test.ts"
]
}

packages/ipuaro/.gitignore

@@ -0,0 +1,13 @@
# Build output
dist/
*.tsbuildinfo
# Dependencies
node_modules/
# Test coverage
coverage/
# Logs
*.log
npm-debug.log*


@@ -0,0 +1,38 @@
# Source files (only publish dist/)
src/
*.ts
!*.d.ts
# Build artifacts
tsconfig.json
tsconfig.*.json
tsconfig.tsbuildinfo
*.tsbuildinfo
# Tests
**/*.spec.ts
**/*.test.ts
__tests__/
coverage/
# Development
node_modules/
.env
.env.*
# IDE
.vscode/
.idea/
*.swp
*.swo
# Git
.git/
.gitignore
# Other
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.DS_Store


@@ -0,0 +1,79 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.2.0] - 2025-11-30
### Added
- **Redis Storage (0.2.x milestone)**
- RedisClient: connection wrapper with AOF persistence configuration
- RedisStorage: full IStorage implementation with Redis hashes
- Redis key schema: project files, AST, meta, indexes, config
- Session keys schema: data, undo stack, sessions list
- `generateProjectName()` utility for consistent project naming
- **Infrastructure Layer**
- `src/infrastructure/storage/` module
- Exports via `src/infrastructure/index.ts`
- **Testing**
- 68 new unit tests for Redis module
- 159 total tests
- 99% code coverage maintained
### Changed
- Updated ESLint config: `@typescript-eslint/no-unnecessary-type-parameters` set to warn
### Notes
Redis Storage milestone complete. Next: 0.3.0 - Indexer (FileScanner, AST Parser, Watchdog)
## [0.1.0] - 2025-11-29
### Added
- **Project Setup**
- package.json with all dependencies (ink, ioredis, tree-sitter, ollama, etc.)
- tsconfig.json for ESM + React JSX
- tsup.config.ts for bundling
- vitest.config.ts with 80% coverage threshold
- CLI entry point (bin/ipuaro.js)
- **Domain Layer**
- Entities: Session, Project
- Value Objects: FileData, FileAST, FileMeta, ChatMessage, ToolCall, ToolResult, UndoEntry
- Service Interfaces: IStorage, ILLMClient, ITool, IIndexer
- Constants: supported extensions, ignore patterns, context limits
- **Application Layer**
- IToolRegistry interface
- Placeholder structure for use cases and DTOs
- **Shared Module**
- Config schema with Zod validation
- Config loader (default.json + .ipuaro.json)
- IpuaroError class with typed errors
- Utility functions: md5 hash, token estimation
- Result type for error handling
- **CLI**
- Basic commands: start, init, index (placeholders)
- Commander.js integration
- **Testing**
- 91 unit tests
- 100% code coverage
### Notes
This is the foundation release. The following features are planned for upcoming versions:
- 0.2.0: Redis Storage
- 0.3.0: Indexer
- 0.4.0: LLM Integration
- 0.5.0+: Tools implementation
- 0.10.0+: TUI and session management

packages/ipuaro/LICENSE

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Fozilbek Samiyev
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

packages/ipuaro/README.md

@@ -0,0 +1,284 @@
# @samiyev/ipuaro 🎩
**Local AI Agent for Codebase Operations**
"Infinite" context feeling through lazy loading - work with your entire codebase using local LLM.
[![npm version](https://badge.fury.io/js/@samiyev%2Fipuaro.svg)](https://www.npmjs.com/package/@samiyev/ipuaro)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
> **Status:** 🚧 Early Development (v0.1.0 Foundation)
>
> Core infrastructure is ready. Active development in progress.
## Vision
Work with codebases of any size using local AI:
- 📂 **Lazy Loading**: Load code on-demand, not all at once
- 🧠 **Smart Context**: AST-based understanding of your code structure
- 🔒 **100% Local**: Your code never leaves your machine
- ⚡ **Fast**: Redis persistence + tree-sitter parsing
## Planned Features
### 18 LLM Tools
| Category | Tools | Status |
|----------|-------|--------|
| **Read** | `get_lines`, `get_function`, `get_class`, `get_structure` | 🔜 v0.5.0 |
| **Edit** | `edit_lines`, `create_file`, `delete_file` | 🔜 v0.6.0 |
| **Search** | `find_references`, `find_definition` | 🔜 v0.7.0 |
| **Analysis** | `get_dependencies`, `get_dependents`, `get_complexity`, `get_todos` | 🔜 v0.8.0 |
| **Git** | `git_status`, `git_diff`, `git_commit` | 🔜 v0.9.0 |
| **Run** | `run_command`, `run_tests` | 🔜 v0.9.0 |
### Terminal UI
```
┌─ ipuaro ──────────────────────────────────────────────────┐
│ [ctx: 12%] [project: myapp] [main] [47m] ✓ Ready │
├───────────────────────────────────────────────────────────┤
│ You: How does the authentication flow work? │
│ │
│ Assistant: Let me analyze the auth module... │
│ [get_structure src/auth/] │
│ [get_function src/auth/service.ts login] │
│ │
│ The authentication flow works as follows: │
│ 1. User calls POST /auth/login │
│ 2. AuthService.login() validates credentials... │
│ │
│ ⏱ 3.2s │ 1,247 tokens │ 2 tool calls │
├───────────────────────────────────────────────────────────┤
│ > _ │
└───────────────────────────────────────────────────────────┘
```
### Key Capabilities
🔍 **Smart Code Understanding**
- tree-sitter AST parsing (TypeScript, JavaScript)
- Symbol index for fast lookups
- Dependency graph analysis
💾 **Persistent Sessions**
- Redis storage with AOF persistence
- Session history across restarts
- Undo stack for file changes
🛡️ **Security**
- Command blacklist (dangerous operations blocked)
- Command whitelist (safe commands auto-approved)
- Path validation (no access outside project)
## Installation
```bash
npm install @samiyev/ipuaro
# or
pnpm add @samiyev/ipuaro
```
## Requirements
- **Node.js** >= 20.0.0
- **Redis** (for persistence)
- **Ollama** (for local LLM inference)
### Setup Ollama
```bash
# Install Ollama (macOS)
brew install ollama
# Start Ollama
ollama serve
# Pull recommended model
ollama pull qwen2.5-coder:7b-instruct
```
### Setup Redis
```bash
# Install Redis (macOS)
brew install redis
# Start Redis with persistence
redis-server --appendonly yes
```
## Usage
```bash
# Start ipuaro in current directory
ipuaro
# Start in specific directory
ipuaro /path/to/project
# With custom model
ipuaro --model qwen2.5-coder:32b-instruct
# With auto-apply mode (skip edit confirmations)
ipuaro --auto-apply
```
## Commands
| Command | Description |
|---------|-------------|
| `ipuaro [path]` | Start TUI in directory |
| `ipuaro init` | Create `.ipuaro.json` config |
| `ipuaro index` | Index project without TUI |
## Configuration
Create `.ipuaro.json` in your project root:
```json
{
"redis": {
"host": "localhost",
"port": 6379
},
"llm": {
"model": "qwen2.5-coder:7b-instruct",
"temperature": 0.1
},
"project": {
"ignorePatterns": ["node_modules", "dist", ".git"]
},
"edit": {
"autoApply": false
}
}
```
## Architecture
Clean Architecture with clear separation:
```
@samiyev/ipuaro/
├── domain/ # Business logic (no dependencies)
│ ├── entities/ # Session, Project
│ ├── value-objects/ # FileData, FileAST, ChatMessage, etc.
│ └── services/ # IStorage, ILLMClient, ITool, IIndexer
├── application/ # Use cases & orchestration
│ ├── use-cases/ # StartSession, HandleMessage, etc.
│ └── interfaces/ # IToolRegistry
├── infrastructure/ # External implementations
│ ├── storage/ # Redis client & storage
│ ├── llm/ # Ollama client & prompts
│ ├── indexer/ # File scanner, AST parser
│ └── tools/ # 18 tool implementations
├── tui/ # Terminal UI (Ink/React)
│ └── components/ # StatusBar, Chat, Input, etc.
├── cli/ # CLI entry point
└── shared/ # Config, errors, utils
```
## Development Status
### ✅ Completed (v0.1.0)
- [x] Project setup (tsup, vitest, ESM)
- [x] Domain entities (Session, Project)
- [x] Value objects (FileData, FileAST, ChatMessage, etc.)
- [x] Service interfaces (IStorage, ILLMClient, ITool, IIndexer)
- [x] Shared module (Config, Errors, Utils)
- [x] CLI placeholder commands
- [x] 91 unit tests, 100% coverage
### 🔜 Next Up
- [ ] **v0.2.0** - Redis Storage
- [ ] **v0.3.0** - Indexer (file scanning, AST parsing)
- [ ] **v0.4.0** - LLM Integration (Ollama)
- [ ] **v0.5.0-0.9.0** - Tools implementation
- [ ] **v0.10.0** - Session management
- [ ] **v0.11.0** - TUI
See [ROADMAP.md](./ROADMAP.md) for detailed development plan.
## API (Coming Soon)
```typescript
import { startSession, handleMessage } from "@samiyev/ipuaro"
// Start a session
const session = await startSession({
projectPath: "./my-project",
model: "qwen2.5-coder:7b-instruct"
})
// Send a message
const response = await handleMessage(session, "Explain the auth flow")
console.log(response.content)
console.log(`Tokens: ${response.stats.tokens}`)
console.log(`Tool calls: ${response.stats.toolCalls}`)
```
## How It Works
### Lazy Loading Context
Instead of loading the entire codebase into the context window, ipuaro loads only what is needed:
```
Traditional approach:
├── Load all files → 500k tokens → ❌ Exceeds context window
ipuaro approach:
├── Load project structure → 2k tokens
├── Load AST metadata → 10k tokens
├── On demand: get_function("auth.ts", "login") → 200 tokens
├── Total: ~12k tokens → ✅ Fits in context
```
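As a back-of-the-envelope check, the lazy budget fits comfortably inside the 128k context window. The numbers below are taken from the diagram above and are illustrative estimates, not measurements:

```typescript
// Token budgets mirroring the diagram above (illustrative only).
const CONTEXT_WINDOW = 128_000

const traditionalTokens = 500_000 // load all files up front
const lazyTokens = 2_000 + 10_000 + 200 // structure + AST metadata + one on-demand fetch

console.log(traditionalTokens > CONTEXT_WINDOW) // true: blows the window
console.log(lazyTokens <= CONTEXT_WINDOW) // true: ~12k fits with room to spare
```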
### Tool-Based Code Access
```
User: "How does user creation work?"
ipuaro:
1. [get_structure src/] → sees user/ folder
2. [get_function src/user/service.ts createUser] → gets function code
3. [find_references createUser] → finds all usages
4. Synthesizes answer with specific code context
```
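The loop above can be sketched in a few lines. The `chat` and `execute` callbacks here are hypothetical stand-ins for the `ILLMClient` and tool-registry interfaces; the real loop will also track tokens and request edit confirmations:

```typescript
interface ToolCall {
    name: string
    params: Record<string, unknown>
}

interface LLMTurn {
    content: string
    toolCalls: ToolCall[]
}

// Keep calling the model, feeding tool output back into the history,
// until it answers with no further tool calls.
async function answer(
    question: string,
    chat: (history: string[]) => Promise<LLMTurn>,
    execute: (call: ToolCall) => Promise<string>,
): Promise<string> {
    const history: string[] = [question]
    for (;;) {
        const turn = await chat(history)
        if (turn.toolCalls.length === 0) {
            return turn.content // final synthesized answer
        }
        for (const call of turn.toolCalls) {
            history.push(await execute(call))
        }
    }
}
```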
## Contributing
Contributions welcome! This project is in early development.
```bash
# Clone
git clone https://github.com/samiyev/puaros.git
cd puaros/packages/ipuaro
# Install
pnpm install
# Build
pnpm build
# Test
pnpm test:run
# Coverage
pnpm test:coverage
```
## License
MIT © Fozilbek Samiyev
## Links
- [GitHub Repository](https://github.com/samiyev/puaros/tree/main/packages/ipuaro)
- [Issues](https://github.com/samiyev/puaros/issues)
- [Changelog](./CHANGELOG.md)
- [Roadmap](./ROADMAP.md)

packages/ipuaro/ROADMAP.md (new file, 1331 lines)

File diff suppressed because it is too large

packages/ipuaro/TODO.md (new file, 54 lines)

@@ -0,0 +1,54 @@
# ipuaro TODO
## In Progress
### Version 0.2.0 - Redis Storage
- [ ] RedisClient with AOF config
- [ ] Redis schema implementation
- [ ] RedisStorage class
## Planned
### Version 0.3.0 - Indexer
- [ ] FileScanner with gitignore support
- [ ] ASTParser with tree-sitter
- [ ] MetaAnalyzer for complexity
- [ ] IndexBuilder for symbols
- [ ] Watchdog for file changes
### Version 0.4.0 - LLM Integration
- [ ] OllamaClient implementation
- [ ] System prompt design
- [ ] Tool definitions (XML format)
- [ ] Response parser
### Version 0.5.0+ - Tools
- [ ] Read tools (get_lines, get_function, get_class, get_structure)
- [ ] Edit tools (edit_lines, create_file, delete_file)
- [ ] Search tools (find_references, find_definition)
- [ ] Analysis tools (get_dependencies, get_dependents, get_complexity, get_todos)
- [ ] Git tools (git_status, git_diff, git_commit)
- [ ] Run tools (run_command, run_tests)
### Version 0.10.0+ - Session & TUI
- [ ] Session management
- [ ] Context compression
- [ ] TUI components (StatusBar, Chat, Input, DiffView)
- [ ] Slash commands (/help, /clear, /undo, etc.)
## Technical Debt
_None at this time._
## Ideas for Future
- Plugin system for custom tools
- Multiple LLM providers (OpenAI, Anthropic)
- IDE integration (LSP)
- Web UI option
- Parallel AST parsing
- Response caching
---
**Last Updated:** 2025-11-29


@@ -0,0 +1,3 @@
#!/usr/bin/env node
import "../dist/cli/index.js"

File diff suppressed because it is too large


@@ -0,0 +1,80 @@
{
"name": "@samiyev/ipuaro",
"version": "0.2.0",
"description": "Local AI agent for codebase operations with infinite context feeling",
"author": "Fozilbek Samiyev <fozilbek.samiyev@gmail.com>",
"license": "MIT",
"type": "module",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",
"bin": {
"ipuaro": "./bin/ipuaro.js"
},
"exports": {
".": {
"types": "./dist/index.d.ts",
"import": "./dist/index.js"
}
},
"files": [
"dist",
"bin"
],
"scripts": {
"build": "tsup",
"watch": "tsup --watch",
"clean": "rm -rf dist",
"test": "vitest",
"test:run": "vitest run",
"test:coverage": "vitest run --coverage",
"test:ui": "vitest --ui",
"test:watch": "vitest --watch",
"lint": "eslint src --fix",
"format": "prettier --write src"
},
"dependencies": {
"ink": "^4.4.1",
"ink-text-input": "^5.0.1",
"react": "^18.2.0",
"ioredis": "^5.4.1",
"tree-sitter": "^0.21.1",
"tree-sitter-typescript": "^0.21.2",
"tree-sitter-javascript": "^0.21.0",
"ollama": "^0.5.11",
"simple-git": "^3.27.0",
"chokidar": "^3.6.0",
"commander": "^11.1.0",
"zod": "^3.23.8",
"ignore": "^5.3.2"
},
"devDependencies": {
"@types/node": "^22.10.1",
"@types/react": "^18.2.0",
"vitest": "^1.6.0",
"@vitest/coverage-v8": "^1.6.0",
"@vitest/ui": "^1.6.0",
"tsup": "^8.3.5",
"typescript": "^5.7.2"
},
"engines": {
"node": ">=20.0.0"
},
"keywords": [
"ai",
"agent",
"codebase",
"llm",
"ollama",
"cli",
"terminal"
],
"repository": {
"type": "git",
"url": "https://github.com/samiyev/puaros.git",
"directory": "packages/ipuaro"
},
"bugs": {
"url": "https://github.com/samiyev/puaros/issues"
},
"homepage": "https://github.com/samiyev/puaros/tree/main/packages/ipuaro#readme"
}


@@ -0,0 +1,4 @@
/*
* Application DTOs
* Will be implemented in version 0.10.0+
*/


@@ -0,0 +1,10 @@
// Application Layer exports
// Use Cases
export * from "./use-cases/index.js"
// DTOs
export * from "./dtos/index.js"
// Interfaces
export * from "./interfaces/index.js"


@@ -0,0 +1,51 @@
import type { ITool, ToolContext } from "../../domain/services/ITool.js"
import type { ToolResult } from "../../domain/value-objects/ToolResult.js"
/**
* Tool registry interface.
* Manages registration and execution of tools.
*/
export interface IToolRegistry {
/**
* Register a tool.
*/
register(tool: ITool): void
/**
* Get tool by name.
*/
get(name: string): ITool | undefined
/**
* Get all registered tools.
*/
getAll(): ITool[]
/**
* Get tools by category.
*/
getByCategory(category: ITool["category"]): ITool[]
/**
* Check if tool exists.
*/
has(name: string): boolean
/**
* Execute tool by name.
*/
execute(name: string, params: Record<string, unknown>, ctx: ToolContext): Promise<ToolResult>
/**
* Get tool definitions for LLM.
*/
getToolDefinitions(): {
name: string
description: string
parameters: {
type: "object"
properties: Record<string, { type: string; description: string }>
required: string[]
}
}[]
}


@@ -0,0 +1,2 @@
// Application Interfaces
export * from "./IToolRegistry.js"


@@ -0,0 +1,4 @@
/*
* Application Use Cases
* Will be implemented in version 0.10.0+
*/


@@ -0,0 +1,44 @@
#!/usr/bin/env node
import { Command } from "commander"
const program = new Command()
program
.name("ipuaro")
.description("Local AI agent for codebase operations with infinite context feeling")
.version("0.2.0")
program
.command("start")
.description("Start ipuaro TUI in the current directory")
.argument("[path]", "Project path", ".")
.option("--auto-apply", "Enable auto-apply mode for edits")
.option("--model <name>", "Override LLM model", "qwen2.5-coder:7b-instruct")
.action((path: string, options: { autoApply?: boolean; model?: string }) => {
const model = options.model ?? "default"
const autoApply = options.autoApply ?? false
console.warn(`Starting ipuaro in ${path}...`)
console.warn(`Model: ${model}`)
console.warn(`Auto-apply: ${autoApply ? "enabled" : "disabled"}`)
console.warn("\nNot implemented yet. Coming in version 0.11.0!")
})
program
.command("init")
.description("Create .ipuaro.json config file")
.action(() => {
console.warn("Creating .ipuaro.json...")
console.warn("\nNot implemented yet. Coming in version 0.17.0!")
})
program
.command("index")
.description("Index project without starting TUI")
.argument("[path]", "Project path", ".")
.action((path: string) => {
console.warn(`Indexing ${path}...`)
console.warn("\nNot implemented yet. Coming in version 0.3.0!")
})
program.parse()


@@ -0,0 +1,48 @@
// Domain Constants
export const MAX_UNDO_STACK_SIZE = 10
export const SUPPORTED_EXTENSIONS = [
".ts",
".tsx",
".js",
".jsx",
".json",
".yaml",
".yml",
] as const
export const BINARY_EXTENSIONS = [
".png",
".jpg",
".jpeg",
".gif",
".ico",
".svg",
".woff",
".woff2",
".ttf",
".eot",
".mp3",
".mp4",
".webm",
".pdf",
".zip",
".tar",
".gz",
] as const
export const DEFAULT_IGNORE_PATTERNS = [
"node_modules",
"dist",
"build",
".git",
".next",
".nuxt",
"coverage",
".cache",
] as const
export const CONTEXT_WINDOW_SIZE = 128_000
export const CONTEXT_COMPRESSION_THRESHOLD = 0.8


@@ -0,0 +1,61 @@
import { basename, dirname } from "node:path"
/**
* Project entity representing an indexed codebase.
*/
export class Project {
readonly name: string
readonly rootPath: string
readonly createdAt: number
lastIndexedAt: number | null
fileCount: number
indexingInProgress: boolean
constructor(rootPath: string, createdAt?: number) {
this.rootPath = rootPath
this.name = Project.generateProjectName(rootPath)
this.createdAt = createdAt ?? Date.now()
this.lastIndexedAt = null
this.fileCount = 0
this.indexingInProgress = false
}
/**
* Generate project name from path.
* Format: {parent-folder}-{project-folder}
*/
static generateProjectName(rootPath: string): string {
const projectFolder = basename(rootPath)
const parentFolder = basename(dirname(rootPath))
if (parentFolder && parentFolder !== ".") {
return `${parentFolder}-${projectFolder}`
}
return projectFolder
}
markIndexingStarted(): void {
this.indexingInProgress = true
}
markIndexingCompleted(fileCount: number): void {
this.indexingInProgress = false
this.lastIndexedAt = Date.now()
this.fileCount = fileCount
}
markIndexingFailed(): void {
this.indexingInProgress = false
}
isIndexed(): boolean {
return this.lastIndexedAt !== null
}
getTimeSinceIndexed(): number | null {
if (this.lastIndexedAt === null) {
return null
}
return Date.now() - this.lastIndexedAt
}
}


@@ -0,0 +1,120 @@
import type { ChatMessage } from "../value-objects/ChatMessage.js"
import type { UndoEntry } from "../value-objects/UndoEntry.js"
import { MAX_UNDO_STACK_SIZE } from "../constants/index.js"
/**
* Session statistics.
*/
export interface SessionStats {
/** Total tokens used */
totalTokens: number
/** Total time in milliseconds */
totalTimeMs: number
/** Number of tool calls made */
toolCalls: number
/** Number of edits applied */
editsApplied: number
/** Number of edits rejected */
editsRejected: number
}
/**
* Context state for the session.
*/
export interface ContextState {
/** Files currently in context */
filesInContext: string[]
/** Estimated token usage (0-1) */
tokenUsage: number
/** Whether compression is needed */
needsCompression: boolean
}
/**
* Session entity representing a chat session.
*/
export class Session {
readonly id: string
readonly projectName: string
readonly createdAt: number
lastActivityAt: number
history: ChatMessage[]
context: ContextState
undoStack: UndoEntry[]
stats: SessionStats
inputHistory: string[]
constructor(id: string, projectName: string, createdAt?: number) {
this.id = id
this.projectName = projectName
this.createdAt = createdAt ?? Date.now()
this.lastActivityAt = this.createdAt
this.history = []
this.context = {
filesInContext: [],
tokenUsage: 0,
needsCompression: false,
}
this.undoStack = []
this.stats = {
totalTokens: 0,
totalTimeMs: 0,
toolCalls: 0,
editsApplied: 0,
editsRejected: 0,
}
this.inputHistory = []
}
addMessage(message: ChatMessage): void {
this.history.push(message)
this.lastActivityAt = Date.now()
if (message.stats) {
this.stats.totalTokens += message.stats.tokens
this.stats.totalTimeMs += message.stats.timeMs
this.stats.toolCalls += message.stats.toolCalls
}
}
addUndoEntry(entry: UndoEntry): void {
this.undoStack.push(entry)
if (this.undoStack.length > MAX_UNDO_STACK_SIZE) {
this.undoStack.shift()
}
}
popUndoEntry(): UndoEntry | undefined {
return this.undoStack.pop()
}
addInputToHistory(input: string): void {
if (input.trim() && this.inputHistory[this.inputHistory.length - 1] !== input) {
this.inputHistory.push(input)
}
}
clearHistory(): void {
this.history = []
this.context = {
filesInContext: [],
tokenUsage: 0,
needsCompression: false,
}
}
getSessionDurationMs(): number {
return Date.now() - this.createdAt
}
getSessionDurationFormatted(): string {
const totalMinutes = Math.floor(this.getSessionDurationMs() / 60_000)
const hours = Math.floor(totalMinutes / 60)
const minutes = totalMinutes % 60
if (hours > 0) {
return `${String(hours)}h ${String(minutes)}m`
}
return `${String(minutes)}m`
}
}


@@ -0,0 +1,3 @@
// Domain Entities
export * from "./Session.js"
export * from "./Project.js"


@@ -0,0 +1,13 @@
// Domain Layer exports
// Entities
export * from "./entities/index.js"
// Value Objects
export * from "./value-objects/index.js"
// Service Interfaces
export * from "./services/index.js"
// Constants
export * from "./constants/index.js"


@@ -0,0 +1,83 @@
import type { FileAST } from "../value-objects/FileAST.js"
import type { FileData } from "../value-objects/FileData.js"
import type { FileMeta } from "../value-objects/FileMeta.js"
import type { DepsGraph, SymbolIndex } from "./IStorage.js"
/**
* Progress callback for indexing operations.
*/
export interface IndexProgress {
current: number
total: number
currentFile: string
phase: "scanning" | "parsing" | "analyzing" | "indexing"
}
/**
* Result of scanning a single file.
*/
export interface ScanResult {
path: string
type: "file" | "directory" | "symlink"
size: number
lastModified: number
}
/**
* Indexing result statistics.
*/
export interface IndexingStats {
filesScanned: number
filesParsed: number
parseErrors: number
timeMs: number
}
/**
* Indexer service interface (port).
* Handles project scanning, parsing, and indexing.
*/
export interface IIndexer {
/**
* Scan directory and yield file results.
*/
scan(root: string): AsyncGenerator<ScanResult>
/**
* Parse file content into AST.
*/
parseFile(content: string, language: "ts" | "tsx" | "js" | "jsx"): FileAST
/**
* Analyze file and compute metadata.
*/
analyzeFile(path: string, ast: FileAST, allASTs: Map<string, FileAST>): FileMeta
/**
* Build symbol index from all ASTs.
*/
buildSymbolIndex(asts: Map<string, FileAST>): SymbolIndex
/**
* Build dependency graph from all ASTs.
*/
buildDepsGraph(asts: Map<string, FileAST>): DepsGraph
/**
* Full indexing pipeline.
*/
indexProject(
root: string,
onProgress?: (progress: IndexProgress) => void,
): Promise<IndexingStats>
/**
* Update single file (incremental indexing).
*/
updateFile(path: string, data: FileData): Promise<void>
/**
* Remove file from index.
*/
removeFile(path: string): Promise<void>
}


@@ -0,0 +1,81 @@
import type { ChatMessage } from "../value-objects/ChatMessage.js"
import type { ToolCall } from "../value-objects/ToolCall.js"
/**
* Tool parameter definition for LLM.
*/
export interface ToolParameter {
name: string
type: "string" | "number" | "boolean" | "array" | "object"
description: string
required: boolean
enum?: string[]
}
/**
* Tool definition for LLM function calling.
*/
export interface ToolDef {
name: string
description: string
parameters: ToolParameter[]
}
/**
* Response from LLM.
*/
export interface LLMResponse {
/** Text content of the response */
content: string
/** Tool calls parsed from response */
toolCalls: ToolCall[]
/** Token count for this response */
tokens: number
/** Generation time in milliseconds */
timeMs: number
/** Whether response was truncated */
truncated: boolean
/** Stop reason */
stopReason: "end" | "length" | "tool_use"
}
/**
* LLM client service interface (port).
* Abstracts the LLM provider.
*/
export interface ILLMClient {
/**
* Send messages to LLM and get response.
*/
chat(messages: ChatMessage[], tools?: ToolDef[]): Promise<LLMResponse>
/**
* Count tokens in text.
*/
countTokens(text: string): Promise<number>
/**
* Check if LLM service is available.
*/
isAvailable(): Promise<boolean>
/**
* Get current model name.
*/
getModelName(): string
/**
* Get context window size.
*/
getContextWindowSize(): number
/**
* Pull/download model if not available locally.
*/
pullModel(model: string): Promise<void>
/**
* Abort current generation.
*/
abort(): void
}


@@ -0,0 +1,65 @@
import type { FileData } from "../value-objects/FileData.js"
import type { FileAST } from "../value-objects/FileAST.js"
import type { FileMeta } from "../value-objects/FileMeta.js"
/**
* Symbol index mapping symbol names to their locations.
*/
export interface SymbolLocation {
path: string
line: number
type: "function" | "class" | "interface" | "type" | "variable"
}
export type SymbolIndex = Map<string, SymbolLocation[]>
/**
* Dependencies graph for the project.
*/
export interface DepsGraph {
/** Map from file path to its imports */
imports: Map<string, string[]>
/** Map from file path to files that import it */
importedBy: Map<string, string[]>
}
/**
* Storage service interface (port).
* Abstracts the persistence layer for project data.
*/
export interface IStorage {
// File data operations
getFile(path: string): Promise<FileData | null>
setFile(path: string, data: FileData): Promise<void>
deleteFile(path: string): Promise<void>
getAllFiles(): Promise<Map<string, FileData>>
getFileCount(): Promise<number>
// AST operations
getAST(path: string): Promise<FileAST | null>
setAST(path: string, ast: FileAST): Promise<void>
deleteAST(path: string): Promise<void>
getAllASTs(): Promise<Map<string, FileAST>>
// Meta operations
getMeta(path: string): Promise<FileMeta | null>
setMeta(path: string, meta: FileMeta): Promise<void>
deleteMeta(path: string): Promise<void>
getAllMetas(): Promise<Map<string, FileMeta>>
// Index operations
getSymbolIndex(): Promise<SymbolIndex>
setSymbolIndex(index: SymbolIndex): Promise<void>
getDepsGraph(): Promise<DepsGraph>
setDepsGraph(graph: DepsGraph): Promise<void>
// Config operations
getProjectConfig(key: string): Promise<unknown>
setProjectConfig(key: string, value: unknown): Promise<void>
// Lifecycle
connect(): Promise<void>
disconnect(): Promise<void>
isConnected(): boolean
clear(): Promise<void>
}


@@ -0,0 +1,68 @@
import type { ToolResult } from "../value-objects/ToolResult.js"
import type { IStorage } from "./IStorage.js"
/**
* Tool parameter schema.
*/
export interface ToolParameterSchema {
name: string
type: "string" | "number" | "boolean" | "array" | "object"
description: string
required: boolean
default?: unknown
}
/**
* Context provided to tools during execution.
*/
export interface ToolContext {
/** Project root path */
projectRoot: string
/** Storage service */
storage: IStorage
/** Request user confirmation callback */
requestConfirmation: (message: string, diff?: DiffInfo) => Promise<boolean>
/** Report progress callback */
onProgress?: (message: string) => void
}
/**
* Diff information for confirmation dialogs.
*/
export interface DiffInfo {
filePath: string
oldLines: string[]
newLines: string[]
startLine: number
}
/**
* Tool interface (port).
* All tools must implement this interface.
*/
export interface ITool {
/** Tool name (used in tool calls) */
readonly name: string
/** Human-readable description */
readonly description: string
/** Tool parameters schema */
readonly parameters: ToolParameterSchema[]
/** Whether tool requires user confirmation before execution */
readonly requiresConfirmation: boolean
/** Tool category */
readonly category: "read" | "edit" | "search" | "analysis" | "git" | "run"
/**
* Execute the tool with given parameters.
*/
execute(params: Record<string, unknown>, ctx: ToolContext): Promise<ToolResult>
/**
* Validate parameters before execution.
*/
validateParams(params: Record<string, unknown>): string | null
}


@@ -0,0 +1,5 @@
// Domain Service Interfaces (Ports)
export * from "./IStorage.js"
export * from "./ILLMClient.js"
export * from "./ITool.js"
export * from "./IIndexer.js"


@@ -0,0 +1,79 @@
import type { ToolCall } from "./ToolCall.js"
import type { ToolResult } from "./ToolResult.js"
/**
* Represents a message in the chat history.
*/
export type MessageRole = "user" | "assistant" | "tool" | "system"
export interface MessageStats {
/** Token count for this message */
tokens: number
/** Response generation time in ms (for assistant messages) */
timeMs: number
/** Number of tool calls in this message */
toolCalls: number
}
export interface ChatMessage {
/** Message role */
role: MessageRole
/** Message content */
content: string
/** Timestamp when message was created */
timestamp: number
/** Tool calls made by assistant (if any) */
toolCalls?: ToolCall[]
/** Tool results (for tool role messages) */
toolResults?: ToolResult[]
/** Message statistics */
stats?: MessageStats
}
export function createUserMessage(content: string): ChatMessage {
return {
role: "user",
content,
timestamp: Date.now(),
}
}
export function createAssistantMessage(
content: string,
toolCalls?: ToolCall[],
stats?: MessageStats,
): ChatMessage {
return {
role: "assistant",
content,
timestamp: Date.now(),
toolCalls,
stats,
}
}
export function createToolMessage(results: ToolResult[]): ChatMessage {
return {
role: "tool",
content: results.map((r) => formatToolResult(r)).join("\n\n"),
timestamp: Date.now(),
toolResults: results,
}
}
export function createSystemMessage(content: string): ChatMessage {
return {
role: "system",
content,
timestamp: Date.now(),
}
}
function formatToolResult(result: ToolResult): string {
if (result.success) {
return `[${result.callId}] Success: ${JSON.stringify(result.data)}`
}
const errorMsg = result.error ?? "Unknown error"
return `[${result.callId}] Error: ${errorMsg}`
}


@@ -0,0 +1,163 @@
/**
* Represents parsed AST information for a file.
*/
export interface ImportInfo {
/** Import name or alias */
name: string
/** Source module path */
from: string
/** Line number of import statement */
line: number
/** Import type classification */
type: "internal" | "external" | "builtin"
/** Whether it's a default import */
isDefault: boolean
}
export interface ExportInfo {
/** Exported name */
name: string
/** Line number of export */
line: number
/** Whether it's a default export */
isDefault: boolean
/** Export type: function, class, variable, type */
kind: "function" | "class" | "variable" | "type" | "interface"
}
export interface ParameterInfo {
/** Parameter name */
name: string
/** Parameter type (if available) */
type?: string
/** Whether it's optional */
optional: boolean
/** Whether it has a default value */
hasDefault: boolean
}
export interface FunctionInfo {
/** Function name */
name: string
/** Start line number */
lineStart: number
/** End line number */
lineEnd: number
/** Function parameters */
params: ParameterInfo[]
/** Whether function is async */
isAsync: boolean
/** Whether function is exported */
isExported: boolean
/** Return type (if available) */
returnType?: string
}
export interface MethodInfo {
/** Method name */
name: string
/** Start line number */
lineStart: number
/** End line number */
lineEnd: number
/** Method parameters */
params: ParameterInfo[]
/** Whether method is async */
isAsync: boolean
/** Method visibility */
visibility: "public" | "private" | "protected"
/** Whether it's static */
isStatic: boolean
}
export interface PropertyInfo {
/** Property name */
name: string
/** Line number */
line: number
/** Property type (if available) */
type?: string
/** Property visibility */
visibility: "public" | "private" | "protected"
/** Whether it's static */
isStatic: boolean
/** Whether it's readonly */
isReadonly: boolean
}
export interface ClassInfo {
/** Class name */
name: string
/** Start line number */
lineStart: number
/** End line number */
lineEnd: number
/** Class methods */
methods: MethodInfo[]
/** Class properties */
properties: PropertyInfo[]
/** Extended class name */
extends?: string
/** Implemented interfaces */
implements: string[]
/** Whether class is exported */
isExported: boolean
/** Whether class is abstract */
isAbstract: boolean
}
export interface InterfaceInfo {
/** Interface name */
name: string
/** Start line number */
lineStart: number
/** End line number */
lineEnd: number
/** Interface properties */
properties: PropertyInfo[]
/** Extended interfaces */
extends: string[]
/** Whether interface is exported */
isExported: boolean
}
export interface TypeAliasInfo {
/** Type alias name */
name: string
/** Line number */
line: number
/** Whether it's exported */
isExported: boolean
}
export interface FileAST {
/** Import statements */
imports: ImportInfo[]
/** Export statements */
exports: ExportInfo[]
/** Function declarations */
functions: FunctionInfo[]
/** Class declarations */
classes: ClassInfo[]
/** Interface declarations */
interfaces: InterfaceInfo[]
/** Type alias declarations */
typeAliases: TypeAliasInfo[]
/** Whether parsing encountered errors */
parseError: boolean
/** Parse error message if any */
parseErrorMessage?: string
}
export function createEmptyFileAST(): FileAST {
return {
imports: [],
exports: [],
functions: [],
classes: [],
interfaces: [],
typeAliases: [],
parseError: false,
}
}


@@ -0,0 +1,26 @@
/**
* Represents file content with metadata for change detection.
*/
export interface FileData {
/** File content split into lines */
lines: string[]
/** MD5 hash for change detection */
hash: string
/** File size in bytes */
size: number
/** Last modification timestamp (ms) */
lastModified: number
}
export function createFileData(
lines: string[],
hash: string,
size: number,
lastModified: number,
): FileData {
return { lines, hash, size, lastModified }
}
export function isFileDataEqual(a: FileData, b: FileData): boolean {
return a.hash === b.hash
}


@@ -0,0 +1,50 @@
/**
* Represents computed metadata about a file.
*/
export interface ComplexityMetrics {
/** Lines of code (excluding empty and comments) */
loc: number
/** Maximum nesting depth */
nesting: number
/** Cyclomatic complexity score */
cyclomaticComplexity: number
/** Overall complexity score (0-100) */
score: number
}
export interface FileMeta {
/** Complexity metrics for the file */
complexity: ComplexityMetrics
/** Files that this file imports (internal paths) */
dependencies: string[]
/** Files that import this file */
dependents: string[]
/** Whether file is a dependency hub (>5 dependents) */
isHub: boolean
/** Whether file is an entry point (index.ts or 0 dependents) */
isEntryPoint: boolean
/** File type classification */
fileType: "source" | "test" | "config" | "types" | "unknown"
}
export function createFileMeta(partial: Partial<FileMeta> = {}): FileMeta {
return {
complexity: {
loc: 0,
nesting: 0,
cyclomaticComplexity: 1,
score: 0,
},
dependencies: [],
dependents: [],
isHub: false,
isEntryPoint: false,
fileType: "unknown",
...partial,
}
}
export function isHubFile(dependentCount: number): boolean {
return dependentCount > 5
}


@@ -0,0 +1,27 @@
/**
* Represents a tool call from the LLM.
*/
export interface ToolCall {
/** Unique identifier for this call */
id: string
/** Tool name */
name: string
/** Tool parameters */
params: Record<string, unknown>
/** Timestamp when call was made */
timestamp: number
}
export function createToolCall(
id: string,
name: string,
params: Record<string, unknown>,
): ToolCall {
return {
id,
name,
params,
timestamp: Date.now(),
}
}


@@ -0,0 +1,42 @@
/**
* Represents the result of a tool execution.
*/
export interface ToolResult {
/** Tool call ID this result belongs to */
callId: string
/** Whether execution was successful */
success: boolean
/** Result data (varies by tool) */
data?: unknown
/** Error message if failed */
error?: string
/** Execution time in milliseconds */
executionTimeMs: number
}
export function createSuccessResult(
callId: string,
data: unknown,
executionTimeMs: number,
): ToolResult {
return {
callId,
success: true,
data,
executionTimeMs,
}
}
export function createErrorResult(
callId: string,
error: string,
executionTimeMs: number,
): ToolResult {
return {
callId,
success: false,
error,
executionTimeMs,
}
}


@@ -0,0 +1,50 @@
/**
* Represents an undo entry for file changes.
*/
export interface UndoEntry {
/** Unique identifier */
id: string
/** Timestamp when change was made */
timestamp: number
/** File path that was modified */
filePath: string
/** Content before the change */
previousContent: string[]
/** Content after the change */
newContent: string[]
/** Human-readable description of the change */
description: string
/** Tool call ID that made this change */
toolCallId?: string
}
export function createUndoEntry(
id: string,
filePath: string,
previousContent: string[],
newContent: string[],
description: string,
toolCallId?: string,
): UndoEntry {
return {
id,
timestamp: Date.now(),
filePath,
previousContent,
newContent,
description,
toolCallId,
}
}
export function canUndo(entry: UndoEntry, currentContent: string[]): boolean {
return arraysEqual(entry.newContent, currentContent)
}
function arraysEqual(a: string[], b: string[]): boolean {
if (a.length !== b.length) {
return false
}
return a.every((line, i) => line === b[i])
}


@@ -0,0 +1,8 @@
// Domain Value Objects
export * from "./FileData.js"
export * from "./FileAST.js"
export * from "./FileMeta.js"
export * from "./ChatMessage.js"
export * from "./ToolCall.js"
export * from "./ToolResult.js"
export * from "./UndoEntry.js"


@@ -0,0 +1,20 @@
/**
* @samiyev/ipuaro - Local AI agent for codebase operations
*
* Main entry point for the library.
*/
// Domain exports
export * from "./domain/index.js"
// Application exports
export * from "./application/index.js"
// Shared exports
export * from "./shared/index.js"
// Infrastructure exports
export * from "./infrastructure/index.js"
// Version
export const VERSION = "0.2.0"


@@ -0,0 +1,2 @@
// Infrastructure layer exports
export * from "./storage/index.js"


@@ -0,0 +1,119 @@
import { Redis } from "ioredis"
import type { RedisConfig } from "../../shared/constants/config.js"
import { IpuaroError } from "../../shared/errors/IpuaroError.js"
/**
* Redis client wrapper with connection management.
* Handles connection lifecycle and AOF configuration.
*/
export class RedisClient {
private client: Redis | null = null
private readonly config: RedisConfig
private connected = false
constructor(config: RedisConfig) {
this.config = config
}
/**
* Connect to Redis server.
* Configures AOF persistence on successful connection.
*/
async connect(): Promise<void> {
if (this.connected && this.client) {
return
}
try {
this.client = new Redis({
host: this.config.host,
port: this.config.port,
db: this.config.db,
password: this.config.password,
keyPrefix: this.config.keyPrefix,
lazyConnect: true,
retryStrategy: (times: number): number | null => {
if (times > 3) {
return null
}
return Math.min(times * 200, 1000)
},
maxRetriesPerRequest: 3,
enableReadyCheck: true,
})
await this.client.connect()
await this.configureAOF()
this.connected = true
} catch (error) {
this.connected = false
this.client = null
const message = error instanceof Error ? error.message : "Unknown error"
throw IpuaroError.redis(`Failed to connect to Redis: ${message}`)
}
}
/**
* Disconnect from Redis server.
*/
async disconnect(): Promise<void> {
if (this.client) {
await this.client.quit()
this.client = null
this.connected = false
}
}
/**
* Check if connected to Redis.
*/
isConnected(): boolean {
return this.connected && this.client !== null && this.client.status === "ready"
}
/**
* Get the underlying Redis client.
* @throws IpuaroError if not connected
*/
getClient(): Redis {
if (!this.client || !this.connected) {
throw IpuaroError.redis("Redis client is not connected")
}
return this.client
}
/**
* Execute a health check ping.
*/
async ping(): Promise<boolean> {
if (!this.client) {
return false
}
try {
const result = await this.client.ping()
return result === "PONG"
} catch {
return false
}
}
/**
* Configure AOF (Append Only File) persistence.
* AOF provides better durability by logging every write operation.
*/
private async configureAOF(): Promise<void> {
if (!this.client) {
return
}
try {
await this.client.config("SET", "appendonly", "yes")
await this.client.config("SET", "appendfsync", "everysec")
} catch {
/*
* AOF config may fail if Redis doesn't allow CONFIG SET.
* This is non-fatal - persistence will still work with default settings.
*/
}
}
}


@@ -0,0 +1,236 @@
import type { DepsGraph, IStorage, SymbolIndex } from "../../domain/services/IStorage.js"
import type { FileAST } from "../../domain/value-objects/FileAST.js"
import type { FileData } from "../../domain/value-objects/FileData.js"
import type { FileMeta } from "../../domain/value-objects/FileMeta.js"
import { IpuaroError } from "../../shared/errors/IpuaroError.js"
import { RedisClient } from "./RedisClient.js"
import { IndexFields, ProjectKeys } from "./schema.js"
/**
* Redis implementation of IStorage.
* Stores project data (files, AST, meta, indexes) in Redis hashes.
*/
export class RedisStorage implements IStorage {
private readonly client: RedisClient
private readonly projectName: string
constructor(client: RedisClient, projectName: string) {
this.client = client
this.projectName = projectName
}
async getFile(path: string): Promise<FileData | null> {
const redis = this.getRedis()
const data = await redis.hget(ProjectKeys.files(this.projectName), path)
if (!data) {
return null
}
return this.parseJSON<FileData>(data, "FileData")
}
async setFile(path: string, data: FileData): Promise<void> {
const redis = this.getRedis()
await redis.hset(ProjectKeys.files(this.projectName), path, JSON.stringify(data))
}
async deleteFile(path: string): Promise<void> {
const redis = this.getRedis()
await redis.hdel(ProjectKeys.files(this.projectName), path)
}
async getAllFiles(): Promise<Map<string, FileData>> {
const redis = this.getRedis()
const data = await redis.hgetall(ProjectKeys.files(this.projectName))
const result = new Map<string, FileData>()
for (const [path, value] of Object.entries(data)) {
const parsed = this.parseJSON<FileData>(value, "FileData")
if (parsed) {
result.set(path, parsed)
}
}
return result
}
async getFileCount(): Promise<number> {
const redis = this.getRedis()
return redis.hlen(ProjectKeys.files(this.projectName))
}
async getAST(path: string): Promise<FileAST | null> {
const redis = this.getRedis()
const data = await redis.hget(ProjectKeys.ast(this.projectName), path)
if (!data) {
return null
}
return this.parseJSON<FileAST>(data, "FileAST")
}
async setAST(path: string, ast: FileAST): Promise<void> {
const redis = this.getRedis()
await redis.hset(ProjectKeys.ast(this.projectName), path, JSON.stringify(ast))
}
async deleteAST(path: string): Promise<void> {
const redis = this.getRedis()
await redis.hdel(ProjectKeys.ast(this.projectName), path)
}
async getAllASTs(): Promise<Map<string, FileAST>> {
const redis = this.getRedis()
const data = await redis.hgetall(ProjectKeys.ast(this.projectName))
const result = new Map<string, FileAST>()
for (const [path, value] of Object.entries(data)) {
const parsed = this.parseJSON<FileAST>(value, "FileAST")
if (parsed) {
result.set(path, parsed)
}
}
return result
}
async getMeta(path: string): Promise<FileMeta | null> {
const redis = this.getRedis()
const data = await redis.hget(ProjectKeys.meta(this.projectName), path)
if (!data) {
return null
}
return this.parseJSON<FileMeta>(data, "FileMeta")
}
async setMeta(path: string, meta: FileMeta): Promise<void> {
const redis = this.getRedis()
await redis.hset(ProjectKeys.meta(this.projectName), path, JSON.stringify(meta))
}
async deleteMeta(path: string): Promise<void> {
const redis = this.getRedis()
await redis.hdel(ProjectKeys.meta(this.projectName), path)
}
async getAllMetas(): Promise<Map<string, FileMeta>> {
const redis = this.getRedis()
const data = await redis.hgetall(ProjectKeys.meta(this.projectName))
const result = new Map<string, FileMeta>()
for (const [path, value] of Object.entries(data)) {
const parsed = this.parseJSON<FileMeta>(value, "FileMeta")
if (parsed) {
result.set(path, parsed)
}
}
return result
}
async getSymbolIndex(): Promise<SymbolIndex> {
const redis = this.getRedis()
const data = await redis.hget(ProjectKeys.indexes(this.projectName), IndexFields.symbols)
if (!data) {
return new Map()
}
const parsed = this.parseJSON<[string, unknown[]][]>(data, "SymbolIndex")
if (!parsed) {
return new Map()
}
return new Map(parsed) as SymbolIndex
}
async setSymbolIndex(index: SymbolIndex): Promise<void> {
const redis = this.getRedis()
const serialized = JSON.stringify([...index.entries()])
await redis.hset(ProjectKeys.indexes(this.projectName), IndexFields.symbols, serialized)
}
async getDepsGraph(): Promise<DepsGraph> {
const redis = this.getRedis()
const data = await redis.hget(ProjectKeys.indexes(this.projectName), IndexFields.depsGraph)
if (!data) {
return {
imports: new Map(),
importedBy: new Map(),
}
}
const parsed = this.parseJSON<{
imports: [string, string[]][]
importedBy: [string, string[]][]
}>(data, "DepsGraph")
if (!parsed) {
return {
imports: new Map(),
importedBy: new Map(),
}
}
return {
imports: new Map(parsed.imports),
importedBy: new Map(parsed.importedBy),
}
}
async setDepsGraph(graph: DepsGraph): Promise<void> {
const redis = this.getRedis()
const serialized = JSON.stringify({
imports: [...graph.imports.entries()],
importedBy: [...graph.importedBy.entries()],
})
await redis.hset(ProjectKeys.indexes(this.projectName), IndexFields.depsGraph, serialized)
}
async getProjectConfig(key: string): Promise<unknown> {
const redis = this.getRedis()
const data = await redis.hget(ProjectKeys.config(this.projectName), key)
if (!data) {
return null
}
return this.parseJSON<unknown>(data, "ProjectConfig")
}
async setProjectConfig(key: string, value: unknown): Promise<void> {
const redis = this.getRedis()
await redis.hset(ProjectKeys.config(this.projectName), key, JSON.stringify(value))
}
async connect(): Promise<void> {
await this.client.connect()
}
async disconnect(): Promise<void> {
await this.client.disconnect()
}
isConnected(): boolean {
return this.client.isConnected()
}
async clear(): Promise<void> {
const redis = this.getRedis()
await Promise.all([
redis.del(ProjectKeys.files(this.projectName)),
redis.del(ProjectKeys.ast(this.projectName)),
redis.del(ProjectKeys.meta(this.projectName)),
redis.del(ProjectKeys.indexes(this.projectName)),
redis.del(ProjectKeys.config(this.projectName)),
])
}
private getRedis(): ReturnType<RedisClient["getClient"]> {
return this.client.getClient()
}
private parseJSON<T>(data: string, type: string): T | null {
try {
return JSON.parse(data) as T
} catch (error) {
const message = error instanceof Error ? error.message : "Unknown error"
throw IpuaroError.parse(`Failed to parse ${type}: ${message}`)
}
}
}
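`setSymbolIndex` and `getSymbolIndex` persist a `Map` by spreading its entries into JSON and rebuilding with `new Map(...)`. The round-trip in isolation (a sketch, independent of Redis):

```typescript
// A Map survives JSON only as an array of [key, value] entries.
const index = new Map<string, string[]>([
    ["createUser", ["src/users.ts"]],
    ["deleteUser", ["src/users.ts", "src/admin.ts"]],
])

const serialized = JSON.stringify([...index.entries()])
const restored = new Map<string, string[]>(JSON.parse(serialized))
```

`getDepsGraph`/`setDepsGraph` apply the same trick twice, once per `imports`/`importedBy` map.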

View File

@@ -0,0 +1,10 @@
// Storage module exports
export { RedisClient } from "./RedisClient.js"
export { RedisStorage } from "./RedisStorage.js"
export {
ProjectKeys,
SessionKeys,
IndexFields,
SessionFields,
generateProjectName,
} from "./schema.js"

View File

@@ -0,0 +1,95 @@
/**
* Redis key schema for ipuaro data storage.
*
* Key structure:
* - project:{name}:files # Hash<path, FileData>
* - project:{name}:ast # Hash<path, FileAST>
* - project:{name}:meta # Hash<path, FileMeta>
* - project:{name}:indexes # Hash<name, JSON> (symbols, deps_graph)
* - project:{name}:config # Hash<key, JSON>
*
* - session:{id}:data # Hash<field, JSON> (history, context, stats)
* - session:{id}:undo # List<UndoEntry> (max 10)
* - sessions:list # List<session_id>
*
* Project name format: {parent-folder}-{project-folder}
*/
/**
* Project-related Redis keys.
*/
export const ProjectKeys = {
files: (projectName: string): string => `project:${projectName}:files`,
ast: (projectName: string): string => `project:${projectName}:ast`,
meta: (projectName: string): string => `project:${projectName}:meta`,
indexes: (projectName: string): string => `project:${projectName}:indexes`,
config: (projectName: string): string => `project:${projectName}:config`,
} as const
/**
* Session-related Redis keys.
*/
export const SessionKeys = {
data: (sessionId: string): string => `session:${sessionId}:data`,
undo: (sessionId: string): string => `session:${sessionId}:undo`,
list: "sessions:list",
} as const
/**
* Index field names within project:indexes hash.
*/
export const IndexFields = {
symbols: "symbols",
depsGraph: "deps_graph",
} as const
/**
* Session data field names within session:data hash.
*/
export const SessionFields = {
history: "history",
context: "context",
stats: "stats",
inputHistory: "input_history",
createdAt: "created_at",
lastActivityAt: "last_activity_at",
projectName: "project_name",
} as const
/**
* Generate project name from path.
* Format: {parent-folder}-{project-folder}
*
* @example
* generateProjectName("/home/user/projects/myapp") -> "projects-myapp"
* generateProjectName("/app") -> "app"
*/
export function generateProjectName(projectPath: string): string {
const normalized = projectPath.replace(/\\/g, "/").replace(/\/+$/, "")
const parts = normalized.split("/").filter(Boolean)
if (parts.length === 0) {
return "root"
}
if (parts.length === 1) {
return sanitizeName(parts[0])
}
const projectFolder = sanitizeName(parts[parts.length - 1])
const parentFolder = sanitizeName(parts[parts.length - 2])
return `${parentFolder}-${projectFolder}`
}
/**
* Sanitize a name for use in Redis keys.
* Lowercases the name, replaces characters outside [a-z0-9-] with hyphens,
* collapses runs of hyphens, and trims any leading or trailing hyphen.
*/
function sanitizeName(name: string): string {
return name
.toLowerCase()
.replace(/[^a-z0-9-]/g, "-")
.replace(/-+/g, "-")
.replace(/^-|-$/g, "")
}
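Taken together, `generateProjectName` and `sanitizeName` behave as follows (logic duplicated here so the examples stand alone):

```typescript
function sanitizeName(name: string): string {
    return name
        .toLowerCase()
        .replace(/[^a-z0-9-]/g, "-")
        .replace(/-+/g, "-")
        .replace(/^-|-$/g, "")
}

function generateProjectName(projectPath: string): string {
    // Normalize Windows separators and strip trailing slashes.
    const normalized = projectPath.replace(/\\/g, "/").replace(/\/+$/, "")
    const parts = normalized.split("/").filter(Boolean)
    if (parts.length === 0) return "root"
    if (parts.length === 1) return sanitizeName(parts[0])
    return `${sanitizeName(parts[parts.length - 2])}-${sanitizeName(parts[parts.length - 1])}`
}

generateProjectName("/home/user/projects/myapp") // "projects-myapp"
generateProjectName("C:\\Users\\me\\My App")     // "me-my-app"
```

Note that a Windows drive prefix is just treated as another path segment, so only the last two folders ever reach the key.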

View File

@@ -0,0 +1,2 @@
// Config module exports
export * from "./loader.js"

View File

@@ -0,0 +1,89 @@
import { existsSync, readFileSync } from "node:fs"
import { join } from "node:path"
import { Config, ConfigSchema, DEFAULT_CONFIG } from "../constants/config.js"
const CONFIG_FILE_NAME = ".ipuaro.json"
const DEFAULT_CONFIG_PATH = "config/default.json"
/**
* Load configuration from files.
* Priority: .ipuaro.json > config/default.json > defaults
*/
export function loadConfig(projectRoot: string): Config {
const configs: Partial<Config>[] = []
const defaultConfigPath = join(projectRoot, DEFAULT_CONFIG_PATH)
if (existsSync(defaultConfigPath)) {
try {
const content = readFileSync(defaultConfigPath, "utf-8")
configs.push(JSON.parse(content) as Partial<Config>)
} catch {
// Ignore parse errors for default config
}
}
const projectConfigPath = join(projectRoot, CONFIG_FILE_NAME)
if (existsSync(projectConfigPath)) {
try {
const content = readFileSync(projectConfigPath, "utf-8")
configs.push(JSON.parse(content) as Partial<Config>)
} catch {
// Ignore parse errors for project config
}
}
if (configs.length === 0) {
return DEFAULT_CONFIG
}
const merged = deepMerge(DEFAULT_CONFIG, ...configs)
return ConfigSchema.parse(merged)
}
/**
* Deep merge plain objects. Later sources override earlier ones;
* nested plain objects merge recursively, other values are replaced wholesale.
*/
function deepMerge<T extends Record<string, unknown>>(target: T, ...sources: Partial<T>[]): T {
const result = { ...target }
for (const source of sources) {
for (const key in source) {
const sourceValue = source[key]
const targetValue = result[key]
if (isPlainObject(sourceValue) && isPlainObject(targetValue)) {
result[key] = deepMerge(
targetValue as Record<string, unknown>,
sourceValue as Record<string, unknown>,
) as T[Extract<keyof T, string>]
} else if (sourceValue !== undefined) {
result[key] = sourceValue as T[Extract<keyof T, string>]
}
}
}
return result
}
function isPlainObject(value: unknown): value is Record<string, unknown> {
return typeof value === "object" && value !== null && !Array.isArray(value)
}
/**
* Validate configuration.
*/
export function validateConfig(config: unknown): config is Config {
const result = ConfigSchema.safeParse(config)
return result.success
}
/**
* Get config validation errors.
*/
export function getConfigErrors(config: unknown): string[] {
const result = ConfigSchema.safeParse(config)
if (result.success) {
return []
}
return result.error.errors.map((e) => `${e.path.join(".")}: ${e.message}`)
}
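The merge policy used by `loadConfig` can be seen in miniature below (the helpers are reimplemented here so the example is self-contained):

```typescript
// Later sources win; nested plain objects merge recursively,
// while arrays and primitives are replaced wholesale.
function isPlainObject(value: unknown): value is Record<string, unknown> {
    return typeof value === "object" && value !== null && !Array.isArray(value)
}

function deepMerge(
    target: Record<string, unknown>,
    ...sources: Record<string, unknown>[]
): Record<string, unknown> {
    const result = { ...target }
    for (const source of sources) {
        for (const key in source) {
            const sourceValue = source[key]
            const targetValue = result[key]
            if (isPlainObject(sourceValue) && isPlainObject(targetValue)) {
                result[key] = deepMerge(targetValue, sourceValue)
            } else if (sourceValue !== undefined) {
                result[key] = sourceValue
            }
        }
    }
    return result
}

const merged = deepMerge(
    { redis: { host: "localhost", port: 6379 }, llm: { temperature: 0.1 } },
    { redis: { port: 6380 } },
)
// merged.redis keeps host "localhost" but takes port 6380
```

This is why a project-level `.ipuaro.json` only needs to state the keys it changes.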

View File

@@ -0,0 +1,107 @@
import { z } from "zod"
/**
* Redis configuration schema.
*/
export const RedisConfigSchema = z.object({
host: z.string().default("localhost"),
port: z.number().int().positive().default(6379),
db: z.number().int().min(0).max(15).default(0),
password: z.string().optional(),
keyPrefix: z.string().default("ipuaro:"),
})
/**
* LLM configuration schema.
*/
export const LLMConfigSchema = z.object({
model: z.string().default("qwen2.5-coder:7b-instruct"),
contextWindow: z.number().int().positive().default(128_000),
temperature: z.number().min(0).max(2).default(0.1),
host: z.string().default("http://localhost:11434"),
timeout: z.number().int().positive().default(120_000),
})
/**
* Project configuration schema.
*/
export const ProjectConfigSchema = z.object({
ignorePatterns: z
.array(z.string())
.default(["node_modules", "dist", "build", ".git", ".next", ".nuxt", "coverage", ".cache"]),
binaryExtensions: z
.array(z.string())
.default([
".png",
".jpg",
".jpeg",
".gif",
".ico",
".svg",
".woff",
".woff2",
".ttf",
".eot",
".mp3",
".mp4",
".webm",
".pdf",
".zip",
".tar",
".gz",
]),
maxFileSize: z.number().int().positive().default(1_000_000),
supportedExtensions: z
.array(z.string())
.default([".ts", ".tsx", ".js", ".jsx", ".json", ".yaml", ".yml"]),
})
/**
* Watchdog configuration schema.
*/
export const WatchdogConfigSchema = z.object({
enabled: z.boolean().default(true),
debounceMs: z.number().int().positive().default(500),
})
/**
* Undo configuration schema.
*/
export const UndoConfigSchema = z.object({
stackSize: z.number().int().positive().default(10),
})
/**
* Edit configuration schema.
*/
export const EditConfigSchema = z.object({
autoApply: z.boolean().default(false),
})
/**
* Full configuration schema.
*/
export const ConfigSchema = z.object({
redis: RedisConfigSchema.default({}),
llm: LLMConfigSchema.default({}),
project: ProjectConfigSchema.default({}),
watchdog: WatchdogConfigSchema.default({}),
undo: UndoConfigSchema.default({}),
edit: EditConfigSchema.default({}),
})
/**
* Configuration type inferred from schema.
*/
export type Config = z.infer<typeof ConfigSchema>
export type RedisConfig = z.infer<typeof RedisConfigSchema>
export type LLMConfig = z.infer<typeof LLMConfigSchema>
export type ProjectConfig = z.infer<typeof ProjectConfigSchema>
export type WatchdogConfig = z.infer<typeof WatchdogConfigSchema>
export type UndoConfig = z.infer<typeof UndoConfigSchema>
export type EditConfig = z.infer<typeof EditConfigSchema>
/**
* Default configuration.
*/
export const DEFAULT_CONFIG: Config = ConfigSchema.parse({})
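Because every sub-schema carries defaults, a project-level `.ipuaro.json` only needs the fields it overrides. A hypothetical example (the values here are illustrative, not recommendations):

```json
{
    "llm": { "temperature": 0.2 },
    "redis": { "port": 6380 },
    "edit": { "autoApply": true }
}
```

Everything omitted falls back to `DEFAULT_CONFIG`, which is itself just `ConfigSchema.parse({})`.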

View File

@@ -0,0 +1,3 @@
// Shared constants
export * from "./config.js"
export * from "./messages.js"

View File

@@ -0,0 +1,56 @@
/**
* User-facing messages and labels.
*/
export const MESSAGES = {
// Status messages
STATUS_READY: "Ready",
STATUS_THINKING: "Thinking...",
STATUS_INDEXING: "Indexing...",
STATUS_ERROR: "Error",
// Error messages
ERROR_REDIS_UNAVAILABLE: "Redis is not available. Please start the Redis server.",
ERROR_OLLAMA_UNAVAILABLE: "Ollama is not available. Please start Ollama.",
ERROR_MODEL_NOT_FOUND: "Model not found. Would you like to pull it?",
ERROR_FILE_NOT_FOUND: "File not found",
ERROR_PARSE_FAILED: "Failed to parse file",
ERROR_TOOL_FAILED: "Tool execution failed",
ERROR_COMMAND_BLACKLISTED: "Command is blacklisted for security reasons",
ERROR_PATH_OUTSIDE_PROJECT: "Path is outside project directory",
// Confirmation messages
CONFIRM_APPLY_EDIT: "Apply this edit?",
CONFIRM_DELETE_FILE: "Delete this file?",
CONFIRM_RUN_COMMAND: "Run this command?",
CONFIRM_CREATE_FILE: "Create this file?",
CONFIRM_GIT_COMMIT: "Create this commit?",
// Info messages
INFO_SESSION_LOADED: "Session loaded",
INFO_SESSION_CREATED: "New session created",
INFO_INDEXING_COMPLETE: "Indexing complete",
INFO_EDIT_APPLIED: "Edit applied",
INFO_EDIT_CANCELLED: "Edit cancelled",
INFO_UNDO_SUCCESS: "Change reverted",
INFO_UNDO_EMPTY: "Nothing to undo",
// Help text
HELP_COMMANDS: `Available commands:
/help - Show this help
/clear - Clear chat history
/undo - Revert last file change
/sessions - Manage sessions
/status - Show status info
/reindex - Force reindexing
/auto-apply - Toggle auto-apply mode`,
HELP_HOTKEYS: `Hotkeys:
Ctrl+C - Interrupt / Exit
Ctrl+D - Exit with save
Ctrl+Z - Undo last change
↑/↓ - Navigate history
Tab - Autocomplete paths`,
} as const
export type MessageKey = keyof typeof MESSAGES

View File

@@ -0,0 +1,78 @@
/**
* Error types for ipuaro.
*/
export type ErrorType =
| "redis"
| "parse"
| "llm"
| "file"
| "command"
| "conflict"
| "validation"
| "timeout"
| "unknown"
/**
* Base error class for ipuaro.
*/
export class IpuaroError extends Error {
readonly type: ErrorType
readonly recoverable: boolean
readonly suggestion?: string
constructor(type: ErrorType, message: string, recoverable = true, suggestion?: string) {
super(message)
this.name = "IpuaroError"
this.type = type
this.recoverable = recoverable
this.suggestion = suggestion
}
static redis(message: string): IpuaroError {
return new IpuaroError(
"redis",
message,
false,
"Please ensure Redis is running: redis-server",
)
}
static parse(message: string, filePath?: string): IpuaroError {
const msg = filePath ? `${message} in ${filePath}` : message
return new IpuaroError("parse", msg, true, "File will be skipped")
}
static llm(message: string): IpuaroError {
return new IpuaroError(
"llm",
message,
true,
"Please ensure Ollama is running and the model is available",
)
}
static file(message: string): IpuaroError {
return new IpuaroError("file", message, true)
}
static command(message: string): IpuaroError {
return new IpuaroError("command", message, true)
}
static conflict(message: string): IpuaroError {
return new IpuaroError(
"conflict",
message,
true,
"File was modified externally. Regenerate or skip.",
)
}
static validation(message: string): IpuaroError {
return new IpuaroError("validation", message, true)
}
static timeout(message: string): IpuaroError {
return new IpuaroError("timeout", message, true, "Try again or increase timeout")
}
}
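A minimal sketch of how callers can branch on these errors. The class is trimmed here to the fields the example exercises, so the snippet stands alone:

```typescript
// Trimmed copy of IpuaroError: a typed, recoverable-aware Error subclass.
type ErrorType = "redis" | "parse" | "timeout"

class IpuaroError extends Error {
    constructor(
        readonly type: ErrorType,
        message: string,
        readonly recoverable = true,
        readonly suggestion?: string,
    ) {
        super(message)
        this.name = "IpuaroError"
    }
}

// Hypothetical helper: render an error for the status bar.
function describe(error: unknown): string {
    if (error instanceof IpuaroError) {
        const hint = error.suggestion ? ` (${error.suggestion})` : ""
        return `${error.type}: ${error.message}${hint}`
    }
    return "unknown error"
}

const redisError = new IpuaroError("redis", "connection refused", false, "start redis-server")
// describe(redisError) -> "redis: connection refused (start redis-server)"
```

The `recoverable` flag lets the UI decide between a retry prompt and a hard exit.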

View File

@@ -0,0 +1,2 @@
// Shared errors
export * from "./IpuaroError.js"

View File

@@ -0,0 +1,6 @@
// Shared module exports
export * from "./config/index.js"
export * from "./constants/index.js"
export * from "./errors/index.js"
export * from "./types/index.js"
export * from "./utils/index.js"

View File

@@ -0,0 +1,66 @@
/**
* Shared types for ipuaro.
*/
/**
* Application status.
*/
export type AppStatus = "ready" | "thinking" | "indexing" | "error"
/**
* File language type.
*/
export type FileLanguage = "ts" | "tsx" | "js" | "jsx" | "json" | "yaml" | "unknown"
/**
* User choice for confirmations.
*/
export type ConfirmChoice = "apply" | "cancel" | "edit"
/**
* User choice for errors.
*/
export type ErrorChoice = "retry" | "skip" | "abort"
/**
* Project structure node.
*/
export interface ProjectNode {
name: string
type: "file" | "directory"
path: string
children?: ProjectNode[]
}
/**
* Generic result type.
*/
export type Result<T, E = Error> = { success: true; data: T } | { success: false; error: E }
/**
* Create success result.
*/
export function ok<T>(data: T): Result<T, never> {
return { success: true, data }
}
/**
* Create error result.
*/
export function err<E>(error: E): Result<never, E> {
return { success: false, error }
}
/**
* Check if result is success.
*/
export function isOk<T, E>(result: Result<T, E>): result is { success: true; data: T } {
return result.success
}
/**
* Check if result is error.
*/
export function isErr<T, E>(result: Result<T, E>): result is { success: false; error: E } {
return !result.success
}

View File

@@ -0,0 +1,22 @@
import { createHash } from "node:crypto"
/**
* Calculate MD5 hash of content.
*/
export function md5(content: string): string {
return createHash("md5").update(content).digest("hex")
}
/**
* Calculate MD5 hash of file lines.
*/
export function hashLines(lines: string[]): string {
return md5(lines.join("\n"))
}
/**
* Generate short hash for IDs.
*/
export function shortHash(content: string, length = 8): string {
return md5(content).slice(0, length)
}

View File

@@ -0,0 +1,3 @@
// Shared utilities
export * from "./hash.js"
export * from "./tokens.js"

View File

@@ -0,0 +1,41 @@
/**
* Simple token estimation utilities.
* Uses a rough approximation of ~4 characters per token for English text.
*/
const CHARS_PER_TOKEN = 4
/**
* Estimate token count for text.
*/
export function estimateTokens(text: string): number {
return Math.ceil(text.length / CHARS_PER_TOKEN)
}
/**
* Estimate token count for array of strings.
*/
export function estimateTokensForLines(lines: string[]): number {
return estimateTokens(lines.join("\n"))
}
/**
* Truncate text to approximate token limit.
*/
export function truncateToTokens(text: string, maxTokens: number): string {
const maxChars = maxTokens * CHARS_PER_TOKEN
if (text.length <= maxChars) {
return text
}
return `${text.slice(0, maxChars)}...`
}
/**
* Format token count for display.
*/
export function formatTokenCount(tokens: number): string {
if (tokens >= 1000) {
return `${(tokens / 1000).toFixed(1)}k`
}
return tokens.toString()
}
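The estimation and formatting logic in action (reimplemented inline for illustration):

```typescript
const CHARS_PER_TOKEN = 4

// ~4 chars per token, rounded up so short strings still count as one token.
const estimateTokens = (text: string): number => Math.ceil(text.length / CHARS_PER_TOKEN)

function formatTokenCount(tokens: number): string {
    return tokens >= 1000 ? `${(tokens / 1000).toFixed(1)}k` : tokens.toString()
}

estimateTokens("a".repeat(10)) // 3 (10 chars / 4, rounded up)
formatTokenCount(1537)         // "1.5k"
```

The heuristic over-counts dense code and under-counts whitespace-heavy text, but is cheap enough to run on every context update.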

View File

@@ -0,0 +1,106 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
import { Project } from "../../../../src/domain/entities/Project.js"
describe("Project", () => {
beforeEach(() => {
vi.useFakeTimers()
vi.setSystemTime(new Date("2025-01-01T00:00:00Z"))
})
afterEach(() => {
vi.useRealTimers()
})
describe("constructor", () => {
it("should create project with generated name", () => {
const project = new Project("/home/user/projects/myapp")
expect(project.rootPath).toBe("/home/user/projects/myapp")
expect(project.name).toBe("projects-myapp")
expect(project.createdAt).toBe(Date.now())
expect(project.lastIndexedAt).toBeNull()
expect(project.fileCount).toBe(0)
expect(project.indexingInProgress).toBe(false)
})
it("should accept custom createdAt", () => {
const customTime = 1000000
const project = new Project("/path", customTime)
expect(project.createdAt).toBe(customTime)
})
})
describe("generateProjectName", () => {
it("should generate name from parent and project folder", () => {
expect(Project.generateProjectName("/home/user/projects/myapp")).toBe("projects-myapp")
})
it("should handle root-level project", () => {
expect(Project.generateProjectName("/myapp")).toBe("myapp")
})
})
describe("indexing lifecycle", () => {
it("should mark indexing started", () => {
const project = new Project("/path")
project.markIndexingStarted()
expect(project.indexingInProgress).toBe(true)
})
it("should mark indexing completed", () => {
const project = new Project("/path")
project.markIndexingStarted()
project.markIndexingCompleted(100)
expect(project.indexingInProgress).toBe(false)
expect(project.lastIndexedAt).toBe(Date.now())
expect(project.fileCount).toBe(100)
})
it("should mark indexing failed", () => {
const project = new Project("/path")
project.markIndexingStarted()
project.markIndexingFailed()
expect(project.indexingInProgress).toBe(false)
expect(project.lastIndexedAt).toBeNull()
})
})
describe("isIndexed", () => {
it("should return false when not indexed", () => {
const project = new Project("/path")
expect(project.isIndexed()).toBe(false)
})
it("should return true when indexed", () => {
const project = new Project("/path")
project.markIndexingCompleted(10)
expect(project.isIndexed()).toBe(true)
})
})
describe("getTimeSinceIndexed", () => {
it("should return null when not indexed", () => {
const project = new Project("/path")
expect(project.getTimeSinceIndexed()).toBeNull()
})
it("should return time since last indexed", () => {
const project = new Project("/path")
project.markIndexingCompleted(10)
vi.advanceTimersByTime(5000)
expect(project.getTimeSinceIndexed()).toBe(5000)
})
})
})

View File

@@ -0,0 +1,165 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
import { Session } from "../../../../src/domain/entities/Session.js"
import { createUserMessage } from "../../../../src/domain/value-objects/ChatMessage.js"
import type { UndoEntry } from "../../../../src/domain/value-objects/UndoEntry.js"
describe("Session", () => {
beforeEach(() => {
vi.useFakeTimers()
vi.setSystemTime(new Date("2025-01-01T00:00:00Z"))
})
afterEach(() => {
vi.useRealTimers()
})
it("should create session with defaults", () => {
const session = new Session("session-1", "test-project")
expect(session.id).toBe("session-1")
expect(session.projectName).toBe("test-project")
expect(session.history).toEqual([])
expect(session.undoStack).toEqual([])
expect(session.stats.totalTokens).toBe(0)
})
describe("addMessage", () => {
it("should add message to history", () => {
const session = new Session("1", "proj")
const msg = createUserMessage("Hello")
session.addMessage(msg)
expect(session.history).toHaveLength(1)
expect(session.history[0]).toBe(msg)
})
it("should update stats from message", () => {
const session = new Session("1", "proj")
const msg = {
role: "assistant" as const,
content: "Hi",
timestamp: Date.now(),
stats: { tokens: 50, timeMs: 100, toolCalls: 2 },
}
session.addMessage(msg)
expect(session.stats.totalTokens).toBe(50)
expect(session.stats.totalTimeMs).toBe(100)
expect(session.stats.toolCalls).toBe(2)
})
})
describe("undoStack", () => {
it("should add undo entry", () => {
const session = new Session("1", "proj")
const entry: UndoEntry = {
id: "undo-1",
timestamp: Date.now(),
filePath: "test.ts",
previousContent: ["old"],
newContent: ["new"],
description: "Edit",
}
session.addUndoEntry(entry)
expect(session.undoStack).toHaveLength(1)
})
it("should limit undo stack size", () => {
const session = new Session("1", "proj")
for (let i = 0; i < 15; i++) {
session.addUndoEntry({
id: `undo-${i}`,
timestamp: Date.now(),
filePath: "test.ts",
previousContent: [],
newContent: [],
description: `Edit ${i}`,
})
}
expect(session.undoStack).toHaveLength(10)
expect(session.undoStack[0].id).toBe("undo-5")
})
it("should pop undo entry", () => {
const session = new Session("1", "proj")
const entry: UndoEntry = {
id: "undo-1",
timestamp: Date.now(),
filePath: "test.ts",
previousContent: [],
newContent: [],
description: "Edit",
}
session.addUndoEntry(entry)
const popped = session.popUndoEntry()
expect(popped).toBe(entry)
expect(session.undoStack).toHaveLength(0)
})
})
describe("inputHistory", () => {
it("should add input to history", () => {
const session = new Session("1", "proj")
session.addInputToHistory("command 1")
session.addInputToHistory("command 2")
expect(session.inputHistory).toEqual(["command 1", "command 2"])
})
it("should not add duplicate consecutive inputs", () => {
const session = new Session("1", "proj")
session.addInputToHistory("command")
session.addInputToHistory("command")
expect(session.inputHistory).toHaveLength(1)
})
it("should not add empty inputs", () => {
const session = new Session("1", "proj")
session.addInputToHistory("")
session.addInputToHistory(" ")
expect(session.inputHistory).toHaveLength(0)
})
})
describe("clearHistory", () => {
it("should clear history and context", () => {
const session = new Session("1", "proj")
session.addMessage(createUserMessage("Hello"))
session.context.filesInContext = ["file1.ts"]
session.clearHistory()
expect(session.history).toHaveLength(0)
expect(session.context.filesInContext).toHaveLength(0)
})
})
describe("getSessionDurationFormatted", () => {
it("should format minutes only", () => {
const session = new Session("1", "proj")
vi.advanceTimersByTime(15 * 60 * 1000)
expect(session.getSessionDurationFormatted()).toBe("15m")
})
it("should format hours and minutes", () => {
const session = new Session("1", "proj")
vi.advanceTimersByTime(90 * 60 * 1000)
expect(session.getSessionDurationFormatted()).toBe("1h 30m")
})
})
})

View File

@@ -0,0 +1,76 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
import {
createUserMessage,
createAssistantMessage,
createToolMessage,
createSystemMessage,
} from "../../../../src/domain/value-objects/ChatMessage.js"
describe("ChatMessage", () => {
beforeEach(() => {
vi.useFakeTimers()
vi.setSystemTime(new Date("2025-01-01T00:00:00Z"))
})
afterEach(() => {
vi.useRealTimers()
})
describe("createUserMessage", () => {
it("should create user message", () => {
const msg = createUserMessage("Hello")
expect(msg.role).toBe("user")
expect(msg.content).toBe("Hello")
expect(msg.timestamp).toBe(Date.now())
})
})
describe("createAssistantMessage", () => {
it("should create assistant message without tool calls", () => {
const msg = createAssistantMessage("Response")
expect(msg.role).toBe("assistant")
expect(msg.content).toBe("Response")
expect(msg.toolCalls).toBeUndefined()
})
it("should create assistant message with tool calls", () => {
const toolCalls = [{ id: "1", name: "get_lines", params: {}, timestamp: Date.now() }]
const stats = { tokens: 100, timeMs: 500, toolCalls: 1 }
const msg = createAssistantMessage("Response", toolCalls, stats)
expect(msg.toolCalls).toEqual(toolCalls)
expect(msg.stats).toEqual(stats)
})
})
describe("createToolMessage", () => {
it("should create tool message with results", () => {
const results = [{ callId: "1", success: true, data: "data", executionTimeMs: 10 }]
const msg = createToolMessage(results)
expect(msg.role).toBe("tool")
expect(msg.toolResults).toEqual(results)
expect(msg.content).toContain("[1] Success")
})
it("should format error results", () => {
const results = [
{ callId: "2", success: false, error: "Not found", executionTimeMs: 5 },
]
const msg = createToolMessage(results)
expect(msg.content).toContain("[2] Error: Not found")
})
})
describe("createSystemMessage", () => {
it("should create system message", () => {
const msg = createSystemMessage("System prompt")
expect(msg.role).toBe("system")
expect(msg.content).toBe("System prompt")
})
})
})

View File

@@ -0,0 +1,19 @@
import { describe, it, expect } from "vitest"
import { createEmptyFileAST } from "../../../../src/domain/value-objects/FileAST.js"
describe("FileAST", () => {
describe("createEmptyFileAST", () => {
it("should create empty AST with all arrays empty", () => {
const ast = createEmptyFileAST()
expect(ast.imports).toEqual([])
expect(ast.exports).toEqual([])
expect(ast.functions).toEqual([])
expect(ast.classes).toEqual([])
expect(ast.interfaces).toEqual([])
expect(ast.typeAliases).toEqual([])
expect(ast.parseError).toBe(false)
expect(ast.parseErrorMessage).toBeUndefined()
})
})
})

View File

@@ -0,0 +1,36 @@
import { describe, it, expect } from "vitest"
import { createFileData, isFileDataEqual } from "../../../../src/domain/value-objects/FileData.js"
describe("FileData", () => {
describe("createFileData", () => {
it("should create FileData with all fields", () => {
const lines = ["line1", "line2"]
const hash = "abc123"
const size = 100
const lastModified = Date.now()
const result = createFileData(lines, hash, size, lastModified)
expect(result.lines).toEqual(lines)
expect(result.hash).toBe(hash)
expect(result.size).toBe(size)
expect(result.lastModified).toBe(lastModified)
})
})
describe("isFileDataEqual", () => {
it("should return true for equal hashes", () => {
const a = createFileData(["a"], "hash1", 1, 1)
const b = createFileData(["b"], "hash1", 2, 2)
expect(isFileDataEqual(a, b)).toBe(true)
})
it("should return false for different hashes", () => {
const a = createFileData(["a"], "hash1", 1, 1)
const b = createFileData(["a"], "hash2", 1, 1)
expect(isFileDataEqual(a, b)).toBe(false)
})
})
})

View File

@@ -0,0 +1,45 @@
import { describe, it, expect } from "vitest"
import { createFileMeta, isHubFile } from "../../../../src/domain/value-objects/FileMeta.js"
describe("FileMeta", () => {
describe("createFileMeta", () => {
it("should create FileMeta with defaults", () => {
const meta = createFileMeta()
expect(meta.complexity.loc).toBe(0)
expect(meta.complexity.nesting).toBe(0)
expect(meta.complexity.cyclomaticComplexity).toBe(1)
expect(meta.complexity.score).toBe(0)
expect(meta.dependencies).toEqual([])
expect(meta.dependents).toEqual([])
expect(meta.isHub).toBe(false)
expect(meta.isEntryPoint).toBe(false)
expect(meta.fileType).toBe("unknown")
})
it("should merge partial values", () => {
const meta = createFileMeta({
isHub: true,
fileType: "source",
dependencies: ["dep1.ts"],
})
expect(meta.isHub).toBe(true)
expect(meta.fileType).toBe("source")
expect(meta.dependencies).toEqual(["dep1.ts"])
expect(meta.dependents).toEqual([])
})
})
describe("isHubFile", () => {
it("should return true for >5 dependents", () => {
expect(isHubFile(6)).toBe(true)
expect(isHubFile(10)).toBe(true)
})
it("should return false for <=5 dependents", () => {
expect(isHubFile(5)).toBe(false)
expect(isHubFile(0)).toBe(false)
})
})
})

View File

@@ -0,0 +1,31 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
import { createToolCall } from "../../../../src/domain/value-objects/ToolCall.js"
describe("ToolCall", () => {
beforeEach(() => {
vi.useFakeTimers()
vi.setSystemTime(new Date("2025-01-01T00:00:00Z"))
})
afterEach(() => {
vi.useRealTimers()
})
describe("createToolCall", () => {
it("should create tool call with all fields", () => {
const params = { path: "test.ts", line: 10 }
const call = createToolCall("call-1", "get_lines", params)
expect(call.id).toBe("call-1")
expect(call.name).toBe("get_lines")
expect(call.params).toEqual(params)
expect(call.timestamp).toBe(Date.now())
})
it("should handle empty params", () => {
const call = createToolCall("call-2", "git_status", {})
expect(call.params).toEqual({})
})
})
})
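
The fake-timer setup above exists because `createToolCall` stamps the call with `Date.now()` at construction time. A minimal sketch consistent with that (not necessarily the actual implementation):

```typescript
// Hypothetical sketch inferred from the tests above.
interface ToolCall {
    readonly id: string
    readonly name: string
    readonly params: Record<string, unknown>
    readonly timestamp: number
}

// Timestamp is captured at creation, which is why the tests freeze the clock.
function createToolCall(id: string, name: string, params: Record<string, unknown>): ToolCall {
    return { id, name, params, timestamp: Date.now() }
}
```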


@@ -0,0 +1,32 @@
import { describe, it, expect } from "vitest"
import {
createSuccessResult,
createErrorResult,
} from "../../../../src/domain/value-objects/ToolResult.js"
describe("ToolResult", () => {
describe("createSuccessResult", () => {
it("should create success result", () => {
const data = { lines: ["line1", "line2"] }
const result = createSuccessResult("call-1", data, 50)
expect(result.callId).toBe("call-1")
expect(result.success).toBe(true)
expect(result.data).toEqual(data)
expect(result.executionTimeMs).toBe(50)
expect(result.error).toBeUndefined()
})
})
describe("createErrorResult", () => {
it("should create error result", () => {
const result = createErrorResult("call-2", "File not found", 10)
expect(result.callId).toBe("call-2")
expect(result.success).toBe(false)
expect(result.error).toBe("File not found")
expect(result.executionTimeMs).toBe(10)
expect(result.data).toBeUndefined()
})
})
})
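
These tests describe a discriminated result type: a success carries `data` and leaves `error` undefined, while a failure carries `error` and leaves `data` undefined, with both recording the originating `callId` and execution time. A sketch along those lines:

```typescript
// Hypothetical sketch inferred from the tests above.
interface ToolResult {
    readonly callId: string
    readonly success: boolean
    readonly data?: unknown
    readonly error?: string
    readonly executionTimeMs: number
}

function createSuccessResult(callId: string, data: unknown, executionTimeMs: number): ToolResult {
    return { callId, success: true, data, executionTimeMs }
}

function createErrorResult(callId: string, error: string, executionTimeMs: number): ToolResult {
    return { callId, success: false, error, executionTimeMs }
}
```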


@@ -0,0 +1,59 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
import { createUndoEntry, canUndo } from "../../../../src/domain/value-objects/UndoEntry.js"
describe("UndoEntry", () => {
beforeEach(() => {
vi.useFakeTimers()
vi.setSystemTime(new Date("2025-01-01T00:00:00Z"))
})
afterEach(() => {
vi.useRealTimers()
})
describe("createUndoEntry", () => {
it("should create undo entry with all fields", () => {
const entry = createUndoEntry(
"undo-1",
"test.ts",
["old line"],
["new line"],
"Edit line 1",
)
expect(entry.id).toBe("undo-1")
expect(entry.filePath).toBe("test.ts")
expect(entry.previousContent).toEqual(["old line"])
expect(entry.newContent).toEqual(["new line"])
expect(entry.description).toBe("Edit line 1")
expect(entry.timestamp).toBe(Date.now())
expect(entry.toolCallId).toBeUndefined()
})
it("should create undo entry with toolCallId", () => {
const entry = createUndoEntry("undo-2", "test.ts", [], [], "Create file", "tool-123")
expect(entry.toolCallId).toBe("tool-123")
})
})
describe("canUndo", () => {
it("should return true when current content matches newContent", () => {
const entry = createUndoEntry("undo-1", "test.ts", ["old"], ["new"], "Edit")
expect(canUndo(entry, ["new"])).toBe(true)
})
it("should return false when content differs", () => {
const entry = createUndoEntry("undo-1", "test.ts", ["old"], ["new"], "Edit")
expect(canUndo(entry, ["modified"])).toBe(false)
})
it("should return false when length differs", () => {
const entry = createUndoEntry("undo-1", "test.ts", ["old"], ["new"], "Edit")
expect(canUndo(entry, ["new", "extra"])).toBe(false)
})
})
})
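
The `canUndo` tests encode the key invariant: an entry is only undoable while the file still matches the content that entry wrote, compared line by line including length. A sketch consistent with that (field names taken from the assertions above):

```typescript
// Hypothetical sketch inferred from the tests above.
interface UndoEntry {
    readonly id: string
    readonly filePath: string
    readonly previousContent: string[]
    readonly newContent: string[]
    readonly description: string
    readonly timestamp: number
    readonly toolCallId?: string
}

function createUndoEntry(
    id: string,
    filePath: string,
    previousContent: string[],
    newContent: string[],
    description: string,
    toolCallId?: string,
): UndoEntry {
    return { id, filePath, previousContent, newContent, description, timestamp: Date.now(), toolCallId }
}

// Undo is safe only if the file still holds exactly what this entry wrote:
// same line count and identical lines.
function canUndo(entry: UndoEntry, currentContent: string[]): boolean {
    return (
        entry.newContent.length === currentContent.length &&
        entry.newContent.every((line, i) => line === currentContent[i])
    )
}
```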


@@ -0,0 +1,177 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"
import type { RedisConfig } from "../../../../src/shared/constants/config.js"
import { IpuaroError } from "../../../../src/shared/errors/IpuaroError.js"
const mockRedisInstance = {
connect: vi.fn(),
quit: vi.fn(),
ping: vi.fn(),
config: vi.fn(),
status: "ready" as string,
}
vi.mock("ioredis", () => {
return {
Redis: vi.fn(() => mockRedisInstance),
}
})
const { RedisClient } = await import("../../../../src/infrastructure/storage/RedisClient.js")
describe("RedisClient", () => {
const defaultConfig: RedisConfig = {
host: "localhost",
port: 6379,
db: 0,
keyPrefix: "ipuaro:",
}
beforeEach(() => {
vi.clearAllMocks()
mockRedisInstance.status = "ready"
mockRedisInstance.connect.mockResolvedValue(undefined)
mockRedisInstance.quit.mockResolvedValue(undefined)
mockRedisInstance.ping.mockResolvedValue("PONG")
mockRedisInstance.config.mockResolvedValue(undefined)
})
afterEach(() => {
vi.restoreAllMocks()
})
describe("constructor", () => {
it("should create instance with config", () => {
const client = new RedisClient(defaultConfig)
expect(client).toBeDefined()
expect(client.isConnected()).toBe(false)
})
})
describe("connect", () => {
it("should connect to Redis", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
expect(mockRedisInstance.connect).toHaveBeenCalled()
expect(client.isConnected()).toBe(true)
})
it("should configure AOF on connect", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
expect(mockRedisInstance.config).toHaveBeenCalledWith("SET", "appendonly", "yes")
expect(mockRedisInstance.config).toHaveBeenCalledWith("SET", "appendfsync", "everysec")
})
it("should not reconnect if already connected", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
await client.connect()
expect(mockRedisInstance.connect).toHaveBeenCalledTimes(1)
})
it("should throw IpuaroError on connection failure", async () => {
mockRedisInstance.connect.mockRejectedValue(new Error("Connection refused"))
const client = new RedisClient(defaultConfig)
await expect(client.connect()).rejects.toThrow(IpuaroError)
await expect(client.connect()).rejects.toMatchObject({
type: "redis",
})
})
it("should handle AOF config failure gracefully", async () => {
mockRedisInstance.config.mockRejectedValue(new Error("CONFIG disabled"))
const client = new RedisClient(defaultConfig)
await client.connect()
expect(client.isConnected()).toBe(true)
})
})
describe("disconnect", () => {
it("should disconnect from Redis", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
await client.disconnect()
expect(mockRedisInstance.quit).toHaveBeenCalled()
expect(client.isConnected()).toBe(false)
})
it("should handle disconnect when not connected", async () => {
const client = new RedisClient(defaultConfig)
await client.disconnect()
expect(mockRedisInstance.quit).not.toHaveBeenCalled()
})
})
describe("isConnected", () => {
it("should return false when not connected", () => {
const client = new RedisClient(defaultConfig)
expect(client.isConnected()).toBe(false)
})
it("should return true when connected and ready", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
expect(client.isConnected()).toBe(true)
})
it("should return false when client status is not ready", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
mockRedisInstance.status = "connecting"
expect(client.isConnected()).toBe(false)
})
})
describe("getClient", () => {
it("should return Redis client when connected", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
const redis = client.getClient()
expect(redis).toBe(mockRedisInstance)
})
it("should throw when not connected", () => {
const client = new RedisClient(defaultConfig)
expect(() => client.getClient()).toThrow(IpuaroError)
expect(() => client.getClient()).toThrow("not connected")
})
})
describe("ping", () => {
it("should return true on successful ping", async () => {
const client = new RedisClient(defaultConfig)
await client.connect()
const result = await client.ping()
expect(result).toBe(true)
})
it("should return false when not connected", async () => {
const client = new RedisClient(defaultConfig)
const result = await client.ping()
expect(result).toBe(false)
})
it("should return false on ping failure", async () => {
mockRedisInstance.ping.mockRejectedValue(new Error("Timeout"))
const client = new RedisClient(defaultConfig)
await client.connect()
const result = await client.ping()
expect(result).toBe(false)
})
})
})
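
Stripped of the ioredis specifics, the `RedisClient` tests exercise a small connection state machine: idempotent `connect`, `disconnect` that only quits when connected, `isConnected` that also checks the underlying client's `status`, and a `ping` that never throws. A self-contained sketch of that logic, with `MinimalRedis` standing in for the real ioredis client (names and the `MiniRedisClient` class are illustrative, not the actual API):

```typescript
// MinimalRedis stands in for the ioredis client used by the real RedisClient.
interface MinimalRedis {
    connect(): Promise<void>
    quit(): Promise<void>
    ping(): Promise<string>
    status: string
}

class MiniRedisClient {
    private connected = false

    constructor(private readonly redis: MinimalRedis) {}

    async connect(): Promise<void> {
        // Idempotent: calling connect() twice dials only once.
        if (this.connected) return
        await this.redis.connect()
        this.connected = true
    }

    async disconnect(): Promise<void> {
        if (!this.connected) return
        await this.redis.quit()
        this.connected = false
    }

    // Connected only if we dialed AND the underlying client reports "ready".
    isConnected(): boolean {
        return this.connected && this.redis.status === "ready"
    }

    // ping() never throws: failures and disconnected states both yield false.
    async ping(): Promise<boolean> {
        if (!this.isConnected()) return false
        try {
            return (await this.redis.ping()) === "PONG"
        } catch {
            return false
        }
    }
}
```

The real implementation also configures AOF persistence on connect and wraps failures in `IpuaroError`; those details are omitted here.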

Some files were not shown because too many files have changed in this diff.