01 · Capture
Ingest every event.
AI provider sessions, VCS commits, pull requests, deploys, manual work notes, explicit architectural decisions. Events arrive via webhooks, SDK clients, or direct API calls, and each one is timestamped, hashed, and written to the append-only store.
Multi-provider · webhooks · SDK
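The capture step above can be sketched as a small append-only log: each incoming event gets a timestamp and a content hash before it is stored. This is a minimal illustration, assuming an in-memory list in place of the real durable, multi-tenant store, and hypothetical field names.

```python
import hashlib
import json
import time

def capture_event(store: list, kind: str, payload: dict) -> dict:
    """Timestamp, hash, and append one event.

    Illustrative only: the production store is durable and append-only;
    a Python list stands in for it here.
    """
    event = {
        "kind": kind,        # e.g. "session", "commit", "decision"
        "payload": payload,
        "ts": time.time(),
    }
    # Canonical JSON (sorted keys) so the same content always hashes the same.
    body = json.dumps(event, sort_keys=True).encode()
    event["hash"] = hashlib.sha256(body).hexdigest()
    store.append(event)      # append-only: existing events are never mutated
    return event

store = []
e = capture_event(store, "commit", {"sha": "abc123", "repo": "core"})
```

Hashing a canonical serialization at write time is what makes later tamper checks possible: any change to the stored event changes its digest.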
02 · Score
Classify and weight.
Each event is classified by kind (session, commit, decision, etc.), tagged to a work item, and weighted against its benchmark category. The classifier is deterministic and versioned — re-running on the same inputs yields the same outputs.
Classifier v1 · benchmark DB
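A deterministic, versioned classifier can be sketched as a pure function over the event: no randomness, no hidden state, and the version stamped into every output. The category weights below are hypothetical stand-ins for the benchmark DB.

```python
CLASSIFIER_VERSION = "v1"

# Hypothetical per-category weights; real values live in the benchmark DB.
CATEGORY_WEIGHTS = {"session": 1.0, "commit": 0.6, "decision": 1.5}

def classify(event: dict) -> dict:
    """Deterministic scoring: identical input always yields identical output."""
    kind = event["kind"]
    return {
        "kind": kind,
        "work_item": event["payload"].get("work_item", "unassigned"),
        "weight": CATEGORY_WEIGHTS.get(kind, 0.0),
        "classifier_version": CLASSIFIER_VERSION,
    }

c = classify({"kind": "commit", "payload": {"work_item": "WI-42"}})
# Re-running on the same input reproduces the same result exactly.
assert c == classify({"kind": "commit", "payload": {"work_item": "WI-42"}})
```

Pinning the version in the output is what lets a re-run be audited: any change in scores must trace to a version change, not drift.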
03 · Measure
Convert to equivalent weeks.
The Effort Engine applies expertise multipliers, confidence intervals, and human corrections to produce equivalent engineering-week values per work item. Every derived number is a pure function of the recorded events plus the engine version.
Effort engine · HITL calibration
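The measurement step can be sketched as a pure function of scored events plus an engine version. The formula, multiplier, and confidence band below are illustrative assumptions, not the production Effort Engine.

```python
ENGINE_VERSION = "v1"

def equivalent_weeks(scored_events, expertise_multiplier=1.0, correction=0.0):
    """Equivalent engineering-weeks for one work item.

    Pure function: output depends only on the recorded events, the
    parameters, and the engine version. Formula is a sketch.
    """
    # Weighted hours converted to weeks at 40 hours/week.
    base = sum(e["weight"] * e["hours"] for e in scored_events) / 40.0
    # Expertise multiplier plus a human-in-the-loop correction offset.
    point = base * expertise_multiplier + correction
    # Illustrative symmetric confidence band around the point estimate.
    return {
        "weeks": point,
        "low": point * 0.8,
        "high": point * 1.2,
        "engine_version": ENGINE_VERSION,
    }

est = equivalent_weeks(
    [{"weight": 0.6, "hours": 20}, {"weight": 1.0, "hours": 30}],
    expertise_multiplier=1.25,
)
```

Because the function is pure, re-running it against the same event log and engine version reproduces every derived number exactly.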
04 · Document
Assemble narratives.
The document assembler composes the outputs the audience expects: four-part test narratives for IRS substantiation, engineering summaries for investor decks, work papers for CPAs. All derived from the same underlying measurements — no duplication, no drift.
SQL-first assembly · 22 locales
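The single-source assembly idea can be sketched as one measurement set feeding several audience-specific renderings. The templates and audience names here are hypothetical stand-ins for the SQL-first assembly layer.

```python
def assemble(measurements, audience: str) -> str:
    """Render one document from shared measurements.

    Every audience reads from the same inputs, so figures cannot
    drift between document types. Templates are illustrative.
    """
    total = sum(m["weeks"] for m in measurements)
    if audience == "irs":
        return f"Four-part test narrative covering {len(measurements)} work items."
    if audience == "investor":
        return f"Engineering summary: {total:.1f} equivalent weeks delivered."
    if audience == "cpa":
        return f"Work papers itemizing {len(measurements)} measured work items."
    raise ValueError(f"unknown audience: {audience}")

ms = [{"weeks": 1.3}, {"weeks": 2.0}]
summary = assemble(ms, "investor")
```

Keeping the measurements as the only data source is the design choice that prevents duplication: documents differ in presentation, never in underlying numbers.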
05 · Publish
Seal and export.
Reports are rendered to PDF, XLSX, CSV, and JSON, hashed with SHA-256, signed, and written to tenant storage. Post-generation tampering is detectable via hash comparison. Every published artifact is independently verifiable.
PDF · XLSX · CSV · JSON · SHA-256
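The seal-and-verify step reduces to computing a SHA-256 digest at publish time and recomputing it later; any post-generation modification changes the digest. A minimal sketch, with placeholder report bytes:

```python
import hashlib

def seal(artifact: bytes) -> str:
    """Hash a rendered report; the digest is published with the artifact."""
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, expected_digest: str) -> bool:
    """Tamper check: recompute the digest and compare."""
    return seal(artifact) == expected_digest

report = b"%PDF-1.7 placeholder rendered report bytes"
digest = seal(report)

assert verify(report, digest)             # untouched artifact verifies
assert not verify(report + b"x", digest)  # any modification is detected
```

Because anyone holding the artifact and the published digest can run this check, verification is independent of the system that produced the report. (Signing the digest, as the step above describes, additionally proves who sealed it.)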