Our story
We are building the lineage layer AI-era software never got.
As engineers and engineering leaders, we've seen firsthand what happens when AI writes production code and nobody tracks it. The incident happens. The postmortem starts. The trail is cold.
Our mission
Give every engineering team complete visibility into what AI contributed — before it breaks production, not after.
The problem we see
The industry adopted AI coding tools in months. The accountability layer never followed.
Claude, Copilot, Cursor. They're in every IDE — writing retry handlers, auth flows, database migrations. In many teams, the model is writing the majority of code that goes to production.
But the PR looks like any other PR. Reviewers see a diff. They can't see which model wrote it, what the acceptance rate was, or whether the developer actually reviewed it before clicking accept.
When that code causes an incident, the postmortem blames the engineer who merged it. The model that generated the logic is never mentioned. The session is gone. The risk was never scored.
This isn't a people problem. It's an infrastructure problem. The tooling to track AI authorship through the entire delivery lifecycle — from IDE to production — simply didn't exist. Until now.
Why now
The window to build this is closing fast.
AI coding goes mainstream.
GitHub Copilot crosses 1M users. Claude, GPT-4, and Cursor arrive. Every engineering team starts shipping AI-generated code — but no tooling exists to understand what the model actually wrote.
The accountability gap opens.
AI-assisted code becomes the default. PRs look identical whether a human wrote them or a model did. Postmortems blame the wrong author. Compliance teams start asking questions nobody can answer.
Regulation arrives.
EO 14028, NIST SSDF, and the EU Cyber Resilience Act codify what forward-thinking teams already knew: you need provenance. SBOMs are table stakes. AIBOMs are the next frontier.
We connect the trail.
SenseLab is the lineage layer the industry skipped. Every release, every model, every decision — permanently traceable from IDE to incident.
What we believe
Principles we build by.
Authorship is accountability.
Every line of code has a history. Who wrote it, which model generated it, who accepted it. That history doesn't disappear at merge — it just becomes invisible. We make it visible.
The gap between review and incident is where trust breaks.
Teams adopt AI coding tools in days. The governance to match takes months — if it ever comes. We close that gap by design, not by policy memo.
Slowness is not safety.
The answer to AI in your codebase is not to slow down. It's to ship with proof. Full lineage means you move fast and still know exactly what you shipped.
Compliance follows observability.
AIBOMs, SBOMs, provenance records — these aren't paperwork. They're the byproduct of a system that actually knows what happened. Build observability first; compliance comes for free.
Join us
Join us if you believe every merge deserves a trail.
Because the next incident is already in your codebase. You just can't see it yet.