About Buildermark
Buildermark is an open source tool that measures what percentage of a codebase was generated by AI by matching coding-agent diffs against commits. It runs natively on macOS, Windows, and Linux and provides a searchable archive of agent conversations tied to project commits.
Review
Buildermark provides a clear, diff-based approach to attributing code contributions to coding agents, making the result deterministic and easy to audit. As a newly launched, free project, it focuses on giving developers and teams a concrete metric for AI involvement and a way to review agent outputs alongside committed code.
Key Features
- Diff-based attribution: matches agent-generated diffs to commits to calculate percentage of AI-written code.
- Conversation archive: stores and indexes agent conversations so teams can review outputs and decisions.
- Cross-platform native support: runs locally on macOS, Windows, and Linux.
- Open source and free at launch: source code is available and there is no cost to start using it.
- Planned team aggregation: roadmapped support for aggregating metrics across teams and projects.
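The diff-based attribution described above can be sketched as follows. This is a minimal, hypothetical illustration of the general technique (comparing added lines in an agent diff against added lines in a commit diff); the function names are invented, and Buildermark's actual matching logic may be more sophisticated.

```python
def added_lines(diff_text):
    """Collect the lines a unified diff adds, skipping '+++' file headers."""
    return {
        line[1:].strip()
        for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    }

def agent_attribution(agent_diff, commit_diff):
    """Fraction of the commit's added lines that also appear in the agent diff."""
    agent = added_lines(agent_diff)
    commit = added_lines(commit_diff)
    if not commit:
        return 0.0
    return len(commit & agent) / len(commit)

# Hypothetical example: the agent wrote two lines, the human added one more.
agent = "+++ b/app.py\n+def greet():\n+    return 'hi'\n"
commit = "+++ b/app.py\n+def greet():\n+    return 'hi'\n+VERSION = '1.0'\n"
print(agent_attribution(agent, commit))  # 2 of 3 added lines match
```

Because the input is just two diffs, the result is deterministic: rerunning the comparison on the same commit always yields the same percentage, which is what makes this style of metric easy to audit.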
Pricing and Value
Buildermark is free and open source at launch, which makes it accessible for individual developers and small teams to try without financial risk. Its value comes from providing an auditable, percentage-based metric that helps compare agent performance, track adoption, and surface which workflows or models produce the most useful output. Paid hosting or advanced team features may appear later as the project matures.
Pros
- Deterministic, diff-based measurement offers transparent attribution that is easy to verify.
- Runs locally on major desktop platforms, reducing dependency on external services.
- Open source licensing allows inspection, modification, and self-hosting.
- Conversation archiving helps teams learn from agent interactions and reproduce context for commits.
- Useful for tracking which models and workflows produce the best results across projects.
Cons
- The metric can oversimplify nuanced collaboration: code produced under heavy human direction or revision may still register as agent contribution.
- Early-stage feature set; integrations, dashboards, and hosted team services are limited or planned rather than complete.
- Archiving agent conversations raises privacy and storage considerations that teams must manage.
Buildermark is a good fit for developers and engineering managers who want an evidence-based view of AI contribution in codebases, whether for auditing or process improvement. It is most useful for teams willing to self-host or inspect open source tooling and for those who want a deterministic, reproducible metric to compare agent performance over time. Organizations seeking turnkey hosted analytics should note that some features are still planned rather than available today.
Try the demo to browse agent conversations and see example attribution: Buildermark demo.