The tech-debt calculator: team size times PR hours times incidents times story-point drag
Every assumption is visible and overridable. The output is a range, not a point estimate, because empirical noise demands honesty. Export to CSV and deep-link your scenario.
Inputs: team, PR review drag, incident cost, story-point drag, onboarding drag.
Annual tech-debt cost: $242,875 – $402,300

- PR review drag: $34,020 – $63,180 (Cisco/SmartBear 2007; Google SEaG ch. 9)
- Incident cost: $6,978 – $25,920 (Rahman 2025; CodeScene research)
- Story-point drag (rework): $172,800 – $259,200 (DORA State of DevOps 2024)
- Onboarding drag: $29,077 – $54,000 (Pluralsight State of Developer Onboarding 2024)
Outputs are illustrative ranges. The methodology is documented in detail below. Cost models are not investment advice; your mileage will vary.
Component 1: PR review drag
Source assumptions: Google's Software Engineering at Google (ch. 9) reports median PR review latency of approximately 24 hours and notes that high-complexity diffs are primary contributors to latency. Cisco/SmartBear's “Best Kept Secrets of Peer Code Review” (2007) found reviewer effectiveness drops after 60 minutes and 400 LOC. The smell overhead percentage represents the fraction of review time attributable to code-quality friction rather than legitimate review work. Default: 30%.
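To make the arithmetic concrete, here is a minimal sketch of how this component can be computed from the inputs above. The parameter names, defaults, and weekly cadence are illustrative assumptions, not the calculator's exact internals.

```typescript
// Sketch of the PR review drag component. All names and defaults are illustrative.
interface PrReviewInputs {
  engineers: number;             // reviewers on the team
  prsPerEngineerPerWeek: number; // review load per engineer
  reviewHoursPerPr: number;      // check your GitHub/GitLab data
  hourlyRate: number;            // fully loaded cost per engineer-hour
  workingWeeksPerYear: number;
}

// smellOverhead: fraction of review time attributable to code-quality friction
// rather than legitimate review work (default 30%, range roughly 20–40%).
function prReviewDrag(inputs: PrReviewInputs, smellOverhead: number): number {
  const reviewHoursPerYear =
    inputs.engineers *
    inputs.prsPerEngineerPerWeek *
    inputs.reviewHoursPerPr *
    inputs.workingWeeksPerYear;
  return reviewHoursPerYear * smellOverhead * inputs.hourlyRate;
}

// Report a range, not a point estimate: low end at 20% overhead, high end at 40%.
const team: PrReviewInputs = {
  engineers: 10,
  prsPerEngineerPerWeek: 3,
  reviewHoursPerPr: 1.5,
  hourlyRate: 90,
  workingWeeksPerYear: 45,
};
console.log([prReviewDrag(team, 0.2), prReviewDrag(team, 0.4)]);
```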
Component 2: Incident cost
Source assumptions: Gartner's MTTR benchmarks and incidentcost.com sibling research. The quality attribution percentage (default 40%) reflects research finding that 40-60% of production incidents have root causes traceable to code-quality factors rather than infrastructure failures or external dependencies (Rahman 2025 meta-study; CodeScene's hotspot research).
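A minimal sketch of the incident cost component follows, assuming the attribution split described above. Parameter names and defaults are illustrative, not the calculator's exact internals.

```typescript
// Sketch of the incident cost component. All names and defaults are illustrative.
interface IncidentInputs {
  incidentsPerMonth: number;
  mttrHours: number;             // mean time to restore, per incident
  respondersPerIncident: number; // engineers pulled into the response
  hourlyRate: number;            // fully loaded cost per engineer-hour
}

// qualityAttribution: fraction of incidents whose root cause traces back to
// code quality rather than infrastructure or external dependencies
// (default 40%, research range roughly 40–60%).
function incidentCost(inputs: IncidentInputs, qualityAttribution: number): number {
  const responseHoursPerYear =
    inputs.incidentsPerMonth * 12 * inputs.mttrHours * inputs.respondersPerIncident;
  return responseHoursPerYear * qualityAttribution * inputs.hourlyRate;
}

// Range: the low end assumes fewer, shorter incidents; the high end the opposite.
const low = incidentCost(
  { incidentsPerMonth: 2, mttrHours: 2, respondersPerIncident: 2, hourlyRate: 90 },
  0.4,
);
const high = incidentCost(
  { incidentsPerMonth: 4, mttrHours: 4, respondersPerIncident: 3, hourlyRate: 90 },
  0.5,
);
console.log([low, high]);
```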
Component 3: Story-point drag
Source assumptions: DORA's State of DevOps 2024 finds low-performing teams spend 10-25% of capacity on unplanned rework and fixes. Elite teams spend under 5%. The rework percentage is the fraction of a sprint's story points consumed by re-doing completed work due to defects, integration failures, and requirement misinterpretations driven by unclear code.
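One simple way to model this component is as a fraction of fully loaded payroll; the sketch below does that, and the parameter names and defaults are illustrative assumptions rather than the calculator's exact story-point accounting.

```typescript
// Sketch of the story-point (rework) drag component. Names and defaults are illustrative.
interface ReworkInputs {
  engineers: number;
  annualCostPerEngineer: number; // fully loaded salary plus overhead
}

// reworkFraction: share of sprint capacity consumed by re-doing completed work
// (DORA 2024: roughly 10–25% for low performers, under 5% for elite teams).
function storyPointDrag(inputs: ReworkInputs, reworkFraction: number): number {
  // Capacity lost to rework modelled as a straight fraction of payroll.
  return inputs.engineers * inputs.annualCostPerEngineer * reworkFraction;
}

// Range: 10% rework at the low end, 15% at the high end (tune to your sprint data).
const payroll: ReworkInputs = { engineers: 12, annualCostPerEngineer: 144000 };
console.log([storyPointDrag(payroll, 0.1), storyPointDrag(payroll, 0.15)]);
```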
Component 4: Onboarding drag
Source assumptions: Pluralsight's State of Developer Onboarding 2024 reports median time-to-first-PR of 2-4 weeks on clean codebases and 6-12 weeks on smell-dense ones. The difference (4-8 weeks) at fully-loaded cost represents the onboarding drag attributable to code quality. See /onboarding-drag for the full model.
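A minimal sketch of the onboarding drag component, assuming the 4–8 extra-week gap above; hire counts and the weekly loaded cost are illustrative placeholders, and the full model lives at /onboarding-drag.

```typescript
// Sketch of the onboarding drag component. Names and defaults are illustrative.
interface OnboardingInputs {
  hiresPerYear: number;
  weeklyLoadedCost: number; // fully loaded cost per engineer-week
}

// extraWeeks: additional weeks to a first productive PR on a smell-dense codebase
// versus a clean one (Pluralsight 2024 suggests a 4–8 week gap).
function onboardingDrag(inputs: OnboardingInputs, extraWeeks: number): number {
  return inputs.hiresPerYear * extraWeeks * inputs.weeklyLoadedCost;
}

// Range: 4 extra weeks per hire at the low end, 8 at the high end.
const hires: OnboardingInputs = { hiresPerYear: 3, weeklyLoadedCost: 3000 };
console.log([onboardingDrag(hires, 4), onboardingDrag(hires, 8)]);
```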
Every output is a range. The low end assumes favourable conditions: incidents closer to the lower bound, reviewers faster than median, smell overhead closer to 20%. The high end assumes unfavourable ones: incidents at the upper bound, reviewers at saturation, smell overhead at 40%.
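The headline figure is just the sum of the per-component ranges, low ends with low ends and high ends with high ends. The sketch below uses the default ranges from the table above; the `Range` type and `total` helper are illustrative, not the calculator's internals.

```typescript
// Roll-up of per-component ranges into the headline annual figure.
interface Range { low: number; high: number }

function total(components: Range[]): Range {
  // Low end sums favourable assumptions; high end sums unfavourable ones.
  return components.reduce(
    (acc, c) => ({ low: acc.low + c.low, high: acc.high + c.high }),
    { low: 0, high: 0 },
  );
}

const annualDebtCost = total([
  { low: 34020, high: 63180 },   // PR review drag
  { low: 6978, high: 25920 },    // Incident cost
  { low: 172800, high: 259200 }, // Story-point drag (rework)
  { low: 29077, high: 54000 },   // Onboarding drag
]);
console.log(annualDebtCost); // { low: 242875, high: 402300 }
```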
An engineering leader who presents a range is more credible than one who presents a single number. A range acknowledges uncertainty and invites the finance team to interrogate assumptions. A single number invites them to dispute it and win.
- Run it with your team's real numbers before the meeting. Use your actual incident frequency, your actual PR review hours (check your GitHub/GitLab data), and your actual onboarding time from the last two hires.
- Download the CSV and put it in the meeting deck. Numbers in a spreadsheet are taken more seriously than numbers in a slide.
- Open the methodology live and invite challenges to the assumptions. The most effective framing: “These are the assumptions driving the model. Which one do you think is wrong, and what would you replace it with?”
Want us to run this with your real incident data?
Digital Signet runs two-week code-debt audits. We pull your actual incident history, your PR review time data, and your onboarding records, run this model with real numbers, and deliver the memo your leadership team will sign.
Email Oliver →