Section 7: Anti-Patterns and Gaming Detection
What Not to Measure:
| Anti-Pattern | Why It Destroys the Framework | What to Measure Instead |
|---|---|---|
| Lines of code shipped | Incentivizes volume over value | Lead time; Experiment velocity |
| Story points completed | Easily gamed; correlates poorly with customer value | Validated learning; Time to first feedback |
| Employee utilization rate | Incentivizes busyness over learning | Exploration time protected; Experiment velocity |
| Uptime as sole metric | Incentivizes risk avoidance | Resilience metrics (MTTR); Lead time |
| AI model accuracy alone | Ignores fairness and oversight | Accuracy + fairness + explainability + override rate |
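The last table row can be made concrete as a composite scorecard that refuses to pass a model on accuracy alone. This is a minimal sketch; the field names, thresholds, and the `healthy()` rule are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class AIModelScorecard:
    """Composite view of an AI model: accuracy alone is never sufficient.
    All fields and thresholds below are illustrative, not prescriptive."""
    accuracy: float            # fraction correct on a holdout set
    fairness_gap: float        # largest metric gap between subgroups (lower is better)
    explained_fraction: float  # share of decisions with a usable explanation
    override_rate: float       # share of AI decisions overridden by human reviewers

    def healthy(self) -> bool:
        # The model "passes" only if every dimension is within bounds.
        # A near-zero override rate is itself a warning sign: it suggests
        # reviewers are rubber-stamping rather than exercising oversight.
        return (self.accuracy >= 0.90
                and self.fairness_gap <= 0.05
                and self.explained_fraction >= 0.95
                and 0.01 <= self.override_rate <= 0.15)

card = AIModelScorecard(accuracy=0.94, fairness_gap=0.03,
                        explained_fraction=0.97, override_rate=0.08)
print(card.healthy())  # True: all four dimensions within bounds
```

The design point is that the four dimensions are combined with a conjunction, not a weighted average, so a model cannot trade fairness away for accuracy.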
Gaming Detection Signals:
- Experiment velocity rises, but experiments shrink to triviality → Add "validated learning" quality gate.
- Self-service rate rises, but platform quality drops → Add internal NPS and error rate.
- Experiment failure rate drops below 30% → Teams are avoiding risk. Add "exploration time protected."
- Postmortems published but no system changes follow → Add "postmortem → improvement" tracking.
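The four signals above share one shape: a headline metric improves while its counterbalancing quality metric degrades. They can be sketched as automated checks over a metrics snapshot. The dictionary keys and thresholds here are hypothetical placeholders for whatever your metrics store actually exposes:

```python
def gaming_signals(m: dict) -> list[str]:
    """Flag cases where a headline metric rises while its
    counterbalancing quality metric falls. Keys and thresholds
    are illustrative assumptions, not a fixed schema."""
    warnings = []
    if (m["experiment_velocity_delta"] > 0
            and m["avg_validated_learnings_per_experiment"] < 1):
        warnings.append("Experiments shrinking to triviality: add a validated-learning gate")
    if (m["self_service_rate_delta"] > 0
            and (m["internal_nps"] < 0 or m["platform_error_rate"] > 0.02)):
        warnings.append("Self-service up but platform quality down: track internal NPS and error rate")
    if m["experiment_failure_rate"] < 0.30:
        warnings.append("Failure rate suspiciously low: teams may be avoiding risk")
    if (m["postmortems_published"] > 0
            and m["postmortem_improvements_shipped"] == 0):
        warnings.append("Postmortems without follow-through: track postmortem -> improvement")
    return warnings

# Hypothetical quarterly snapshot for one team.
snapshot = {
    "experiment_velocity_delta": 3,            # experiments/quarter, change vs. last quarter
    "avg_validated_learnings_per_experiment": 0,
    "self_service_rate_delta": 0.05,
    "internal_nps": -12,
    "platform_error_rate": 0.01,
    "experiment_failure_rate": 0.12,
    "postmortems_published": 4,
    "postmortem_improvements_shipped": 0,
}
for w in gaming_signals(snapshot):
    print(w)
```

Run on this snapshot, all four checks fire. The point is not the specific thresholds but the pattern: every headline metric is paired with a check that makes gaming visible.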