Section 1: Foundational Truth Metrics

Making It Real

Truth 1: Speed of Learning Beats Perfection of Plan

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Lead Time for Change | Median time from code commit to production | VP Engineering / Platform Lead | Weekly | < 1 hour | CI/CD pipeline | Remove manual gates. Parallelize tests.
Experiment Velocity | Validated experiments shipped per team per month | Engineering Managers | Monthly | ≥ 2 per team | Experiment tracking | Reduce scope. Lower approval requirements.
Time to First Learning | Days from kickoff to first customer-facing experiment | Product + Engineering Lead | Per project | < 14 days | Project tracking | Decompose projects. Ship MVPs.
Plan Half-Life | Time until > 50% of quarterly plan is revised | CTO / Strategy | Quarterly | Revisions expected | OKR documents | Increase experimentation.
Decision Latency | Time from "decision needed" to "decision made" | CTO | Weekly | < 48 hours tactical; < 5 days strategic | Calendar + decision log | Replace meetings with RFCs. Delegate.

Composite: Organizational Learning Velocity (OLV) = (Experiment Velocity × % producing actionable learning) / Decision Latency. Target: increasing trend quarter over quarter.
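The composite can be computed directly from the three inputs it names. A minimal sketch in Python, assuming Experiment Velocity is measured per team per month, the actionable share is a fraction, and Decision Latency is in days (the function name and units are illustrative, not prescribed by the metric definition):

```python
def organizational_learning_velocity(
    experiments_per_team: float,   # Experiment Velocity: validated experiments / team / month
    actionable_fraction: float,    # share of experiments producing actionable learning (0..1)
    decision_latency_days: float,  # Decision Latency, in days
) -> float:
    """OLV = (Experiment Velocity x % producing actionable learning) / Decision Latency."""
    if decision_latency_days <= 0:
        raise ValueError("decision latency must be positive")
    return (experiments_per_team * actionable_fraction) / decision_latency_days

# Example: 3 experiments/month, 60% actionable, 2-day average latency
print(organizational_learning_velocity(3, 0.6, 2))  # -> approximately 0.9
```

Because the target is a trend rather than an absolute value, the number only needs to be computed the same way each quarter.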

Truth 2: Intelligence Is a Commodity Infrastructure Layer

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
AI Self-Service Rate | % of AI/ML capabilities accessible without a ticket | ML Platform Lead | Monthly | ≥ 80% | Platform analytics + tickets | Build self-service APIs. Train teams.
Model Deployment Lead Time | Time from trained model to production serving | ML Platform Lead | Weekly | < 4 hours | MLOps pipeline | Automate validation.
Data Product Availability | % of curated data domains in self-service catalogs | Data Platform Lead | Monthly | 100% | Data catalog | Treat data as a product.
GenAI Integration Time | Time for a product team to embed a generative capability | Platform Team | Per integration | < 1 day | Integration tracking | Expose via standardized APIs.
Proprietary Model Ratio | % of AI capabilities built on proprietary data | ML Strategy Lead | Quarterly | > 50% trend | Model registry + contracts | Invest in data assets.

Truth 3: Human-Machine Symbiosis

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Augmentation Rate | % of AI workflows with human oversight designed in | Product / AI Ethics Lead | Per release | 100% for high-stakes workflows | Design reviews + monitoring | Redesign workflows. Build override.
Human-AI Task Ratio | Tasks augmented by AI vs. performed entirely by human or machine | Operations | Quarterly | Increasing AI assistance; zero full automation of judgment | Workflow analysis | Elevate humans to judgment roles.
Copilot Adoption Depth | % using AI tools daily; frequency of refinement on output | IT / People Analytics | Monthly | ≥ 70% adoption; ≥ 50% refinement | Tool analytics | Improve tool quality. Train.
Skill Elevation Index | Survey: "AI tools have made my job more strategic" | People Team | Quarterly | ≥ 4.0 / 5.0 | Pulse survey | Redesign roles. Train.

Truth 4: Local Optimization Destroys Global Systems

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Cross-System Impact Review Coverage | % of architecture decisions with dependency mapping | Enterprise Architecture | Weekly | 100% for multi-system changes | Review records | Mandate dependency maps.
Cascading Failure Incidents | Incidents where a change in one system caused failure in another | SRE | Weekly | 0 | Incident tracker | Improve integration testing.
Technical Debt Visibility Score | % of known debt tracked with owners and timelines | Engineering Leadership | Monthly | 100% tracked; 80% in active remediation | Debt register | Make debt visible. Allocate capacity.
API Contract Violation Rate | Breaking changes deployed without versioning or notification | Platform Team | Weekly | 0 | API gateway + schema registry | Enforce registries. Block in CI.
System Coupling Density | Direct dependencies per critical service | Architecture Team | Quarterly | Core: < 5 | Dependency mapping | Refactor. Introduce APIs.
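System Coupling Density falls out mechanically once the dependency map exists. A minimal sketch, assuming dependencies are exported as an adjacency list (service → set of services it calls directly); the service names, function names, and data shape are illustrative:

```python
from typing import Mapping, Set

def coupling_density(deps: Mapping[str, Set[str]], critical: Set[str]) -> dict:
    """Count direct outbound dependencies for each critical service."""
    return {svc: len(deps.get(svc, set())) for svc in critical}

def over_threshold(deps: Mapping[str, Set[str]], critical: Set[str], limit: int = 5) -> dict:
    """Flag critical services at or above the coupling target (Core: < 5)."""
    return {s: n for s, n in coupling_density(deps, critical).items() if n >= limit}

deps = {
    "checkout": {"payments", "inventory", "pricing", "auth", "email", "search"},
    "payments": {"auth"},
}
print(over_threshold(deps, {"checkout", "payments"}))  # checkout has 6 direct dependencies
```

Running the same count quarterly turns the target into a trend line, so a refactor's effect on coupling is visible rather than asserted.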

Truth 5: Ethics Is Engineering

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Ethics Review Gate Compliance | % of intelligent system designs passing ethics review before code | AI Ethics Lead | Per feature | 100% | Design docs + review log | Block development without review. Train.
Bias Detection Pre-Release | Demographic fairness tests run and passed before deployment | ML Engineering | Per model | 100%; all protected attributes | MLOps + fairness framework | Integrate into CI.
Explainability Coverage | % of production models with documented decision logic | ML Engineering | Quarterly | 100% | Model registry + docs | Require explainability to ship.
Human Override Usage Rate | % of AI decisions where override was available and exercised | Operations / Product | Monthly | Tracked; anomalies investigated | Production logs | Ensure mechanisms exist. Train users.
External Transparency Publication | Transparency reports published per quarter | Ethics / Comms | Quarterly | ≥ 1 for high-impact systems | Publication records | Build into product lifecycle.
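The pre-release bias gate is only a gate if CI can fail on it. A minimal demographic-parity sketch, assuming per-group positive-outcome rates have already been computed upstream; the 10% threshold and function names are illustrative, and a real pipeline would use a dedicated fairness framework rather than this toy check:

```python
def demographic_parity_gap(positive_rate_by_group: dict) -> float:
    """Largest difference in positive-outcome rate across protected groups."""
    rates = positive_rate_by_group.values()
    return max(rates) - min(rates)

def fairness_gate(positive_rate_by_group: dict, max_gap: float = 0.10) -> bool:
    """Return True if the model passes; call from CI and fail the build otherwise."""
    return demographic_parity_gap(positive_rate_by_group) <= max_gap

rates = {"group_a": 0.62, "group_b": 0.58}
print(fairness_gate(rates))  # gap of 0.04 is within the 0.10 threshold
```

The point of expressing the gate as code is that "Integrate into CI" becomes a one-line assertion rather than a review-meeting agenda item.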

Truth 6: Organizational Structure Determines Technical Outcomes

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Cross-Functional Team Ratio | % of product teams with all necessary disciplines | Engineering / People | Quarterly | 100% | Org chart + charters | Restructure. Eliminate silos.
Handoff Count per Feature | Average team handoffs from ideation to production | Engineering Leadership | Per feature | < 2 | Value stream mapping | Redesign for end-to-end ownership.
Conway's Law Divergence Index | Architectural mismatch between system and team boundaries | Architecture Team | Quarterly | 0 major misalignments | Architecture docs + org chart | Restructure or refactor.
Team Autonomy Index | Survey: "My team can deliver end-to-end without depending on others" | Engineering Managers | Quarterly | ≥ 4.0 / 5.0 | Team health survey | Build platforms. Eliminate bottlenecks.
Internal NPS (Engineering) | Net Promoter Score for internal platform and tooling | Platform Team | Quarterly | ≥ 50 | Engineering survey | Treat platform as a product.
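The Conway's Law Divergence Index can be approximated by counting dependency edges that cross team boundaries: each cross-team edge is a place where the architecture and the org chart disagree. A minimal sketch, assuming a service → owning-team map and a dependency edge list are available (all names are illustrative):

```python
def conway_divergence(owner: dict, edges: list) -> int:
    """Count dependency edges whose two endpoints are owned by different teams."""
    return sum(1 for a, b in edges if owner[a] != owner[b])

owner = {"web": "storefront", "cart": "storefront", "ledger": "finance"}
edges = [("web", "cart"), ("cart", "ledger")]
print(conway_divergence(owner, edges))  # -> 1 cross-team edge (cart -> ledger)
```

A raw edge count is cruder than the "major misalignments" judgment the table calls for, but it gives the Architecture Team a quarterly number to investigate rather than an impression to debate.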

Truth 7: Failure Is Signal; Failure Without Learning Is Waste

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Experiment Failure Rate | % of experiments not producing the hypothesized outcome | Experimentation Lead | Monthly | 50–70% is healthy | Experiment tracking | If < 30%, teams are playing it safe. If > 80%, hypotheses are poorly formed.
Postmortem Completion Rate | % of incidents with a published postmortem within 48 hours | SRE / Culture | Weekly | 100% | Postmortem repository | Automate creation. Protect time.
Repeated Failure Rate | % of failures sharing root causes with previously documented failures | SRE | Monthly | < 10% | Incident tracking + postmortems | Fix systems, not symptoms.
Learning Documentation Velocity | Documented learnings published per month per team | Engineering Managers | Monthly | ≥ 1 per team | Knowledge base | Reward documentation.
Blame Language Detection | NLP scan of incident writeups: blame-oriented vs. system-oriented language | Culture Lead | Quarterly | < 5% blame language | Postmortem text analysis | Train on blameless postmortems.
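The blame-language scan does not need sophisticated NLP to start; a keyword heuristic over postmortem text yields a first signal that can later be replaced by a proper classifier. A minimal sketch (the word lists are illustrative and should be tuned to the organization's own writeups):

```python
# Illustrative term lists; real deployments would curate and expand these.
BLAME_TERMS = {"fault", "blame", "careless", "should have", "failed to"}
SYSTEM_TERMS = {"process", "alerting", "safeguard", "automation", "contributing factor"}

def blame_ratio(text: str) -> float:
    """Fraction of matched terms that are blame-oriented (0.0 if nothing matches)."""
    low = text.lower()
    blame = sum(low.count(t) for t in BLAME_TERMS)
    system = sum(low.count(t) for t in SYSTEM_TERMS)
    total = blame + system
    return blame / total if total else 0.0

report = ("The deploy failed to roll back; alerting and automation gaps "
          "were contributing factors.")
print(blame_ratio(report))  # 1 blame term vs. 3 system terms -> 0.25
```

Aggregated quarterly across all postmortems, the ratio maps directly onto the < 5% target, and spot-checking the flagged writeups keeps the heuristic honest.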

Truth 8: Custodial Leadership Is Active Harm

Metric | Definition | Owner | Cadence | Target | Source | Action If Degrading
Self-Service Transaction Rate | % of requests fulfilled without leadership intervention | Platform Lead | Weekly | ≥ 85% | Tickets + platform analytics | Replace tickets with self-service.
Approval Meeting Hours | Leadership hours per week in approval meetings | CTO / Chief of Staff | Weekly | < 5 hours | Calendar analysis | Convert approvals to frameworks.
Innovation Budget Ratio | % of budget to new capabilities vs. maintenance | CTO / Finance | Quarterly | 70/20/10 | Budget ledger | Sunset systems. Reallocate.
Blocker Escalation Rate | Decisions per week escalated to the CTO | CTO / Leaders | Weekly | < 3 per week | Escalation tracking | Publish decision matrices. Delegate.
Calendar Audit: Building vs. Blocking | % of CTO time on architecture, strategy, and enablement vs. approvals | CTO | Monthly | ≥ 70% building | Calendar categorization | Eliminate approval meetings.
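The calendar audit can be automated once events are exportable. A minimal sketch, assuming events come out as (title, hours) pairs; the keyword buckets are illustrative and a real audit would categorize by hand first to calibrate them:

```python
# Illustrative keyword buckets for categorizing calendar event titles.
BUILDING = {"architecture", "strategy", "enablement", "design", "mentoring"}
BLOCKING = {"approval", "sign-off", "review board", "gate"}

def building_share(events: list) -> float:
    """Fraction of categorized meeting hours spent building rather than blocking."""
    def hours(keywords):
        return sum(h for title, h in events
                   if any(k in title.lower() for k in keywords))
    building, blocking = hours(BUILDING), hours(BLOCKING)
    total = building + blocking
    return building / total if total else 0.0

events = [("Platform architecture sync", 2.0),
          ("Vendor approval board", 1.0),
          ("Team enablement workshop", 1.0)]
print(building_share(events))  # 3 of 4 categorized hours -> 0.75
```

A month of this, compared against the ≥ 70% target, tells a CTO whether the calendar matches the role they claim to be playing.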