Assessment Methodology

Detailed explanation of how we calculate scores, interpret results, and provide actionable recommendations across all assessment tools.

Scoring Scales

Maturity Levels (0-4)

Used for assessing the maturity of practices, processes, and capabilities

  • 0 (Not Implemented): No action taken or capability does not exist
  • 1 (Initial/Ad-hoc): Informal, inconsistent, or reactive approach
  • 2 (Developing): Some formal processes, but not comprehensive
  • 3 (Defined): Documented, standardized, and consistently applied
  • 4 (Optimized): Continuously improved, measured, and proactive

Used in: AI Maturity Assessment, Telemetry Readiness Audit, Prompt Governance

Percentage Scale (0-100%)

Used for measuring coverage, adoption rates, and utilization metrics

Direct percentage representation of coverage or completion

  • 0-40% (Low): Significant improvement needed
  • 41-70% (Medium): On track, continue progress
  • 71-100% (High): Maintain and optimize
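As a sketch, the banding above can be expressed as a small classifier (the function name and error handling are illustrative, not part of any assessment tool):

```python
def coverage_band(pct):
    """Map a 0-100 percentage onto the Low/Medium/High bands above."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    if pct <= 40:
        return "Low"
    if pct <= 70:
        return "Medium"
    return "High"

print(coverage_band(75))  # High
```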

Used in: License Utilization Optimizer, Shadow AI Discovery, Training completion rates

Friction Scores (1-5)

Used for measuring user experience and adoption barriers

  • 1 (Very High Friction): Major barriers preventing adoption
  • 2 (High Friction): Significant obstacles hindering usage
  • 3 (Moderate Friction): Some challenges but manageable
  • 4 (Low Friction): Minor issues, smooth experience
  • 5 (Very Low Friction): Seamless, intuitive, easy to use

Used in: Employee Friction Assessment

Calculation Methods

Weighted Dimension Scoring

Different dimensions are assigned weights based on their relative importance

Overall Score = Σ (Dimension Score × Weight) / Σ Weights

Example:

  Dimension     Score   Weight   Weighted
  Security      85      1.5      127.5
  Compliance    72      1.3      93.6
  Performance   90      1.0      90.0

(127.5 + 93.6 + 90) / (1.5 + 1.3 + 1.0) = 311.1 / 3.8 = 81.9
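The weighted-average formula can be sketched in Python (the function name and input shape are illustrative):

```python
def weighted_score(dimensions):
    """Overall Score = sum(score * weight) / sum(weights).

    `dimensions` maps a dimension name to a (score, weight) pair.
    """
    total_weighted = sum(score * weight for score, weight in dimensions.values())
    total_weight = sum(weight for _, weight in dimensions.values())
    return total_weighted / total_weight

# Values from the worked example above
example = {
    "Security":    (85, 1.5),
    "Compliance":  (72, 1.3),
    "Performance": (90, 1.0),
}
print(round(weighted_score(example), 1))  # 81.9
```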

Used in: AI Maturity Assessment, Risk Assessment, Telemetry Audit

Coverage Percentage

Measures the proportion of items assessed or implemented relative to the total

Coverage % = (Items Completed / Total Items) × 100

Example:

Scenario: Security controls assessment

Completed: 42

Total: 56

(42 / 56) × 100 = 75%
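The same arithmetic as a minimal helper (names are illustrative):

```python
def coverage_pct(completed, total):
    """Coverage % = (Items Completed / Total Items) * 100."""
    if total <= 0:
        raise ValueError("total must be positive")
    return completed / total * 100

# Security controls example above: 42 of 56 controls implemented
print(coverage_pct(42, 56))  # 75.0
```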

Used in: Shadow AI Discovery, License Optimizer, Compliance assessments

ROI Calculation

Estimates return on investment from AI initiatives or optimizations

ROI = (Net Benefit / Cost) × 100

Example:

Scenario: License optimization

Current Cost: $125,000

Optimized Cost: $87,500

Annual Savings: $37,500

Implementation Cost: $5,000

Net Benefit: $32,500 (annual savings minus implementation cost)

(32,500 / 5,000) × 100 = 650%
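A minimal sketch of the ROI arithmetic, assuming (as in the example above) that net benefit means first-year savings minus the one-time implementation cost:

```python
def roi_pct(current_cost, optimized_cost, implementation_cost):
    """ROI = (Net Benefit / Cost) * 100.

    Net benefit is first-year savings minus the one-time
    implementation cost; ROI is relative to that cost.
    """
    savings = current_cost - optimized_cost
    net_benefit = savings - implementation_cost
    return net_benefit / implementation_cost * 100

# License optimization example above
print(roi_pct(125_000, 87_500, 5_000))  # 650.0
```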

Used in: License Optimizer, Waste Detector, Value realization tools

Risk Score Aggregation

Combines multiple risk factors with severity weighting

Risk Score = Σ (Likelihood × Impact × Severity Weight)
Severity weights:

  • Critical: ×3
  • High: ×2
  • Medium: ×1.5
  • Low: ×1
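The aggregation can be sketched as follows; the severity weights come from the list above, while the likelihood/impact scales (e.g. 1-5 each) and the sample risk register are assumptions for illustration:

```python
SEVERITY_WEIGHTS = {"Critical": 3.0, "High": 2.0, "Medium": 1.5, "Low": 1.0}

def risk_score(risks):
    """Risk Score = sum(likelihood * impact * severity weight).

    `risks` is a list of (likelihood, impact, severity) tuples;
    likelihood and impact scales are assumed to be 1-5 each.
    """
    return sum(
        likelihood * impact * SEVERITY_WEIGHTS[severity]
        for likelihood, impact, severity in risks
    )

# Hypothetical register: one critical risk and one medium risk
print(risk_score([(4, 5, "Critical"), (2, 3, "Medium")]))  # 69.0
```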

Used in: AI Risk Assessment, Security assessments, Compliance gap analysis

Compounding Value Calculation

Models exponential value growth over time as AI capabilities mature

Compounded Value = Base Value × (1 + Score/100) × Time Factor × Adoption Rate
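The formula translates directly; all input values in the usage line below are illustrative, since the document does not give units or reference values for Time Factor or Adoption Rate:

```python
def compounded_value(base_value, score, time_factor, adoption_rate):
    """Compounded Value = Base Value * (1 + Score/100) * Time Factor * Adoption Rate."""
    return base_value * (1 + score / 100) * time_factor * adoption_rate

# Hypothetical inputs: $100k base value, maturity score 60,
# 1.2 time factor, 80% adoption
print(compounded_value(100_000, 60, 1.2, 0.8))  # roughly 153600
```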

Used in: Intelligence Compounding, Execution Intelligence, Long-term value projection

Maturity Progression Guide

Understanding where you are and what to focus on next based on your maturity level.

Level 0-1: Initial/Reactive

Characteristics

  • Ad-hoc AI usage
  • No formal policies
  • Individual experimentation
  • Untracked spending

Common Risks

  • Shadow AI proliferation
  • Security vulnerabilities
  • Compliance violations
  • Wasted resources

Next Priority

Establish basic governance foundation

Level 2: Developing

Characteristics

  • Some policies documented
  • Central AI team forming
  • Basic tracking
  • Vendor management starting

Common Risks

  • Inconsistent application
  • Coverage gaps
  • Limited visibility
  • Scaling challenges

Next Priority

Standardize processes and expand coverage

Level 3: Defined

Characteristics

  • Comprehensive policies
  • Cross-functional governance
  • Systematic monitoring
  • Clear accountability

Common Risks

  • Process rigidity
  • Innovation bottlenecks
  • Maintenance burden
  • Change resistance

Next Priority

Optimize efficiency and enable innovation

Level 4: Optimized

Characteristics

  • Continuous improvement
  • Data-driven decisions
  • Proactive risk management
  • Innovation enablement

Common Risks

  • Complacency
  • Over-optimization
  • Adaptation lag
  • Complexity creep

Next Priority

Maintain agility and strategic alignment

Score Interpretation Guide

Scores are Context-Dependent

A score of 60% for a company just starting AI adoption may be excellent, while the same score for a mature AI-first company may indicate gaps. Always compare against your industry peers and your own historical performance.

Prioritize High-Impact Areas

Not all dimensions are equally important. Focus on areas with high business impact and high risk first. A 70% score in security is more concerning than 70% in documentation.

Trends Matter More Than Absolute Scores

Regular assessments showing upward trends (even if scores are moderate) indicate healthy progress. Stagnant or declining scores warrant investigation even if absolute values seem acceptable.

Perfect Scores May Not Be Optimal

Achieving 100% in all areas may indicate over-investment in governance at the expense of innovation. Balance is key—aim for "good enough" governance that enables, not hinders, AI adoption.