# Assessment Methodology
Detailed explanation of how we calculate scores, interpret results, and provide actionable recommendations across all assessment tools.
## Scoring Scales

### Maturity Scale

Used for assessing the maturity of practices, processes, and capabilities.

1. **Not Implemented**: No action taken or capability does not exist
2. **Initial/Ad-hoc**: Informal, inconsistent, or reactive approach
3. **Developing**: Some formal processes, but not comprehensive
4. **Defined**: Documented, standardized, and consistently applied
5. **Optimized**: Continuously improved, measured, and proactive

*Used in: AI Maturity Assessment, Telemetry Readiness Audit, Prompt Governance*
### Percentage Scale

Used for measuring coverage, adoption rates, and utilization metrics. Scores are a direct percentage representation of coverage or completion.

*Used in: License Utilization Optimizer, Shadow AI Discovery, Training completion rates*
### Friction Scale

Used for measuring user experience and adoption barriers.

1. **Very High Friction**: Major barriers preventing adoption
2. **High Friction**: Significant obstacles hindering usage
3. **Moderate Friction**: Some challenges but manageable
4. **Low Friction**: Minor issues, smooth experience
5. **Very Low Friction**: Seamless, intuitive, easy to use

*Used in: Employee Friction Assessment*
## Calculation Methods

### Weighted Scoring

Different dimensions are assigned weights based on their relative importance. The overall score is the weight-normalized sum of the dimension scores.

Example:

| Dimension | Score | Weight | Weighted |
|---|---|---|---|
| Security | 85 | 1.5 | 127.5 |
| Compliance | 72 | 1.3 | 93.6 |
| Performance | 90 | 1.0 | 90.0 |

Weighted score = (127.5 + 93.6 + 90.0) / (1.5 + 1.3 + 1.0) = 311.1 / 3.8 ≈ 81.9

*Used in: AI Maturity Assessment, Risk Assessment, Telemetry Audit*
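The weighted scoring calculation above can be sketched in a few lines of Python (the function name `weighted_score` is illustrative, not part of any tool's API):

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average: sum of (score x weight) divided by the sum of weights."""
    total = sum(scores[dim] * weights[dim] for dim in scores)
    return total / sum(weights[dim] for dim in scores)

# Dimensions, scores, and weights from the example table
scores = {"Security": 85, "Compliance": 72, "Performance": 90}
weights = {"Security": 1.5, "Compliance": 1.3, "Performance": 1.0}
print(round(weighted_score(scores, weights), 1))  # 81.9
```

Dividing by the sum of the weights keeps the result on the same 0–100 scale as the input scores, regardless of how large the weights are.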
### Completion Percentage

Measures the proportion of items assessed or implemented relative to the total.

Example (security controls assessment):

- Completed: 42
- Total: 56
- Completion: (42 / 56) × 100 = 75%

*Used in: Shadow AI Discovery, License Optimizer, Compliance assessments*
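As a minimal sketch, the completion percentage is a single division; the helper below also guards the edge case where nothing is in scope:

```python
def completion_percentage(completed: int, total: int) -> float:
    """Share of completed items relative to the total, as a percentage."""
    if total == 0:
        return 0.0  # avoid division by zero when no items are in scope
    return completed / total * 100

# Security controls example: 42 of 56 controls completed
print(round(completion_percentage(42, 56)))  # 75
```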
### ROI Calculation

Estimates return on investment from AI initiatives or optimizations.

Example (license optimization):

- Current Cost: $125,000
- Optimized Cost: $87,500
- Annual Savings: $37,500
- Implementation Cost: $5,000
- Net Benefit: $32,500
- ROI: ($32,500 / $5,000) × 100 = 650%

*Used in: License Optimizer, Waste Detector, Value realization tools*
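The ROI example above can be reproduced with a short helper (the function name and signature are illustrative):

```python
def roi_percentage(current_cost: float, optimized_cost: float,
                   implementation_cost: float) -> float:
    """ROI as net benefit (savings minus implementation cost) over implementation cost."""
    annual_savings = current_cost - optimized_cost      # 125,000 - 87,500 = 37,500
    net_benefit = annual_savings - implementation_cost  # 37,500 - 5,000 = 32,500
    return net_benefit / implementation_cost * 100

# License optimization scenario from the example
print(roi_percentage(125_000, 87_500, 5_000))  # 650.0
```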
### Risk Scoring

Combines multiple risk factors with severity weighting.

*Used in: AI Risk Assessment, Security assessments, Compliance gap analysis*
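One plausible shape for such a calculation is a severity-weighted average of factor scores. The factor names, severity weights, and normalization below are illustrative assumptions, not the exact formula used by the assessment tools:

```python
def risk_score(factors: dict[str, tuple[float, float]]) -> float:
    """Severity-weighted average of risk factor scores.

    factors maps a factor name to (score 0-100, severity weight).
    """
    weighted_sum = sum(score * weight for score, weight in factors.values())
    total_weight = sum(weight for _, weight in factors.values())
    return weighted_sum / total_weight

# Hypothetical factors and severity weights, for illustration only
factors = {
    "data_exposure": (80, 3.0),
    "model_misuse": (55, 2.0),
    "vendor_lock_in": (40, 1.0),
}
print(round(risk_score(factors), 1))  # 65.0
```

Because severe factors carry larger weights, a single high-severity exposure moves the composite score more than several low-severity ones.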
### Compounding Value Model

Models exponential value growth over time as AI capabilities mature.

*Used in: Intelligence Compounding, Execution Intelligence, Long-term value projection*
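A common way to model this kind of growth is fixed-rate compounding per period. The growth rate and horizon below are illustrative assumptions, not values produced by the tools:

```python
def compounded_value(initial_value: float, growth_rate: float, periods: int) -> float:
    """Value after compounding at a fixed growth rate for a number of periods."""
    return initial_value * (1 + growth_rate) ** periods

# $100,000 of annual value compounding at an assumed 15% per year for 5 years
print(round(compounded_value(100_000, 0.15, 5)))  # 201136
```

Under these assumed inputs the projected value roughly doubles over five years, which is the qualitative point of the model: small sustained capability gains accumulate multiplicatively, not additively.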
## Maturity Progression Guide

Understanding where you are and what to focus on next based on your maturity level.
### Initial/Ad-hoc

**Characteristics**

- Ad-hoc AI usage
- No formal policies
- Individual experimentation
- Untracked spending

**Common Risks**

- Shadow AI proliferation
- Security vulnerabilities
- Compliance violations
- Wasted resources

**Next Priority:** Establish basic governance foundation
### Developing

**Characteristics**

- Some policies documented
- Central AI team forming
- Basic tracking
- Vendor management starting

**Common Risks**

- Inconsistent application
- Coverage gaps
- Limited visibility
- Scaling challenges

**Next Priority:** Standardize processes and expand coverage
### Defined

**Characteristics**

- Comprehensive policies
- Cross-functional governance
- Systematic monitoring
- Clear accountability

**Common Risks**

- Process rigidity
- Innovation bottlenecks
- Maintenance burden
- Change resistance

**Next Priority:** Optimize efficiency and enable innovation
### Optimized

**Characteristics**

- Continuous improvement
- Data-driven decisions
- Proactive risk management
- Innovation enablement

**Common Risks**

- Complacency
- Over-optimization
- Adaptation lag
- Complexity creep

**Next Priority:** Maintain agility and strategic alignment
## Interpreting Scores

### Scores Are Context-Dependent

A score of 60% for a company just starting AI adoption may be excellent, while the same score for a mature AI-first company may indicate gaps. Always compare against your industry peers and your own historical performance.

### Prioritize High-Impact Areas

Not all dimensions are equally important. Focus first on areas with high business impact and high risk. A 70% score in security is more concerning than a 70% score in documentation.

### Trends Matter More Than Absolute Scores

Regular assessments showing upward trends (even with moderate scores) indicate healthy progress. Stagnant or declining scores warrant investigation even if the absolute values seem acceptable.

### Perfect Scores May Not Be Optimal

Achieving 100% in all areas may indicate over-investment in governance at the expense of innovation. Balance is key: aim for "good enough" governance that enables, rather than hinders, AI adoption.