AI Governance Gap Analyzer

Assess control implementation across key governance frameworks

Frameworks Covered: NIST AI RMF, ISO 42001, ISO 27001, SOC 2, EU AI Act | Output: Gap register, framework compliance scores, policy recommendations

Inputs: Organization Information, Framework Filter

Dashboard summary (no controls assessed yet):
21 Total Controls | 0 Assessed | 0 Implemented | 0.0 Avg Maturity | 0% Avg Coverage

Control Assessment

Evaluate each control's implementation status, maturity, and evidence
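Each control assessment captures status, maturity, and evidence. One way to represent such a record, sketched below; the ID scheme, status values, and 0-5 maturity scale are illustrative assumptions, not the tool's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NOT_ASSESSED = "not_assessed"
    NOT_IMPLEMENTED = "not_implemented"
    PARTIALLY_IMPLEMENTED = "partially_implemented"
    IMPLEMENTED = "implemented"

@dataclass
class ControlAssessment:
    control_id: str                      # e.g. "NIST-AI-RMF-01" (hypothetical ID scheme)
    framework: str                       # e.g. "NIST AI RMF"
    title: str                           # e.g. "AI Governance Structure"
    status: Status = Status.NOT_ASSESSED
    maturity: int = 0                    # assumed 0-5 scale
    evidence: list[str] = field(default_factory=list)  # links or document references

# A freshly created record starts unassessed with no evidence attached.
ctrl = ControlAssessment(
    control_id="NIST-AI-RMF-01",
    framework="NIST AI RMF",
    title="AI Governance Structure",
)
```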

[NIST AI RMF] AI Governance Structure
Establish AI governance and oversight structure
Define roles, responsibilities, and accountability for AI governance

[NIST AI RMF] AI Risk Management
Implement AI risk management processes
Systematic identification, assessment, and mitigation of AI risks

[NIST AI RMF] AI Policies & Standards
Document AI policies and standards
Establish and maintain policies governing AI development and use

[NIST AI RMF] AI Inventory
Maintain inventory of AI systems and tools
Catalog all AI systems, tools, and use cases across the organization

[NIST AI RMF] AI Performance Measurement
Measure AI system performance and impacts
Track metrics for AI effectiveness, efficiency, and risk indicators
[ISO 42001] AI Management System
Establish AI Management System (AIMS)
Implement systematic approach to managing AI throughout its lifecycle

[ISO 42001] AI Lifecycle Controls
Implement AI lifecycle stage gates
Define approval and review gates across AI development and deployment

[ISO 42001] Data Governance for AI
Establish data governance for AI training and operation
Control data quality, provenance, and privacy for AI systems

[ISO 42001] AI Impact Assessment
Conduct AI impact assessments
Assess potential impacts on stakeholders, society, and environment

[ISO 42001] AI Transparency
Ensure AI transparency and explainability
Document AI decision-making processes and enable explanations
[ISO 27001] Information Security for AI
Integrate AI systems into the ISMS
Include AI systems in information security management scope

[ISO 27001] Access Control
Implement access controls for AI systems
Role-based access control, authentication, and authorization for AI tools

[ISO 27001] Logging & Monitoring
Log and monitor AI system usage
Capture audit trails of AI system access and usage
[SOC 2] Security
Protect AI systems from unauthorized access
Implement security controls to protect AI infrastructure and data

[SOC 2] Availability
Ensure AI system availability
Maintain availability of AI systems according to SLAs

[SOC 2] Confidentiality
Protect confidential data in AI processing
Safeguard confidential information used by or generated by AI

[SOC 2] Processing Integrity
Validate AI processing accuracy and completeness
Ensure AI outputs are accurate, complete, and timely
[EU AI Act] Risk Classification
Classify AI systems by risk level
Determine if AI systems are high-risk, limited-risk, or minimal-risk

[EU AI Act] High-Risk AI Requirements
Comply with high-risk AI system requirements
Meet documentation, testing, and oversight requirements for high-risk AI

[EU AI Act] Human Oversight
Implement human oversight for high-risk AI
Ensure meaningful human control over high-risk AI decisions

[EU AI Act] Transparency Obligations
Meet AI transparency and disclosure requirements
Inform users when interacting with AI systems
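From assessments of the controls above, the gap register output can be produced by flagging every control that falls short of implemented status. A hedged sketch; the status labels and the priority heuristic are illustrative assumptions, not the tool's documented behavior:

```python
def build_gap_register(assessments):
    """Return gap entries for controls below 'implemented' status."""
    gaps = []
    for a in assessments:
        if a["status"] == "implemented":
            continue  # fully implemented controls produce no gap entry
        gaps.append({
            "framework": a["framework"],
            "control": a["title"],
            "current_status": a["status"],
            # Illustrative heuristic: untouched controls rank above partial ones.
            "priority": "high" if a["status"] in ("not_assessed", "not_implemented")
                        else "medium",
        })
    return gaps

register = build_gap_register([
    {"framework": "SOC 2",     "title": "Availability",    "status": "implemented"},
    {"framework": "EU AI Act", "title": "Human Oversight", "status": "not_implemented"},
    {"framework": "ISO 42001", "title": "AI Transparency", "status": "partially_implemented"},
])
# → two gap entries: Human Oversight (high), AI Transparency (medium)
```

A real register would also carry the policy recommendations and evidence references the tool lists as outputs; those fields are omitted here to keep the sketch minimal.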