Mapping AI safety regulation

How does policy and regulation help AI alignment?

AI alignment aims to ensure that artificial intelligence agents serve human interests and safety, mitigating risks such as harmful or unintended behaviour. As AI capabilities advance, policy plays a crucial role in establishing governance, risk-management, and compliance frameworks that guide safe AI development.

Securiti.ai defines AI safety as:

a combination of operational practices, philosophies, and mechanisms that aims to ensure that any developed AI systems and models operate in the manner originally envisioned by the developers, without resulting in any unintended consequences or harm.

Why is AI safety important?

AI ethics and safety address the individual and societal harms caused by the misuse, poor design, or unintended consequences of AI systems. The following flowchart, which I created based on guidance from The Alan Turing Institute, highlights key potential harms:

Flowchart code
graph TD
    A[Potential Harms of AI] --> B[Bias and Discrimination]
    A --> C[Denial of Autonomy and Rights]
    A --> D[Non-transparent Outcomes]
    A --> E[Invasions of Privacy]
    A --> F[Social Isolation]
    A --> G[Unreliable Outcomes]

    B --> B1[Data Bias]
    B --> B2[Designer Bias]
    B --> B3[Non-representative Data]

    C --> C1[Accountability Gap]
    C --> C2[Autonomy Violation]

    D --> D1[Opaque Models]
    D --> D2[Discriminatory Outputs]

    E --> E1[Consent Violations]
    E --> E2[Privacy Invasion]

    F --> F1[Reduced Human Interaction]
    F --> F2[Social Polarization]

    G --> G1[Irresponsible Design]
    G --> G2[Public Trust Undermined]

Regulatory landscape

The flowchart below maps key AI governance, risk, and compliance standards, including ISO, NIST, IEEE, and the EU AI Act, according to their roles in shaping AI safety and alignment policy. Understanding how these frameworks interrelate helps organizations ensure compliance, manage risk, and develop AI systems that are safe, secure, and aligned with human values.

Flowchart code
flowchart LR
    A[AI Governance, Risk, and Compliance] --> B[ISO Standards]
    A --> C[NIST AI RMF]
    A --> D[IEEE Standards]
    A --> E[EU AI Act]

    B -->|Includes| B1[ISO/IEC 22989 - Foundational AI Concepts]
    B -->|Includes| B2[ISO/IEC 42001 - AI Management Systems]
    B1 -->|Covers| B1a[AI System Design Guidelines]
    B1 -->|Covers| B1b[Ethics in AI Development]
    B2 -->|Covers| B2a[Risk Mitigation Procedures]
    B2 -->|Covers| B2b[AI Operational Governance]

    C -->|Includes| C1[Risk Management]
    C -->|Includes| C2[Trustworthy AI]
    C1 -->|Covers| C1a[Risk Identification]
    C1 -->|Covers| C1b[Impact Assessment]
    C2 -->|Covers| C2a[Security Controls]
    C2 -->|Covers| C2b[Human Oversight]

    D -->|Includes| D1[IEEE 2937 - AI Performance Metrics]
    D1 -->|Covers| D1a[Safety and Reliability Metrics]
    D1 -->|Covers| D1b[AI Bias Evaluation]

    E -->|Includes| E1[Risk Categories]
    E -->|Includes| E2[Accountability and Transparency]
    E1 -->|Covers| E1a[Unacceptable Risk]
    E1 -->|Covers| E1b[High Risk]
    E1 -->|Covers| E1c[Limited Risk]
    E1 -->|Covers| E1d[Minimal Risk]
    E2 -->|Covers| E2a[Transparency Obligations]
    E2 -->|Covers| E2b[Human-in-the-Loop Requirements]
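To make the EU AI Act's four risk tiers above concrete, here is a minimal Python sketch of the taxonomy. The tier summaries reflect the Act's broad structure, but the example systems and the `obligations` helper are my own illustrative assumptions, not legal guidance or text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical example systems per tier -- illustrative only.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the compliance burden for a tier (paraphrased)."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskTier.HIGH: "Conformity assessment, risk management, human oversight.",
        RiskTier.LIMITED: "Transparency: users must be told they interact with AI.",
        RiskTier.MINIMAL: "No specific obligations under the Act.",
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```

Modelling the tiers as an enum keeps the taxonomy in one place, so any classification or compliance logic built on top of it stays consistent with the Act's four categories.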
