The Raydorf AI Maturity Standard

The Raydorf Standard is a published methodology for evaluating how organizations adopt, govern, and deploy artificial intelligence. It is open, versioned, and reviewed annually by the Standards Council.


§ II The Tiers

The five tiers

Each tier is cumulative: a firm certifies into a tier only when every one of the seven dimensions meets that tier's threshold. Certification therefore sits at the floor of the firm's weakest dimension. There is no aggregate score; tiers are categorical, not numeric.
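The floor rule can be sketched as a lookup over per-dimension tier levels. The numeric encoding below is a hypothetical illustration only; the Standard itself treats tiers as categorical, with no score.

```python
# Minimal sketch of the floor rule: the certified tier is the lowest
# tier threshold met across all seven dimensions. The numeric levels
# are an illustrative encoding, not part of the Standard.

TIERS = {1: "AI-Aware", 2: "AI-Enabled", 3: "AI-Integrated",
         4: "AI-First", 5: "AI-Native"}

def certified_tier(dimension_levels: dict[str, int]) -> str:
    """Return the tier at the floor of the weakest dimension."""
    floor = min(dimension_levels.values())
    return TIERS[floor]

# Hypothetical assessment across the seven dimensions of § III.
levels = {
    "Strategy & Leadership": 4,
    "Governance, Risk & Compliance": 3,
    "Data & Knowledge Infrastructure": 4,
    "Workflow Redesign & Operations": 3,
    "Talent & Operating Model": 2,   # weakest dimension sets the tier
    "Client Experience": 4,
    "Measurement & Accountability": 3,
}
print(certified_tier(levels))  # → AI-Enabled
```

Note that a single lagging dimension caps the whole certification, which is the intended incentive: firms cannot average a strong strategy score against weak governance.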

  1. AI-Aware (Tier I · Foundations)

    Foundations in place. Leadership has working AI literacy. A written AI policy exists. Initial governance and risk classification have been performed. AI use is limited or experimental, and properly disclosed where required.

  2. AI-Enabled (Tier II · Adoption)

    AI tools are deployed across the firm with proper controls. Sensitivity labeling, access governance, and a documented AI policy are in place. Existing workflows are augmented but not yet redesigned. Staff are trained in basic AI fluency.

  3. AI-Integrated (Tier III · Redesign)

    Core workflows have been redesigned around AI as the default first pass. Institutional knowledge is captured systematically at the close of every matter. Outcomes are measured. A designated operator role exists. The firm reports AI-related metrics to leadership on a defined cadence.

  4. AI-First (Tier IV · Default)

    AI is the default execution layer for the firm's core work. Human attention is reserved for judgment, relationships, and exceptions. The operating model has been redesigned around AI capacity, not around headcount. Client-facing services reflect AI-enabled delivery.

  5. AI-Native (Tier V · Native)

    The firm's operating model is architecturally dependent on AI. The firm could not deliver its services in their current form without AI. AI is embedded in the structure of the organization, not added to it.

§ III The Dimensions

The seven dimensions

Every tier is evaluated across the same seven dimensions. The same rubric applies at every level; what changes is the threshold.

  1. Strategy & Leadership

    Founder- or partner-level ownership. Strategic clarity on where AI creates value and where it does not. Board-level reporting cadence. A named accountable individual for the firm's AI program.

  2. Governance, Risk & Compliance

    Written AI policy. Risk classification of AI use cases against the EU AI Act, sector regulation, and data protection regimes. Incident response procedures. Documentation discipline appropriate to the firm's risk profile.

  3. Data & Knowledge Infrastructure

    Sensitivity labeling and access governance. Defined retention. A queryable institutional knowledge base. A closed-loop process that captures completed work into the knowledge base at matter close.

  4. Workflow Redesign & Operations

    Identified core workflows that have been redesigned around AI. Measurable change in cycle time, quality, or unit cost. Evidence that redesign — not just augmentation — has occurred.

  5. Talent & Operating Model

    AI fluency across all relevant roles. A designated operator role for the firm's AI infrastructure. Hiring and progression criteria that reflect AI-era competencies. Defined training cadence.

  6. Client Experience

    Disclosure practice for AI involvement in client work. Client-facing AI features where appropriate. Service delivery improvements attributable to AI maturity.

  7. Measurement & Accountability

    Defined KPIs. Model performance monitoring where applicable. Logged human oversight on AI-assisted output. An audit trail that allows the assessor — and the firm itself — to reconstruct what AI did, when, and under whose authority.
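The audit-trail requirement above (reconstruct what AI did, when, and under whose authority) can be sketched as a per-action record. The field names and values below are a hypothetical illustration; the Standard does not prescribe a schema.

```python
# Hypothetical audit-trail record for AI-assisted work, capturing
# the elements named in the Standard: what AI did, when, and under
# whose authority, plus logged human oversight. All field names and
# sample values are illustrative, not prescribed.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    matter_id: str          # the matter or engagement the work belongs to
    action: str             # what the AI system did
    system: str             # which AI system produced the output
    timestamp: datetime     # when the action occurred
    authorized_by: str      # under whose authority it was run
    human_reviewed: bool    # logged human oversight on the output

record = AIAuditRecord(
    matter_id="M-2031",
    action="drafted first-pass contract summary",
    system="internal-llm-v2",
    timestamp=datetime.now(timezone.utc),
    authorized_by="partner:j.doe",
    human_reviewed=True,
)
```

A record like this is what lets both the assessor and the firm reconstruct an AI-assisted output after the fact, rather than relying on memory or chat history.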

§ IV Attestation

EU AI Act Readiness Attestation

A Raydorf certification can be issued with a parallel EU AI Act Readiness Attestation. The attestation evaluates the deployer-side practices that the EU AI Act requires of organizations using AI systems — distinct from, and complementary to, the conformity assessment that AI providers may need to perform for their products.

The attestation covers, at minimum:

  • Correct risk classification of the firm's AI use cases under the Act.
  • Where applicable, the controls required of high-risk AI deployers, including human oversight, record-keeping, and transparency to affected persons.
  • Deployer obligations for general-purpose AI, and the AI literacy requirement under Article 4 of the Act.
  • Documentation discipline: AI inventories, supplier attestations, data protection impact assessments where required.
  • Readiness to report serious incidents to the appropriate authority.

The attestation is an organizational readiness assessment. It does not constitute conformity assessment by a Notified Body, and Raydorf does not issue the CE marking required of high-risk AI systems under the Act. Where such conformity assessment is required, Raydorf certification is intended to complement it, not replace it.

§ V Governance

Versioning and governance

The Raydorf Standard is versioned. Each version is published in full. Material changes are reviewed by the Standards Council, with public notice and a defined transition period for currently certified firms. The current version is 1.0, published in 2026. A changelog is maintained from version 1.1 forward.