§ I Methodology
The Raydorf AI Maturity Standard
The Raydorf Standard (version 1.0, published 2026) is a published methodology for evaluating how organizations adopt, govern, and deploy artificial intelligence. It is open, versioned, and reviewed annually by the Standards Council.
§ II The Tiers
The five tiers
Tiers are cumulative. A firm certifies into a tier only when every one of the seven dimensions meets that tier's threshold; equivalently, a firm is certified at the floor set by its weakest dimension. There is no aggregate score; tiers are categorical, not numeric.
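The floor rule above is simple enough to state as code. The following is a minimal sketch, assuming tiers are numbered 1 through 5 and each dimension has been assessed at some tier; the function and variable names are illustrative, not part of the Standard.

```python
# Sketch of the categorical tier rule: a firm's tier is the floor
# of its weakest dimension. Names are illustrative assumptions.

DIMENSIONS = [
    "Strategy & Leadership",
    "Governance, Risk & Compliance",
    "Data & Knowledge Infrastructure",
    "Workflow Redesign & Operations",
    "Talent & Operating Model",
    "Client Experience",
    "Measurement & Accountability",
]

def certified_tier(assessment: dict) -> int:
    """Return the firm's certified tier.

    There is no aggregate score: the result is categorical, so a
    single lagging dimension caps the whole certification.
    """
    missing = set(DIMENSIONS) - set(assessment)
    if missing:
        raise ValueError(f"unassessed dimensions: {sorted(missing)}")
    return min(assessment[d] for d in DIMENSIONS)
```

For example, a firm assessed at tier 4 in six dimensions but tier 2 in Client Experience certifies at tier 2.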
§ III The Dimensions
The seven dimensions
Every firm is evaluated across the same seven dimensions. The same rubric applies at every tier; what changes is the threshold each dimension must meet.
- Strategy & Leadership: Founder- or partner-level ownership. Strategic clarity on where AI creates value and where it does not. Board-level reporting cadence. A named accountable individual for the firm's AI program.
- Governance, Risk & Compliance: Written AI policy. Risk classification of AI use cases against the EU AI Act, sector regulation, and data protection regimes. Incident response procedures. Documentation discipline appropriate to the firm's risk profile.
- Data & Knowledge Infrastructure: Sensitivity labeling and access governance. Defined retention. A queryable institutional knowledge base. A closed-loop process that captures completed work into the knowledge base at matter close.
- Workflow Redesign & Operations: Identified core workflows that have been redesigned around AI. Measurable change in cycle time, quality, or unit cost. Evidence that redesign, not merely augmentation, has occurred.
- Talent & Operating Model: AI fluency across all relevant roles. A designated operator role for the firm's AI infrastructure. Hiring and progression criteria that reflect AI-era competencies. Defined training cadence.
- Client Experience: Disclosure practice for AI involvement in client work. Client-facing AI features where appropriate. Service delivery improvements attributable to AI maturity.
- Measurement & Accountability: Defined KPIs. Model performance monitoring where applicable. Logged human oversight on AI-assisted output. An audit trail that allows the assessor, and the firm itself, to reconstruct what AI did, when, and under whose authority.
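The audit-trail requirement can be pictured as a log record that captures what AI did, when, and under whose authority. The sketch below is one possible shape; the field names are assumptions for illustration, not mandated by the Standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AIAuditRecord:
    """One illustrative audit-trail entry.

    Enough to reconstruct what AI did, when, and under whose
    authority. Field names are assumptions, not mandated text.
    """
    action: str                     # what the AI system did
    system: str                     # which AI system performed it
    actor: str                      # the human who authorized it
    reviewed_by: Optional[str]      # logged human oversight, if any
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = AIAuditRecord(
    action="drafted client memo section",
    system="internal-llm-v2",
    actor="partner.jane",
    reviewed_by="associate.kim",
)
```

A record like this satisfies both halves of the requirement: the assessor can reconstruct the event, and the firm retains evidence of logged human oversight.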
§ IV Attestation
EU AI Act Readiness Attestation
A Raydorf certification can be issued with a parallel EU AI Act Readiness Attestation. The attestation evaluates the deployer-side practices that the EU AI Act requires of organizations using AI systems — distinct from, and complementary to, the conformity assessment that AI providers may need to perform for their products.
The attestation covers, at minimum:
- Correct risk classification of the firm's AI use cases under the Act.
- Where applicable, the controls required of high-risk AI deployers, including human oversight, record-keeping, and transparency to affected persons.
- Deployer obligations for general-purpose AI, and the AI literacy requirement of Article 4 of the Act, which applies to AI deployers generally.
- Documentation discipline: AI inventories, supplier attestations, data protection impact assessments where required.
- Readiness to report serious incidents to the appropriate authority.
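The risk-classification and AI-inventory items above can be sketched as a minimal record. The four risk categories follow the commonly used summary of the EU AI Act's taxonomy (prohibited, high, transparency/limited, minimal); the record fields are illustrative assumptions, not an attestation schema published by Raydorf.

```python
from dataclasses import dataclass
from enum import Enum

class AIActRiskClass(Enum):
    # Common four-way summary of the EU AI Act risk taxonomy.
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # high-risk uses under the Act
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AIInventoryEntry:
    """One illustrative AI-inventory row; field names are assumptions."""
    use_case: str
    supplier: str
    risk_class: AIActRiskClass
    dpia_completed: bool            # data protection impact assessment
    human_oversight: bool           # oversight control in place

entry = AIInventoryEntry(
    use_case="contract review assistant",
    supplier="example-vendor",
    risk_class=AIActRiskClass.LIMITED,
    dpia_completed=True,
    human_oversight=True,
)
```

An inventory of such entries is the documentation artifact the attestation examines: one row per use case, each carrying its classification and the controls that follow from it.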
The attestation is an organizational readiness assessment. It does not constitute conformity assessment by a Notified Body, and Raydorf does not issue the CE marking required of high-risk AI systems under the Act. Where such conformity assessment is required, Raydorf certification is intended to complement it, not replace it.
§ V Governance
Versioning and governance
The Raydorf Standard is versioned. Each version is published in full. Material changes are reviewed by the Standards Council, with public notice and a defined transition period for currently certified firms. The current version is 1.0, published in 2026. A changelog is maintained from version 1.1 forward.