§ V — Insights Regulatory Note Vol. I, No. 4 MMXXVI
EU AI Act Article 4 and the Professional Services Firm
The deployer-side AI literacy obligation, applied to law and accounting practice.
On 2 February 2025, the first substantive provisions of Regulation (EU) 2024/1689, the EU AI Act, became applicable. Most attention fell, predictably, on the prohibited practices set out at Article 5. Less attention fell on Article 4, which became applicable on the same date and which applies, in practical terms, to a far larger population of firms than the prohibitions at Article 5. Article 4 imposes an obligation of artificial intelligence literacy on providers and deployers of AI systems. It applies regardless of whether the firm is using a high-risk system, a limited-risk system, or a general-purpose AI system put to ordinary use. It is the first obligation under the Act that a law firm, audit practice, or advisory partnership cannot dispose of by reference to its risk classification.
The Institute's view is that Article 4 is widely under-prepared for in the sector, and widely over-engineered in the small number of firms that have prepared. The under-prepared firms assume that literacy is satisfied by an annual e-learning module on data protection, or by an internal memorandum announcing the firm's policy on a particular deployed AI tool. The over-prepared firms have commissioned bespoke curricula, certified internal trainers, and a documentation apparatus disproportionate to the obligation as drafted. Both miss the structure of the Article, which is principles-based, contextual, and explicit about the criteria against which sufficiency is to be assessed.
§ I What Article 4 Imposes
Article 4 requires that providers and deployers of artificial intelligence systems "take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used" (EU AI Act, Article 4). The defined term sits at Article 3(56), which describes AI literacy as the skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and to gain awareness about the opportunities and risks of AI and the possible harm it can cause.
Three features of the drafting bear emphasis.
First, the obligation runs to deployers and not only to providers. Most professional services firms are deployers within the meaning of Article 3(4): natural or legal persons using AI systems under their authority, excluding personal non-professional activity. A law firm running matter management against a foundation model, an audit practice using a model-assisted journal-entry tool, an advisory partnership drafting against a general-purpose AI system: each is a deployer in the Act's sense, and each falls within Article 4 on the basis of that deployment alone.
Second, the obligation is to take measures, not to achieve an outcome. The drafting is "to their best extent" and "sufficient", which are standards of effort and contextual appropriateness rather than guarantees of competence. A firm that has put in place a calibrated programme, documented it, and reviewed it cannot be said to have failed Article 4 because an individual user has nevertheless made a mistake. The standard is the firm's measures, not the marginal user's performance.
Third, the obligation is contextual on three axes specified within the Article itself: the existing technical knowledge, experience, education, and training of the persons concerned; the context in which the AI systems are to be used; and the persons or groups on whom the AI systems are to be used. This is the most important feature of the Article for professional services. The training appropriate for a managing partner who supervises the use of a deployed AI tool in litigation is not the training appropriate for a paralegal operating the tool, which is not the training appropriate for the firm's general counsel certifying the firm's compliance, which is not the training appropriate for the technology team configuring the tool. A firm that delivers the same module to all four has not implemented Article 4; it has implemented a tick-box programme, and may not have done that well.
§ II Three Reference Firms
The Institute is regularly asked what reasonable compliance with Article 4 looks like at different scales of practice. We sketch three reference firms below. The sketches are not safe harbours; the Article does not provide for safe harbours, and the Institute does not certify compliance with Article 4 as such. What follows is calibration, not prescription.
A 50-fee-earner firm (a boutique disputes practice, a specialist tax counsel, a small audit firm) that uses one or two deployed AI tools across its work can meet Article 4 with a programme of modest extent. A baseline session for all staff covering what the firm's tools do and do not do, an annual refresh, role-differentiated guidance for fee-earners and supervisors, written instructions for use, and a documented point of contact for questions that arise in matter execution will, in most contexts, be sufficient. The documentation that supports it need not exceed a single short policy, an annotated curriculum, an attendance register, and a record of the firm's reasoning on why this is calibrated to its use.
A 250-fee-earner firm with practice diversity, such as a mid-size full-service firm or a mid-tier accountancy, will need more. The firm's deployments are likely to span practice groups with different risk profiles, different supervisory structures, and different client expectations. A single programme will not serve. The Institute typically expects to see role-differentiated curricula keyed to function (fee-earners, supervisors, support, technology, compliance), supplemented by practice-specific modules where a particular tool's use is concentrated. Cadence becomes more important: an annual baseline is not sufficient where tools are added or materially updated mid-year. A short release-note training regime, triggered by the introduction or significant modification of a tool, is the more defensible model.
A 1,000-fee-earner firm, or larger, has Article 4 obligations that extend into its supervisory and governance fabric. Curricula are necessarily differentiated by practice group and by seniority. The firm is likely to deploy multiple AI systems with materially different operational characteristics, and literacy in one does not transfer to another. The programme should distinguish between general AI literacy (sufficient understanding of what AI systems are, what they characteristically err in, and what oversight they require) and system-specific literacy (sufficient understanding of the deployed AI tool in front of the user to operate it within firm guidance and applicable rules of practice). At this scale, the documentation apparatus, comprising training matrices, completion records, evidence of role-based calibration, evidence of refresh cadence, and evidence of remediation where gaps are identified, becomes a substantive piece of regulatory infrastructure, maintained alongside the firm's quality, risk, and compliance functions rather than as a one-off project.
What distinguishes the three is not the depth of the training delivered to the most engaged user. A fee-earner in the small firm using the deployed AI tool daily may, in fact, be more literate than her counterpart in the large firm, by virtue of practice rather than programme. What distinguishes them is the firm's capacity to demonstrate, across its whole population, that it has taken the measures Article 4 requires.
§ III Literacy and the Adjacent Trainings
A common error in the sector is to fold artificial intelligence literacy into existing data protection or cybersecurity training. The error is understandable. These are the trainings already familiar to professional services firms, and the temptation to extend an existing programme is administratively rational. It is an error nonetheless.
Data protection training addresses, in substance, the firm's obligations under the General Data Protection Regulation and equivalent regimes: lawful basis, data minimisation, data subject rights, breach notification. The competence it instils is the competence to recognise personal data, to handle it appropriately, and to escalate when its handling is in question. This is necessary for the operation of any deployed AI tool that processes personal data, but it is not sufficient for Article 4. A user who knows that she must not paste client personal data into a general-purpose AI system does not, by virtue of that knowledge, understand the system's tendency to fabricate citations, the limits of its training data, the appropriate scope of human oversight under Article 14 where it applies, or the firm's specific guidance on the system's use.
Cybersecurity training addresses threats: phishing, credential hygiene, social engineering, the handling of suspicious files. It instils habits of caution. It does not, in itself, instil any understanding of what an artificial intelligence system is or how its outputs should be received.
Article 4 literacy occupies its own substantive territory. It concerns the user's capacity to recognise an AI system in operation, to form a calibrated expectation of what its outputs can and cannot be relied upon for, to apply the firm's guidance on its use, and to know the points at which professional judgement must intervene. Where data protection training instils a duty of care over data, and cybersecurity training instils a duty of caution about threats, Article 4 literacy instils a duty of calibrated reliance on a tool whose outputs vary in ways the user must learn to anticipate.
The practical implication is that the trainings should be delivered as adjacent but distinct programmes. They may share a delivery platform; they may be sequenced in a single annual training window; they should not be collapsed into a single module that gestures at each.
§ IV Conduct Rules and Standards of Practice
For professional services firms, Article 4 does not stand alone. It interacts, sometimes redundantly and sometimes additively, with the conduct rules that already govern competence and supervision.
In the legal profession, the SRA Principles and the SRA Code of Conduct in England and Wales impose duties of competence and effective supervision; the rules of the Union of Turkish Bar Associations contain equivalent obligations; the 2012 amendment to Comment 8 of ABA Model Rule 1.1, in jurisdictions that have adopted it, expressly addresses the duty to keep abreast of the benefits and risks of relevant technology. A firm's existing supervisory infrastructure, calibrated to those rules, will already address part of what Article 4 requires. It will not address all of it: the conduct rules are framed around the lawyer's competence, while Article 4 is framed around the deployer's measures across all staff and other persons dealing with the operation of the deployed AI system. A paralegal, a knowledge management professional, or a technology officer falls within Article 4 in a way she may not fall within the conduct rules.
In audit practice, ISA 220 (Revised) on quality management at the engagement level imposes responsibilities on the engagement partner for the competence and capabilities of the engagement team; ISA 540 (Revised) and ISA 500 bear on the use of models and the audit evidence they produce. A firm's quality management system at the firm level under ISQM 1 already requires a process for human resources that addresses competence. As with the legal rules, this infrastructure addresses part of Article 4's terrain but does not exhaust it.
The institutional position is straightforward: where conduct rules and practice standards already require competence, Article 4 is unlikely to add a wholly novel obligation, but it requires that the firm's measures be visible and assessable as measures, in a form a regulator (or an assessor) can examine. A firm that satisfies its conduct duties through tacit professional habit, without documentation, may satisfy its professional regulator; it will not satisfy Article 4.
§ V The Shape of Reasonable Compliance
The Article does not prescribe a form. The Commission's AI Office has produced a non-binding repository of literacy practices and continues to publish clarifying guidance; the Article itself remains principles-based. The Institute's view is that a reasonable Article 4 programme has the following elements, in the following order of priority:
- A role-based curriculum that maps each role in the firm to a calibrated set of learning outcomes, with general AI literacy outcomes for all roles and system-specific outcomes for those who operate particular deployed AI tools.
- A cadence that includes a baseline at hire or at programme commencement, an annual refresh, and a release-note training triggered by the introduction or material modification of a deployed AI system.
- Written guidance on the firm's permitted and impermissible uses of each deployed AI system, accessible at the point of use rather than only in a policy document.
- A documented mechanism for escalation when a user encounters a use that is not covered by guidance, owned by a named function (general counsel, head of risk, or equivalent).
- Records sufficient to demonstrate, on inspection, that the programme has been delivered, who has completed it, and how the firm has responded to gaps.
The five elements are unremarkable, and that is the point. Article 4 does not call for a programme more elaborate than the firm's other regulatory trainings; it calls for a programme that is materially specific to artificial intelligence and that is calibrated to the roles in which AI systems are operated. Over-engineering is a particular risk for firms whose first instinct on a new regulation is to instruct a vendor to build a bespoke product. The Article asks for proportion, not expenditure.
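To make the record-keeping element concrete, the register the fifth element contemplates can be as simple as a structured mapping from roles to required modules, against which completions are checked. The sketch below is purely illustrative: the role names, module identifiers, and field choices are hypothetical, and nothing in Article 4 prescribes this (or any other) form of record.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: Article 4 leaves the form of records to the firm.
# Role and module names below are hypothetical.

@dataclass(frozen=True)
class Curriculum:
    role: str                  # e.g. "fee-earner", "supervisor"
    general_modules: tuple     # general AI literacy outcomes for all roles
    system_modules: tuple      # system-specific outcomes per deployed tool

@dataclass(frozen=True)
class CompletionRecord:
    person: str
    module: str
    completed_on: date

def outstanding_modules(curriculum, records, person):
    """Modules the person has not completed: the gap the firm's
    records must allow it to identify and remediate on inspection."""
    required = set(curriculum.general_modules) | set(curriculum.system_modules)
    done = {r.module for r in records if r.person == person}
    return sorted(required - done)

# A hypothetical fee-earner curriculum with one deployed tool.
fee_earner = Curriculum(
    role="fee-earner",
    general_modules=("ai-baseline-2025",),
    system_modules=("drafting-tool-v2",),
)
records = [CompletionRecord("A. Lawyer", "ai-baseline-2025", date(2025, 3, 1))]
print(outstanding_modules(fee_earner, records, "A. Lawyer"))
# prints ['drafting-tool-v2']
```

The point of the sketch is proportionality: a register of this shape, kept current and reviewed, is the kind of modest apparatus the Article's documentation burden amounts to at the smaller scales discussed above.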
§ VI Necessary but Not Sufficient
The Institute issues the EU AI Act Readiness Attestation as a discrete instrument, parallel to but distinct from the AI Maturity Standard. The reason is that compliance with the Act is a regulatory threshold, while maturity is a structural assessment. A firm can satisfy Article 4 and certify low on the Standard. A firm can certify high on the Standard and remain exposed under Article 4 if its measures are present in substance but undocumented as measures.
Within the Standard itself, Article 4 literacy indexes the fifth dimension, Talent and Operating Model, and informs evidence in two others, Governance, Risk and Compliance, and Measurement and Accountability. It does not, by itself, raise a firm's certification floor. A firm that has done excellent literacy work but has not redesigned its workflows, governed its deployments, or measured their outputs is a firm with a strong fifth dimension and weaker scores elsewhere. Since the Standard certifies at the floor of the weakest dimension, such a firm certifies where its weakest dimension sits, regardless of how thoroughly literate its staff are.
The institutional position is that Article 4 literacy is necessary but not sufficient. Necessary, because no firm operating deployed AI systems within the Union's regulatory reach can sustain a posture of non-attention to it; the Article has applied since 2 February 2025. Not sufficient, because literacy is a property of the firm's people, and a firm is more than its people: it is also its workflows, its governance, its measurement, and the standards to which its leadership holds itself. Article 4 indexes one dimension of the seven. A firm that satisfies it satisfies a threshold, and is then asked the harder questions.