EU AI Act 2026: obligations for SMEs and finance teams
EU AI Act 2026 for SMEs: timeline, roles, high-risk use cases, AI literacy, finance controls and a practical compliance checklist.
Expert note: This article was written by our chartered accountancy firm. Information is current as of 2026. For a personalised review of your situation, contact us.
The EU AI Act becomes a board-level issue in 2026. Even SMEs that do not build AI systems often deploy them: content generation, sales scoring, accounting automation, HR screening, customer support, cash forecasting or document review. The question is no longer whether AI is used, but whether the company can classify, document and control those uses.
See also our related articles on AI in accounting, accounting automation, compliance audits and DPO requirements.
Executive summary
The EU AI Act entered into force on 1 August 2024. The European Commission states that it becomes fully applicable on 2 August 2026, with exceptions: prohibited practices and AI literacy from 2 February 2025, governance and general-purpose AI obligations from 2 August 2025, and certain high-risk rules for regulated products from 2 August 2027. As of 3 May 2026, companies should also monitor the Commission's simplification proposals.
| AI use | SME decision |
|---|---|
| Internal generative AI | Control data, prompts, human review and confidentiality |
| Customer or credit scoring | Check automated-decision and bias risks |
| HR tool | Treat recruitment and evaluation as sensitive |
| Accounting automation | Keep human validation on entries and filings |
| In-house AI product | Clarify whether you are provider, deployer or integrator |
Who is in scope?
An SME can be in scope even if it does not publish an AI model. The AI Act distinguishes roles such as provider, deployer, importer and distributor. A company buying an AI SaaS tool is typically a deployer. A startup placing an AI tool on the market may be a provider.
The underestimated risk: confidential data
AI Act compliance does not replace GDPR, trade secrecy, client confidentiality or professional duties. The first SME incident is often simple: an employee pastes a customer list, payroll file, contract or accounting balance into an unapproved tool.
AI governance checklist
- map AI tools already used by teams;
- classify uses by risk level;
- prohibit sensitive data in unapproved tools;
- document human review for important decisions;
- retain supplier contracts and settings;
- train teams on AI literacy;
- keep an incident and correction register.
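The checklist above can be tracked as simple data. The sketch below is a minimal illustration in Python; the item names and the evidence format are our own assumptions, not terminology from the AI Act, and any real tracker should be adapted to the firm's existing internal-control files.

```python
# Minimal tracker for the AI governance checklist above.
# Item names and the evidence format are illustrative assumptions,
# not prescribed by the AI Act.
CHECKLIST = [
    "tool map",
    "risk classification",
    "sensitive-data rule",
    "human-review log",
    "supplier contracts",
    "AI literacy training",
    "incident register",
]

def missing_items(evidence: dict[str, str]) -> list[str]:
    """Return checklist items with no recorded evidence."""
    return [item for item in CHECKLIST if not evidence.get(item)]
```

For example, a team that has only mapped its tools (`missing_items({"tool map": "tools.xlsx"})`) would see the six remaining items listed as gaps to close.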
Finance teams can connect this work to digital finance transformation, French accounting services and outsourced CFO support. For tech companies, see our startups and tech page.
What management must decide
Management must balance productivity and control. The goal is not to ban AI, but to decide which uses are authorised, who validates them and which tools are approved.
| Decision | Why it matters |
|---|---|
| Internal AI policy | Prevents undocumented tool sprawl |
| Approved-tool list | Reduces data-leakage risk |
| Human validation | Protects HR, financial and tax decisions |
| Supplier review | Clarifies contractual responsibilities |
| ROI indicators | Avoids buying tools without measured impact |
Good AI adoption often starts with focused cases: invoice extraction through Dext, bank matching in Pennylane, or reporting in Power BI. ROI should be measured in time saved, control quality and fewer manual corrections.
Practical finance workflow
For a finance team, the AI Act workstream should be embedded into the monthly close and internal-control roadmap. The safest approach is to maintain a simple register that links each AI use case to a business process: invoice capture, supplier review, bank matching, management reporting, cash forecasting, payroll checks or customer support. Each line should state the tool used, the data processed, the human reviewer, the supplier contract and the fallback process if the tool is unavailable or wrong.
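One line of such a register can be sketched as a small record type. This is an illustrative structure only, assuming the fields described above (tool, data, reviewer, contract, fallback); the field names are our own and should be mapped onto whatever close or internal-control documentation the team already maintains.

```python
from dataclasses import dataclass

# One line of the AI use-case register described above.
# Field names are illustrative assumptions, not a prescribed format.
@dataclass
class AIUseCase:
    process: str       # e.g. "invoice capture", "cash forecasting"
    tool: str          # approved tool used
    data: str          # data categories processed
    reviewer: str      # named human reviewer ("" if none assigned)
    contract_ref: str  # supplier contract reference
    fallback: str      # manual process if the tool is unavailable or wrong

def review_gaps(register: list[AIUseCase]) -> list[str]:
    """Processes lacking a named human reviewer or a fallback process."""
    return [u.process for u in register if not u.reviewer or not u.fallback]
```

A periodic run of `review_gaps` over the register would surface any use case where human validation or a fallback has quietly disappeared, which is exactly the evidence gap described in the next paragraph.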
This matters because AI errors are rarely isolated. A wrong supplier classification can create VAT issues. A hallucinated management comment can mislead investors. A payroll or HR recommendation can create legal and social risk. A cash forecast generated from incomplete data can push management toward the wrong financing decision. The finance function therefore needs a control mindset: AI can accelerate work, but the company must still evidence who reviewed the output and how exceptions are corrected.
Evidence file for SMEs
An SME does not need an overly complex compliance binder for every low-risk use. It does need a practical evidence file. Keep the approved-tool list, supplier terms, data-processing notes, internal policy, training log, risk classification, and examples of human review. For higher-risk or customer-facing systems, add model documentation received from the vendor, incident logs, testing notes and board-level approval of the use case.
The strongest E-E-A-T signal for this type of YMYL content is not a promise that AI is safe. It is the opposite: a clear statement that finance, tax, HR and legal decisions remain under human responsibility, with documented review.
2026 watch points
- Monitor any EU changes to the high-risk implementation timeline.
- Do not use consumer AI tools for sensitive finance or HR data.
- Treat HR and scoring use cases with particular care.
- Train teams: AI literacy is already part of the EU timetable.
- Add AI clauses to supplier contracts where critical data is processed.
Frequently asked questions
Is an SME using ChatGPT automatically an AI provider?
No. It is usually a deployer, but it must still control data, use cases, human review and internal rules.
Does the AI Act apply to accounting automation?
It can, depending on the tool and use case. Even outside high-risk categories, finance teams need internal controls over entries, filings and decisions.
What is AI literacy?
It is the ability of teams to understand AI uses, limits and risks. It requires training, internal instructions and role-specific guidance.
Should SMEs stop AI projects in 2026?
No. They should prioritise, document and govern them. Low-risk, high-ROI projects can proceed with proportionate controls.
What is the first document to prepare?
An AI use-case map: tool, team, data, purpose, supplier, risk level, human control and internal owner.
Related pillar guide
To move from isolated AI tests to a controlled finance workflow, read AI in accounting 2026: use cases, ROI, risks and the EU AI Act. It helps management decide on tools, sensitive data, human review and ROI.

Article written by Samuel HAYOT
Chartered Accountant, registered with the Institute of Chartered Accountants.
Regulated French accounting and audit firm based in Paris 8, built to support companies across France with a digital and decision-oriented approach.
This topic is part of our service Finance transformation | Automation & dashboards
Need a quote or personalised advice?
Our accountancy firm supports you at every step. Request a free quote to review your situation and receive a tailored fee proposal, or contact us directly.