Most midmarket organisations now have an AI policy document. It sits in a SharePoint folder, was written by a consultant six months ago, and has been largely forgotten since. The compliance lead has seen it. The board has nodded. And the AI projects continue exactly as they always have — unregistered, unclassified, uncontrolled.
This is not a criticism. The policy document is the natural first step. It demonstrates intent. But intent is not compliance, and compliance is not just text.
The EU AI Act entered into force in August 2024, and its obligations apply in stages through 2027. For midmarket organisations — typically 200 to 5,000 employees, with one or two people who have AI in their job description but not the title "AI Officer" — the challenge is not to write a better policy. The challenge is to convert that policy into four concrete deliverables that actually get used.
A policy document tells people what you intend to do. The four deliverables show what you actually do. That is the difference a supervisory authority looks for.
What the EU AI Act actually requires of midmarket organisations
The AI Act is risk-based: requirements depend on what the system does, not on who built it. An organisation using an AI system to screen job applicants is subject to high-risk requirements — regardless of whether it built the system itself or uses a third-party SaaS product.
For the vast majority of midmarket organisations, existing AI systems fall into one of three categories (the Act's fourth tier, prohibited practices, is banned outright and should not appear in any inventory):
Minimal risk: Chatbots, content generation, summarisation, internal tools. No specific requirements beyond general transparency.
Limited risk: Systems that interact directly with end-users who would not otherwise know they are interacting with AI. Disclosure obligations apply.
High risk (Annex III): Systems that make decisions about employment, education, credit, health, or critical infrastructure. Requirements here are the most extensive: technical documentation, human oversight, logging, and accuracy and robustness obligations.
Most organisations have not classified their systems systematically. They do not know which category each system falls into — and that is the starting point.
The register: your first AI inventory
The first deliverable is a register of all AI systems and AI use cases in the organisation. Not a list of vendors. A list of uses.
The register does not need to be complex. Six fields are sufficient to get started:
System name — what do you call it internally?
Purpose — what does the system actually do? One sentence. "Classifies incoming support tickets and prioritises them by expected resolution time."
Data inputs — what data does the system process? Personal data? Special categories such as health data, or other sensitive data such as financial records?
Business owner — who owns the system? Not IT — the business area that requested it.
Accountable person — who can answer questions about the system today, and in six months?
Risk category — your initial classification based on Annex III.
A register of ten systems in a spreadsheet is better than no register. The most important thing is that it is updated when new systems are adopted, and that one named person is responsible for keeping it current — not the board, not the compliance department, but a specific individual.
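To make the six fields concrete, here is a minimal sketch of a register entry as a flat record. It is illustrative only; the field names and the example system are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    GREEN = "minimal risk"
    YELLOW = "limited risk"
    RED = "high risk (Annex III)"

@dataclass
class AIRegisterEntry:
    system_name: str         # what you call it internally
    purpose: str             # one sentence: what the system actually does
    data_inputs: str         # personal data? special categories?
    business_owner: str      # the business area, not IT
    accountable_person: str  # a named individual, not a department
    risk_category: RiskCategory

# Hypothetical example entry
entry = AIRegisterEntry(
    system_name="Ticket Triage Bot",
    purpose="Classifies incoming support tickets and prioritises them "
            "by expected resolution time.",
    data_inputs="Customer names and ticket text; no special categories.",
    business_owner="Customer Support",
    accountable_person="jane.doe@example.com",
    risk_category=RiskCategory.GREEN,
)
```

The same record works as a spreadsheet row; the point is that every system gets exactly these six answers.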
Risk classification: three categories, not fifty
The second deliverable is classifying each system. Not as a one-time event, but as an ongoing process triggered each time a new system is adopted or an existing one changes materially.
A pragmatic risk classification for the midmarket needs only three tiers:
Green — no special measures required: The system falls in the minimal risk category. It is used internally, does not process special categories of personal data, and does not make decisions with significant consequences for individuals. Examples: AI-assisted note-taking, internal draft generation, code generation for internal development projects.
Yellow — simplified controls: The system communicates with end-users or processes personal data, but does not fall under Annex III. Disclosure obligations and basic output logging apply. Examples: customer-facing chatbot, AI-generated email drafts for customer communications.
Red — full Annex III controls: The system falls under the high-risk categories. Requirements for technical documentation, human oversight, and logging are the most extensive. Examples: AI-assisted recruitment screening, credit scoring, medical decision support.
Classification is not a legal assessment. It is a working tool. Do not let it stall in the legal department. Produce a first draft, then get legal input on the red systems.
Classify based on the system's actual function, not its marketing copy. An "AI-powered analytics tool" for personnel evaluation is Annex III. An AI chatbot for internal FAQs is not.
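The triage itself can be made mechanical. A rough sketch of the three-tier logic as a function, assuming the simplified model above; the Annex III check is a placeholder for your own legal list, not a complete test:

```python
def classify(annex_iii: bool, user_facing: bool, personal_data: bool) -> str:
    """Simplified three-tier triage. A working tool, not a legal assessment."""
    if annex_iii:                      # employment, education, credit, health, ...
        return "RED"                   # full Annex III controls
    if user_facing or personal_data:
        return "YELLOW"                # disclosure plus basic output logging
    return "GREEN"                     # registration only

# Hypothetical examples
assert classify(annex_iii=True,  user_facing=True,  personal_data=True)  == "RED"     # recruitment screening
assert classify(annex_iii=False, user_facing=True,  personal_data=False) == "YELLOW"  # customer-facing chatbot
assert classify(annex_iii=False, user_facing=False, personal_data=False) == "GREEN"   # internal note-taking
```

Anything the function marks red goes to legal; everything else keeps moving.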
The decision matrix: who approves what
The third deliverable answers one question: who is authorised to adopt a new AI system, and what does it require?
Without a decision matrix, two things happen. Either everything gets approved (because everyone wants to use AI, and no one wants to be the blocker), or nothing gets approved (because compliance fears the liability). Both outcomes are damaging.
A simple matrix has three levels:
Level 1 — Green systems: Can be adopted by the individual department following simple registration. Business owner approves. IT confirms security requirements are met. No compliance review required.
Level 2 — Yellow systems: Requires approval from IT security and a brief data protection impact assessment (DPIA-light). Business owner, CISO, and DPO (where relevant) sign off. Maximum two-week process time.
Level 3 — Red systems: Requires full technical documentation, a human oversight model, a logging strategy, and external legal review. Director-level approval. Process time: four to eight weeks.
The matrix should be accessible to project managers, not just the compliance team.
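One way to keep the matrix usable by project managers is to store it as data rather than prose, so an intake form or script can answer "who signs off, and with what?" directly. A sketch under the three levels above; the roles and timelines are the illustrative ones from this section, not a prescribed process:

```python
# The decision matrix as data: tier -> approvers, required artifacts, process time.
# Illustrative only; adapt the roles and timelines to your organisation.
DECISION_MATRIX = {
    "GREEN": {
        "approvers": ["business owner", "IT (security check)"],
        "artifacts": ["register entry"],
        "max_weeks": 0,  # simple registration, no compliance review
    },
    "YELLOW": {
        "approvers": ["business owner", "CISO", "DPO (where relevant)"],
        "artifacts": ["register entry", "DPIA-light", "end-user disclosure text"],
        "max_weeks": 2,
    },
    "RED": {
        "approvers": ["director", "external legal review"],
        "artifacts": ["technical documentation", "human oversight model",
                      "logging strategy"],
        "max_weeks": 8,
    },
}

def requirements(tier: str) -> dict:
    """Look up what adopting a system at this tier requires."""
    return DECISION_MATRIX[tier]
```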
The one-pager that actually gets used
The fourth and most underrated deliverable is the one-pager that explains AI governance to the rest of the organisation.
Not a FAQ. Not an excerpt from the AI Act. One page that answers the three questions the project manager, sales director, and marketing coordinator will ask when considering a new AI tool:
"Are we allowed to use it?" — Yes, if you register it and classify it as green or yellow, and follow the approval process.
"What do we need to document?" — For green systems: the registration. For yellow: DPIA-light and disclosure text for end-users. For red: technical documentation and oversight protocol.
"Who do we ask?" — [Name], [email], [response time].
A one-pager that actually gets used is worth more than a forty-page policy that does not.
Integration with architecture and decisions
AI governance is not an isolated compliance exercise. It connects directly to the organisation's IT architecture and strategic decisions.
An AI system adopted without architecture review creates technical debt. A system approved without data lineage documentation creates compliance debt. Both types of debt are cheapest to pay at adoption time.
In practice this means the register should be linked to the application portfolio. Each AI system is an application asset — it should be evaluated using the same methods as other critical systems: what does it cost, what does it deliver, who owns it, and what happens when it fails?
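Concretely, the link can be as small as a shared key between the AI register and the application portfolio. A hypothetical sketch; the asset_id key and both record shapes are assumptions, not an existing schema:

```python
# Hypothetical join between the AI register and the application portfolio.
portfolio = {
    "APP-0042": {"annual_cost_eur": 18_000, "owner": "Customer Support",
                 "criticality": "medium", "on_failure": "manual triage"},
}

ai_register = {
    "Ticket Triage Bot": {"risk_category": "GREEN", "asset_id": "APP-0042"},
}

for name, ai in ai_register.items():
    asset = portfolio[ai["asset_id"]]
    print(f"{name}: {ai['risk_category']}; costs {asset['annual_cost_eur']} EUR/yr; "
          f"owned by {asset['owner']}; on failure: {asset['on_failure']}")
```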
What to do tomorrow
AI governance in the midmarket does not require a dedicated team, a new system, or an external consulting engagement. It requires four deliverables, one accountable person, and the willingness to start with an incomplete register rather than a perfect document.
Week 1: Build the register. Use a spreadsheet. Record every AI system currently in use — including third-party SaaS products with AI features enabled by default.
Week 2: Classify each system using the simplified three-tier model. Flag red systems for separate follow-up.
Week 3: Write the decision matrix. Keep it to one page. Distribute it to project managers.
Week 4: Write the one-pager for the rest of the organisation.
That is it. Not because the AI Act requires nothing more for red systems, but because the four deliverables give you a foundation to build on — and a starting point for a meaningful conversation with a supervisory authority if the need arises.
Truth as a first principle: your AI governance is only as good as your most recent register check.