
AI Governance Without the Bureaucracy

Founder, Spekir·Apr 16, 2026·6 min read
AI Governance · EU AI Act · Risk Management

Most organisations that start thinking about AI governance end up in one of two places. Either they do nothing — and the number of unmanaged AI tools quietly multiplies across departments — or they build a governance framework so comprehensive that it takes six months to produce and nobody reads it when it's done.

There is a third option, and it starts with a register.

The Problem Is Not Complexity. It's Visibility.

When a CIO or Head of IT asks "what AI are we actually using?", the answer is almost always incomplete. Marketing has three tools. Finance piloted something last quarter. Product built a prototype with Claude that's now in production. Someone in HR signed up for a hiring tool that processes candidate data.

None of this is unusual. But the lack of a central register means no one can answer the questions that actually matter. Where does the data go? Who approved it? What happens when the EU AI Act requires documentation?

Governance is not about stopping people from using AI. It's about knowing what you have, who decided, and what the risk profile looks like.

What Minimal Viable Governance Looks Like

A pragmatic AI governance setup for a midmarket organisation needs exactly four things.

An AI register that lists every AI system in use, its purpose, data inputs, risk level, and the person who owns it. Not a 200-field enterprise taxonomy — a structured list that can be maintained by one person and reviewed quarterly.
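A register that simple can be sketched as a plain data structure. The field names below are illustrative, not a prescribed schema — the point is that five fields per system are enough to start:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One row in the AI register: enough to answer 'what, why, whose'."""
    system: str       # name of the AI tool or system
    purpose: str      # what it is used for
    data_inputs: str  # what data goes in (e.g. "candidate CVs")
    risk_level: str   # "high" or "low" per the classification
    owner: str        # the person accountable for the system

# Hypothetical entries for illustration only
register = [
    RegisterEntry("Hiring screener", "CV shortlisting", "candidate data", "high", "Head of HR"),
    RegisterEntry("Drafting assistant", "internal memos", "internal text", "low", "Comms lead"),
]

# The register immediately answers questions like "which systems are high-risk?"
high_risk = [e.system for e in register if e.risk_level == "high"]
```

Whether this lives in code, a spreadsheet, or a shared document matters far less than the fact that one person owns it and reviews it quarterly.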

A risk classification that separates high-risk systems (customer-facing, personal data, financial decisions) from low-risk ones (internal productivity, drafting, summarisation). The EU AI Act already provides a reasonable framework for this. You do not need to invent your own.
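The coarse two-level split described above can be expressed as a single function. This is a deliberate simplification for illustration — the EU AI Act's own risk tiers are more granular, and the three flags here are assumptions drawn from the examples in this article, not the Act's legal criteria:

```python
def classify_risk(customer_facing: bool, personal_data: bool,
                  financial_decisions: bool) -> str:
    """Coarse high/low split: any sensitive dimension makes a system high-risk."""
    if customer_facing or personal_data or financial_decisions:
        return "high"
    return "low"

# An internal drafting tool touching no personal data is low-risk;
# a hiring tool processing candidate data is high-risk.
classify_risk(customer_facing=False, personal_data=False, financial_decisions=False)
classify_risk(customer_facing=False, personal_data=True, financial_decisions=False)
```

The design choice worth noting: the classification errs toward "high" on any sensitive dimension, because the cost of over-classifying a tool is a review, while the cost of under-classifying one is an undocumented obligation.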

A decision matrix that clarifies who can approve what. Can a department head approve a low-risk AI tool? Does anything involving personal data require IT sign-off? These are not hard questions, but without explicit answers, every decision defaults to the loudest voice or the most enthusiastic early adopter.
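A decision matrix of this kind is small enough to write out in full. The roles and rules below are a hypothetical example, not a recommended policy — the value is in making the lookup explicit rather than implicit:

```python
# (risk level, involves personal data) -> who can approve. Illustrative rules:
# department heads approve low-risk tools; anything touching personal data
# or classified high-risk requires IT sign-off.
APPROVAL_MATRIX = {
    ("low", False): "department head",
    ("low", True): "IT",
    ("high", False): "IT",
    ("high", True): "IT and data protection officer",
}

def approver(risk_level: str, personal_data: bool) -> str:
    """Look up who approves a given combination; no defaults, no loudest voice."""
    return APPROVAL_MATRIX[(risk_level, personal_data)]
```

Four rows cover every case, and the absence of a fallback is the point: a combination with no entry is a decision nobody has made yet, which is exactly what the matrix exists to surface.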

A governance one-pager — not a 40-slide framework deck — that describes the above in plain language. One page. Readable by someone who is not an architect or compliance specialist.

What You Do Not Need (Yet)

You do not need an AI ethics board. You do not need a dedicated AI governance team. You do not need a custom-built compliance platform. You do not need to solve model bias, explainability, and responsible AI in the same breath as getting a register in place.

All of those things have their moment. That moment is not when you are still trying to figure out how many AI tools you are running.

The instinct to be thorough is understandable but counterproductive. A perfect governance framework that takes nine months to implement protects you from nothing during those nine months. A register and a risk classification that take two weeks give you visibility immediately.

The EU AI Act Makes This Urgent

The EU AI Act is not future legislation. It is here. Providers and deployers of AI systems have obligations around documentation, risk assessment, and human oversight. The details vary by risk level, but the direction is clear: you need to know what you are running and be able to document why.

For most midmarket organisations, the exposure is not existential. You are probably not building foundation models or deploying high-risk biometric systems. But you are likely deploying AI in areas that touch personal data, customer communication, or internal decisions — and those carry documentation requirements.

Starting with a register and risk classification is not just good practice. It is the minimum you need to be audit-ready when regulators start asking questions.

Getting Started Without a Project

The biggest barrier to AI governance is not knowledge or tooling. It is the assumption that it requires a project. A steering committee. A vendor selection. A phased rollout.

It does not. You need someone — one person — to spend two weeks collecting what exists, classifying the risk, and writing it down. The output is a register, a classification, a decision matrix, and a one-pager. Four deliverables. Two weeks.

That is enough to move from "we should probably look at AI governance" to "we know what we have, we know the risk, and we know who decides."

Everything else — approval workflows, monitoring, model evaluation, advanced compliance — can be layered on later, proportional to what you actually need.

Start with visibility. The rest follows.


Spekir helps organisations build exactly this foundation — an AI register, risk classification, and governance model proportional to your actual needs.