An AI policy is the document that defines the rules of engagement for AI use in your organisation. Without it, employees do not know what is permitted, management does not know what is running, and you have no framework for handling incidents when they occur.
Yet most AI policies fail not because they are legally incorrect, but because they are written to look like a compliance artefact rather than a governance instrument. A 40-page AI policy that nobody reads is worth less than a two-page policy everyone knows.
This article covers the eight sections that are mandatory in any solid AI policy document, common mistakes per section, and what a review cadence looks like for a midmarket organisation.
Section 1: Purpose and Scope
What it should contain: A brief description of why you have an AI policy, who it applies to (all employees? IT only? Vendors?), and which types of AI systems it covers.
Common mistake: Scope is too broad ("applies to all forms of automation and data processing") or too narrow ("applies only to internal AI projects"). Both result in a policy that does not match reality.
Better formulation: "This policy applies to all employees and contractors at [company] who use, deploy, or procure AI systems as part of their work. It covers AI systems we deploy in business processes, AI-assisted products we deliver to customers, and generative AI tools used in daily work."
Section 2: Definition of AI and Risk Levels
What it should contain: A clear definition of what you consider "AI" for the policy's purposes, and a description of your risk framework. Use the EU AI Act's four levels: unacceptable risk / high risk / limited risk / minimal risk.
Common mistake: Using a technical definition that no business user understands. "Machine learning using gradient descent optimisation" is not useful in a policy for business people.
Better approach: Define AI broadly ("systems that generate text, images, code, or decisions based on data") and add a list of examples from your actual AI inventory. Then define risk levels with concrete examples from your context.
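To make the risk levels tangible in the policy itself, here is a minimal sketch of a classification mapping; Python is used purely for illustration, and the example systems are hypothetical placeholders for entries from your own inventory.

```python
from enum import Enum

class RiskLevel(Enum):
    """The EU AI Act's four risk levels, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # permitted with strict obligations (Annex III)
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no specific obligations (e.g. spam filters)

# Hypothetical examples; replace with systems from your own AI inventory.
EXAMPLE_CLASSIFICATIONS = {
    "CV screening tool (recruitment)": RiskLevel.HIGH,
    "Customer-facing support chatbot": RiskLevel.LIMITED,
    "Spam filter in the mail gateway": RiskLevel.MINIMAL,
}
```

The point is not the code but the exercise: every system in your inventory should map to exactly one level, with a stated reason.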
Section 3: Acceptable and Unacceptable Use
What it should contain: A list of acceptable uses, a list of prohibited uses, and a grey-zone category that requires prior approval.
Common mistake: The prohibited list is legal boilerplate that does not reflect your actual risk profile. The acceptable list is so broad it does not help anyone understand limits.
Prohibited categories that should always be included:
- Systems using social scoring to rank individuals
- Facial recognition in public spaces for mass surveillance
- Systems that exploit a person's vulnerabilities (age, mental state) to manipulate their decisions
- AI for profiling individuals without explicit consent and a legal basis
Examples of approval-required grey zone:
- New generative AI integrations in customer-facing products
- AI systems used in recruitment processes
- AI-assisted risk assessment in credit cases
Section 4: Responsibility and Governance
What it should contain: Who owns AI compliance in the organisation? Who approves new AI systems? Who handles incidents?
Common mistake: Responsibility is spread so thinly that nobody owns it ("everyone is responsible for AI compliance"), or concentrated in one person who lacks the capacity for it.
Recommended minimum structure:
- AI lead (or AI Officer): Overall responsibility for the policy and compliance. This can be the IT manager, CTO, or compliance lead.
- System owners: Each AI system has a business owner responsible for correct use and ongoing assessment.
- Approval process: Describe who must approve new AI systems before deployment.
Section 5: AI Inventory and Registration
What it should contain: A requirement that all AI systems in operation are registered in a central register, with minimum information per system: name, vendor, purpose, user group, risk assessment, and system owner.
Common mistake: The inventory requirement is in the policy, but nobody maintains it in practice. A register that is six months out of date is not a register — it is a risk.
Practical recommendation: Link the inventory requirement to the onboarding process for new vendor agreements. Any new AI system procured requires registration as part of the approval process.
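As an illustration, a register entry can be as simple as a structured record with the minimum fields listed above. The sketch below uses a Python dataclass; the field names mirror the policy's list and are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the central AI register (minimum fields from the policy)."""
    name: str           # e.g. "CV screening tool"
    vendor: str         # supplier, or "internal" for in-house systems
    purpose: str        # which business process the system supports
    user_group: str     # who uses it (HR, finance, all employees, ...)
    risk_level: str     # unacceptable / high / limited / minimal
    system_owner: str   # the accountable business owner
    registered_on: date = field(default_factory=date.today)

# Example: registration as part of vendor onboarding (hypothetical system).
record = AISystemRecord(
    name="CV screening tool",
    vendor="Acme HR Tech",
    purpose="Shortlisting applicants in recruitment",
    user_group="HR",
    risk_level="high",
    system_owner="Head of HR",
)
```

Whether you hold this in a spreadsheet, a database, or a governance tool matters less than keeping it current.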
Section 6: Requirements for High-Risk AI
What it should contain: A clear description of which additional requirements apply to systems classified as high-risk under the EU AI Act.
Minimum requirements for high-risk deployers:
- FRIA (Fundamental Rights Impact Assessment) before deployment
- Technical documentation received and retained from the vendor
- Human oversight protocol implemented and documented
- Logging of relevant system activity
- Periodic re-assessment (at least once per year; a due-date check is sketched below)
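Periodic re-assessment is easy to state and easy to forget. As a minimal sketch, assuming a 12-month interval as stated in the list above, a due-date check could look like this:

```python
from datetime import date, timedelta

# Assumed policy parameter: high-risk systems are re-assessed at least yearly.
REASSESSMENT_INTERVAL = timedelta(days=365)

def reassessment_overdue(last_assessed: date, today: date | None = None) -> bool:
    """Return True if a high-risk system's periodic re-assessment is overdue."""
    today = today or date.today()
    return today - last_assessed > REASSESSMENT_INTERVAL

# Example: a system last assessed 14 months ago is flagged as overdue.
print(reassessment_overdue(date(2024, 1, 15), today=date(2025, 3, 15)))  # True
```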
Common mistake: The policy describes requirements abstractly without explaining who is responsible for fulfilling them and what must be documented.
Section 7: Employee Rights and Transparency
What it should contain: A description of the rights employees and other affected individuals have in connection with AI systems, including the right to explanation, human review, and objection.
Common mistake: The section is written as a legal disclaimer rather than a real description of what people can do if affected.
What should be written:
- Employees assessed by AI systems (performance, recruitment) have the right to request human review.
- Customers affected by AI-based decisions (credit, service access) have the right to an explanation.
- The procedure for raising an objection is [describe concrete process].
Section 8: Incidents and Updates
What it should contain: What counts as an AI incident? Who reports to whom, and within what timeframe? A minimal report structure is sketched after the list below.
Definition of an AI incident:
- Unexpected or harmful outputs affecting individuals
- Violations of the policy's acceptable-use rules
- Vendor notifications about known defects in high-risk systems
- Use of AI systems without prior approval (shadow AI)
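To make the reporting requirement operational, here is a minimal sketch of an incident report record covering the categories above; all names are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class IncidentType(Enum):
    """Incident categories mirroring the policy's definition above."""
    HARMFUL_OUTPUT = "harmful_output"      # unexpected or harmful outputs
    POLICY_VIOLATION = "policy_violation"  # breach of acceptable-use rules
    VENDOR_DEFECT = "vendor_defect"        # vendor-notified defect in a high-risk system
    SHADOW_AI = "shadow_ai"                # use of an unapproved AI system

@dataclass
class AIIncidentReport:
    """Minimum fields for an AI incident report."""
    system_name: str            # which register entry the incident concerns
    incident_type: IncidentType
    description: str            # what happened and who was affected
    reported_by: str
    reported_at: datetime
    escalated_to: str           # e.g. the AI lead / AI Officer
```

Even if you never automate this, writing the structure down forces you to decide who is notified and what must be recorded.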
Review cadence: The AI policy should be reviewed at least once per year and updated when there are significant changes to the AI inventory, legislation, or organisation. State who owns the review process.
Common Mistakes That Undermine an AI Policy
- Written by legal alone: Legally correct but operationally unusable. The policy should be written in collaboration with IT, HR, and a business representative.
- No version history: An AI policy without a version number and update date gives no signal of when it was last reviewed.
- No onboarding link: The policy exists as a PDF but is not part of onboarding, vendor agreements, or project approval. Nobody sees it.
- Too specific on current technology: Policies written for GPT-4 are outdated before you publish them. Write technology-agnostically.
- No consequence for violations: The policy does not describe what happens when rules are broken. That signals it is not serious.
Template: Minimum Structure for an AI Policy
An AI policy does not need to be long. Here is a minimum structure that fits in under two pages:
- Purpose and who it applies to (3–5 sentences)
- Definition of AI and risk levels (with examples from your inventory)
- Acceptable and unacceptable uses (concrete list)
- Responsibility: AI lead, system owners, approval process
- Inventory requirement: what is registered, who owns it
- High-risk AI: minimum requirements
- Employee rights and transparency
- Incident definitions and procedure
The AI Policy and Your Vendor Dialogue
An underestimated function of the AI policy is that it defines the requirements you place on new AI vendors. Many organisations now discover they cannot answer questions from vendors' compliance teams because they do not have a clear internal policy to refer to.
A concrete application: Include your AI policy as an appendix to the standard contract for IT procurement. This sets clear expectations that the vendor must document the system's AI Act status, provide technical documentation for high-risk systems, and notify you of significant changes.
Another application: Use the policy's acceptable-use section to define what employees may use generative AI services for, and what is prohibited (e.g. processing confidential customer data in unapproved AI services).
AI Policy in Onboarding and Training
An AI policy that only lives as a PDF in a compliance drive has limited value. For it to function as a governance instrument, it must be integrated into the organisation's day-to-day:
Onboarding: All new employees should read and confirm understanding of the AI policy as part of onboarding. It takes five minutes and establishes a clear expectation.
AI-related training: Use the AI policy as the foundation for AI literacy training. What is acceptable use? What is prohibited? What do you do if you are unsure?
Vendor agreements: Include a requirement that vendors are aware of your AI policy and agree to the relevant section on documentation requirements.
What Supervisory Authorities Look For
When a supervisory authority reviews your AI policy, they will typically look at:
- Coverage: Does the policy cover all AI systems you are responsible for, including embedded AI and vendor systems?
- Operationalisation: Is the responsible role concretely assigned to a person or position, or is it abstract?
- Updates: Is there documentation that the policy has been updated since the first version?
- Awareness: Can you document that employees know the policy?
Having a document is not sufficient. You must show the policy is actively in use: it is part of onboarding, it is linked to the AI approval process, and the incident procedure has actually been followed when incidents occurred.
Next Steps
Download our checklist and use it as a structured starting point for your AI policy. Remember: a simple policy you actually use is better than an advanced policy nobody follows.
When you have a first draft, test it against three questions: Can a new employee read it and understand what is allowed? Can the IT manager use it to reject a new AI procurement that does not meet requirements? Can HR use it to handle an employee complaint about AI-based assessment?
Three "yes" answers — you have a usable AI policy.
An AI policy is not a compliance document. It is a governance instrument. Write it for the people who will use it, not for those who will audit it.
Spekir builds the layer that connects strategy to the IT portfolio. See Atlas →