
EU AI Act for SMEs: Practical compliance without consultant fees

By Daniel Wegener · 12 April 2026 · 6 min read

You have heard about the EU AI Act. Some of your team panicked. A consulting firm sent a budget with eight figures for "compliance implementation."

Breathe. Much of that panic is not justified. Especially not if you use AI for strategy work rather than for hiring or credit decisions.

Let's go through three things: what the law actually says, where you realistically have compliance work, and how to document it without a consultant.

What does the EU AI Act actually say?

The law groups AI systems into four risk categories:

1. Minimal risk — Most AI applications. ChatGPT. Translation tools. General-purpose things.

2. Limited risk — Systems that require transparency. People should know they are talking to AI. Your strategy platform falls here. You should be reasonably open about AI analyzing your data.

3. High risk — Critical infrastructure, credit scoring, hiring decisions, and several others. Your bank uses AI to approve your loan? High risk.

4. Unacceptable risk — Political manipulation, social scoring. Effectively banned in the EU.

Strategic work and business model analysis is minimal-to-limited risk. Not because the work is trivial, but because it does not decide people's lives. It gives you analytical background. You make the actual decisions.

Where is your compliance work?

If you use AI for strategy analysis, you fall in category 2. That means three practical things:

1. Documentation

You need to be able to show: "What did the AI do? What assumptions did it make? How did we choose the parameters?"

This sounds heavy. It is not. It means you keep a simple log: one folder per strategy sprint. Done.
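A minimal sketch of what one log entry could look like, assuming a JSON Lines file per sprint folder. The field names, folder name, and model parameters are illustrative, not a required format:

```python
import json
from datetime import date
from pathlib import Path

def log_analysis(folder: Path, entry: dict) -> Path:
    """Append one AI-analysis record to the sprint's log file."""
    folder.mkdir(parents=True, exist_ok=True)
    log_file = folder / "ai-log.jsonl"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return log_file

# One record answers: what did the AI do, what assumptions did it
# make, how did we choose the parameters, who validated the output.
entry = {
    "date": date.today().isoformat(),
    "task": "Market analysis for acquisition screening",
    "model_parameters": {"model": "example-model", "temperature": 0.2},
    "assumptions": ["2025 revenue figures are final"],
    "validated_by": "CFO, against internal reporting",
}
log_analysis(Path("strategy-sprint-2026-04"), entry)
```

Ten minutes per sprint, and you can answer every documentation question the Act asks of a limited-risk system.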

2. Transparency

If you use AI to generate insights that inform a board decision, board members should know it.

You do not need to be dramatic. A single line is enough: "This analysis was generated with AI assistance. Model parameters were [X]. We validated through [Y]."

Disclosure like this has been best practice since the Act entered into force in 2024 anyway. Transparency builds trust.

3. Human oversight

You, or someone on the leadership team, need to be able to explain why you chose one analysis over another. You do not need to be an AI expert. You just need to have thought: "Okay, the AI said this. Does it pass the intuition test? What do I know from practice?"

Again, this is not new. It is due diligence.

What compliance work should you absolutely NOT take on alone?

If you are running hiring or credit decisions through AI: High risk. This needs real compliance review. Get help here.

If you are handling financial personal data (KYC, AML) through AI: High risk. Compliance team should be involved.

If your AI model is a black box where you cannot explain output: Problem. Even if limited risk. Get transparency first. Then compliance.

But if you are using AI to analyze markets, competitors, your strengths, alternative business models? Limited risk. Manageable.

Practical compliance checklist for SMEs

Before you use strategic AI analysis for the first time:

1. Set up a simple log: one folder per strategy sprint.
2. Disclose AI use wherever its output informs a decision, including model parameters and how you validated.
3. Name one person on the leadership team who can explain why an analysis was accepted or rejected.
4. Confirm your platform hosts and processes data EU-only.
5. Confirm which data fields are never sent to AI models.

This checklist takes about 2 hours per year to maintain.

What compliance actually looks like

Scenario: Your board wants to evaluate a potential acquisition.

Without AI: Board members read market reports, compare financials by hand, discuss. Bias creeps in. Documentation is "meeting notes."

With limited-risk AI assist: the platform runs the same comparison, parameters and assumptions are logged, and the board still makes the call.

Compliance documentation: "April 4 we analyzed [Company A] across [5 lenses]. Output showed [X]. Board decision was [Y]. Rationale: [Z]."

It is valuable. It is also documented and defensible.

Why EU-only hosting matters

Some Western AI platforms send data to US servers by default.

For strategy work where you analyze your markets, competitors, or commercial models, that is sometimes sensitive. Not because it is secret, but because it is about what you are thinking.

A platform provider that runs EU-only (Scaleway Paris, Azure EU West, AWS Frankfurt) means one thing: your data never leaves the EU.

It is not encryption-level security. It is data residency security. Your data is processed within the EU. Period.

For strategy work it is the right setup. Compliance also becomes: "Data stays in the EU. Treated under GDPR. Documentation in local language."

The final detail: If your platform has a "Data Guardian"

A good strategy platform builds guardrails in. Instead of leaders thinking "what is safe to send?", the platform has already classified your data fields:

Some fields never go to AI (RESTRICTED). Some go anonymized (INTERNAL). Some go as-is (PUBLIC).
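A hypothetical sketch of how such a classification gate could work. The field names, the policy table, and the `prepare_for_ai` function are illustrative assumptions, not any specific platform's API; a real system would use proper anonymization rather than a placeholder string:

```python
from enum import Enum

class Classification(Enum):
    RESTRICTED = "restricted"  # never leaves your systems
    INTERNAL = "internal"      # sent only after anonymization
    PUBLIC = "public"          # sent as-is

# Illustrative policy: which field carries which classification.
FIELD_POLICY = {
    "customer_name": Classification.RESTRICTED,
    "revenue_by_segment": Classification.INTERNAL,
    "market_region": Classification.PUBLIC,
}

def prepare_for_ai(record: dict) -> dict:
    """Drop RESTRICTED fields, mask INTERNAL ones, pass PUBLIC through."""
    out = {}
    for field, value in record.items():
        # Unknown fields default to the safest class: RESTRICTED.
        policy = FIELD_POLICY.get(field, Classification.RESTRICTED)
        if policy is Classification.RESTRICTED:
            continue  # never sent to the model
        if policy is Classification.INTERNAL:
            out[field] = "<anonymized>"  # stand-in for a real anonymizer
        else:
            out[field] = value
    return out

safe = prepare_for_ai({
    "customer_name": "Acme ApS",
    "revenue_by_segment": {"north": 1_200_000},
    "market_region": "DACH",
})
```

The point is not this particular code. It is that the decision "what is safe to send?" is made once, in one place, instead of by each leader at each prompt.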

If your platform has this, you have already solved 60% of compliance work.

When this gets complicated

If you have a custom-built AI system, or if you are merging external data (competitors, market figures) with internal data without data classification first, it gets more complex.

In that case your team has something that looks like an in-house data warehouse, just with AI attached. The compliance work becomes: How do we secure the flow? Who can see what? Where does data stay?

It is bigger work. It is also less common for SMEs than for enterprise.

Next step

Ask your platform vendor:

1. "Where are your servers?" (If the answer is not clearly EU, that is a red flag.)

2. "Which data fields do you never send to AI models?" (If they do not have classification, that is a problem.)

3. "Can you give me a compliance template for limited-risk systems?" (If no, that is poor service.)

If the answers are good, you have a compliance-friendly platform. The rest is administrative.

If the answers are bad? Switch. Or get help classifying your data flow before you use the system.

The EU AI Act is not insurmountable for SMEs using AI for strategy. It requires transparency and documentation. It is better to have both anyway.