This article is also available in Danish.
Board & AI

Three things your board should demand from your AI strategy

By Daniel Wegener · 12 April 2026 · 7 min read

You're on the board. Your job isn't to be an AI expert. Your job is to ensure that the company's strategic decisions are based on good data and sound practice.

Now you hear your new strategy platform uses AI. Automatic analysis. Assistants working without asking permission each time.

You have three questions:

1. Who has access to our data? Where is it?
2. Can we see what the AI did? What did it look at?
3. If the AI recommends X, can we always say no and do Y instead?

Those three questions roll up into three demands. A board should insist all three are in place before AI is "activated" in strategy work.

Not because AI is malicious. Because it's responsible governance.

Demand 1: Data sovereignty — You must know where your information is

Question: Where does your customer data, financial data, and strategy documents live when AI analyzes them?

What board members expect: On Danish servers or at least EU servers.

What actually often happens: "Yes, we use ChatGPT" or "We use cloud AI without asking where."

Requirement: Your company must have EU data sovereignty. Not just "we're GDPR-compliant." But: where are the servers physically? Who has the keys? Can you see where data is?

Concrete checklist:

- Where are the servers physically located?
- Who holds the encryption keys?
- Can you see, at any time, where the data is?

Why this matters:

If your AI data lives in US datacenters, that data is legally accessible to US government agencies under FISA legislation — even if you didn't intend that. It's not a conspiracy — it's just the rules.

If the data lives in the EU, it stays under GDPR and Danish law.

If you don't know where data is, you can't tell anyone else where it is.

What to ask at next meeting:

"Where does our strategy data live when AI analyzes it? I want to see the server name and a copy of the data-residency clause in the contract."

If the answer is "We're not entirely sure" or "Both US and EU" — meeting adjourned. Fix it first.
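One way IT can make the data-residency answer checkable rather than anecdotal is to pin regions in configuration and test them. A minimal sketch, assuming a simple deployment config; the region names, config shape, and `residency_ok` helper are all illustrative, not any specific vendor's API:

```python
# Hypothetical deployment config; region names are illustrative.
EU_REGIONS = {"eu-central-1", "eu-west-1", "eu-north-1"}

deployment = {
    "service": "strategy-ai",
    "region": "eu-central-1",             # where the AI workload runs
    "data_store_region": "eu-central-1",  # where documents live at rest
}

def residency_ok(cfg: dict) -> bool:
    """Both compute and storage must sit in an EU region."""
    return (cfg["region"] in EU_REGIONS
            and cfg["data_store_region"] in EU_REGIONS)

assert residency_ok(deployment)
```

A check like this turns "we're not entirely sure" into a test that fails the moment someone moves a workload outside the EU.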

Demand 2: Transparency — You must be able to see what the AI did

Question: If AI comes back with a strategic recommendation, can I ask: what did you read to reach that conclusion?

What board members expect: "Here are the sources I used, here's my reasoning."

What actually often happens: "Here's an answer. World-class AI came up with it." No audit trail.

Requirement: Your system must have a Knowledge Graph — a living record of what the AI analyzed.

Concretely, that means:

- A list of every source the AI read to produce a recommendation.
- The reasoning connecting those sources to the conclusion.
- A record of which sources were excluded, and why.

If the AI was blocked from reading certain sources (because they're classified CONFIDENTIAL), it should say "AI was asked not to read X because of classification Y." Nothing hidden.
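That "living record" can be sketched as a tiny data structure. Everything below (names, fields, the `explain` helper) is illustrative, not any specific product's audit-trail format:

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    """One record in the AI's audit trail: what was read, or why it wasn't."""
    source: str       # document or dataset identifier
    used: bool        # did the recommendation draw on this source?
    reason: str = ""  # e.g. why a source was excluded

def explain(trail: list[AuditEntry]) -> str:
    """Render the trail the way a board member should see it."""
    lines = []
    for e in trail:
        status = "read" if e.used else "NOT read"
        note = f" ({e.reason})" if e.reason else ""
        lines.append(f"- {e.source}: {status}{note}")
    return "\n".join(lines)

trail = [
    AuditEntry("Q3-market-report.pdf", used=True),
    AuditEntry("board-minutes-2025.docx", used=False,
               reason="classified CONFIDENTIAL"),
]
print(explain(trail))
```

The point is not the code: it's that every recommendation carries a readable list of what was and wasn't consulted, including the exclusions.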

Why this matters:

If the AI recommends you enter a new market and it fails, someone will ask: what was that decision based on? If the answer is "The AI said so," without showing what data it was based on, you haven't done your due diligence.

A Knowledge Graph is your audit trail. It's not about distrusting AI — it's about being able to explain that you did the work.

What to ask at next meeting:

"If the AI recommends we invest in X, can I see which sources it used and why it rejected Y?"

If the answer is "That's internal to the system" — flag it. You need to see it. It's not secret — it's your own data.

Demand 3: Human override — If the AI says A you're free to choose B

Question: If the AI recommends strategy X and I as a board member think strategy Y is better, can we choose Y without the system complaining?

What board members expect: Of course. This is my board.

What actually often happens: "The AI recommended that. Are you sure?" Or worse: the system prevents you from going against the AI's recommendation.

Requirement: Your system must have autonomy levels, and humans must have final say.

Concretely, that means three possible settings:

Level 1: Insights-only. The AI collects data and presents it. Humans make 100% of decisions. The AI is a statistician, not an advisor.

Level 2: Auto-with-approval. The AI proposes a step, and a human approves it before it runs. Example: the AI suggests "terminate supplier A," and a human says "yes" or "no" before anything happens. The human holds the override.

Level 3: Full auto. The AI makes the decision and executes it. A human can reverse it afterwards. Only for non-critical work. Example: "send weekly team digest." If it's bad one week, remove it the next.

You are not on Level 3 for strategy. You're on Level 1 or 2.
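The three levels reduce to a single gate. A minimal sketch, assuming every AI-proposed action passes through such a check before it runs; the level names and the `may_execute` helper are hypothetical:

```python
from enum import Enum

class Autonomy(Enum):
    INSIGHTS_ONLY = 1       # AI reports; humans decide everything
    AUTO_WITH_APPROVAL = 2  # AI proposes; a human must approve first
    FULL_AUTO = 3           # AI acts; humans can reverse afterwards

def may_execute(level: Autonomy, human_approved: bool) -> bool:
    """Gate: may an AI-proposed action run right now?"""
    if level is Autonomy.INSIGHTS_ONLY:
        return False              # the AI never executes
    if level is Autonomy.AUTO_WITH_APPROVAL:
        return human_approved     # the override sits with the human
    return True                   # FULL_AUTO: non-critical work only

# Strategy work stays at Level 1 or 2: nothing runs without a human's yes.
assert may_execute(Autonomy.AUTO_WITH_APPROVAL, human_approved=False) is False
assert may_execute(Autonomy.AUTO_WITH_APPROVAL, human_approved=True) is True
```

The design point: the human's "no" is enforced in the gate itself, not left as a convention the system may or may not respect.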

Why this matters:

If the AI makes all strategic decisions without possibility of override, it's not your strategy anymore. It's the AI's.

And if it fails, you can't say "I didn't agree." You need to be able to say "I evaluated and chose differently."

What to ask at next meeting:

"If the AI recommends X, can we always choose Y? What does it mean for the system if we do?"

If the answer contains something like "the system becomes unbalanced" or "it makes the algorithm confused" — there's your answer. The system is built so humans should listen to the AI, not the other way around.

It should be the other way around.

Board checklist: 3 × 3 questions

Print this and bring it to next strategy meeting.

Data sovereignty:

- Do we know where our strategy data physically lives?
- Have we seen the data-residency clause in the contract?
- Can we show, on demand, who has access to the data?

Transparency:

- Can we see which sources the AI used for a recommendation?
- Can we see its reasoning, and what it rejected?
- Are exclusions (e.g. classified sources) logged, not hidden?

Override:

- Can we always choose Y when the AI recommends X?
- Is the autonomy level set to 1 or 2 for strategy work?
- Does a human have the final say, in writing?

If you can check all 9 boxes, the system is sound. If not — meeting with IT and legal. Now.

What it means in practice

These three demands are not "nice to have."

They mean you can:

- Answer, at any time, where your data is and who can see it.
- Show what the AI analyzed (and what it didn't) behind any recommendation.
- Overrule the AI and document that the decision was yours.

They also mean AI can be what it should be — a department helping you, not running you.

A board that insists on these three things is a board taking responsibility for AI use. Not because you're paranoid. Because you're professional.

Next meeting: ask the three questions. And demand the three answers.

Your job isn't to be an AI expert. Your job is to make sure the strategy is yours.