You're on the board. Your job isn't to be an AI expert. Your job is to ensure that the company's strategic decisions are based on good data and sound practice.
Then you hear that your new strategy platform uses AI. Automated analysis. Assistants that act without asking permission each time.
You have three questions:
1. Who has access to our data, and where is it?
2. Can we see what the AI did, and what it looked at?
3. If the AI recommends X, can we always say no and do Y instead?
Those three questions roll up into three demands a board should insist are in place before it "activates" AI in strategy work.
Not because AI is malicious. Because it's responsible governance.
Demand 1: Data sovereignty — You must know where your information is
Question: Where does your customer data, financial data, and strategy documents live when AI analyzes them?
What board members expect: On Danish servers or at least EU servers.
What actually often happens: "Yes, we use ChatGPT" or "We use cloud AI without asking where."
Requirement: Your company must have EU data sovereignty. Not just "we're GDPR-compliant." But: where are the servers physically? Who has the keys? Can you see where data is?
Concrete checklist:
- [ ] Server location: Scaleway Paris (fr-par), Azure West Europe, or AWS Frankfurt (eu-central-1), all physically in EU datacenters
- [ ] Contract: You have a signed data-residency clause stating that data does not leave the EU without your permission
- [ ] Access: Only your IT team and board can see where data lives
- [ ] Deletion: You can delete all data within 24 hours if you want to
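This checklist can be enforced in code rather than left as a promise. Below is a minimal sketch, assuming a deployment config that names one provider region per service; the config shape, service names, and allowlist entries are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch: verify that every configured service runs in an
# approved EU region before the platform starts. All names below
# are illustrative assumptions, not a specific vendor's API.

EU_ALLOWLIST = {
    "scaleway:fr-par",    # Scaleway Paris
    "azure:westeurope",   # Azure West Europe (Netherlands)
    "aws:eu-central-1",   # AWS Frankfurt
}

# Hypothetical deployment config: one region per service.
deployment_config = {
    "vector_store": "scaleway:fr-par",
    "model_endpoint": "azure:westeurope",
    "backups": "aws:eu-central-1",
}

def check_data_residency(config: dict[str, str]) -> list[str]:
    """Return the services whose region is not on the EU allowlist."""
    return [service for service, region in config.items()
            if region not in EU_ALLOWLIST]

violations = check_data_residency(deployment_config)
if violations:
    raise RuntimeError(f"Data residency violation: {violations}")
print("All services pinned to EU regions.")
```

Run a check like this at startup and in CI, so a misconfigured region fails loudly before any data moves.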
Why this matters:
If your AI data lives in US datacenters, that data is legally accessible to US government agencies under FISA legislation — even if you didn't intend that. It's not a conspiracy — it's just the rules.
If data lives in the EU, it stays under GDPR and Danish law.
If you don't know where the data is, you can't tell anyone else where it is.
What to ask at next meeting:
"Where does our strategy data live when AI analyzes it? I want to see the server name and a copy of the data-residency clause in the contract."
If the answer is "We're not entirely sure" or "Both US and EU" — meeting adjourned. Fix it first.
Demand 2: Transparency — You must be able to see what the AI did
Question: If AI comes back with a strategic recommendation, can I ask: what did you read to reach that conclusion?
What board members expect: "Here are the sources I used, here's my reasoning."
What actually often happens: "Here's an answer. World-class AI came up with it." No audit trail.
Requirement: Your system must have a Knowledge Graph — a living record of what the AI analyzed.
Concretely, that means:
- [ ] Every strategy analysis comes with "sources" — which 3-5 documents did the AI read?
- [ ] Every recommendation comes with "reasoning" — why this over the alternative?
- [ ] Every node in the analysis shows what the input was — if the data node "top 5 customer segments" changes, we can see whether it was based on April data or May data
- [ ] Log of what the AI was asked and what it answered — not to micromanage, but for audit purposes
If the AI was blocked from reading certain sources (because they're classified CONFIDENTIAL), it should say "AI was asked not to read X because of classification Y." Nothing hidden.
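To make "audit trail" concrete: each recommendation can be stored as a record with its sources, reasoning, and inputs attached. Below is a minimal sketch of what such a Knowledge Graph entry could capture; the field names and example values are illustrative assumptions, not any specific product's schema.

```python
# Minimal sketch of an auditable analysis record: one entry per
# recommendation, carrying its sources, reasoning, and inputs.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AnalysisRecord:
    question: str                 # what the AI was asked
    recommendation: str           # what it answered
    sources: list[str]            # the 3-5 documents it actually read
    reasoning: str                # why this over the alternatives
    input_snapshot: str           # which data the nodes were based on
    excluded_sources: list[str] = field(default_factory=list)  # blocked by classification
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical example values, for illustration only.
record = AnalysisRecord(
    question="Should we enter market X?",
    recommendation="Enter market X in Q3",
    sources=["market-report-2024.pdf", "q1-financials.xlsx",
             "competitor-scan.md"],
    reasoning="Growth and margin both exceed threshold; Y rejected on CAC",
    input_snapshot="top 5 customer segments, April data",
    excluded_sources=["board-minutes-march.docx (CONFIDENTIAL)"],
)
```

With records like this, "what was that decision based on?" has a literal answer you can pull up later.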
Why this matters:
If the AI recommends you enter a new market and it fails, someone will ask: what was that decision based on? If the answer is "The AI said so," without showing what data it was based on, you haven't done your due diligence.
A Knowledge Graph is your audit trail. It's not about distrusting AI — it's about being able to explain that you did the work.
What to ask at next meeting:
"If the AI recommends we invest in X, can I see which sources it used and why it rejected Y?"
If the answer is "That's internal to the system" — flag it. You need to see it. It's not secret — it's your own data.
Demand 3: Human override — If the AI says A, you're free to choose B
Question: If the AI recommends strategy X and I as a board member think strategy Y is better, can we choose Y without the system complaining?
What board members expect: Of course. This is my board.
What actually often happens: "The AI recommended that. Are you sure?" Or worse: the system prevents you from going against the AI's recommendation.
Requirement: Your system must have autonomy levels, and humans must have final say.
Concretely, that means three possible settings:
Level 1: Insights-only. The AI collects data and presents it. Humans make 100% of the decisions. The AI is a statistician, not an advisor.
Level 2: Auto-with-approval. The AI proposes an action; a human approves it before it runs. Example: the AI suggests "terminate supplier A," and a human says yes or no before anything happens. The human holds the override.
Level 3: Full auto. The AI makes the decision and executes it; a human can reverse it afterwards. Only for non-critical work. Example: "send the weekly team digest"; if it's bad one week, pull it the next.
You are not on Level 3 for strategy. You're on Level 1 or 2.
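Level 2 is easy to pin down in code. Below is a minimal sketch of an approval gate, assuming a simple command-line interaction; the function names and flow are illustrative, not a real product's API. The point is structural: execution has exactly one entry, and it sits behind a human decision.

```python
# Minimal sketch of a Level 2 approval gate: the AI can propose,
# but nothing executes without an explicit human decision, and the
# human can always substitute their own action.

def request_approval(proposal: str) -> str:
    """Ask a human to approve, reject, or replace the AI's proposal."""
    print(f"AI proposes: {proposal}")
    answer = input("Approve (y), reject (n), or type an alternative: ").strip()
    if answer.lower() == "y":
        return proposal
    if answer.lower() == "n":
        return ""          # rejected: nothing runs
    return answer          # human override: their action wins

def execute(action: str) -> None:
    """The only path to execution; runs nothing without an approved action."""
    if action:
        print(f"Executing: {action}")
    else:
        print("No action taken.")

execute(request_approval("Terminate supplier A"))
```

Note what is absent: no penalty, no warning, no "are you sure?" loop when the human chooses differently. Choosing Y over X is a normal outcome, not an exception.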
Why this matters:
If the AI makes all strategic decisions without possibility of override, it's not your strategy anymore. It's the AI's.
And if it fails, you can't say "I didn't agree." You need to be able to say "I evaluated and chose differently."
What to ask at next meeting:
"If the AI recommends X can we always choose Y? What does that mean for the system if we do?"
If the answer contains something like "the system becomes unbalanced" or "it makes the algorithm confused" — there's your answer. The system is built so humans should listen to the AI, not the other way around.
It should be the other way around.
Board checklist: 3 × 3 questions
Print this and bring it to next strategy meeting.
Data sovereignty:
- [ ] Where does our data live? (Answer: EU datacenter, name and address)
- [ ] Who has access? (Answer: Only us)
- [ ] Can we delete it? (Answer: Yes, within 24 hours)
Transparency:
- [ ] Can we see the sources the AI used? (Answer: Yes, every response includes sources)
- [ ] Can we see the reasoning? (Answer: Yes, explanation of why A not B)
- [ ] Is there an audit log? (Answer: Yes, what was asked and what was answered)
Override:
- [ ] If the AI recommends X can we choose Y? (Answer: Yes, always)
- [ ] Does the system raise an alarm if we do? (Answer: No, it's just logged)
- [ ] Can we reverse after if it goes wrong? (Answer: Yes, always)
If you can check all 9 boxes, the system is sound. If not — meeting with IT and legal. Now.
What it means in practice
These three demands are not "nice to have."
They mean you can:
- Sleep at night knowing data isn't in the US
- Defend your strategy choices if criticized ("Here's the data it was based on, here's what the AI thought, here's what we chose instead")
- Come back and say "that didn't work," change course, without being locked in by a previous AI recommendation
They also mean AI can be what it should be: a department that helps you, not one that runs you.
A board that insists on these three things is a board taking responsibility for AI use. Not because you're paranoid. Because you're professional.
Next meeting: ask the three questions. And demand the three answers.
Your job isn't to be an AI expert. Your job is to make sure the strategy is yours.