You're in a strategy session with your leadership team evaluating a new customer segment. Someone says: "We can just ask ChatGPT what the data shows."
Then you think: what if we paste customer emails, sales pipelines, or salary ranges in there? We're in the EU. GDPR applies.
Here's where most SMEs stop. They think AI strategy means leaking data to Silicon Valley.
It doesn't have to. But you need to know what you're doing.
If you want to use AI in strategy work without risking GDPR fines and without giving away other people's data, here's how.
What data types exist in strategy
First: what is strategy work actually made of?
Customer data: names, emails, purchase history, ages. When you analyze a customer segment, this is what you work with.
Financial data: revenue, costs, customer margins, executive salaries. When you plan budgets, this is what you use.
Competitive intelligence: articles about rivals, their pricing, their ads. It's public, but you still need to be careful how you use it.
Employee data: headcount, departments, sick leave, education, salary ranges. When you plan resources, this is what you draw on.
Internal documents: meeting notes, email threads, strategy drafts. These often contain sensitive discussions.
Each type has different GDPR risk.
The classification system: 4 levels
GDPR itself doesn't define levels like "public" or "secret." Good practice does.
PUBLIC: Open data like your price sheets, public links, press releases. When you tell a journalist "we're 60 people," that's PUBLIC. AI can work with it without restrictions.
INTERNAL: Data only some employees see, like the number of sales reps and their regional split. It wouldn't hurt much if it leaked, but it's not meant for outsiders. AI can work with it, but it shouldn't go to cloud providers you don't know.
CONFIDENTIAL: Data where a leak costs you. Customer lists. Contract terms for lucrative customers. Executive salaries. The technical roadmap. If you send CONFIDENTIAL data to an AI, the provider must be EU-hosted and contractually bound to protect it.
RESTRICTED: Data that should never touch AI. Unique customer access codes. Credit card data. Employee health data. Criminal records. RESTRICTED data shouldn't go to analysis AI at all; it's handled by humans only.
When you start strategy work, classify each data source first. It's a 5-minute question per source.
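To picture what that classification looks like once it's operational, here is a minimal Python sketch. The four level names come from above, but the destination labels and the policy mapping are illustrative assumptions, not a prescribed setup:

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1        # may go to any AI
    INTERNAL = 2      # known, vetted providers only
    CONFIDENTIAL = 3  # EU-hosted, contractually covered AI only
    RESTRICTED = 4    # never touches AI

# Illustrative policy: which AI destinations each level may reach.
ALLOWED_DESTINATIONS = {
    DataClass.PUBLIC: {"any_cloud_ai", "eu_hosted_ai", "local_model"},
    DataClass.INTERNAL: {"eu_hosted_ai", "local_model"},
    DataClass.CONFIDENTIAL: {"eu_hosted_ai"},
    DataClass.RESTRICTED: set(),  # humans only
}

def may_send(level: DataClass, destination: str) -> bool:
    """True if data at this level may be sent to this destination."""
    return destination in ALLOWED_DESTINATIONS[level]

print(may_send(DataClass.CONFIDENTIAL, "any_cloud_ai"))  # False
print(may_send(DataClass.PUBLIC, "any_cloud_ai"))        # True
```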
Data Guardian: The automatic blocker
Now comes the clever part. There's a system called Data Guardian that detects protected data automatically.
It has 9 patterns built in:
1. ID numbers: when it sees "000101-XXXX" patterns, it blocks.
2. Email addresses: when it sees "john@company.com", it flags it as personal data.
3. Credit card numbers: when it sees "4532 1234 5678 9010" patterns, it blocks.
4. Confidential customer names: if you've configured that "BigBank Inc" never goes to AI and someone tries to send "BigBank," it detects it.
5. Salary figures: patterns that look like a monthly or annual salary are blocked.
6. Patent descriptions: if a document is labeled "Patent Application," it's blocked automatically.
7. Medical data: when it sees "patient number" or health notes.
8. Financial forecasts: when it sees "CONFIDENTIAL FINANCIAL PROJECTION" or similar.
9. Custom patterns: you can add your own. For example, if you never want to share "furniture design patent," it gets flagged.
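To picture how detection like this can work under the hood, here is a minimal regex sketch. It covers three of the nine categories plus the blocked-name check, deliberately simplified; these are illustrations, not Data Guardian's actual rules:

```python
import re

# Simplified stand-ins for three of the built-in patterns.
PATTERNS = {
    "id_number": re.compile(r"\b\d{6}-\d{4}\b"),           # "000101-XXXX" style
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

# Customer names you have configured as never-to-AI (pattern 4).
BLOCKED_NAMES = {"BigBank"}

def scan(text: str) -> list[str]:
    """Return the findings that should block or flag this text."""
    findings = [name for name, rx in PATTERNS.items() if rx.search(text)]
    findings += [f"blocked_name:{n}" for n in BLOCKED_NAMES if n in text]
    return findings

print(scan("Contact john@company.com re BigBank, card 4532 1234 5678 9010"))
# ['email', 'credit_card', 'blocked_name:BigBank']
```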
How does it work in practice?
You set up a strategy node: "Analyze our top 10 customer segments." You load a dataset with customer lists. Data Guardian scans the document. It finds: "Segment A: BigFactory Inc, annual purchase 2.3M kr, contact: erik@bigfactory.dk".
The system says: "STOP. This contains personal data (an email address), financial data (2.3M), and a customer name classified as CONFIDENTIAL. You can't send this to a standard cloud AI. Instead, would you:
1. Remove names and emails and keep only purchase volume? (that can be sent)
2. Use an EU-hosted AI with a contract covering CONFIDENTIAL data?
3. Do the analysis without AI help?"
It's secure without being rigid.
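Option 1 above, strip the identifiers and keep the volume, is often the cheapest path. A sketch of what such redaction could look like, where the regex and the placeholder labels are illustrative:

```python
import re

def redact(text: str, customer_names: list[str]) -> str:
    """Replace emails and known customer names with neutral labels,
    keeping the figures the analysis actually needs."""
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b", "[EMAIL]", text)
    for i, name in enumerate(customer_names, start=1):
        text = text.replace(name, f"[CUSTOMER-{i}]")
    return text

row = "Segment A: BigFactory Inc, annual purchase 2.3M kr, contact: erik@bigfactory.dk"
print(redact(row, ["BigFactory Inc"]))
# Segment A: [CUSTOMER-1], annual purchase 2.3M kr, contact: [EMAIL]
```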
Output Guard: When AI might leak
You've sent data. Data Guardian approved it. AI gives you an answer.
This is where Output Guard comes in.
Output Guard ensures the AI didn't leak secret data in its response.
Scenario: You asked "What does our top customer segment buy most?" AI answered: "BigFactory Inc buys 2.3M annually and that's your top segment." You only wanted "Your top segment is 40% of volume."
Output Guard detects that the AI wrote the specific customer name and amount. The response is blocked before it reaches you, and you get the anonymized version instead: "Your top segment is 40% of volume."
The point: even if mistakes happen — even if you accidentally send data you shouldn't — Output Guard catches it on the way out.
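A sketch of how that outbound check could work, assuming the guard keeps a list of the names and figures that were classified on the way in; the term list and the fallback text here are illustrative:

```python
# Terms that entered the analysis under a CONFIDENTIAL classification.
CONFIDENTIAL_TERMS = ["BigFactory Inc", "2.3M"]

def guard_output(response: str, fallback: str) -> str:
    """Pass the AI response through only if it leaks nothing;
    otherwise return the safe fallback."""
    if any(term in response for term in CONFIDENTIAL_TERMS):
        return fallback
    return response

ai_answer = "BigFactory Inc buys 2.3M annually and that's your top segment."
print(guard_output(ai_answer, "Your top segment is 40% of volume."))
# Your top segment is 40% of volume.
```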
Data Guardian + Output Guard working together
Layer 1: Data Guardian says "You can't send that data."
Layer 2: If data is sent anyway (or if another one of your agents sends it without you knowing), Output Guard says "AI is not allowed to say that in the response."
It's like an airport with both security and gate control.
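In code, the two layers might wire together like this; `scan`, `call_model`, and `guard_output` are stubs standing in for the earlier sketches and a real AI call:

```python
def scan(prompt: str) -> list[str]:
    """Layer 1 stub: findings that forbid sending (see the scan sketch above)."""
    return ["email"] if "@" in prompt else []

def call_model(prompt: str) -> str:
    """Stand-in for the real AI call."""
    return "Your top segment is 40% of volume."

def guard_output(response: str, fallback: str) -> str:
    """Layer 2 stub: swap in the fallback if the answer leaks a protected term."""
    return fallback if "BigFactory" in response else response

def ask_ai(prompt: str, fallback: str) -> str:
    findings = scan(prompt)
    if findings:                     # Layer 1: refuse at the door.
        return f"BLOCKED before sending: {findings}"
    response = call_model(prompt)
    return guard_output(response, fallback)  # Layer 2: check on the way out.

print(ask_ai("What does our top customer segment buy most?",
             "Your top segment is 40% of volume."))
```

The order is the point: nothing leaves before Layer 1 approves it, and nothing comes back unchecked.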
Practical steps: Before you start
If you want to use AI in strategy without GDPR risk, do this:
Step 1: Classify your data sources (30 minutes)
What is PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED? A matrix might look like this:
- Public news articles → PUBLIC
- Internal customer list with purchase amounts → CONFIDENTIAL
- Executive employment terms → RESTRICTED
- 2019 bylaws → PUBLIC
- Buried meeting notes where you discussed secret product pricing → RESTRICTED
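If you want the matrix to live somewhere your tooling can enforce it, a plain mapping is enough. The entries mirror the list above; the key names and the fail-closed default are assumptions for illustration:

```python
# The classification matrix as a registry the guards can consult.
SOURCE_CLASSIFICATION = {
    "public_news_articles": "PUBLIC",
    "customer_list_with_purchase_amounts": "CONFIDENTIAL",
    "executive_employment_terms": "RESTRICTED",
    "bylaws_2019": "PUBLIC",
    "meeting_notes_product_pricing": "RESTRICTED",
}

def classification_of(source: str) -> str:
    # Unknown sources default to the strictest level until someone classifies them.
    return SOURCE_CLASSIFICATION.get(source, "RESTRICTED")

print(classification_of("bylaws_2019"))        # PUBLIC
print(classification_of("random_new_export"))  # RESTRICTED
```

Defaulting unknown sources to RESTRICTED means a forgotten classification blocks instead of leaks.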
Step 2: Configure Data Guardian for your patterns (1 hour)
Beyond the 9 standard patterns: what should be RESTRICTED for you?
- Some companies: "Customer name should never go to AI"
- Others: "All financial projections must be blocked"
- Some: "Patent descriptions must be blocked"
Write your own rules.
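In code terms, a custom rule can be as small as one extra regex entry. These mirror the bullets above; the patterns themselves are made-up examples, not shipped rules:

```python
import re

# Illustrative custom rules matching the bullets above.
CUSTOM_RULES = {
    "customer_name": re.compile(r"\bBigBank\b"),
    "financial_projection": re.compile(r"financial projection", re.IGNORECASE),
    "patent": re.compile(r"furniture design patent", re.IGNORECASE),
}

def violations(text: str) -> list[str]:
    """Return the names of every custom rule the text trips."""
    return [rule for rule, rx in CUSTOM_RULES.items() if rx.search(text)]

print(violations("Attached: our furniture design patent draft."))  # ['patent']
```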
Step 3: Sign a contract covering CONFIDENTIAL data (1 email)
If you use 360° Sprint or another AI platform: verify it only uses EU hosting (Scaleway Paris, Azure EU West, AWS Frankfurt). Get it in writing.
Step 4: Start small
Let your leaders try an analysis with PUBLIC data first. "Analyze our 5 biggest market trends." Nothing personal. See how the system works. Then scale to the INTERNAL level.
Common misconceptions
"If AI is EU-hosted it's safe." No. EU hosting helps, but if you send unanonymized customer lists from there it's still a GDPR violation. Data Guardian and classification first.
"If the data is 'anonymized' it's free to send to AI." Almost. If you can identify the customer from context, it's still personal data in GDPR's eyes. "Segment A," "Segment B" without names is safe.
"We just need to tell regulators we use AI and we're covered." You need to be in compliance from day 1. Talk to your legal counsel before you share data.
"If an AI provider says they delete data after 30 days it's safe." Not if you sent it without being allowed to. Deletion after an error doesn't make the error okay.
What should happen before next leadership meeting
1. Print this article.
2. Give it to your legal counsel: "We want to use AI in strategy. What do we need to be careful about?"
3. Ask your IT lead: "If we use 360° Sprint, where does data live?"
4. At the next meeting: look at your strategy data, classify it, and decide what can be sent to AI.
If you do this, AI won't be dangerous. It becomes a tool.
And if you don't, you become the leader who pasted customer lists into a cloud AI without thinking. Bad look.
GDPR and AI can work together. But it requires you to think first.