AI Agents & Strategy

Data security in AI strategy: Four classification levels you need to know

By Daniel Wegener · 12 April 2026 · 5 min read

You have financial data. You have HR records. You have meeting notes. You have competitive intelligence. You have customer segmentations.

You want to let an AI analyze it.

But how much of that data can you safely send to an AI service?

That's not a trivial question. And the answer is: not all of it. But with the right classification levels you can send much more than you think.

The four-tier system

There are four classification levels for company data:

1. PUBLIC

Anything that can be public without issue.

Examples: industry reports, published competitor data, publicly disclosed financials, case studies you've released, public market research.

Handling: PUBLIC data can go anywhere. Cloud AI, local models, external consultants. No security boundary.

An AI can analyze "your competitor's published annual report" without concern.

2. INTERNAL

Company-specific data but not confidential.

Examples: internal strategy notes, team meeting summaries, internal analyses (not involving salary or private information), process documentation, internal roadmaps, customer segmentation frameworks.

Handling: INTERNAL data can go to AI systems, but only if they're EU-based and controlled. This means data stays on EU servers, has an audit trail, and never gets shared further.

An AI can analyze "your Q2 internal strategy notes" if it's EU-hosted. It cannot share that with other customers or third parties.

3. CONFIDENTIAL

Data that's sensitive and must be limited.

Examples: financial figures under NDA, payroll administration, partnership agreements with specific terms, pricing decisions, non-public customer data, HR evaluations, M&A planning.

Handling: CONFIDENTIAL data must be tightly controlled. If it goes to an AI, only with extra safeguards: a locally-run system (on your server), dedicated audit trail, and terms prohibiting further development or training.

An AI cannot casually analyze "your confidential partnership terms." It can only do so if every condition, down to the infrastructure, is under control.

4. RESTRICTED

Data that never goes to an AI.

Examples: national ID numbers, credit card data, passwords, tokens, secret verification codes, medical or legal metadata, financial login credentials.

Handling: RESTRICTED data never goes to any AI system. Not even to EU-based systems. Not to local systems. The system itself should filter it out and block it.

If an analysis needs context around RESTRICTED data it should work with anonymized or pseudonymized versions.
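The four handling rules above amount to a routing table: each level is allowed to reach only certain kinds of AI backends. A minimal sketch in Python (backend names like `eu_cloud` and `local` are illustrative labels, not product names):

```python
from enum import Enum

class Level(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Which backends may receive data at each level. "cloud" is any external
# cloud AI, "eu_cloud" an EU-hosted controlled system, "local" a system
# running on your own server.
ALLOWED_BACKENDS = {
    Level.PUBLIC: {"cloud", "eu_cloud", "local"},
    Level.INTERNAL: {"eu_cloud", "local"},
    Level.CONFIDENTIAL: {"local"},
    Level.RESTRICTED: set(),  # never goes to any AI system
}

def may_send(level: Level, backend: str) -> bool:
    """Return True if data at this level may go to the given backend."""
    return backend in ALLOWED_BACKENDS[level]
```

Note that RESTRICTED maps to an empty set: there is no backend, however controlled, that clears it.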

Data Guardian: Automatic filtering

360° Sprint uses something called the "Data Guardian"—a system that automatically classifies and filters data before it goes to the AI.

Data Guardian recognizes nine PII (Personally Identifiable Information) patterns.

Every time data goes to the AI, Data Guardian scans it and automatically removes anything matching these patterns. If it finds something it blocks and alerts you.

This means if you accidentally copy data with a social security number in it, the system is designed to catch it before it reaches the AI.
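A pre-send filter of this kind can be sketched with a few regular expressions. The article does not list Data Guardian's actual nine patterns, so the three below are assumed stand-ins (Danish CPR number, credit card digits, email address):

```python
import re

# Illustrative patterns only -- not Data Guardian's real pattern set.
PII_PATTERNS = {
    "cpr_number": re.compile(r"\b\d{6}-\d{4}\b"),        # Danish national ID
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of PII patterns found in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def guard(text: str) -> str:
    """Block the request if PII is found; otherwise pass the text through."""
    hits = scan(text)
    if hits:
        raise ValueError(f"Blocked: possible PII detected ({', '.join(hits)})")
    return text
```

The important design choice is that `guard` fails closed: a match blocks the whole request and alerts, rather than silently forwarding the rest.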

Practical implementation

To set up the classification system in your organization:

1. Identify your data categories

What kinds of data does your business intelligence cover? In most cases: financial data, HR records, meeting notes, competitive intelligence, and customer segmentations.

2. Define classification for each

A simple list: "Meeting notes are INTERNAL. Payroll is CONFIDENTIAL. Passport numbers are RESTRICTED."

It's not complicated. Takes 30 minutes to write down.
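That 30-minute list can live directly in code or config. A minimal sketch, with example category names taken from this article and a CONFIDENTIAL default for anything unlisted (the safety-margin rule from step 3 below):

```python
# Example classification list -- category names are illustrative,
# not an exhaustive policy.
CLASSIFICATION = {
    "meeting_notes": "INTERNAL",
    "strategy_notes": "INTERNAL",
    "payroll": "CONFIDENTIAL",
    "partnership_terms": "CONFIDENTIAL",
    "passport_numbers": "RESTRICTED",
    "industry_reports": "PUBLIC",
}

def classify(category: str) -> str:
    """Look up a category; unknown categories default to CONFIDENTIAL."""
    return CLASSIFICATION.get(category, "CONFIDENTIAL")
```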

3. Train the team

The message to the team is simple: "When you provide data for strategy analysis, classify it. Unsure? Use CONFIDENTIAL as a safety margin."

4. Configure in the system

360° Sprint setup lets you define classification at field level. You say: "This field is RESTRICTED. This table is INTERNAL."

The system enforces it.
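One way such field-level enforcement could work, sketched under assumptions: the field names (`salary`, `national_id`, etc.) and the redaction approach are hypothetical examples, not the actual 360° Sprint configuration. Before a record leaves for the AI, any field classified above the target backend's clearance is dropped:

```python
LEVEL_RANK = {"PUBLIC": 0, "INTERNAL": 1, "CONFIDENTIAL": 2, "RESTRICTED": 3}

# Assumed example schema: each field carries its own classification.
FIELD_LEVELS = {
    "company": "PUBLIC",
    "roadmap_note": "INTERNAL",
    "salary": "CONFIDENTIAL",
    "national_id": "RESTRICTED",
}

def redact_for(record: dict, clearance: str) -> dict:
    """Keep only fields at or below the given clearance level.

    Unknown fields are treated as CONFIDENTIAL, mirroring the
    safety-margin default.
    """
    limit = LEVEL_RANK[clearance]
    return {
        k: v for k, v in record.items()
        if LEVEL_RANK[FIELD_LEVELS.get(k, "CONFIDENTIAL")] <= limit
    }
```

So a record sent to an EU-hosted system (INTERNAL clearance) would arrive with its salary and national-ID fields already stripped.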

Why this makes AI strategy safe

Without classification: you send everything to cloud AI. You hope it's secure. You don't know.

With classification: you send only what's safe. INTERNAL data to EU servers under control. CONFIDENTIAL on local systems. RESTRICTED blocked entirely.

Result: you get 80% of the benefits of AI-assisted analysis without compromising security.

You can analyze "here's our strategy, here's the market, here's our competitors" without worrying that payroll data or customer IDs leak.

Compliance and audit

Another benefit: when data is classified there's an audit trail. If something goes wrong you can see exactly what was sent when. If you need to account for it to an auditor, data regulator, or others you can document: "here's precisely what we used, where it went, who had access."

That documentation isn't just security. It's compliance documentation.

Start here

If you don't have data classification yet:

1. Ask yourself: "Which data would be a problem if it became public?"

2. That data is at least CONFIDENTIAL.

3. Everything else falls under INTERNAL or PUBLIC.

4. Your starting point: define your categories and begin.

You don't need perfect security to start. You need awareness of what your data means and how you handle it.

With classification in place you can use AI to analyze much more of your business without worrying about security.

That's where AI-assisted strategy becomes real within a secure framework.