
Autonomy levels: When should AI run solo, and when should you approve?

By Daniel Wegener · 12 April 2026 · 6 min read

I met a CEO who got anxious when his AI system started making independent decisions.

Not because the system made mistakes. But because he had no control. The system monitored competitors, generated reports, and sent conclusions directly to the board—without anyone reviewing first.

It worked for three months. Then the system made a decision about pricing that didn't age well. So he shut it down completely. Tapped out. Back to spreadsheets and meetings.

That's the wrong way to solve it. The problem wasn't AI. The problem was that he hadn't thought through autonomy levels.

The three levels

There are three autonomy levels for AI-based strategic systems, and you have to choose a level for each task.

Level 1: Insights Only

AI analyzes. Humans decide everything.

When should you use it?

When it's sensitive or high-stakes. Mergers. Layoffs. Strategic pivots. Massive investments.

A CEO with an acquisition coming up asked his AI to analyze the target from every angle. Finance. Culture fit. Technology. HR.

AI returned: "Financially a great buy. But their culture is completely different from yours, and their key people will probably leave when you take over."

The leadership team came together. They decided the price needed to be lower to compensate for the risk. They negotiated harder. It meant they kept some people post-acquisition instead of losing them.

If the system had run at Level 2, the recommendation would have gone out with only a quick sign-off, and the cultural risk might never have been debated. When risk is high, humans need to be in control.

Practice: Level 1 for strategic pivots, M&A, major leadership shifts, budget decisions over XXX DKK.

Level 2: Auto with Approval

AI does the work. Humans approve it.

When should you use it?

When the task is semi-routine but important. Competitor monitoring. Quarterly strategy updates. Risk assessments. Market trends.

A company set their system to monitor competitors daily. Every Friday it sent a "Weekly Competitor Snapshot" — what have competitors done? New products? Price changes? Hiring? New markets?

Their CMO looked at it, approved it, and sent it to the board. That usually took 20 minutes. But it meant the board was never surprised: they knew competitors' moves before the rest of the market did.

If the system had run at Level 1, leadership would need to re-analyze everything every Friday. If it had run at Level 3, there'd be no quality control.

Level 2 gave the perfect balance.

Practice: Level 2 for recurring reports, trend analysis, competitive intelligence, culture monitoring, performance dashboards.
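That approve-before-send loop can be sketched as a simple gate. This is an illustrative sketch, not a prescribed implementation: `level2_pipeline`, the draft text, and the approval callback are all made-up names.

```python
def level2_pipeline(draft, approve):
    """Level 2: AI produces a draft; a human approval callback
    gates whether it actually goes out."""
    if approve(draft):
        return draft   # approved: forward to the board
    return None        # rejected: back to the system for rework

# Illustrative run: the "human" approves anything mentioning a snapshot.
snapshot = "Weekly Competitor Snapshot: 2 new products, 1 price change."
result = level2_pipeline(snapshot, approve=lambda d: "Snapshot" in d)
print("sent" if result else "held back")  # → sent
```

The point of the design is that the AI never talks to the board directly: every output passes through one human decision before it leaves the building.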

Level 3: Full Auto

AI executes within pre-defined boundaries.

When should you use it?

Only when the task is hyper-routine, low-risk, and has clear rules.

A company used their system for daily data collection: tracking new job postings on competitors' websites, collecting prices from 50 competitors, and fetching new business registrations from the Registry.

The system did it all automatically. No human touched it; it just ran. And every month a dataset arrived that their analysts used to map trends.

If someone had done it manually, it would have taken 30 hours per month. And people would have forgotten it half the time.

Level 3 freed up many hours for more important work.

Practice: Level 3 for data collection, routine monitoring, alert systems, documentation generation (from structured data), scheduling.

Decision matrix

Here's how you choose the level for each task:

| Task | Consequence of error | Creativity required | Data stability | Level |
|------|----------------------|---------------------|----------------|-------|
| M&A analysis | Catastrophic | High | Low | 1 |
| Quarterly review | High | High | Low | 1-2 |
| Competitor monitoring | Medium | Low | High | 2-3 |
| Data collection | Low | None | High | 3 |
| Strategy choice | Catastrophic | High | Low | 1 |
| Risk assessment | High | High | Low | 1-2 |
| Trend detection | Medium | Low | High | 2-3 |
| Alert generation | Low | None | High | 3 |
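The matrix can be expressed as a small lookup function. This is a sketch of the table's logic only; the function name and the string scores are illustrative, not from any real system.

```python
def choose_level(consequence, creativity, data_stability):
    """Map the three matrix dimensions to an autonomy level.

    Scores follow the table: consequence in {"catastrophic", "high",
    "medium", "low"}, creativity in {"high", "low", "none"},
    data_stability in {"high", "low"}.
    """
    if consequence == "catastrophic":
        return "Level 1"      # humans decide everything
    if consequence == "high":
        return "Level 1-2"    # AI drafts, humans stay close
    if consequence == "medium" and data_stability == "high":
        return "Level 2-3"    # approval, loosened over time
    if consequence == "low" and creativity == "none":
        return "Level 3"      # full auto within boundaries
    return "Level 2"          # default: AI works, humans approve

print(choose_level("catastrophic", "high", "low"))  # M&A analysis → Level 1
print(choose_level("low", "none", "high"))          # data collection → Level 3
```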

Rule of thumb: the higher the consequence of error and the more creativity a task requires, the lower the autonomy level. Stable data and clear rules push a task toward Level 3.

The credit budget system

Here's a practical guardrail: credit budget.

Give your AI system a "budget" of credits per month. Every time it acts autonomously (Level 3) or recommends something (Level 2), it "spends" a credit.

When the budget is gone, the system needs human approval again.

Example: give the system 100 credits for the month. Every Level 3 action costs one credit, and so does every Level 2 recommendation.

If the system acts too aggressively, it runs out of credits early. Then human approval is required again for the rest of the month.

It means you don't need to trust the system blindly—you trust it in a controlled way.
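The credit mechanism fits in a few lines of code. A minimal sketch under assumptions: the `CreditBudget` name and the one-credit-per-action cost are illustrative, not a prescribed implementation.

```python
class CreditBudget:
    """Monthly guardrail: autonomous actions spend credits; when the
    budget is empty, every further action needs human approval."""

    def __init__(self, monthly_credits):
        self.remaining = monthly_credits

    def request(self, cost=1):
        """Return True if the system may act autonomously right now."""
        if self.remaining >= cost:
            self.remaining -= cost
            return True
        return False  # budget exhausted: route to a human

budget = CreditBudget(monthly_credits=3)
for action in ["price scrape", "alert", "report draft", "report send"]:
    status = "auto" if budget.request() else "needs approval"
    print(f"{action}: {status}")
```

With a budget of 3, the first three actions run on their own and the fourth is routed to a human, which is exactly the controlled-trust behavior described above.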

The practical approach

Getting started:

1. Inventory all strategic tasks. Analyses, reports, decisions, actions.

2. Classify by risk. High: major consequence if it fails. Medium: some consequence. Low: minor consequence.

3. Assign autonomy level.
   - High risk = Level 1 (AI analyzes, humans decide)
   - Medium risk = Level 2 (AI acts, humans approve)
   - Low risk = Level 3 (AI acts, humans get a report)

4. Set credit budget. If unsure, start conservative. 50–100 credits per month for small companies.

5. Monitor and adjust. Each month: "Did the system do something we wouldn't have approved?" If yes, reduce autonomy.
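Steps 1 to 3 above can be sketched as a tiny inventory-to-level mapping. The task names and the `RISK_TO_LEVEL` dict are illustrative placeholders for your own inventory.

```python
# Step 2/3 sketch: classify each inventoried task by risk,
# then assign the matching autonomy level.
RISK_TO_LEVEL = {"high": 1, "medium": 2, "low": 3}

inventory = {
    "M&A analysis": "high",
    "weekly competitor snapshot": "medium",
    "daily price collection": "low",
}

assignments = {task: RISK_TO_LEVEL[risk] for task, risk in inventory.items()}
for task, level in assignments.items():
    print(f"{task}: Level {level}")
```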

What the CEO should have done

The CEO I mentioned at the start? He should have done exactly this before letting the system loose.

He should have said: "Data collection and monitoring run at Level 3, fully automatic. Competitor reports run at Level 2; I approve them before the board sees anything. Pricing and strategic decisions stay at Level 1."

Then he should have set a credit budget. And monitored what the system did.

The result: he'd have kept control, and the system would have had freedom where it could be trusted. Today he wouldn't be without AI. He'd be partnering with it.

Autonomy levels aren't about stopping AI. They're about structuring where you use it—and when you keep the wheel.