Alright, lads and lasses — hello again, friends.
Today we’re diving straight into the delightful chaos otherwise known as Microsoft Security Copilot being bundled into M365 E5.
You may have seen the announcement and thought:
“Brilliant, free AI for security! What a lovely day.”
Calm down, Shakespeare.
This isn’t a fairy tale — it’s enterprise security. And as always, the devil is not merely in the details; he’s chairing the architecture review.
Strap in. Kettle on. Let’s get into it.
What Security Copilot actually is (minus the glittery marketing brochures)
Security Copilot is no longer a fancy chat window pretending to be a SOC analyst.
It’s now a proper agent-based system, stitched directly into the guts of Microsoft’s security stack.
You’re getting a whole squad of AI-powered helpers:
Defender Agent
The one that digs through attack chains, correlates MITRE techniques, and basically does the grim work junior analysts cry over.
Entra Agent
Your new bouncer for identity anomalies, dodgy sign-ins from suspicious time zones, and broken Conditional Access rules that your team “meant to fix last quarter”.
Intune Agent
For vulnerabilities, misconfigurations, risk modelling — basically the one poking at all your Windows devices asking, “Mate, why are you like this?”
Purview Agent
Your digital gossip detector: “Who touched what, where, why, and should they even have access to it?”
Copilot Chat & Playbooks
Now with its own scripting language, because of course we needed yet another one.
On the bright side, you can finally automate triage workflows without sacrificing a goat.

The fuel: SCUs — the mysterious crypto-currency of Microsoft security
Security Copilot doesn't run on fairy dust; it runs on SCUs (Security Compute Units), Microsoft's version of calories for AI agents.
Here’s the catch:
- Every 1,000 E5 users gets you 400 SCU/month.
- The included allocation is capped at 10,000 SCU/month.
- Anything heavyweight (incident correlation, phishing triage, multi-stage investigations, large-context reports) will absolutely chew through SCU like a drunk rugby team through a buffet.
- A mature SOC can burn a month's SCU in about two weeks (rough maths in the sketch below).
If you hit zero, the system doesn’t die, but it becomes very British:
Slow, polite, and not particularly helpful.
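To make the maths concrete, here's a rough back-of-the-envelope sketch in Python using the entitlement figures above. The 320 SCU/day burn rate is my own made-up placeholder, not anything Microsoft publishes.

```python
# Rough SCU budget estimator using the entitlement figures quoted above.
# The daily burn rate is an illustrative guess, not an official figure.

SCU_PER_1000_E5_USERS = 400   # included grant per 1,000 E5 seats, per month
INCLUDED_SCU_CAP = 10_000     # ceiling on the included monthly grant

def included_scu(e5_seats: int) -> int:
    """Monthly SCU grant for a given E5 seat count, capped at the included maximum."""
    return min((e5_seats // 1000) * SCU_PER_1000_E5_USERS, INCLUDED_SCU_CAP)

def days_until_empty(monthly_grant: int, daily_burn: float) -> float:
    """How long the grant lasts at a given average daily burn."""
    return monthly_grant / daily_burn if daily_burn else float("inf")

grant = included_scu(12_000)   # 12,000 seats -> 4,800 SCU/month
print(f"Included grant: {grant} SCU/month")
print(f"Lasts ~{days_until_empty(grant, 320):.0f} days at 320 SCU/day")  # ~15 days, i.e. two weeks
```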
Architectural bits that will absolutely crumble if you don’t prepare
1. API strain — your poor service bus
Copilot constantly hammers internal Defender, Entra, Intune and Purview APIs.
If your integrations are held together with duct tape and good intentions — expect throttle gates to slam shut faster than a London pub at 11pm.
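If you're poking those APIs yourself (Graph, in this sketch), at least respect the throttle. Here's a minimal defensive wrapper, assuming you've already sorted a bearer token via MSAL or similar:

```python
# Minimal defensive wrapper for Microsoft Graph calls that may get throttled (HTTP 429).
# Assumes you already have a bearer token; acquiring one (e.g. via MSAL) is out of scope here.
import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def graph_get(path: str, token: str, max_retries: int = 5) -> dict:
    """GET a Graph resource, backing off politely when the service says slow down."""
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(max_retries):
        resp = requests.get(f"{GRAPH}{path}", headers=headers, timeout=30)
        if resp.status_code == 429:
            # Graph usually tells you how long to wait; fall back to exponential backoff if not.
            time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Gave up on {path} after {max_retries} throttled attempts")

# Example: the alerts_v2 endpoint is real; whether Copilot's own traffic shares your
# throttling budget is my assumption, so keep your own calls polite regardless.
# alerts = graph_get("/security/alerts_v2?$top=10", token="<your token>")
```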
2. RBAC suddenly matters more than your morning coffee
If you haven’t segmented roles, congratulations — you just gave half the company a VIP pass to your security telemetry.
No AI magic will fix that.
3. Telemetry quality = output quality
If your logs are shallow, outdated, inconsistent, or stored “somewhere in Western Europe because IT said so”, Copilot will produce the analytic equivalent of horoscopes.
4. Multivendor SOC setups will feel the pain
Copilot plays nicest inside the Microsoft theme park.
If you’ve got Palo Alto here, Okta there, a bit of CrowdStrike duct-taped somewhere — well… Copilot will do its best, bless it, but don’t expect miracles.
What Copilot actually does well
When your architecture isn’t a fire hazard, you get:
- Phishing triage in seconds, not hours.
- Lateral movement analysis without twenty Chrome tabs.
- Attack chain visualisation that doesn't look like a crime map from a dodgy detective drama.
- Hardening recommendations that don't require sacrificing sleep.
- Identity risk analysis that's actually reasonable.
- Incident enrichment that feels like having a seasoned IR analyst nearby.
If everything's tuned correctly, your SOC's performance improves by 2× to 4×.
And yes — that’s real. Not marketing nonsense.
The risks everyone politely avoids mentioning
- SCU bottlenecks: you will hit them. Everyone hits them.
- AI ≠ replacement for engineers: don't even start.
- Bad logs = bad output: physics still applies.
- Non-Microsoft tooling = partial visibility: don't pretend otherwise.
- Agent actions need auditing, because letting an AI run wild in production is how horror films begin.
How to deploy this properly (a British survival guide)
1. Audit everything in M365 E5
Know what you actually own.
Shocking how many organisations can’t answer that.
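A quick way to start, reusing the graph_get helper from earlier: pull your licence SKUs and their service plans from Graph and see what's actually switched on. The field names are standard subscribedSkus properties; mapping them to actual features is your homework.

```python
# "Know what you actually own": list licence SKUs and their service plans,
# reusing the graph_get helper sketched earlier. /subscribedSkus is a standard v1.0 endpoint.
def list_owned_skus(token: str) -> None:
    for sku in graph_get("/subscribedSkus", token)["value"]:
        plans = sorted(p["servicePlanName"] for p in sku.get("servicePlans", []))
        print(f"{sku['skuPartNumber']}: {sku['consumedUnits']} seats in use")
        print("  service plans:", ", ".join(plans))

# list_owned_skus(token="<your token>")
```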
2. Turn on proper logging
If your logs last 7 days — congratulations, you’re flying blind.
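If your telemetry lands in Log Analytics / Sentinel workspaces, a sweep like this (using the azure-mgmt-loganalytics SDK) will tell you which workspaces are quietly binning history. The 90-day floor is my assumption, not an official number.

```python
# Flag Log Analytics workspaces whose retention is too short to investigate anything.
# Requires: pip install azure-identity azure-mgmt-loganalytics
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

MIN_RETENTION_DAYS = 90                 # my assumed sane floor; adjust to your own policy
SUBSCRIPTION_ID = "<your-subscription-id>"

client = LogAnalyticsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for ws in client.workspaces.list():
    days = ws.retention_in_days or 0
    if days < MIN_RETENTION_DAYS:
        print(f"{ws.name}: only {days} days of retention - that's flying-blind territory")
```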
3. Pick 3–5 SOC scenarios for automation
Phishing, lateral movement, DLP, identity anomalies — start somewhere sane.
4. Monitor SCU burn rate
Otherwise, you’ll be writing tearful emails to your CFO.
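I'm not aware of one blessed API for SCU usage, so treat this as a shape: feed it whatever daily burn figures you can export from the usage dashboard and it'll tell you roughly when the tank runs dry.

```python
# Project when the monthly SCU grant runs out, given recent daily usage.
# Usage numbers are placeholders - export yours from the Security Copilot usage
# dashboard (the exact export route is left as an assumption here).
from datetime import date, timedelta
from statistics import mean

monthly_grant = 4_800                           # included + any purchased SCU this month
used_so_far = 2_900                             # SCU consumed so far this month
recent_daily_usage = [310, 280, 350, 295, 330]  # last few days of burn

avg_burn = mean(recent_daily_usage)
remaining = monthly_grant - used_so_far
days_left = remaining / avg_burn

print(f"Average burn {avg_burn:.0f} SCU/day, {remaining} SCU left")
print(f"Projected empty around {date.today() + timedelta(days=int(days_left))}")
if days_left < 7:
    print("Start drafting that tearful email to the CFO, or trim some playbooks.")
```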
5. Fix RBAC
If everyone has admin rights — you don’t have security, you have chaos.
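A quick head-count helps. This reuses the graph_get helper from earlier to list activated directory roles and how many people hold each; if "Global Administrator" needs two hands to count, start there.

```python
# Quick head-count of activated directory roles, reusing the graph_get helper
# from earlier. Paging is ignored for brevity; fine for a first look.
def role_headcount(token: str) -> None:
    for role in graph_get("/directoryRoles", token)["value"]:
        members = graph_get(f"/directoryRoles/{role['id']}/members", token)["value"]
        print(f"{role['displayName']}: {len(members)} member(s)")

# role_headcount(token="<your token>")
```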
6. Build your own playbooks
Copilot is powerful, but only if your workflows exist outside someone’s imagination.
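The point isn't the tooling, it's that the workflow is written down before you automate it. Here's the shape of a phishing-triage playbook sketched as plain Python steps; this is emphatically not Security Copilot's own playbook syntax, just an outline you'd port into it.

```python
# The shape of a phishing-triage playbook written down as explicit steps.
# NOT Security Copilot's playbook syntax - just an outline to port into it,
# with each numbered step becoming one agent or playbook action.
from dataclasses import dataclass

@dataclass
class Verdict:
    is_phish: bool
    reason: str

def triage_reported_email(message_id: str) -> Verdict:
    # 1. Pull the reported message and its headers (Defender / Graph, per your setup).
    # 2. Reputation-check or detonate attachments and URLs.
    # 3. Check who else received the same message, and who clicked.
    # 4. If confirmed: purge copies, block sender and URLs, open an incident.
    raise NotImplementedError("Each step maps onto your own tooling")
```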
7. Set SOC performance metrics
No metrics → no results → no budget next year.
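MTTA and MTTR are the usual suspects. The arithmetic is trivial; the discipline is collecting the timestamps. The incident data below is made up for illustration.

```python
# Mean time to acknowledge / resolve, computed from incident timestamps.
# The incidents below are made up; in practice pull them from your SIEM or ticketing tool.
from datetime import datetime as dt
from statistics import mean

incidents = [
    {"created": dt(2025, 3, 1, 9, 0),  "acked": dt(2025, 3, 1, 9, 20),  "resolved": dt(2025, 3, 1, 13, 0)},
    {"created": dt(2025, 3, 2, 14, 5), "acked": dt(2025, 3, 2, 14, 50), "resolved": dt(2025, 3, 3, 10, 0)},
]

mtta_min = mean((i["acked"] - i["created"]).total_seconds() / 60 for i in incidents)
mttr_hrs = mean((i["resolved"] - i["created"]).total_seconds() / 3600 for i in incidents)

print(f"MTTA: {mtta_min:.0f} minutes, MTTR: {mttr_hrs:.1f} hours")
# Measure these before and after rollout; the "2x to 4x" claim should show up here, or it didn't happen.
```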
Final thoughts (aka the “stop dreaming and start engineering” bit)
Security Copilot is genuinely brilliant if your infrastructure isn’t a circus.
If:
- your logs are clean,
- your pipelines are stable,
- your roles are segmented,
- your team is trained,
- your processes are consistent,
- and your architecture isn't drawn on a napkin,
then you'll get a security boost that feels like cheating (legally).
But if your whole setup is held together with sticky tape and a prayer, Copilot will end up as just another posh button that everyone's too scared to press, in case the whole kit and caboodle goes pear-shaped.
Cheers for sticking with me through all that guff. Proper champion, you are.
Alex