Govern Entra risky-user unblocks via ITSM approvals and audit

The problem: “Risky user” tickets bounce between SecOps and Service Desk

A common identity-ops pain point:

  • Microsoft Entra ID Protection flags a risky user / risky sign-in.
  • A ticket gets created (often P2/P1 depending on the user).
  • The service desk can’t safely unblock the user, and SecOps doesn’t want to manually chase context + approvals.
  • Result: long delays, inconsistent decisions, and high risk if someone “just clicks unblock.”

Microsoft explicitly calls out that remediation/unblocking needs careful validation (and can be self-remediated in some cases), but in practice many orgs still end up with helpdesk tickets and manual admin actions. (learn.microsoft.com)

This is exactly where AI can triage, but should not directly execute changes in identity systems.


Proposed pattern: AI triage + governed execution (Autom Mate as the control layer)

Use AI to summarize and recommend, but route all real actions through Autom Mate as the deterministic execution + governance layer (policy checks, approvals, audit trail, idempotency).

Autom Mate is designed to orchestrate workflows across ITSM + identity + collaboration channels with guardrails and audit visibility.


End-to-end workflow (one complete blueprint)

1) Trigger

  • Trigger: A new incident is created in ServiceNow with category “Identity / Sign-in blocked / Risky user” (or created from a security alert pipeline).
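The trigger condition above can be sketched as a simple filter over the incoming incident payload. This is a hypothetical sketch: the field names (`category`, `short_description`) follow the standard ServiceNow incident table, but the category strings and keywords are illustrative assumptions.

```python
# Hypothetical sketch: decide whether an incoming ServiceNow incident
# should start the "Risky User Triage" hyperflow.
# Category prefixes and keywords are assumptions, not a fixed schema.

TRIGGER_CATEGORY_PREFIXES = ("identity", "security")
TRIGGER_KEYWORDS = ("sign-in blocked", "risky user")

def should_trigger(incident: dict) -> bool:
    category = incident.get("category", "").lower()
    summary = incident.get("short_description", "").lower()
    if not any(category.startswith(p) for p in TRIGGER_CATEGORY_PREFIXES):
        return False
    # Match keywords in either the category path or the short description.
    return any(kw in (category + " " + summary) for kw in TRIGGER_KEYWORDS)
```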

2) Validation (context + policy checks)

Autom Mate runs a “Risky User Triage” hyperflow:

  • Pull ticket fields + requester identity from ServiceNow.
  • Query Entra context (risk level, sign-in details, user status, recent changes).
  • Apply deterministic policy checks, for example:
    • Is the user a privileged role / break-glass / service account?
    • Is the block driven by a high-confidence risk detection?
    • Has the user completed self-remediation (password reset / MFA registration) within the last X hours?
    • Is the request coming from the user’s manager / verified channel?
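The policy checks above can be sketched as a pure, deterministic function. Everything here is illustrative: the role names, risk levels, 24-hour self-remediation window, and the outcome labels are assumptions for the sketch, not Entra or Autom Mate defaults.

```python
# Hypothetical sketch of the deterministic policy checks.
# Outcomes: "block" (never auto-unblock), "needs_approval",
# or "auto_containment" (safe containment without sign-off).

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

PRIVILEGED_ROLES = {"Global Administrator", "Privileged Role Administrator"}

@dataclass
class RiskContext:
    roles: set
    is_service_account: bool
    risk_level: str                      # "low" | "medium" | "high"
    last_self_remediation: "datetime | None"
    verified_channel: bool               # request came via manager/verified channel

def evaluate_policy(ctx: RiskContext) -> str:
    # Privileged / break-glass / service accounts are never auto-handled.
    if ctx.is_service_account or ctx.roles & PRIVILEGED_ROLES:
        return "block"
    recent_remediation = (
        ctx.last_self_remediation is not None
        and datetime.now(timezone.utc) - ctx.last_self_remediation < timedelta(hours=24)
    )
    if ctx.risk_level == "high" or not ctx.verified_channel:
        return "needs_approval"
    if recent_remediation:
        return "auto_containment"
    return "needs_approval"
```

Because the function is pure, the same evidence always yields the same outcome, which is what makes the decision auditable.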

Integrations used

  • ServiceNow: REST/HTTP/Webhook action (if you don’t have a library connector available)
  • Microsoft Entra ID: REST/HTTP/Webhook action (Microsoft Graph)
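The Entra context lookup maps to the Microsoft Graph `identityProtection/riskyUsers` endpoint. A minimal sketch, assuming an app registration with the appropriate Identity Protection read permission; token acquisition is omitted:

```python
# Sketch of the Entra risk-context lookup via Microsoft Graph.
# Only the fields the policy checks need are kept.

import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def risky_user_url(user_id: str) -> str:
    return f"{GRAPH}/identityProtection/riskyUsers/{user_id}"

def fetch_risk_context(user_id: str, token: str) -> dict:
    req = urllib.request.Request(
        risky_user_url(user_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return {
        "risk_level": body.get("riskLevel"),
        "risk_state": body.get("riskState"),
        "last_updated": body.get("riskLastUpdatedDateTime"),
    }
```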

3) Approval (human or rule-based)

  • If policy says “needs human sign-off,” Autom Mate posts an approval card to Microsoft Teams:
    • Approver group: SecOps on-call + Identity Ops
    • Includes AI-generated summary + deterministic evidence bundle (risk level, correlation IDs, last successful sign-in, device/IP hints)

Teams approvals are a common enterprise pattern for routing sign-off. (learn.microsoft.com)
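The approval card can be sketched as an Adaptive Card payload (the schema Teams renders). The evidence fields are the deterministic bundle described above; delivery mechanics (bot vs. workflow) are out of scope for this sketch.

```python
# Sketch of the Teams approval card (Adaptive Card schema).
# Field choices (title text, fact names) are illustrative.

def build_approval_card(summary: str, evidence: dict, ticket_id: str) -> dict:
    facts = [{"title": k, "value": str(v)} for k, v in evidence.items()]
    return {
        "type": "AdaptiveCard",
        "version": "1.5",
        "body": [
            {"type": "TextBlock", "text": f"Risky-user approval: {ticket_id}", "weight": "Bolder"},
            {"type": "TextBlock", "text": summary, "wrap": True},
            {"type": "FactSet", "facts": facts},
        ],
        "actions": [
            {"type": "Action.Submit", "title": "Approve",
             "data": {"decision": "approve", "ticket": ticket_id}},
            {"type": "Action.Submit", "title": "Reject",
             "data": {"decision": "reject", "ticket": ticket_id}},
        ],
    }
```

The `data` payload on each button is what the workflow reads back, so the decision is tied to the ticket rather than free-text chat.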

4) Deterministic execution across systems

After approval, Autom Mate executes a controlled set of actions:

  • Option A (safe default): Force password reset + revoke sessions (containment), then instruct the user to re-authenticate.
  • Option B (if validated): Dismiss user risk / unblock (only if policy conditions are met).

Important governance note:

  • AI can recommend Option A vs B, but Autom Mate enforces:
    • which actions are allowed,
    • which require approval, and
    • which are blocked for privileged identities.
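This enforcement layer can be sketched as an allow-list keyed by policy outcome: the AI recommendation is only a hint, and nothing outside the allow-list can execute. Action names here are illustrative labels, not Autom Mate identifiers.

```python
# Sketch: the allow-list decides what can actually run, regardless of
# what the AI recommended. "block" maps to the empty set on purpose.

ALLOWED_ACTIONS = {
    "auto_containment": {"revoke_sessions", "force_password_reset"},
    "needs_approval":   {"revoke_sessions", "force_password_reset", "dismiss_risk"},
    "block":            set(),
}

def enforce(policy_outcome: str, recommended: list, approved: bool) -> list:
    allowed = ALLOWED_ACTIONS.get(policy_outcome, set())
    if policy_outcome == "needs_approval" and not approved:
        return []                     # no sign-off, nothing executes
    return [a for a in recommended if a in allowed]
```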

5) Logging / audit

Autom Mate writes back to the ticket:

  • What was checked (policy outcomes)
  • Who approved
  • What actions executed (with timestamps)
  • Execution result + correlation IDs
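The write-back can be sketched as one structured work note, so the whole audit trail lives on the incident. The field layout is an assumption for illustration; `work_notes` is the standard ServiceNow incident field this text would be written to.

```python
# Sketch: build a single audit note covering checks, approver,
# executed actions, and correlation ID. Timestamps are UTC.

from datetime import datetime, timezone

def build_audit_note(checks: dict, approver: str, actions: list, correlation_id: str) -> str:
    lines = [f"[{datetime.now(timezone.utc).isoformat()}] Risky User Triage audit"]
    lines += [f"check {name}: {outcome}" for name, outcome in checks.items()]
    lines.append(f"approved by: {approver}")
    lines += [f"executed: {a}" for a in actions]
    lines.append(f"correlation id: {correlation_id}")
    return "\n".join(lines)
```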

Autom Mate emphasizes centralized governance, guardrails, and audit visibility for orchestrated workflows.

6) Exception handling / rollback

  • If Graph calls fail or return ambiguous state:
    • Autom Mate marks the ticket “Pending manual review,”
    • posts to Teams with the failure reason,
    • and prevents partial execution from being repeated incorrectly.
  • If the workflow executed containment but unblock fails:
    • keep containment in place,
    • attach a “next manual steps” checklist to the incident.
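The "prevent partial execution from being repeated" behavior can be sketched as an idempotency guard: each step's result is recorded against the ticket, so a re-run after a partial failure skips completed steps instead of repeating them. The in-memory dict is a stand-in; a real flow would persist this state alongside the incident.

```python
# Sketch of an idempotency guard keyed by (ticket, step).
# A re-run returns the recorded result instead of re-executing.

_completed = {}   # (ticket_id, step) -> result

def run_once(ticket_id: str, step: str, action):
    key = (ticket_id, step)
    if key in _completed:
        return _completed[key]        # already done: do not repeat
    result = action()
    _completed[key] = result
    return result
```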

Two mini examples

Mini example 1: VIP user locked out during travel

  • Trigger: “CEO can’t sign in” + risky sign-in flagged.
  • Policy: VIP + high risk → containment only (revoke sessions + force reset), no unblock.
  • Approval: SecOps lead approves in Teams.
  • Autom Mate updates ServiceNow with actions taken + user instructions.

Mini example 2: False positive after new device enrollment

  • Trigger: Multiple “impossible travel” detections after user enrolls a new phone.
  • Policy: Medium risk + successful MFA + known device enrollment within 24h → allow “dismiss risk” after manager confirmation.
  • Approval: Manager confirms identity in Teams; SecOps approves dismissal.
  • Autom Mate executes dismissal and closes the incident with full audit notes.

Why this is an AI governance issue (not just automation)

  • Risk: letting AI directly unblock users in Entra is dangerous (prompt injection, missing context, inconsistent decisions).
  • Need: deterministic controls—policy checks, approvals, and auditable execution.
  • Positioning: AI does triage + summarization; Autom Mate is the execution + control layer between AI and identity/ITSM systems.

Discussion questions

  1. What are your “never auto-unblock” rules (privileged roles, service accounts, geo anomalies, etc.)?
  2. Would you rather default to containment-first (revoke sessions/reset) and require explicit approval for unblock, or allow auto-unblock for low-risk cases?