Is your organization ready for AI?
Scenario-based. Anonymized. Actionable. An assessment that reveals your team's real AI safety knowledge — with a readiness report you can take to leadership.
No credit card. No sales call. Results in your inbox the same day.
How the assessment works
Your team takes a scenario-based assessment
Clinicians and staff answer scenario-based questions covering PHI identification, safe AI usage, policy awareness, and incident response. Your team interacts with Guardian’s detection engine as they go.
You get an anonymized readiness report
Results are aggregated under k-anonymity protections and mapped to regulatory citations. The report unlocks once at least 5 employees complete the assessment; no individual scores are ever exposed. A simplified sketch of this threshold rule appears after these steps.
Each gap maps to a specific requirement
Each gap maps to a specific HIPAA or ONC requirement — with a clear remediation path. Share the report with leadership to justify action.
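A minimal Python sketch of the threshold rule mentioned in step two: aggregate results are computed only once the minimum cohort is reached, and per-person scores never appear in the output. The constant, function names, and scoring math are illustrative, not Guardian's actual code.

# Minimal illustration of the k-anonymity threshold described above.
# Names and the threshold constant are illustrative, not Guardian's actual code.
from statistics import mean

MIN_COHORT = 5  # report stays locked until at least 5 employees complete

def readiness_report(individual_scores: dict[str, float]) -> dict | None:
    """Return org-level aggregates only; never expose per-person scores."""
    if len(individual_scores) < MIN_COHORT:
        return None  # not enough completions yet; report remains locked
    scores = list(individual_scores.values())
    return {
        "completions": len(scores),
        "overall_readiness": round(mean(scores)),
        # individual names and scores are deliberately absent from the output
    }

print(readiness_report({"a": 70, "b": 80, "c": 60}))              # None (still locked)
print(readiness_report({f"p{i}": 60 + i * 5 for i in range(5)}))  # aggregate only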
See what your report looks like
Every organization that completes the assessment gets a report like this — with real scores, regulatory citations, and prioritized recommendations.
AI Readiness Report — Sample
Overall Readiness: 74/100
This organization shows developing readiness across four assessed categories. Two categories fall below their regulatory targets.
Not Ready: 0–39
Emerging: 40–59
Developing: 60–79
Proficient: 80–100
Category Breakdown
Scores vs. HIPAA-derived targets. Each target reflects minimum competency for the cited regulation.
Baseline Measurement
These scores reflect your team's existing knowledge — before any training or intervention. Use them to measure the true impact of your remediation efforts.
Priority Findings
PHI Identification below §164.502(b) target
62% of clinical staff correctly identified PHI in AI prompt scenarios, against an 85% target derived from the HIPAA minimum necessary standard. Recommend targeted training on PHI boundaries in AI interactions.
Policy Awareness exceeds target
84% of staff demonstrated awareness of organizational AI policies — above the 75% target. Current training and policy communication are effective in this area.
The assessment runs on the same engine that protects your organization.
Guardian Health isn't a quiz tool. It's an AI compliance platform with a Three-Gate architecture that ensures AI systems only receive data your organization has explicitly authorized.
Gate 1
What is this data?
Classification
5 detection engines identify PHI, PII, and sensitive content in real time.
Gate 2
Who's asking?
Authorization
Role-based authorization with context-aware policy enforcement.
Gate 3
Where can it go?
Routing
Risk-based routing to the right AI model with the right protections.
Every AI interaction in your organization passes through these three gates. Including the assessment your team just took.
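For readers who want a concrete picture, here is a heavily simplified Python sketch of the Three-Gate flow: every request is classified, authorized, then routed. All names, rules, and labels below are illustrative stand-ins, not Guardian's actual implementation.

# Hedged sketch of the Three-Gate flow: classify, authorize, route.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str   # e.g. "clinician", "billing"
    text: str        # the prompt the user wants to send to an AI model

def classify(req: Request) -> set[str]:
    """Gate 1: what is this data? Label the content (stand-in for 5 engines)."""
    labels = set()
    if "MRN" in req.text or "patient" in req.text.lower():
        labels.add("PHI")
    return labels

def authorize(req: Request, labels: set[str]) -> bool:
    """Gate 2: who's asking? Role- and context-aware policy check."""
    if "PHI" in labels and req.user_role not in {"clinician"}:
        return False
    return True

def route(req: Request, labels: set[str]) -> str:
    """Gate 3: where can it go? Pick a destination with the right protections."""
    return "phi-safe-model" if "PHI" in labels else "general-model"

def three_gate(req: Request) -> str:
    labels = classify(req)
    if not authorize(req, labels):
        raise PermissionError("Request blocked by policy")
    return route(req, labels)

print(three_gate(Request("clinician", "Summarize this patient note ...")))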
Layer 1
Your data, protected.
Our AI works with patient data it can never see.
Guardian's PHI-blind architecture tokenizes sensitive data before it reaches any AI model. Five detection engines — NLP, Azure Health Data Services, regex, custom clinical recognizers, and image analysis — work in parallel to catch what rule-based systems miss.
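A minimal Python sketch of the tokenize-before-model, rehydrate-after pattern described above. The single regex stands in for the five detection engines, and every function and pattern name is illustrative rather than Guardian's actual code.

# Detected PHI is replaced with opaque tokens before the text reaches a model,
# and the original values are restored only after the response returns.
import re
import uuid

MRN_PATTERN = re.compile(r"\bMRN[- ]?\d{6,8}\b")  # toy medical-record-number detector

def tokenize(text: str) -> tuple[str, dict[str, str]]:
    vault: dict[str, str] = {}
    def _replace(match: re.Match) -> str:
        token = f"[PHI_{uuid.uuid4().hex[:8]}]"
        vault[token] = match.group(0)   # the original value never leaves this process
        return token
    return MRN_PATTERN.sub(_replace, text), vault

def rehydrate(text: str, vault: dict[str, str]) -> str:
    for token, original in vault.items():
        text = text.replace(token, original)
    return text

prompt, vault = tokenize("Summarize the chart for MRN 1234567.")
# prompt -> "Summarize the chart for [PHI_ab12cd34]."  (what the AI model sees)
model_reply = f"Chart summary for {prompt.split('for ')[1]}"   # stand-in for a model call
print(rehydrate(model_reply, vault))                           # tokens restored locally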
PHI-Blind Execution
AI models never see raw patient data. PHI is tokenized before processing and rehydrated only at authorized execution boundaries.
5 Detection Engines
Presidio NLP, Azure Health Data Services, regex patterns, custom clinical recognizers (MRN, DEA, NPI), and image PHI analysis run in a detection cascade.
Real-Time Scanning
Debounced detection as users type. PHI is flagged in real time — before content leaves your tenant.
Layer 2
Your tools, compliant.
Already using AI? Route it through Guardian. One line of code.
Guardian's API gateway sits between your team and any AI provider — Azure OpenAI, Anthropic, OpenAI, Ollama, or custom models. Every request passes through the Three-Gate pipeline: classified, authorized, and routed with the right protections.
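Assuming the gateway exposes an OpenAI-compatible endpoint (an assumption; the page does not specify the interface), the "one line of code" could look like the sketch below, using the official openai Python package. The URL, API key, and model name are placeholders.

# Hedged sketch of routing an existing OpenAI client through a proxy gateway.
from openai import OpenAI

client = OpenAI(
    base_url="https://guardian.example.com/v1",  # the one changed line: point at the gateway
    api_key="YOUR_GUARDIAN_API_KEY",
)

# The rest of the application code is unchanged; the gateway classifies,
# authorizes, and routes the request before any provider sees it.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Draft a discharge checklist."}],
)
print(response.choices[0].message.content)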
API Gateway
A unified proxy for all AI interactions. PHI scanning, rate limiting, streaming support, and a complete audit trail on every request.
Multi-Provider Routing
Six registered providers with health tracking, automatic failover, and org-level model routing. Switch providers without changing a line of application code. A simplified failover sketch appears below these cards.
Browser Extension
Side panel chat, floating action button, and context menu actions — all with real-time PHI preview and the same policy enforcement as the API.
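The failover sketch referenced above, in Python: try healthy providers in priority order, mark a provider unhealthy when a call fails, and fall through to the next. Provider names, the health model, and the call interface are all stand-ins, not Guardian's actual routing logic.

# Illustrative failover loop for multi-provider routing.
from typing import Callable

providers: list[dict] = [
    {"name": "azure-openai", "healthy": True},
    {"name": "anthropic",    "healthy": True},
    {"name": "ollama-local", "healthy": False},  # skipped until health checks recover
]

def call_with_failover(send: Callable[[str, str], str], prompt: str) -> str:
    last_error: Exception | None = None
    for provider in providers:
        if not provider["healthy"]:
            continue                      # health tracking: skip known-bad providers
        try:
            return send(provider["name"], prompt)
        except Exception as err:          # illustrative catch-all
            provider["healthy"] = False   # mark unhealthy and fall through to the next
            last_error = err
    raise RuntimeError("All AI providers unavailable") from last_error

# Usage with a fake sender that fails on the first provider:
def fake_send(name: str, prompt: str) -> str:
    if name == "azure-openai":
        raise TimeoutError("provider timeout")
    return f"[{name}] {prompt}"

print(call_with_failover(fake_send, "Summarize today's AI policy update."))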
Layer 3
Your team, trained.
They don't just learn about compliance. They experience it.
Guardian's training modules put your team inside real AI scenarios — identifying PHI, enforcing policies, responding to incidents. Interactive exercises use the same detection engine that protects your production data.
Interactive Sandbox
Six exercise types covering PHI detection, tokenization, policy enforcement, and clinical scenarios. Hands-on practice with the real detection engine.
Readiness Assessment
Scenario-based assessment across four compliance categories. Org-level readiness reports with k-anonymity protections and baseline-locked scoring.
Certification & LMS
Open Badges 3.0 certificates, SCORM/xAPI export for your existing LMS, and compliance evidence packages with full audit trail.
Layer 4
Your compliance, provable.
Show auditors exactly what your AI touched, when, and why.
Every AI interaction generates an immutable audit record. Guardian's compliance command center aggregates risk events, generates scheduled reports, and provides the executive summaries your leadership team needs.
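As an illustration of what an immutable trail can look like, the Python sketch below appends hash-chained records so later tampering is detectable. Hash-chaining is a common technique for tamper-evident logs; it is shown here as a general idea, not as Guardian's actual implementation, and every field name is illustrative.

# Illustrative append-only, hash-chained audit log.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self) -> None:
        self._records: list[dict] = []   # append-only; nothing is ever updated in place

    def append(self, actor: str, action: str, detail: str) -> dict:
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # who triggered the AI interaction or config change
            "action": action,        # e.g. "ai_request", "policy_decision", "key_rotation"
            "detail": detail,
            "prev_hash": prev_hash,  # each record is chained to the one before it
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._records.append(record)
        return record

trail = AuditTrail()
trail.append("dr.chen", "ai_request", "summarization routed to phi-safe-model")
trail.append("admin", "policy_decision", "billing role denied PHI prompt")
print(len(trail._records), "records, chain head:", trail._records[-1]["hash"][:12])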
Unified Audit Trail
Immutable logs for every AI request, policy decision, and configuration change. Designed for 7-year retention with encryption key operation tracking.
Compliance Reports
Compliance command center with health scoring, risk event detection, scheduled PDF reports, and executive summaries — generated automatically.
AI Configuration Assistant
25 natural-language tools for managing policies, detection settings, and routing — with read-only, write, and destructive operation tiers.
73% of healthcare workers have used AI with patient data.
Shadow AI isn't a future problem — it's happening now. Staff are using ChatGPT, Copilot, and other consumer AI tools with sensitive data because nobody gave them a safe alternative.
The question isn't whether your team is using AI. It's whether you know how they're using it.
5 detection engines
Zero PHI in any AI model
316 automated security tests
7-year retention architecture
WCAG 2.1 AA accessible
25 AI configuration tools