Case Studies

How Irish public sector bodies can use AiEGIS to comply with the EU AI Act. Real scenarios. Real layers. Real compliance.

01. Health Services Executive (HSE)
Healthcare · Triage AI · Governance · High Risk
Under Ireland's 'AI for Care' national strategy (2026-2030), AI is being deployed across medical imaging, clinical decision support, and documentation, and the HSE's Triage Link (Yellow Schedule) is piloting AI triage in hospitals. The EU AI Act classifies clinical decision support AI as HIGH risk (Annex III, Section 5 — Essential Services).

The Challenge

The triage AI makes time-critical decisions that directly affect patient safety. If it under-prioritises a heart attack patient or over-prioritises a minor injury, the consequences are severe. The HSE needs to ensure the AI is continuously monitored, its decisions are auditable, and human clinicians can override it at any time.

How AiEGIS Helps

AiEGIS monitors the triage AI in real time. Every triage decision is logged with a confidence score — when the AI's confidence drops below a configurable threshold (e.g., 70%), the case is automatically routed to a human clinician for review. Behavioural baselines detect when the AI's decision patterns drift from the norm (drift detection). If the AI degrades, AiEGIS quarantines it and alerts the medical team.
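The confidence-gated routing described above can be sketched as follows. This is a minimal illustration, not AiEGIS's actual API: `TriageDecision`, `route`, and the threshold constant are assumed names, with 0.70 mirroring the 70% example threshold mentioned above.

```python
from dataclasses import dataclass

# Hypothetical configurable threshold; 0.70 mirrors the 70% example above.
CONFIDENCE_THRESHOLD = 0.70

@dataclass
class TriageDecision:
    patient_id: str
    priority: str      # e.g. "immediate", "urgent", "standard"
    confidence: float  # model's self-reported confidence in [0, 1]

def route(decision: TriageDecision) -> str:
    """Auto-accept confident decisions; escalate the rest to a clinician."""
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # low confidence: a clinician reviews the case
    return "auto_accept"       # confident: logged and allowed to proceed
```

A borderline case such as `route(TriageDecision("P-001", "urgent", 0.62))` returns `"human_review"`, while a 0.91-confidence decision passes straight through.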

AiEGIS Layers Applied

L1 Agent Identity — triage AI registered with unique certificate
L4 Agent Police — real-time monitoring, risk scoring
L5 Quality Gate — output validation for clinical decisions
L10 Data Protection — PII scanning on patient data
L12 Behavioural Intelligence — drift detection on triage patterns
L14 Confidence Scoring — human-in-the-loop review routing

EU AI Act Articles

Art. 9 — Continuous risk management
Art. 10 — Patient data governance
Art. 14 — Clinician override capability
Art. 72 — Post-market monitoring

02. Central Bank of Ireland
Financial AI · Compliance · High Risk
85% of Irish financial firms already use AI, and the Central Bank has flagged AI governance as a 2026 supervisory priority. Financial institutions deploy AI for credit scoring, fraud detection, and loan approval. Under the EU AI Act, AI assessing creditworthiness is HIGH risk (Annex III, Section 5).

The Challenge

AI credit scoring models can embed bias — discriminating by postcode, age, or nationality without explicit programming. The Central Bank needs visibility into how these models make decisions, whether they drift over time, and whether customers are being treated fairly. Every decision must be auditable for regulatory inspection.

How AiEGIS Helps

AiEGIS registers each credit scoring AI with a unique identity and monitors every decision. The compliance engine enforces spending limits and approval workflows — decisions above a configurable threshold require human review. Data protection scanning catches any PII leaking through model outputs. The full audit trail satisfies Central Bank inspection requirements.
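The decision flow above could look something like the sketch below. The function name, field names, and the 50,000 threshold are all assumptions for illustration, and a real audit trail would be an append-only, tamper-evident store rather than a Python list.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an immutable audit store (Art. 12)

def score_credit(application_id: str, amount: float, model_score: float,
                 review_threshold: float = 50_000.0) -> dict:
    """Log every credit decision; hold large exposures for human approval."""
    decision = {
        "application_id": application_id,
        "amount": amount,
        "model_score": model_score,
        "route": "human_review" if amount >= review_threshold else "auto",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    AUDIT_LOG.append(decision)  # every decision is recorded, whatever the route
    return decision
```

With these assumed values, `score_credit("APP-17", 75_000.0, 0.81)` would be routed to `"human_review"` and logged before any approval is issued.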

AiEGIS Layers Applied

L1 Agent Identity — AI system registered and tracked
L3 Compliance Engine — spending limits, approval workflows
L4 Agent Police — real-time monitoring and quarantine
L10 Data Protection — PII and credential scanning
L12 Behavioural Intelligence — bias drift detection
L14 Confidence Scoring — human review for borderline cases

EU AI Act Articles

Art. 9 — Risk management system
Art. 11 — Technical documentation
Art. 12 — Immutable audit logs
Art. 14 — Human oversight on approvals

03. Data Protection Commission (DPC)
AI Data Governance · High Risk
The DPC has been vocal about AI compliance since its Visions of Data Protection 2025 report. With GDPR and the EU AI Act intersecting, the Commission needs tooling to audit AI systems for data protection compliance — detecting PII exposure, monitoring data flows, and ensuring data classification standards are met.

The Challenge

AI chatbots handling citizen queries can inadvertently collect, store, or expose personal data. An AI assistant trained on public data might memorise PII and leak it in responses. The DPC needs to verify that AI systems across regulated entities handle personal data correctly — without manually inspecting every system.

How AiEGIS Helps

AiEGIS scans every input and output of monitored AI systems for PII — emails, phone numbers, PPS numbers, credit card numbers, IP addresses. Data is classified as PUBLIC, INTERNAL, CONFIDENTIAL, or RESTRICTED. Egress monitoring blocks RESTRICTED data from leaving the system. The DPC gets automated compliance reports per AI system showing what data exists, where, and who accessed it.
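A toy version of the scan-and-classify step is sketched below. The regexes are rough approximations (a production scanner would use validated, locale-aware detectors), the function names are illustrative, and the four-level ladder is collapsed to its two ends for brevity.

```python
import re

# Approximate detectors for a few of the PII types named above.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "pps_number": re.compile(r"\b\d{7}[A-W][A-IW]?\b"),  # Irish PPS, roughly
    "irish_phone": re.compile(r"\b0\d{1,2}[ -]?\d{3}[ -]?\d{4}\b"),
}

def classify(text: str) -> str:
    """Return RESTRICTED if any PII is detected, else PUBLIC."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
    return "RESTRICTED" if hits else "PUBLIC"

def allow_egress(text: str) -> bool:
    """Egress monitoring: RESTRICTED data must not leave the system."""
    return classify(text) != "RESTRICTED"
```

So `allow_egress("PPS is 1234567A")` is `False`, while text with no detected PII passes.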

AiEGIS Layers Applied

L6 Input Sanitiser — blocks PII injection and prompt attacks
L8 Memory Integrity — detects data tampering
L10 Data Protection — PII scanning, masking, classification
L11 Network Security — egress monitoring, data flow control

EU AI Act Articles

Art. 10 — Data governance standards
Art. 12 — Record-keeping
Art. 13 — Transparency obligations
GDPR — Data protection alignment

04. Department of Education / Minister for Education
Student-Facing AI · Monitoring · High Risk
Ireland's Digital Strategy for Schools includes AI literacy as a priority. Schools are deploying AI tutoring systems and automated exam grading. Under the EU AI Act, AI used in education to determine access to education or assess students is HIGH risk (Annex III, Section 3). Additional protection applies when AI interacts with minors.

The Challenge

An AI tutoring platform for primary school children adapts content based on student performance. If the AI develops bias — recommending less challenging material to students from certain backgrounds — it could reinforce educational inequality. The AI must be monitored for fairness, its decisions must be transparent, and parents must be able to understand how their child is being assessed.

How AiEGIS Helps

AiEGIS registers the tutoring AI and monitors its interactions with students. Behavioural Intelligence detects if the AI treats different student groups differently (fairness monitoring). Confidence scoring ensures low-confidence assessments are reviewed by teachers. Data protection prevents student PII from being exposed or misused. The compliance engine generates transparency reports that parents and educators can understand.
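One way such fairness monitoring might work is by comparing selection rates across student groups, sketched below using the common "four-fifths rule" heuristic. The function names and the 0.8 ratio floor are assumptions for illustration, not AiEGIS's documented method.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, got_challenging_material: bool) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def fairness_alert(decisions, ratio_floor: float = 0.8) -> bool:
    """Flag when any group's rate falls below ratio_floor of the best group's."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    if best == 0:
        return False  # no group was ever selected: nothing to compare
    return any(rate / best < ratio_floor for rate in rates.values())
```

If group A receives challenging material 80% of the time and group B only 50%, the ratio 0.5 / 0.8 = 0.625 falls below the 0.8 floor and an alert fires.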

AiEGIS Layers Applied

L1 Agent Identity — AI system registered with education role
L3 Compliance Engine — education-specific compliance rules
L5 Quality Gate — output validation for educational content
L10 Data Protection — student PII protection (minors)
L12 Behavioural Intelligence — fairness and bias monitoring
L14 Confidence Scoring — teacher review for uncertain assessments

EU AI Act Articles

Art. 9 — Risk management for minors
Art. 13 — Transparency for parents
Art. 14 — Teacher oversight capability
Art. 15 — Accuracy and robustness

See AiEGIS in action

Try the compliance checker. Register an AI agent. Watch the dashboard monitor in real time.

Live Demo · Compliance Checker