
AI Security & Compliance Engineer
- London
- Contract
- Full-time
- Partner with engineering teams to embed security-by-design and privacy-by-design principles into AI agents, copilots, and automation workflows.
- Define and implement technical controls for:
  - Data access and protection
  - Model transparency and explainability
  - Human oversight and fallback mechanisms
  - Audit logging and traceability
- Design and enforce compliance frameworks for high-risk AI systems, aligned with the EU AI Act, FCA/PRA AI Principles, and ISO/IEC 42001.
- Conduct technical risk assessments on AI use cases, focusing on model behaviour, data governance, and user interaction.
- Collaborate on the development of model cards, risk registers, and post-market monitoring plans.
- Use Microsoft Purview to implement and manage:
  - Data classification and sensitivity labels
  - Data loss prevention (DLP) policies
  - Information protection and access controls
  - Compliance reporting and audit trails for AI-related data flows
- Work with the AI Governance Lead to assess new AI systems being introduced into the bank.
- Evaluate solutions for compliance with internal policies and external regulations.
- Provide technical input on risk mitigation strategies and onboarding documentation.
- Integrate AI security controls into CI/CD pipelines and MLOps workflows.
- Use tools such as Azure Key Vault, Microsoft Entra ID, and GitHub Actions for secure deployment and access management.
- Monitor AI systems using Azure Monitor, Log Analytics, and Application Insights.
- Translate regulatory requirements into actionable engineering guidelines and reusable controls.
- Ensure AI systems avoid prohibited practices and meet obligations around:
  - Transparency and user awareness
  - Data minimisation and lawful processing
  - Continuous monitoring and incident response
- Partner with legal, compliance, and architecture teams to align AI development with enterprise risk and governance frameworks.
- Contribute to internal working groups on Responsible AI, AI governance, and ethical design.
- Educate stakeholders on emerging AI risks and mitigation strategies.

- Strong technical background in AI/ML systems, with hands-on experience embedding security and compliance controls into product design.
- Expert-level knowledge of Microsoft Purview for data governance, classification, and compliance.
- Familiarity with AI governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001, Microsoft Responsible AI Standard).
- Hands-on experience with:
  - Azure AI services, Microsoft Copilot Studio, and Power Platform
  - Secure deployment tools (e.g., Azure Key Vault, RBAC, CI/CD pipelines)
  - Data protection and privacy controls (e.g., DLP, masking, classification)
- Knowledge of regulatory frameworks including the EU AI Act, GDPR, and FCA guidance.
- Experience working in cross-functional teams across engineering, legal, and risk domains.
- Excellent communication and documentation skills, with the ability to translate complex requirements into technical solutions.