Title: AI: Assuring Large Language Models (LLM)
Format: Online. Up to 1 CPE for attendance.
Synopsis: A deep dive into the requirements of AI assurance, exploring how organisations can provide assurance on large language models - ensuring they’re safe, trustworthy, and fit for purpose.
We will cover risks, frameworks, assurance techniques, and practical examples.
Key Takeaways:
- Assurance is critical for trust and compliance
- Risks span technical, ethical, and operational domains
- Frameworks and methods exist today to support AI assurance
We hope you can join us for what promises to be an informative session.
Speaker Details:
Speaker 1: Alex Hunt, Data Services Leader
He leads a team of data and AI audit specialists, providing innovative solutions that support internal audit, governance, risk, and compliance management. He has over 16 years' experience supporting clients in technology risk and enhancing their use of data and AI to meet internal needs.
Speaker 2: Lachlan MacDonald, Data Services Manager
An experienced internal auditor specialising in AI assurance, he helps organisations navigate the complex and evolving landscape of AI development. He provides strategic guidance to ensure compliance with current and emerging regulations, as well as alignment with industry-specific frameworks for responsible AI deployment.
Speaker 3: Anne Lucas, Data Services Manager
She is a certified Internal Auditor with extensive experience in internal audit analytics, risk management, data governance, and AI-related reviews. As part of the Data Services team at GT, she advises clients on best practices for AI adoption by assessing governance structures, policies, quality assurance methodologies, and strategic alignment to support effective implementation.
Any questions?
Please contact admin@isaca-london.org