Your course
An immersive, intensive 2-day journey into the dynamic world of artificial intelligence. As LLMs increasingly becoming an integral part of various products and services, grasping their implementation nuances and securing these implementations is paramount for maintaining robust, efficient, and trustworthy systems.
Who’s it for?
- Security Professionals
- Back-End / Front-End Developers
- System Architects
- Product Managers
- Anyone directly involved in the integration and application of LLM technologies
This course is designed for individuals with a beginner-to-intermediate understanding of artificial intelligence and cybersecurity. Whether you are a security consultant, developer, AI / LLM architect, or prompt engineer, you should have a foundational grasp of AI / LLM concepts and some experience with cybersecurity practices.
Student requirements
Basic Understanding of AI: A foundational knowledge of AI and LLM principles and applications is essential.
Familiarity with Programming: Some experience with coding, particularly in languages commonly used in AI development (e.g., Python), will be beneficial, though advanced proficiency is not required.
Understanding of cybersecurity concepts: A basic understanding of cybersecurity threats and mitigation strategies will be advantageous.
Laptop: AI labs are served in the cloud, access to python IDE is via Jupiter notebooks, the only hardware requirement is access to the latest version of Chrome or Firefox.
What students will be provided with
Comprehensive Course Materials: Detailed handouts, slides, and digital resources covering all key concepts and techniques.
Interactive Labs: Access to structured hands-on labs for practical experience in securing & hacking AI systems.
Case Studies: documented examples of real-world AI breaches and security implementations.
Direct Support: Access to the instructor for post-course questions, clarifications, and additional guidance to ensure you can apply what you have learnt effectively.
What you will learn
This course follows a practical “defense by offense” approach, anchored in real-world scenarios and hands-on labs rather than abstract theory. By the end of the course, you’ll be able to:
- Think and behave like a sophisticated attacker targeting LLM-based systems
- Understand how attackers discover and exploit prompt injections, insecure output handling, data poisoning, and other vulnerabilities in AI workflows
- Identify and exploit security weaknesses specific to LLM integrations
- Practice detecting and attacking common pitfalls (e.g., plugin misconfiguration, overreliance, and supply chain exposures) in real-world lab environments
- Implement effective prompt engineering and defensive measures
- Learn to craft prompts that minimize leakage, prevent injection, and ensure your LLM responds reliably within controlled security parameters
- Design LLM applications with minimal attack surface
- Explore best practices for restricting AI agent functionality (excessive agency), hardening plugin interfaces, and securing AI-driven workflows
- Apply forward-thinking strategies to protect training and inference data
- Develop robust security controls in real-world deployments
- Translate lab exercises into practical solutions by integrating logging, monitoring, and guardrails for continuous protection of LLM-based services