Introduction
As AI systems become more integrated into critical applications, securing them against adversarial threats is increasingly important. This course covers adversarial attacks, model exploitation risks, and defense techniques. Participants will learn how attackers manipulate inputs, poison datasets, or reverse-engineer models. Hands-on exercises help learners understand how vulnerabilities arise and how to mitigate them. By the end, participants will be able to identify and defend against AI security risks.
Course Objectives
- Understand adversarial machine learning concepts
- Explore attack types on AI systems
- Learn defensive strategies for robust models
- Analyze threats to training pipelines
- Build secure AI workflows
Target Audience
- ML engineers
- Cybersecurity professionals
- Researchers in AI safety
- Data engineers
- Students studying adversarial ML
Course Outline
- 5 Sections
- 5 Days
- Day 1: AI Security Overview
  • Threat landscape
  • Vulnerability types
  • Attack surfaces
  • Defense categories
  • Case studies
- Day 2: Adversarial Attacks
  • Evasion attacks
  • Perturbation crafting
  • FGSM, PGD
  • Model extraction
  • Hands-on: Create adversarial examples
- Day 3: Poisoning & Backdoor Attacks
  • Data poisoning strategies
  • Supply-chain risks
  • Backdoor triggers
  • Training pipeline vulnerabilities
  • Hands-on: Simulate poisoning
- Day 4: Defenses & Robustness
  • Adversarial training
  • Detection systems
  • Certified defenses
  • Input sanitization
  • Hands-on: Defend a model
- Day 5: Secure AI Deployment
  • Red-teaming AI systems
  • Continuous monitoring
  • Model governance
  • Risk assessment frameworks
  • Capstone project
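As a preview of the Day 2 material, the Fast Gradient Sign Method (FGSM) named in the outline can be sketched in a few lines. The logistic-regression "model" below, its weights, and the epsilon value are toy stand-ins for illustration, not course code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: nudge x by eps in the sign of the loss gradient.
    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w              # d(loss)/dx, derived analytically
    return x + eps * np.sign(grad)

# Hypothetical trained parameters and a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.25)

print(sigmoid(w @ x + b))       # model's confidence on the clean input
print(sigmoid(w @ x_adv + b))   # confidence drops after the attack
```

The same sign-of-gradient idea scales to deep networks, where the gradient comes from backpropagation rather than a closed form; PGD simply iterates this step under a norm constraint.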
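The Day 3 hands-on (simulating poisoning) can likewise be previewed with a minimal availability-poisoning sketch: an attacker injects mislabeled outliers into the training set to drag a nearest-centroid classifier's class centroid away from its cluster. The dataset, classifier, and injection numbers are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-cluster dataset; the clusters are easily separable.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    # Nearest-centroid classifier: one mean vector per class.
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Predict the class whose centroid is closest to each point.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

clean_acc = accuracy(fit_centroids(X, y), X, y)

# Poisoning: inject 200 far-away points mislabeled as class 0, dragging
# the class-0 centroid past the class-1 cluster.
X_pois = np.vstack([X, rng.normal(8.0, 1.0, (n, 2))])
y_pois = np.concatenate([y, np.zeros(n, dtype=int)])
pois_acc = accuracy(fit_centroids(X_pois, y_pois), X, y)

print(clean_acc)  # high: the clean clusters are well separated
print(pois_acc)   # collapses once the centroid is dragged away
```

Real poisoning attacks are subtler (small perturbations, clean-looking labels, backdoor triggers), but the mechanism is the same: corrupted training data shifts what the model learns.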