Agentic AI Red Teaming | Rob van der Veer & Ken Huang (Cloud Security Alliance)
Cloud Security Alliance experts discuss agentic AI red teaming methodologies and frameworks.
External Resources and Content Disclaimer
No Endorsement: The learning resources, websites, courses, and external content linked or referenced on this platform are provided for informational purposes only. We do not endorse, maintain, or take responsibility for the accuracy, quality, or availability of any third-party content or services.
No Direct Support: We do not provide technical support, customer service, or assistance for any external websites, platforms, or content providers. Users must contact the respective service providers directly for support, billing, or technical issues.
Use at Your Own Risk: We do not recommend or guarantee the effectiveness, safety, or suitability of any external resources for your specific learning needs or career goals. Users should conduct their own research and due diligence before enrolling in courses, purchasing materials, or following external guidance.
Content Changes: External websites and resources may change, become unavailable, or modify their content without notice. We are not responsible for broken links, outdated information, or changes to third-party services that may affect your learning experience.
Resource Link
Added January 29, 2026
AI Analysis Summary
In this expert presentation from the Cloud Security Alliance, Rob van der Veer and Ken Huang discuss the emerging field of agentic AI red teaming. They explain how to test autonomous AI agents for security vulnerabilities, the risks unique to agentic AI systems, and frameworks for evaluating AI agent security. The talk covers multi-agent system vulnerabilities, agent hijacking, goal manipulation, and comprehensive methodologies for red teaming AI agents in enterprise environments.