Two-day, lab-heavy course on exploiting and defending LLM-based systems.
February 19, 2026
A two-day commercial course on exploiting and defending LLM-based systems. Topics include prompt injection, data poisoning, excessive agency, plugin exploitation, and guardrail/monitoring patterns. Labs cover direct prompt injection, RAG poisoning, agent prompt injection, data poisoning, insecure plugin design, excessive agency, and overreliance risk. The course also covers designing secure workflows, building defenses into plugin interfaces and AI agent frameworks, and monitoring and guardrails for LLM deployments.