Design LLM red-teaming scenarios, build adversarial tests, and implement content-safety filters.
February 19, 2026
December 2025 course in the AI Security specialization. Covers designing LLM red-teaming scenarios, building structured adversarial tests, and implementing content-safety filters while preserving UX and model performance. Includes threat modeling, vulnerability assessment, and continuous monitoring for LLM applications. Targets intermediate learners with basic ML and programming experience.