Workshop on preventing unauthorized knowledge use from LLMs: un-distillable, un-finetunable, un-compressible, un-editable, un-usable models.
February 19, 2026
Held at NeurIPS in November 2025, this workshop addresses preventing unauthorized knowledge use from LLMs, aiming at models that are un-distillable, un-finetunable, un-compressible, un-editable, and un-usable by adversaries. Topics include resisting malicious fine-tuning, defending against editing and compression attacks, and watermarking and verification for misuse prevention. Live content requires NeurIPS registration; the website and accepted papers are freely available.