Lock-LLM Workshop at NeurIPS 2025
Workshop on preventing unauthorized knowledge use from LLMs: un-distillable, un-finetunable, un-compressible, un-editable, and un-usable models.
Resource Link
Added: February 19, 2026
AI Analysis Summary
NeurIPS 2025 workshop on preventing unauthorized knowledge use from LLMs: un-distillable, un-finetunable, un-compressible, un-editable, and un-usable models. Topics include resisting malicious fine-tuning, protecting against editing and compression attacks, and watermarking/verification for misuse prevention. Live content requires NeurIPS registration; the workshop website and accepted papers are freely available.