Call for Papers
Topics
We encourage paper submissions relevant to (but not limited to) the following topics:
- Survey workflow-level defenses (e.g., sandboxing, provenance tracking) that prevent data leakage and unauthorized actions.
- Explore model evaluation protocols and compliance criteria inspired by the EU AI Act’s conformity assessments (risk assessments, technical documentation, logging), and investigate the role of independent third-party audits and “safety labels” for AI systems.
- Analyze liability frameworks for harm caused by misaligned or compromised AI agents, and evaluate the implications of cross-border coordination, drawing on insights from technical AI safety cooperation among geopolitical rivals.
- Present the “AI Scientist” paradigm, which advocates non-agentic, uncertainty-aware models to avoid runaway autonomy, and discuss design patterns (e.g., provable separation of planning and execution, human-in-the-loop gates) that enforce safe operating envelopes.
Important Dates
All deadlines are 23:59 GMT.
- Submission Deadline: Aug 29, 2025
- Acceptance Notification: Sep 22, 2025
- Camera Ready Deadline: Oct 23, 2025
Submission Instructions
This workshop is non-archival. The review process is double-blind, so submissions must be appropriately anonymized.
Abstracts and papers can be submitted through OpenReview.
Format
We welcome both short papers (up to 4 pages) and long papers (up to 9 pages), excluding references and supplementary materials, which are unlimited. Please use the RegML @ NeurIPS 2025 template and submit your paper(s) in PDF format.