Generative and Protective AI for Content Creation

1st GenProCC Workshop, NeurIPS 2025
Date: December 6, 2025  ·  Room: Upper Level Room 23ABC  ·  Location: San Diego Convention Center, USA

Workshop Overview

Recent advancements in generative AI (GenAI) have empowered machines to create high-quality content across diverse modalities (text, image, audio, and video) with impressive fluency and creativity. From GPT-4o and Stable Diffusion to Sora and MMAudio, the explosion of X-to-X generation (e.g., text-to-image, video-to-audio) is unlocking new frontiers in science, education, entertainment, and art.

While GenAI has shown significant potential in creative applications (e.g., music, films, arts), these breakthroughs also raise pressing concerns related to safety, copyright, and ethical use. Generative models can be exploited to spread misinformation, violate intellectual property rights, or diminish human agency in creative processes. As such, there is an increasing need to balance innovation with protection, ensuring that AI-powered creative tools are used responsibly and ethically.

This workshop, GenProCC: Generative and Protective AI for Content Creation, brings together researchers, creators, and practitioners at the intersection of content generation and IP protection. By uniting the generative AI and creator communities, the GenProCC workshop aims to explore the latest advances, challenges, and opportunities in this rapidly evolving field.

Topics Include:

  • Controllable Generative AI for Content Creation: This area advances generative models toward controllable X-to-X synthesis across any modality (e.g., text, image, audio, video). Controllability is an emerging research frontier, enabling models to produce outputs that precisely follow user intent and empowering creators to integrate GenAI into sophisticated creative workflows.
  • Protective AI Approaches for Content Creation: This area focuses on developing techniques to ensure the traceability and integrity of creator- or AI-generated content, including methods such as digital watermarking, fingerprinting, and provenance tracking. Moreover, benchmarking these protective techniques is essential for establishing robust standards and evaluating their real-world effectiveness. As generative models rapidly improve, safeguarding creators’ rights and mitigating broader societal risks are indispensable for responsible AI deployment.
  • Creative Practices with Generative AI: This area explores how artists and practitioners apply GenAI in real-world creative contexts, uncovering practical challenges, workflows, and ethical considerations. Insights from these studies help researchers align technical innovations with genuine user needs and inform the design of human-centered AI tools.

Schedule

Workshop Schedule — December 6, 2025

Note: All times are local (San Diego).

8:00 - 8:10

Opening Remarks

8:10 - 8:55

Invited Tech Speaker: Mark Plumbley

University of Surrey

" Audio Generative AI: Technologies and Implications"

8:55 - 9:10

Guiding Audio Editing with Audio Language Model (Oral)

Authors: Zitong Lan, Yiduo Hao, Mingmin Zhao

9:10 - 9:45

Coffee break / Poster session

9:45 - 10:30

Invited Creator Speaker: Luba Elliott

"The Search for Originality in the Age of Mainstream AI"

10:30 - 11:15

Invited Tech Speaker: Nanyun (Violet) Peng

University of California, Los Angeles & Amazon

"Collaborative and Controllable AI for Creative Co-Creation"

11:15 - 11:30

EnTruth: Tracing the Unauthorized Dataset Usage in Diffusion Models (Oral)

Authors: Jie Ren, Yingqian Cui, Chen Chen, Yue Xing, Hui Liu, Lingjuan Lyu

11:30 - 11:45

FreeBlend: Advancing Concept Blending with Staged Feedback-Driven Interpolation Diffusion (Oral)

Authors: Yufan Zhou, Haoyu Shen, Huan Wang

11:45 - 12:30

Lunch break / Poster session

12:30 - 13:15

Invited Tech Speaker: Chris Donahue

Carnegie Mellon University & Google DeepMind

"The expanding horizons of music AI research"

13:15 - 14:00

Invited Creator Speaker: Adam Cole

University of the Arts London

"The Perfect Use Of An Imperfect Medium: Towards A Morality of AI Art"

14:00 - 14:45

Invited Tech Speaker: Yu-Chiang Frank Wang

National Taiwan University & Nvidia

"Teaching Vision Language Models to See, Forget, and Imagine"

14:45 - 15:00

On Forging Semantic Watermarks in Diffusion Models: A Theoretical Perspective (Oral)

Authors: Cheng-Yi Lee, Yu-Feng Chen, Chun-Shien Lu, Jun-Cheng Chen

15:00 - 15:45

Coffee break / Poster session

15:45 - 16:00

TokenSwap: A Lightweight Method to Disrupt Memorized Sequences in LLMs (Oral)

Authors: Kaustubh Ponkshe, Parjanya Prashant, Babak Salimi

16:00 - 16:15

Perturb your Data: Paraphrase-Guided Training Data Watermarking (Oral)

Authors: Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso

16:15 - 16:30

Prefilled responses enhance zero-shot detection of AI-generated images (Oral)

Authors: Zoher Kachwala, Danishjeet Singh, Danielle Yang, Filippo Menczer

16:30 - 17:00

Closing Remarks / Networking

Accepted Papers

  1. "Guiding Audio Editing with Audio Language Model (Oral)"

    Authors: Zitong Lan, Yiduo Hao, Mingmin Zhao

  2. "EnTruth: Tracing the Unauthorized Dataset Usage in Diffusion Models (Oral)"

    Authors: Jie Ren, Yingqian Cui, Chen Chen, Yue XING, Hui Liu, Lingjuan Lyu

  3. "FreeBlend: Advancing Concept Blending with Staged Feedback-Driven Interpolation Diffusion (Oral)"

    Authors: Yufan Zhou, Haoyu Shen, Huan Wang

  4. "On Forging Semantic Watermarks in Diffusion Models: A Theoretical Perspective (Oral)"

    Authors: Cheng-Yi Lee, Yu-Feng Chen, Chun-Shien Lu, Jun-Cheng Chen

  5. "TokenSwap: A Lightweight Method to Disrupt Memorized Sequences in LLMs (Oral)"

    Authors: Kaustubh Ponkshe, Parjanya Prashant, Babak Salimi

  6. "Perturb your Data: Paraphrase-Guided Training Data Watermarking (Oral)"

    Authors: Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso

  7. "Prefilled responses enhance zero-shot detection of AI-generated images (Oral)"

    Authors: Zoher Kachwala, Danishjeet Singh, Danielle Yang, Filippo Menczer

  8. "Multimodal Robustness Benchmark for Concept Erasure in Diffusion Models"

    Authors: Ju Weng, Jia-Wei Liao, Cheng-Fu Chou, Jun-Cheng Chen

  9. "Localized-Attention-Guided Concept Erasure for Text-to-Image Diffusion Models"

    Authors: Zhuan Shi, Alireza Farashah, Rik de Vries, Golnoosh Farnadi

  10. "VSF: Simple, Efficient, and Effective Negative Guidance in Few-Step Image Generation Models By Value Sign Flip"

    Authors: Wenqi Guo, Shan Du

  11. "Bridging Reading Accessibility Gaps: Responsible Multimodal Simplification with Generative AI"

    Authors: Sharv Murgai, Shivatmica Murgai

  12. "Dynamic VLM-Guided Negative Prompting for Diffusion Models"

    Authors: Hoyeon Chang, Seungjin Kim, Yoonseok Choi

  13. "My Art My Choice: Adversarial Protection Against Image Generation"

    Authors: Ilke Demir, Anthony Rhodes, Ram Bhagat, Nese Alyuz, Sinem Aslan, Umur Ciftci

  14. "Emotional Framing as a Control Channel: Effects of Prompt Valence on LLM Performance"

    Authors: Enmanuel Felix-Pena, Tiki Li, Ayo Akinkugbe, Kevin Zhu, Shu Ze (Wayne) Chen, Ethan Hin

  15. "Evolve to Inspire: Novelty Search for Diverse Image Generation"

    Authors: Passawis Chaiyapattanaporn, Alex Inch, Yuan Lu, Yuchen Zhu, Ting-Wen Ko, Davide Paglieri

  16. "Membership and Dataset Inference Attacks on Large Audio Generative Models"

    Authors: Jakub Proboszcz, Paweł Kochański, Karol Korszun, Katarzyna Stankiewicz, Giorgio Strano, Donato Crisostomi, Emanuele Rodolà, Kamil Deja, Jan Dubiński

  17. "SHIELD: A Benchmark Study on Zero-Shot Detection of AI-Edited Images with Vision Language Models"

    Authors: Siyuan Cheng, Hanxi Guo, Zhenting Wang, Xiangyu Zhang, Lingjuan Lyu

  18. "Explainable AI-Generated Image Detection RewardBench"

    Authors: Michael Yang, Shijian Deng, William Doan, Kai Wang, Tianyu Yang, Harsh Singh, Yapeng Tian

  19. "Not All That’s Colorful Is Real: Rethinking Metrics for Image Colorization"

    Authors: Swarnim Maheshwari, Syed Ali, Arkaprava Majumdar, Panshul Jindal, Vineeth N Balasubramanian

  20. "GenAIM: A Multimodal Artificial Intelligence Music Generation Web Tool Using Lyrics or Images"

    Authors: Callie Liao, Ellie Zhang, Duoduo Liao

  21. "CPT: Controllable & Editable Design Variations with Language Models"

    Authors: Karthik Suresh, Amine Ben Khalifa, Li Zhang, Wei-ting Hsu, Fangzheng Wu, Vinay More, Asim Kadav

  22. "Hallucination as an Upper Bound: A New Perspective on Text-to-Image Evaluation"

    Authors: Seyed Amir Kasaei, Mohammad Hossein Rohban

  23. "SpotEdit: Evaluating Visually-Guided Image Editing Methods"

    Authors: Sara Ghazanfari, Wei-An Lin, Haitong Tian, Ersin Yumer

  24. "DuoLens: A Framework for Robust Detection of Machine-Generated Multilingual Text and Code"

    Authors: Shriyansh Agrawal, Aidan Lau, Sanyam Shah, Ahan M R

  25. "Prompt-Based Safety Guidance Is Ineffective for Unlearned Text-to-Image Diffusion Models"

    Authors: Jiwoo Shin, Byeonghu Na, Mina Kang, Wonhyeok Choi, Il-chul Moon

  26. "EgoAnimate: Generating Human Animations from Egocentric top-down Views via Controllable Latent Diffusion Models"

    Authors: Gurbuz Turkoglu, Julian Tanke, Iheb Belgacem, Lev Markhasin

  27. "Accelerate Creation of Product Claims Using Generative AI"

    Authors: Po-Yu Liang, Yong Zhang, Tatiana Hwa, Aaron Byers

  28. "Node-Based Editing for Multimodal Generation of Text, Audio, Image, and Video"

    Authors: Alexander Htet Kyaw, Lenin Sivalingam

  29. "First-Place Solution to NeurIPS 2024 Invisible Watermark Removal Challenge"

    Authors: Fahad Shamshad, Tameem Bakr, Yahia Salaheldin Shaaban, Noor Hussein, Karthik Nandakumar, Nils Lukas

  30. "Do Not Mimic My Voice: Speaker Identity Unlearning for Zero-Shot Text-to-Speech"

    Authors: Jinju Kim, Taesoo Kim, Dong Kim, Jong Hwan Ko, Gyeong-Moon Park

  31. "Decomate: Leveraging Generative Models for Co-Creative SVG Animation"

    Authors: Jihyeon Park, Jiyoon Myung, Seoni Shin, Jungki Son, Joohyung Han

  32. "Learning Human-Perceived Fakeness in AI-Generated Videos via Multimodal LLMs"

    Authors: Xingyu Fu, Siyi Liu, Yinuo Xu, Pan Lu, Yejin Choi, James Zou, Dan Roth, Chris Callison-Burch

  33. "Generative AI Agents for Controllable and Protected Content Creation"

    Authors: Haris Khan, Sadia Asif

  34. "Comparing Occupational Gender Bias in AI-Generated Anime-style and Realistic Illustrations"

    Authors: Ayano Ito, Shoma Mizuno, Keiichi Namikoshi, Yuko Sakurai, Satoshi Oyama

  35. "Watermarking Diffusion Language Models"

    Authors: Thibaud Gloaguen, Robin Staab, Nikola Jovanović, Martin Vechev

  36. "The Mind's Eye: A Multi-Faceted Reward Framework for Guiding Visual Metaphor Generation"

    Authors: Girish Koushik, Fatemeh Nazarieh, Katherine Birch, Shenbin Qian, Diptesh Kanojia

  37. "FlashFoley: Fast Interactive Sketch2Audio Generation"

    Authors: Zachary Novack, Koichi Saito, Zhi Zhong, Takashi Shibuya, Shuyang Cui, Julian McAuley, Taylor Berg-Kirkpatrick, Christian Simon, Shusuke Takahashi, Yuki Mitsufuji

  38. "LegalWiz: A Multi-Agent Generation Framework for Contradiction Detection in Legal Documents"

    Authors: Ananya Mantravadi, Shivali Dalmia, Abhishek Mukherji, Nand Dave, Anudha Mittal

  39. "Bridging_Reading_Accessibility_Gaps__Responsible_Multimodal_Simplification_with_Generative_AI"

    Authors: Sharv Murgai, Shivatmica Murgai

  40. "GAN-based Transfer of Interpretable Directions for Disentangled Image Editing in Text-to-Image Diffusion Models"

    Authors: Yusuf Dalva, Hidir Yesiltepe, Pinar Yanardag Delul

  41. "MITHRIL: A Multi-modal and Immersive Generative AI Approach for Roleplaying Tabletop Games"

    Authors: Michael Atkins, William Denman, Tuna Han Salih Meral, Mustafa Doga Dogan, Pinar Yanardag Delul

  42. "Side Effects of Erasing Concepts from Diffusion Models"

    Authors: Shaswati Saha, Sourajit Saha, Manas Gaur, Tejas Gokhale

  43. "Prompting Away Stereotypes? Evaluating Bias in Text-to-Image Models for Occupations"

    Authors: Shaina Raza, Maximus Powers, PARTHA PRATIM SAHA, Mahveen Raza, Rizwan Qureshi

  44. "Patch’n Play: Zero-Shot Video Editing by Fusing Local and Global Patches"

    Authors: Hidir Yesiltepe, Yusuf Dalva, Ritika Allada, Pinar Yanardag Delul

  45. "Context-Masked Meta-Prompting for Privacy-Preserving LLM Adaptation in Finance"

    Authors: Sayash Raaj Hiraou

  46. "Adaptive Originality Filtering: Rejection‑Based Prompting and RiddleScore for Culturally Grounded Multilingual Riddle Generation"

    Authors: Duy Le, Kent Ziti, Evan Girard-Sun, Vasu Sharma, Sean O'Brien, Kevin Zhu

  47. "Carbon Literacy for Generative AI: Visualizing Training Emissions Through Human-Scale Equivalents"

    Authors: Mahveen Raza, Maximus Powers

  48. "Dynamic Guardrail Generation (DGG): A Framework for Prompt-Time Mitigation of LLM Harms"

    Authors: Anh Dao Minh Nguyen

  49. "Evaluating the Evaluators: Metrics for Compositional Text-to-Image Generation"

    Authors: Seyed Amir Kasaei, Ali Aghayari, Arash Mari Oriyad, Niki Sepasian, MohammadAmin Fazli, Mahdieh Soleymani, Mohammad Hossein Rohban

  50. "Setting the DC: Tool-Grounded D&D Simulations to Test LLM Agents"

    Authors: Shengqi Li, Ziyi Zeng, Jiajun Xi, Andrew Zhu, Prithviraj Ammanabrolu

  51. "SPICE: A Synergistic, Precise, Iterative, and Customizable Image Editing Workflow"

    Authors: Kenan Tang, Yanhong Li, Yao Qin

  52. "Differentially Private Adaptation of Diffusion Models via Noisy Aggregated Embeddings"

    Authors: Pura Peetathawatchai, Wei-Ning Chen, Berivan Isik, Sanmi Koyejo, Albert No

  53. "Not All Deepfakes Are Created Equal: Triaging Audio Forgeries for Robust Deepfake Singer Identification"

    Authors: Davide Salvi, Hendrik Vincent Koops, Elio Quinton

  54. "The Name-Free Gap: Policy-Aware Stylistic Control in Music Generation"

    Authors: Ashwin Nagarajan, Hao-Wen Dong

  55. "Instance-Specific Test-Time Training for Speech Editing in the Wild"

    Authors: Taewoo Kim, Uijong Lee, Hayoung Park, Choongsang Cho, Nam Park, Young Lee

  56. "ShrutiSense: Microtonal Modeling and Correction in Indian Classical Music"

    Authors: Rajarshi Ghosh, Jayanth Athipatla

Important Dates

Paper Submission

August 29, 2025 23:59 AoE (extended from August 22, 2025)

Decision Notification

September 23, 2025 AoE

Camera Ready

October 31, 2025 23:59 AoE

Workshop Date

December 6, 2025 (Upper Level Room 23ABC)

Call for Papers & Demos

Important Dates

  • Paper Submission Deadline: August 29, 2025 23:59 AoE (Anywhere on Earth), extended from August 22, 2025
  • Decision Notification: September 23, 2025 AoE (Anywhere on Earth)
  • Camera Ready Deadline: October 31, 2025 23:59 AoE (Anywhere on Earth)
  • Workshop Date: December 6, 2025

Submit your paper through the OpenReview portal.

All accepted papers are expected to be presented in person during the poster session, and some will be selected for in-person oral talks at the workshop.


Call for Regular/Short Papers

We invite research contributions related to generative AI for content creation that are (1) original and unpublished; (2) work in progress; or (3) published at recent conferences or journals (e.g., ICML, ICLR, NeurIPS, CVPR, ICCV, TMLR, etc.), with an emphasis on controllability, protection, and real-world creative practices. If the paper was already published at a conference or journal, please explicitly provide a link to the paper and the venue name. Topics of interest based on the workshop scope include, but are not limited to:

  • Controllable X-to-X generation, where X represents any modality (e.g., text, image, audio, video)
  • Interactive or iterative generation pipelines for content creation (e.g., via code or agents)
  • Evaluations and benchmarks for controllability
  • Applications of controllability/protection in creative practices
  • Digital watermarking, fingerprinting, and provenance tracking
  • Benchmarks for evaluating protection in generative systems
  • Case studies and design research for content creation
  • Human-in-the-loop approaches for real-world GenAI creation workflows
  • Emerging applications of GenAI in content creation

Call for Demos

We welcome demo submissions from artists, designers, and interdisciplinary researchers that are (1) original and unpublished; (2) work in progress; or (3) published at recent conferences or journals (e.g., ICML, ICLR, NeurIPS, CVPR, ICCV, TMLR, CHI, SIGGRAPH, etc.), including academic prototypes and early-stage creative tools exploring human-AI collaboration, novel interfaces, or protective techniques integrated into creative workflows. If the paper was already published at a conference or journal, please explicitly provide a link to the paper and the venue name. Topics of interest include, but are not limited to:

  • AI-assisted (controllable) content creation (e.g., art, music, fashion, or performance)
  • New creative workflows or practices enabled by GenAI
  • Human-in-the-loop co-creation systems and user experience studies
  • Social, ethical, or cultural impacts of GenAI on creators and audiences
  • Evaluations of GenAI tools in creative education, prototyping, or curation
  • Interdisciplinary tools or frameworks bridging AI and creative practice
  • Emerging applications of GenAI in content creation
Submission Guidelines

The workshop accepts research, industrial, and creative demo papers of the following types:

  • Regular paper: 8 pages + references and appendix.
  • Short paper: 4 pages + references and appendix.
  • Creative demo: 2 pages + references and appendix.

Submissions should be anonymized and follow the NeurIPS 2025 LaTeX style. Please use \usepackage[dblblindworkshop]{neurips_2025} for submission and \usepackage[dblblindworkshop, final]{neurips_2025} for the camera-ready version.
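For reference, below is a minimal sketch of a submission skeleton, assuming the official neurips_2025.sty style file sits in your project directory; the title and abstract are placeholders.

    % Minimal GenProCC submission skeleton (a sketch; assumes the official
    % neurips_2025.sty file is in the working directory).
    \documentclass{article}
    % Under review: use the double-blind workshop option.
    \usepackage[dblblindworkshop]{neurips_2025}
    % Camera-ready: switch to the line below instead.
    % \usepackage[dblblindworkshop, final]{neurips_2025}

    \title{Placeholder Title}
    % The author block is suppressed under double-blind review.

    \begin{document}
    \maketitle

    \begin{abstract}
      Placeholder abstract.
    \end{abstract}

    % Body: up to 8 pages (regular), 4 pages (short), or 2 pages
    % (creative demo), plus references and appendix.

    \end{document}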


Review Process

All submissions will undergo a double-blind review process. Please ensure that your submission does not contain any identifying information about the authors.


Publication

Accepted papers will be made available through the OpenReview website and displayed on the GenProCC 2025 homepage, but are to be considered non-archival. Authors of accepted papers retain the full copyright of their work and are free to submit extended versions to conferences or journals.

Keynote Speakers

The invited speakers are organized into two complementary categories: researchers advancing GenAI (Tech) as well as creators and artists (Creators), listed below in alphabetical order by last name.

Tech Speakers

  • Chris Donahue (Carnegie Mellon University & Google DeepMind)
  • Nanyun (Violet) Peng (University of California, Los Angeles & Amazon)
  • Mark Plumbley (University of Surrey)
  • Yu-Chiang Frank Wang (National Taiwan University & Nvidia)

Creator Speakers

  • Adam Cole (University of the Arts London)
  • Luba Elliott

Organizers

Program Committee

We gratefully acknowledge the following reviewers for their contributions to the GenProCC Workshop program committee:

Contact Us

Workshop Location

NeurIPS 2025

San Diego Convention Center

Upper Level Room 23ABC

San Diego, USA