Vancouver, Canada - Saturday, July 19, 2025
Large language models (LLMs) have rapidly evolved into powerful engines capable of driving agentic workflows, i.e., autonomous sequences of actions traditionally performed by humans (e.g., booking flights, preparing administrative forms) based on textual and/or visual inputs. Embracing collaborative and federated learning is essential in this context, as these paradigms enable the aggregation of distributed data while preserving user privacy and ensuring regulatory compliance. By keeping data localized, federated approaches allow agentic workflows to continuously learn from and adapt to diverse user interactions without exposing sensitive information. This distributed learning framework not only facilitates scalable and personalized improvements but also mitigates biases by incorporating insights from a broad range of environments, ultimately amplifying the transformative potential of agentic workflows for both industry and everyday applications.
Recent commercial deployments, such as OpenAI Operator, highlight the significant impact of agentic workflows on the global economy and daily life. However, these workflows currently face several challenges, including imprecise execution (e.g., incorrectly interacting with UI elements), suboptimal tool-use efficiency (e.g., latency in processing), and limitations in adaptive user-agent interactions (e.g., ineffective co-piloting and supervision). Additionally, while agentic workflows generate valuable data from user interactions, the sensitive and localized nature of this data creates hurdles for centralized learning approaches.
Collaborative and federated learning are powerful methodologies to overcome these challenges. They facilitate collective improvement by enabling continuous workflow optimization through distributed updates of models and prompts without sharing the raw data. These methods also support personalization by tailoring agentic responses to individual user styles and preferences without compromising privacy. Importantly, they maintain strict regulatory compliance by ensuring that sensitive data remains local, which is a critical requirement under emerging legislative frameworks such as the EU AI Act and Canada's Bill C-27.
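For intuition, the following is a minimal, hypothetical sketch of this federated pattern: each client computes a parameter update on its own private interaction data, and only the resulting delta, never the raw data, is averaged on the server. The toy least-squares objective and the function names (`local_update`, `federated_round`) are illustrative stand-ins, not a specific system discussed at the workshop.

```python
import numpy as np

def local_update(global_params, X, y, lr=0.1, steps=5):
    """Client side: adapt shared parameters on private data.
    A toy least-squares loss stands in for an agent's actual objective;
    only the parameter delta (not the raw data) leaves the client."""
    w = global_params.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w - global_params

def federated_round(global_params, deltas):
    """Server side: FedAvg-style averaging of client deltas."""
    return global_params + np.mean(deltas, axis=0)

# Toy usage: three clients with private data jointly refine a shared model.
rng = np.random.default_rng(0)
w_global = np.zeros(4)
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
for _ in range(10):
    deltas = [local_update(w_global, X, y) for X, y in clients]
    w_global = federated_round(w_global, deltas)
```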
This workshop uniquely focuses on the convergence of collaborative/federated learning with agentic workflows, fostering interdisciplinary research that bridges theoretical foundations, practical implementations, and regulatory considerations.
We are soliciting contributions in the following areas:
We welcome contributions that push the boundaries at this unique intersection and aim to create an engaging forum for students, scholars, and practitioners worldwide to share insights, discuss progress, and chart future directions in this exciting field. We invite technical papers of up to 6 pages and vision/position papers of up to 4 pages (excluding references and appendices), reviewed by a workshop program committee. Submissions must be double-anonymous and use the ICML 2025 author kit available here. The review process will be facilitated via OpenReview; please make sure every author has an OpenReview account ahead of submission. The submission portal can be found here.
Accepted papers will be accessible via this website ahead of the workshop. Our workshop is non-archival, and there are no formal proceedings. We allow submissions of manuscripts that have not been accepted by an archival conference, i.e., if your paper is under submission at an archival conference/journal at the time of the workshop submission deadline, you are welcome to submit to CFAgentic.
We are looking forward to hosting an exciting set of invited speakers from diverse research backgrounds!
Topic area: Agentic workflows on the network edge
Topic area: Reinforcement learning for agentic workflows
Topic area: Human & AI-agent interactions and reasoning in LLMs
Topic area: Online reinforcement learning for agentic workflows
Topic area: Federated learning and agentic workflow optimization
Topic area: Safety and security of agentic workflows
Topic area: Agentic Workflows & Reasoning Models in Practice
Topic area: Building AI agents and facilitating their collaboration to solve tasks
Topic area: Federated learning and agentic workflows
The workshop will be held on Saturday, July 19, 2025, 8:30 a.m. – 5:15 p.m. PDT. We will be in Room 215-216 at the Vancouver Convention Center. The schedule is subject to change.
Time | Session | Speaker |
---|---|---|
08:30 - 08:35 | Welcome | Organizers |
08:35 - 09:00 | Invited Talk #1: Challenges and Opportunities for Federated Foundation Models | Han Yu |
09:00 - 09:10 | Lightning Talk #1: DBA-DFL: Towards Distributed Backdoor Attacks with Network Detection in Decentralized Federated Learning | Bohan Liu |
09:10 - 09:20 | Lightning Talk #2: CAF-I: A Collaborative Multi-Agent Framework for Enhanced Irony Detection with Large Language Models | Ziqi Liu |
09:20 - 09:45 | Invited Talk #2: Title TBA | Yuejie Chi |
09:45 - 10:00 | Coffee Break | -- |
10:00 - 10:25 | Invited Talk #3: The Importance of Exploration for Test-Time Scaling | Aviral Kumar |
10:25 - 10:35 | Lightning Talk #3: MAD-Sherlock: Multi-Agent Debate for Visual Misinformation Detection | Kumud Lakara |
10:35 - 10:45 | Lightning Talk #4: Interpretable Multi-Agent Communication via Information Gating | Stav Belogolovsky |
10:45 - 11:10 | Invited Talk #4: Collaborative and Federated Agentic AI via AG2 | Chi Wang |
11:10 - 11:35 | Invited Talk #5: Learning over Heterogeneous Networks: From Convergence Analysis to Intelligent Control | Christopher G. Brinton |
11:35 - 12:30 | Poster Session | -- |
12:30 - 13:00 | Lunch (provided by ICML conference) | -- |
13:00 - 13:25 | Invited Talk #6: Title TBA | Nick Haber |
13:25 - 13:35 | Lightning Talk #5: AGENT KB: A Hierarchical Memory Framework for Cross-Domain Agentic Problem Solving | Robert Tang |
13:35 - 13:45 | Lightning Talk #6: Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models | Lillian Sun |
13:45 - 13:55 | Lightning Talk #7: Who Should Be Consulted? Targeted Expert Selection for Rare Disease Diagnosis | Yinghao Fu |
13:55 - 14:05 | Lightning Talk #8: LLMSELECTOR: Learning to Select Models in Compound AI Systems | Lingjiao Chen |
14:05 - 14:30 | Invited Talk #7: Federated AI with Flower - Recent Advancements and Scaling to Production | William Lindskog-Münzing |
14:30 - 14:55 | Invited Talk #8: Title TBA | Jay Rodge |
14:55 - 15:30 | Coffee Break | -- |
15:30 - 15:55 | Invited Talk #9: What does it mean for agentic AI to preserve privacy? | Niloofar Mireshghallah |
15:55 - 16:05 | Lightning Talk #9: MobileA3gent: Training Mobile GUI Agents Using Decentralized Self-Sourced Data from Diverse Users | Wen-Hao Wang |
16:05 - 16:15 | Lightning Talk #10: Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs | Tahseen Rabbani |
16:15 - 17:00 | Panel Discussion | -- |
17:00 - 17:15 | Awards & Closing Remarks | Organizers |
A link to each paper will be posted once the camera-ready deadline has passed (June 27, 2025).
# | Title | Authors | Decision |
---|---|---|---|
1 | DBA-DFL: Towards Distributed Backdoor Attacks with Network Detection in Decentralized Federated Learning | Bohan Liu, Yang Xiao, Ruimeng Ye, Zinan Ling, Xiaolong Ma, Bo Hui | Oral |
2 | Interpretable Multi-Agent Communication via Information Gating | Stav Belogolovsky, Eran Iceland, Itay Naeh, Ariel Barel, Shie Mannor | Oral |
3 | Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models | Lillian Sun, Martin Pawelczyk, Zhenting Qi, Aounon Kumar, Himabindu Lakkaraju | Oral |
4 | Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs | Thierry Bossy, Julien Tuấn Tú Vignoud, Tahseen Rabbani, Juan R. Troncoso Pastoriza, Martin Jaggi | Oral |
5 | Who Should Be Consulted? Targeted Expert Selection for Rare Disease Diagnosis | Yinghao Fu, Chao Yang, Xinye Chen, Yuting Yan, Shuang Li | Oral |
6 | MAD-Sherlock: Multi-Agent Debate for Visual Misinformation Detection | Kumud Lakara, Georgia Channing, Juil Sock, Christian Rupprecht, Philip Torr, John Collomosse, Christian Schroeder de Witt | Oral |
7 | LLMSELECTOR: Learning to Select Models in Compound AI Systems | Lingjiao Chen, Jared Quincy Davis, Boris Hanin, Peter Bailis, James Zou, Matei Zaharia, Ion Stoica | Oral |
8 | MobileA3gent: Training Mobile GUI Agents Using Decentralized Self-Sourced Data from Diverse Users | WenHao Wang, Mengying Yuan, Zijie Yu, Guangyi Liu, Rui Ye, Tian Jin, Siheng Chen, Yanfeng Wang | Oral |
9 | AGENT KB: A Hierarchical Memory Framework for Cross-Domain Agentic Problem Solving | Xiangru Tang, Tianrui Qin, Tianhao Peng, Ziyang Zhou, Daniel Shao, Tingting Du, Xinming Wei, He Zhu, Ge Zhang, Jiaheng Liu, Xingyao Wang, Sirui Hong, Chenglin Wu, Wangchunshu Zhou | Oral |
10 | CAF-I: A Collaborative Multi-Agent Framework for Enhanced Irony Detection with Large Language Models | Ziqi Liu, Ziyang Zhou, Mingxuan Hu | Oral |
11 | DebFlow: Automating Agent Creation via Agent Debate | Jinwei Su, Yinghui Xia, Ronghua Shi, Jianhui Wang, Jianuo Huang, Yijin Wang, Tianyu Shi, Yang Jingsong, Lewei He | Poster |
12 | Privacy-Enhancing Paradigms within Federated Multi-Agent Systems | Zitong Shi, Guancheng Wan, Wenke Huang, Guibin Zhang, Jiawei Shao, Mang Ye, Carl Yang | Poster |
13 | LoRA-FL: A Low-Rank Adversarial Attack for Compromising Group Fairness in Federated Learning | Sankarshan Damle, Ljubomir Rokvic, Venugopal Bhamidi, Manisha Padala, Boi Faltings | Poster |
14 | Position: Agentic Federated Learning for AI-Driven Strategy Design and Optimization | Haoyuan Li, Jindong Wang, Mathias Funk, Aaqib Saeed | Poster |
15 | Vision: How to fully unleash the productivity of Agentic AI? Decentralized Agent Swarm Network | Rui Sun, Zhipeng Wang, Jiahao Sun, Rajiv Ranjan | Poster |
16 | Private Federated Learning with Provable Convergence via Smoothed Normalization | Egor Shulgin, Sarit Khirirat, Peter Richtárik | Poster |
17 | Parrot: An Agentic Classroom AI | Kalena Dai, Arya Sarukkai | Poster |
18 | Multiple Automated Finance Integration Agents (MAFIA) With Self-Healing | Arya Sarukkai, Shaohui Sun, Wei Dai | Poster |
19 | EconEvals: Benchmarks and Litmus Tests for LLM Agents in Unknown Environments | Sara Fish, Julia Shephard, Minkai Li, Ran I Shorrer, Yannai A. Gonczarowski | Poster |
20 | CoEM: Collaborative Editable Model | Kaiwen Tang, Aitong Wu, Guangda Sun | Poster |
21 | Can One Safety Loop Guard Them All? Agentic Guard Rails for Federated Computing | Narasimha Raghavan Veeraragavan, Jan F Nygård | Poster |
22 | Federated Forgetting in Agentic Workflows: GDPR Compliance Experiments with Synthetic User Logs | Zichao Li, Zong Ke | Poster |
23 | DP-AdamW: Investigating Decoupled Weight Decay and Bias Correction in Private Deep Learning | Lillian Sun, Kevin Cong, Je Qin Chooi, Russell Li | Poster |
24 | Spatio-Temporal Gradient Matching for Federated Continual Learning | Duong Minh Nguyen, Le-Tuan Nguyen, Quoc-Viet Pham | Poster |
25 | Bidding for Influence: Auction-Driven Diffusion Image Generation | Lillian Sun, Henry Huang, Fucheng Warren Zhu, Giannis Daras, Constantinos Costis Daskalakis | Poster |
26 | FEDTAIL: Federated Long-Tailed Domain Generalization with Sharpness-Guided Gradient Matching | Sunny Gupta, Nikita Jangid, Shounak Das, Amit Sethi | Poster |
27 | Federated Submodular Maximization: Improved Communication Rounds and Bit Complexity | Sreeharsh Namani, Neophytos Charalambides, Akbar Rafiey | Poster |
28 | Advancing Agentic AI: Decentralized and Verifiable Collaboration for Next-Generation Foundation Model Development | Arpita Sarker, Alexander Jesser | Poster |
29 | Leveraging Uncertainty Estimation for Efficient LLM Routing | Tuo Zhang, Asal Mehradfar, Dimitrios Dimitriadis, Salman Avestimehr | Poster |
30 | Fluid Democracy in Federated Data Aggregation | Aditya Vema Reddy Kesari, Krishna Reddy Kesari | Poster |
31 | Collective Bias Mitigation via Model Routing and Collaboration | Mingzhe Du, Anh Tuan Luu, Xiaobao Wu, Yichong Huang, Yue Liu, Dong HUANG, Huijun Liu, Bin Ji, Jie M. Zhang, See-Kiong Ng | Poster |
Paper instructions: You may add 1 page of main content to your paper to address reviewer feedback, but technical papers must not exceed 7 pages and position/vision papers must not exceed 5 pages. Please upload your final workshop paper version to OpenReview by June 27, 2025.
Poster instructions: Every accepted paper is required to present a poster in our poster session. We will assign the poster slots prior to the workshop and share them with all authors in advance. Please note the poster requirements posted here: ICML Poster Instructions. The following is taken verbatim from the official instructions:
Must not exceed 36in (H) x 24in (W) or 91cm (H) x 61cm (W)
Notice: Workshop posters must be in portrait format
Oral instructions: Every Oral (lightning talk) will be 10 minutes long: 8 minutes for the presentation and 2 minutes for Q&A. You will bring your own laptop and connect it to the projector. Every talk will be live-streamed.