About
Welcome to the FLSys Workshop, co-located with the MLSys 2023 conference.
Federated learning (FL) is an ecosystem of computational approaches and techniques that enable geographically distributed data sources to form secure and private coalitions and jointly analyze their siloed data. Data never leave the original source; each source shares only its locally trained AI/ML model with the outside world. Through this collaborative learning approach, sources can extract knowledge that is statistically more powerful, less biased, and more accurate than what each source could learn independently. At its core, federated learning combines distributed machine and deep learning methods with privacy-enhancing technologies to enable learning in a collaborative setting.
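As a lightweight illustration of this workflow, the sketch below shows a few simulated federated rounds in plain NumPy: each client trains on its own private data and shares only its updated model weights, which a server then averages. It is a minimal sketch, not tied to any particular FL framework, and all names in it (local_sgd_step, clients, global_w) are hypothetical.

```python
# Minimal, framework-agnostic sketch of federated learning rounds.
# All names here are illustrative; they are not part of any FL framework.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Each client holds its own data; the raw data never leaves the client.
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]

global_w = np.zeros(5)
for rnd in range(10):
    # Clients train locally and share only their updated model weights.
    local_models = [local_sgd_step(global_w, X, y) for X, y in clients]
    # The server aggregates the shared models (here: a simple average).
    global_w = np.mean(local_models, axis=0)

print("global model after 10 rounds:", global_w)
```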
There are still many open problems that need to be addressed before federated learning can be widely adopted and deployed in real-world settings. When applied to edge devices, such as Android phones, Raspberry Pis, and the like, federated learning is known as On-Device FL. Edge devices participating in FL typically experience limitations in computation power, memory capacity, and communication bandwidth. Hardware resources can also vary greatly among such devices; e.g., clients running on a Jetson Nano with an embedded GPU can compute much faster than those running on a Raspberry Pi with only a quad-core CPU. Battery-powered edge devices that rely on wireless communication can also suffer from network disconnections, packet drops, or battery drain.
Deploying FL algorithms directly on these edge devices is not feasible without a system that is highly efficient, aware of hardware heterogeneity, and fault-tolerant. Similarly, training extremely large models in a datacenter setting (Cross-Silo FL) across hundreds of clients requires scalable systems that provide efficient model aggregation and learning-task delegation, while also offering failover capabilities and resilience against the numerous attacks (e.g., model poisoning, data poisoning) that threaten the federated learning environment.
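To make the aggregation and failover requirement concrete, here is a hedged sketch of a FedAvg-style server step that weights each client's update by its local data size and simply skips clients that dropped out of the round. The function and variable names (aggregate, updates, num_samples) are assumptions for illustration, not drawn from any particular FL system.

```python
# Hedged sketch of server-side aggregation that weights clients by data size
# and tolerates dropped participants; names are illustrative only.
from typing import List, Optional
import numpy as np

def aggregate(updates: List[Optional[np.ndarray]],
              num_samples: List[int]) -> np.ndarray:
    """FedAvg-style weighted aggregation. A None entry marks a client that
    disconnected or timed out and is simply skipped this round."""
    received = [(u, n) for u, n in zip(updates, num_samples) if u is not None]
    if not received:
        raise RuntimeError("no client updates received this round")
    total = sum(n for _, n in received)
    return sum(u * (n / total) for u, n in received)

# Example: client 1 dropped out (e.g., battery drain or network loss).
updates = [np.ones(3), None, 3 * np.ones(3)]
num_samples = [100, 50, 300]
print(aggregate(updates, num_samples))  # weighted average of clients 0 and 2
```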
Topics of interest include, but are not limited to:
- Challenges of FL systems deployment.
- FL systems automation.
- FL systems in real-world, practical and production settings.
- FL systems with federated data management awareness.
- FL systems tailored for different learning applications, such as medical, finance, and manufacturing.
- FL systems tailored for different learning topologies, such as centralized, decentralized, and hierarchical.
- FL systems tailored for different data partitioning schemes, such as horizontal, vertical, and hybrid.
- FL systems with self-tuning capabilities.
- FL systems with failover capabilities.
- FL systems benchmark and evaluation.
- Data value and economics of data federations and FL systems.
- Auditable FL systems.
- Explainable FL systems.
- Interpretable FL systems.
- FL systems open challenges and vision perspectives.
- Incentives for forming large-scale federations across organizations.
- Operational challenges in FL systems.
- Resilient and robust FL systems.
- Standardization of FL systems.
- Trade-offs between FL systems privacy, security, and efficiency.
- Trustworthy FL systems.
- Privacy, security, and hardware co-designs for FL systems.
Call for Papers
Important Dates (Updated!)
- Submission Deadline: ~~April 12, 2023~~ April 30, 2023
- Notification of Acceptance: ~~May 10, 2023~~ May 15, 2023
- Camera-ready papers due: ~~June 8, 2023~~ May 31, 2023
- Workshop Date: June 8, 2023
- All deadlines are AoE time.
Submission Instructions
Workshop papers must describe new, previously unpublished research in this field. Reviewing will be double-blind, though authors may post their papers on arXiv or other public forums. To prepare your submission, please use the MLSys 2023 LaTeX style files. All submitted papers must be in a 2-column format. We will accept two paper formats: short papers of up to 6 pages and long papers of up to 10 pages, excluding references.
Please submit your paper via Openreview: https://openreview.net/group?id=MLSys/2023/Workshop/FLSys
Best (Student) Paper Awards
There will be a Best Paper Award and a Best Student Paper Award honoring exceptional papers published at the FLSys workshop.
Attendance & Registration
Attendance: The workshop will be held in-person only, the same as the main MLSys 2023 conference.
Registration: Same as the main conference.
Workshop Schedule
| Time | Session |
|---|---|
| 08:50 - 09:00 ET | Opening Remarks |
| 09:00 - 09:30 ET | Virginia Smith |
| 09:30 - 10:00 ET | Salman Avestimehr |
| 10:00 - 10:15 ET | Coffee Break |
| 10:15 - 10:30 ET | Paper: Privacy Tradeoffs in Vertical Federated Learning |
| 10:30 - 10:45 ET | Paper: FedGP: Buffer-based Gradient Projection for Continual Federated Learning |
| 10:45 - 11:00 ET | Paper: Partial Disentanglement with Partially-Federated GANs (PaDPaF) |
| 11:00 - 11:30 ET | Shiqiang Wang |
| 11:30 - 12:30 ET | Lunch Break |
| 12:30 - 13:00 ET | Chunxiang (Jake) Zheng |
| 13:00 - 13:15 ET | Paper: Adaptive Split Learning |
| 13:15 - 13:30 ET | Paper: Memory-adaptive Depth-wise Heterogenous Federated Learning |
| 13:30 - 14:00 ET | Alexey Tumanov |
| 14:00 - 14:15 ET | Coffee Break |
| 14:15 - 15:30 ET | Poster Session |
| 15:30 - 15:45 ET | Closing Remarks |
Accepted Papers
- Adaptive Split Learning, Ayush Chopra, Surya Kant Sahu, Abhishek Singh, Abhinav Java, Praneeth Vepakomma, Mohammad Mohammadi Amiri, Ramesh Raskar
- Clustered Federated Learning for Heterogeneous Feature Spaces using Siamese Graph Convolutional Neural Network Distance Prediction, Yuto Suzuki, Farnoush Banaei-Kashani
- Resource-Efficient Federated Hyperdimensional Computing, Nikita Zeulin, Olga Galinina, Nageen Himayat, Sergey Andreev
- Memory-adaptive Depth-wise Heterogenous Federated Learning, Kai Zhang, Yutong Dai, Hongyi Wang, Eric Xing, Xun Chen, Lichao Sun
- FedCMR: A Library for Federated Continual Model Refinement, Xiaoyang Wang, Chaoyang He, Salman Avestimehr, Mahdi Marsousi
- Efficient Vertical Federated Learning with Secure Aggregation, Xinchi Qiu, Heng Hen, Wanru Zhao, Pedro Porto Buarque de Gusmao, Nicholas Donald Lane
- Privacy Tradeoffs in Vertical Federated Learning, Linh Tran, Timothy Castiglia, Stacy Patterson, Ana Milanova [Best Paper Award]
- FedGP: Buffer-based Gradient Projection for Continual Federated Learning, Shenghong Dai, Bryce Yicong Chen, Jy-yong Sohn, S M Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, Kangwook Lee [Best Paper Award]
- FedSS: Federated Learning with Smart Selection of Clients, Ammar Tahir, Yongzhou Chen, Prashanti Nilayam
- Partial Disentanglement with Partially-Federated GANs (PaDPaF), Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč
Program Committee
- José Luis Ambite (USC)
- Nina Mehrabi (USC)
- Umang Gupta (USC)
- Stefanos Laskaridis (Brave)
- George Sklivanitis (FAU)
- Yaodong Yu (Berkeley)
- Basak Guler (UCR)
- Jiachen Liu (UMich)
- Qinbin Li (NUS)
- Nathalie Baracaldo (IBM)
- Tao Lin (Westlake)
- Mi Zhang (OSU)
- Ang Li (UMD)
- Chulin Xie (UIUC)
- Yang Liu (Tsinghua)
- Jean du Terrail (Owkin)
- Xiaoyuan Liu (Berkeley)
- Christos Louizos (Qualcomm)
- Chrysovalantis Anastasiou (Google)
- Zhiwei Fan (Meta)
- Dimitrios Dimitriadis (Amazon)
- Othmane Marfoq (Inria)
- Revant Kumar (Apple)
Contact us