With advances in computing hardware and algorithms, and the growing availability of large volumes of data, machine learning technologies have become increasingly popular. Practical systems have been deployed in various domains, including face recognition, automatic video monitoring, and even assisted driving. However, the security implications of machine learning algorithms and systems remain unclear. For example, developers still lack a deep understanding of adversarial machine learning, one of the vulnerabilities unique to machine learning systems, and are unable to evaluate the robustness of machine learning algorithms effectively. Another prominent problem is the privacy risk of applying machine learning algorithms: as the general public grows more concerned about privacy, more work on privacy-preserving machine learning systems is clearly needed.
Motivated by this situation, this workshop solicits original contributions on the security and privacy problems of machine learning algorithms and systems, including adversarial learning, algorithm robustness analysis, and privacy-preserving machine learning. The workshop will bring researchers together to exchange ideas on cutting-edge technologies and to brainstorm solutions for urgent problems arising from practical applications.
Topics of interest include, but are not limited to, the following:
Authors are welcome to submit their papers in one of the following two forms:
Full papers that present relatively mature research results related to security issues of machine learning algorithms, systems, and applications. The paper may present an attack, a defence, a security analysis, a survey, etc. Submissions of this type must follow the original LNCS format (see LNCS format) with a page limit of 18 pages (including references) for the main part (reviewers are not required to read beyond this limit) and 20 pages in total. Note that the page limit for the camera-ready paper is a maximum of 20 pages (in LNCS format).
Short papers that describe ongoing work and present new insights and inspiring ideas related to security issues of machine learning algorithms, systems, and applications. Short papers follow the same LNCS format as full papers (see LNCS format) but with a page limit of 9 pages (including references).
Submissions must be anonymous, with no author names, affiliations, acknowledgements, or obvious self-references. Accepted papers will appear in the formal proceedings. Authors of accepted papers must guarantee that their papers will be presented at the conference and must make their papers available online. There will be a best paper award.
Special Note on Springer LNCS Proceedings: Authors should consult Springer's authors' guidelines and use their proceedings templates, either for LaTeX or for Word, to prepare their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all authors of that paper, must complete and sign a Consent-to-Publish form, through which the copyright for the paper is transferred to Springer. The corresponding author signing the copyright form must match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.
Each workshop affiliated with ACNS 2025 will nominate its best paper candidates. The best workshop papers will be selected and awarded a 500 EUR prize sponsored by Springer. The list of previous best workshop papers is available here.
Invited Speaker: Dr. Rui Wang
Dr. Rui Wang received his BSc degree from Beijing University of Posts and Telecommunications in 2017 and his MSc degree from the University of Southampton in 2018. He completed his PhD at Delft University of Technology in 2024, where he is currently a postdoctoral researcher. His research interests include privacy and security challenges in federated learning.
Title: Taming Malicious Majorities in Federated Learning using Privacy-preserving Byzantine-robust Clustering
Abstract: Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate. Most existing systems, however, are only robust when the majority of clients are honest. Some approaches avoid this honest majority assumption but require the server to have access to an auxiliary dataset for filtering malicious updates. Others rely on the semi-honest majority assumption to ensure robustness and confidentiality of updates. As a result, achieving Byzantine robustness and confidentiality of updates without assuming a semi-honest majority remains a challenge.
This talk introduces a novel Byzantine-robust and privacy-preserving FL system capable of addressing malicious minorities and majorities on both the server and client sides. The proposed system ensures robustness and confidentiality, paving the way for more secure and reliable federated learning environments.
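As background for the talk, the honest-majority assumption mentioned in the abstract can be seen in classic robust aggregators such as the coordinate-wise median: as long as honest clients form a majority, every aggregated coordinate lies within the range of honest values, but a malicious majority can pull the aggregate anywhere. The following is a minimal illustrative sketch of that classic baseline only, not the speaker's proposed system; the client updates and values are made up for demonstration.

```python
# Minimal sketch (NOT the speaker's system): coordinate-wise median
# aggregation, a classic Byzantine-robust rule that assumes an honest
# majority of clients.
import numpy as np

def median_aggregate(updates):
    """Aggregate client model updates by taking the per-coordinate median.

    updates: list of 1-D numpy arrays, one per client.
    With > 50% honest clients, each median coordinate lies within the
    range of honest values; with a malicious majority, attackers can
    place the median wherever they like.
    """
    return np.median(np.stack(updates), axis=0)

# Toy example: 3 honest clients send updates near the true gradient [1, 1];
# 2 malicious clients send poisoned updates.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
malicious = [np.array([10.0, -10.0])] * 2
print(median_aggregate(honest + malicious))  # ~[1, 1]: poisoning filtered out

# With a malicious majority (3 of 5), the median becomes attacker-controlled:
print(median_aggregate(honest[:2] + [np.array([10.0, -10.0])] * 3))  # [10, -10]
```

Robust aggregators of this family (median, trimmed mean, Krum) all inherit a bounded-attacker requirement of this kind, which is precisely the limitation the talk targets.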
Name | Institution | Role |
---|---|---|
Yangguang Tian | University of Surrey | Workshop Chair |
Ye Dong | Singapore University of Technology and Design | Workshop Co-Chair |
Name | Institution |
---|---|
Meng Li | Peking University |
Binanda Sengupta | Indian Institute of Engineering Science and Technology (IIEST), Shibpur |
Yan Lin Aung | Singapore University of Technology and Design |
Jodie Knapp | Royal Holloway University of London |
Xiangfu Song | Nanyang Technological University |
Yuantian Miao | The University of Newcastle |
Ming Xu | National University of Singapore |
Yu Zheng | University of California, Irvine |
Yaxi Yang | Singapore University of Technology and Design |
Qifan Zhang | University of California, Irvine |
Chengyang Zhao | University of California, Los Angeles |
Xuesong Bai | University of California, Irvine |
Qiafan Wang | University of Birmingham |
Qian Chen | Xidian University |
Chenang Li | University of California, Irvine |
Fengzhao Shi | SANGFOR Technologies Co., Ltd. |
Ziyao Liu | Nanyang Technological University |
To register, please refer to the ACNS 2025 registration page: Register Here.
Time | Title |
---|---|
9:30--9:40 | Opening |
9:40--10:15 | Invited Talk: Taming Malicious Majorities in Federated Learning using Privacy-preserving Byzantine-robust Clustering |
10:15--10:30 | Inoussa Mouiche and Sherif Saad, TIRE: Advancing Threat Intelligence Relation Extraction with a Novel Data-Centric Framework |
10:30--10:40 | Felix Maurer, Jonas Sander and Thomas Eisenbarth, ReDASH: Fast and Efficient Scaling in Arithmetic Garbled Circuits for Secure Outsourced Inference |
10:40--11:00 | Coffee break |
11:00--11:10 | Bram van Dartel, Marc Damie and Florian Hahn, Evaluating Membership Inference Attacks in heterogeneous-data setups |
11:10--11:25 | Sanjana Nambiar and Christina Pöpper, JailFact-Bench: A Comprehensive Analysis of Jailbreak Attacks vs. Hallucinations in LLMs |
11:25--11:40 | Xiyan Shao and Yuke Liu, LLMacaroon: Re-architecting Secure LLM Applications with Macaroons |
11:40--11:55 | Lars Malmqvist, Winning at All Cost: A Small Environment for Eliciting Specification Gaming Behaviors in Large Language Models |
11:55--12:10 | Elnaz Rabieinejad, Ali Dehghantanha, Fattane Zarrinkalam and Jeff Schwartzentruber, United We Log, Divided We Identify: A Decentralized Approach for Automated Log Analysis |
12:10--12:25 | Rina Mishra and Gaurav Varshney, A Study of Effectiveness of Brand Domain Identification Features for Phishing Detection in 2025 |
12:25--12:30 | Closing remarks |
For more information, please contact the organizers at simlaworkshop2025@gmail.com.