With advances in computing hardware and algorithms, and the growing availability of large volumes of data, machine learning technologies have become increasingly popular. Practical systems have been deployed in various domains, including face recognition, automatic video monitoring, and even driver assistance. However, the security implications of machine learning algorithms and systems remain unclear. For example, developers still lack a deep understanding of adversarial machine learning, one of the vulnerabilities unique to machine learning systems, and are unable to effectively evaluate the robustness of machine learning algorithms. Another prominent problem is privacy: as the general public becomes more concerned about their privacy, more work is needed on privacy-preserving machine learning systems.
Motivated by this situation, this workshop solicits original contributions on the security and privacy problems of machine learning algorithms and systems, including adversarial learning, algorithm robustness analysis, privacy-preserving machine learning, etc. This workshop will bring researchers together to exchange ideas on cutting-edge technologies and brainstorm solutions for urgent problems derived from practical applications.
Topics of interest include, but are not limited to, the following:
Authors are welcome to submit their papers in one of the following two forms:
Full papers that present relatively mature research results related to security issues of machine learning algorithms, systems, and applications. The paper may present an attack, defence, security analysis, survey, etc. Submissions of this type must follow the original LNCS format (see LNCS format), with a page limit of 18 pages (including references) for the main part (reviewers are not required to read beyond this limit) and 20 pages in total. Note that the page limit for the camera-ready paper is a maximum of 20 pages (in LNCS format).
Short papers that describe ongoing work and offer new insights and inspiring ideas related to security issues of machine learning algorithms, systems, and applications. Short papers follow the same LNCS format as full papers (see LNCS format), but with a page limit of 9 pages (including references).
Submissions must be anonymous, with no author names, affiliations, acknowledgements, or obvious references. Once accepted, papers will appear in the formal proceedings. Authors of accepted papers must guarantee that their papers will be presented at the conference and must make their papers available online. There will be a best paper award.
Special Note on Springer LNCS Proceedings: Authors should consult Springer's authors' guidelines and use their proceedings templates, either for LaTeX or for Word, when preparing their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form, through which the copyright for the paper is transferred to Springer. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes to the authorship of the papers cannot be made.
Each workshop affiliated with ACNS 2025 will nominate best paper candidates. The best workshop papers will be selected and awarded a 500 EUR prize sponsored by Springer. The list of previous best workshop papers is available here.
The workshop will feature one or two invited keynote speakers.
Name | Institution | Role
---|---|---
Yangguang Tian | University of Surrey | Workshop Chair
Ye Dong | Singapore University of Technology and Design | Workshop Co-Chair
TBA...
Please Register Here.
TBA...
For more information, please contact the organizers at simlaworkshop2025@gmail.com