Security in Machine Learning and its Applications

SiMLA 2019





Workshop Background
With the development of computing hardware and algorithms, and, more importantly, the availability of large volumes of data, machine learning technologies have become increasingly popular. Practical systems have been deployed in various domains, such as face recognition, automatic video monitoring, and even driver assistance. However, the security implications of machine learning algorithms and systems remain unclear. For example, we still lack a deep understanding of adversarial machine learning, one of the unique vulnerabilities of machine learning systems, and are unable to evaluate the robustness of machine learning algorithms effectively. At the same time, innovations in machine learning, especially in deep learning, have also brought new tools and methodologies for solving existing problems such as spam detection and intrusion detection and response.
Motivated by this situation, this workshop solicits original contributions on the security problems of machine learning algorithms and systems, including adversarial learning, algorithm robustness analysis, and related areas. We hope this workshop will bring researchers together to exchange ideas on cutting-edge technologies and brainstorm solutions to urgent problems arising from practical applications.
Topics
Topics of interest include, but are not limited to, the following:
Adversarial machine learning
Robustness analysis of machine learning algorithms
Detection of and defense against training data poisoning attacks
Watermarking of machine learning algorithms and systems
Attacks and defenses for face recognition systems
Attacks and defenses for voice recognition and voice-commanded systems
Attacks and defenses for machine learning algorithms in program analysis
Malware identification and analysis
Spam and phishing email detection
Vulnerability analysis
Submission Guidelines
Authors are welcome to submit their papers in the following two forms:
Full papers that present relatively mature research results related to security issues of machine learning algorithms, systems, and applications. Papers may present attacks, defenses, security analyses, surveys, etc. Submissions of this type must follow the original LNCS format (see http://www.springeronline.com/lncs) with a page limit of 18 pages (including references) for the main part (reviewers are not required to read beyond this limit) and 25 pages in total.
Short papers that describe ongoing work and bring new insights and inspiring ideas related to security issues of machine learning algorithms, systems, and applications. Short papers must follow the same LNCS format as full papers (http://www.springeronline.com/lncs), but with a page limit of 9 pages (including references).
Selected papers will be published in a special issue of IJIS (International Journal of Information Security, Springer).
Submissions must be anonymous, with no author names, affiliations, acknowledgments, or obvious self-references. Accepted papers will appear in the formal proceedings. Authors of accepted papers must guarantee that their paper will be presented at the conference and must make their paper available online. There will be a best paper award.