Explainable Deep Learning-AI / ICPR'2020 Workshop

EDL-AI 2020





About
The recent focus of the AI and Pattern Recognition communities on supervised learning approaches, and in particular on Deep Learning, has resulted in a considerable increase in the performance of Pattern Recognition and AI systems, but has also raised the question of the trustworthiness and explainability of their predictions for decision-making. Instead of developing and using deep neural networks as black boxes and adapting known architectures to a variety of problems, the goal of explainable Deep Learning / AI is to propose methods to “understand” and “explain” how these systems produce their decisions. AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings raise many ethical and policy concerns that impede wider adoption of this potentially very beneficial technology. In various Pattern Recognition and AI application domains such as health, ecology, autonomous driving, security, and culture, it is mandatory to understand how the predictions correlate with the information perception and decision-making of experts. The goal of the workshop is to bring together the research community working on improving the explainability of AI and Pattern Recognition algorithms and systems. The workshop is part of ICPR'2020 and is supported by the research project XAI-LABRI.
Topics
“Sensing” or “salient features” of Neural Networks and AI systems: explaining which features, for a given configuration, yield the predictions, in both spatial (images) and temporal (time series, video) data (see the sketch after this list);
Attention mechanisms in Deep Neural Networks and their explanation;
For temporal data, explaining which features, and at what time, are most prominent for the prediction, and over which time intervals the contribution of each data source is important;
How explanation can help make Deep Learning architectures sparser (pruning) and more lightweight;
When using multimodal data, how the predictions in the different data streams are correlated and explain each other;
Automatic generation of explanations / justifications of the decisions of algorithms and systems;
Decision uncertainty and explainability;
Evaluation of the explanations generated by Deep Learning and other AI systems.
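As an illustration of the first topic, the following is a minimal sketch of a gradient-based saliency map for an image classifier. It assumes PyTorch and a pretrained ResNet-18; the model, input shape, and variable names are illustrative choices only, not a reference implementation prescribed by the workshop.

import torch
from torchvision import models

# Illustrative assumption: a pretrained ResNet-18 as the classifier under study.
model = models.resnet18(pretrained=True)
model.eval()

# Stand-in input; in practice this would be a preprocessed image tensor.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
top_class = scores.argmax()
scores[0, top_class].backward()

# Saliency map: maximum absolute gradient over colour channels, per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)

The resulting per-pixel map indicates which regions of the input most influence the predicted class score, one of the simplest forms of the spatial explanations addressed above.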
Program Committee
Christophe Garcia (LIRIS, France)
Hugues Talbot (EC, France)
Dragutin Petkovic (SFSU, USA)
Alexandre Benoît (LISTIC, France)
Mark T. Keane (UCD, Ireland)
Georges Quenot (LIG, France)
Stefanos Kolias (NTUA, Greece)
Jenny Benois-Pineau (LABRI, France)
Hervé Le Borgne (LIST, France)
Noel O’Connor (DCU, Ireland)
Nicolas Thome (CNAM, France)
Dates
Submission deadline: June 15th 2020
Workshop author notification: July 15th 2020
Camera-ready submission: July 30th 2020
Finalized workshop program: August 15th 2020