EDL-AI 2020
Computer Vision & Pattern Recognition Artificial Intelligence
The recent focus of the AI and Pattern Recognition communities on supervised learning approaches, and particularly on Deep Learning, has led to a considerable increase in the performance of Pattern Recognition and AI systems, but it has also raised questions about the trustworthiness and explainability of their predictions for decision-making. Instead of developing and using deep neural networks as black boxes and adapting known architectures to a variety of problems, the goal of explainable Deep Learning / AI is to propose methods to “understand” and “explain” how these systems produce their decisions. AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings raise many ethical and policy concerns that impede the wider adoption of this potentially very beneficial technology. In various Pattern Recognition and AI application domains, such as health, ecology, autonomous driving, security, and culture, it is mandatory to understand how predictions are correlated with the perception of information and the decision-making of experts. The goal of the workshop is to bring together the research community working on improving the explainability of AI and Pattern Recognition algorithms and systems. The workshop is part of ICPR'2020 and is supported by the research project XAI-LABRI.
The proceedings of the EDL-AI 2020 workshop will be published in the Springer Lecture Notes in Computer Science (LNCS) series. Papers will be selected through a single-blind review process (reviewers are anonymous). Submissions must be formatted in accordance with Springer's Computer Science Proceedings guidelines. Two types of contributions will be considered: