Joint Workshop on Efficient Deep Learning in Computer Vision

EDLCV 2020


Computer Vision has a long history of academic research, and recent advances in deep learning have significantly improved the ability to understand visual content. As a result of these advances on problems such as object classification, object detection, and image segmentation, industry adoption of Computer Vision has grown rapidly. However, mainstream Computer Vision research has given little consideration to speed or computation time, and even less to constraints such as power/energy, memory footprint, and model size.
This workshop, co-located with CVPR 2020, addresses the following topics:
Efficient Neural Networks and Architecture Search
- Compact and efficient neural network architectures for mobile and AR/VR devices
- Hardware-aware (latency, energy) neural architecture search targeted at mobile and AR/VR devices
- Efficient architecture search algorithms for different vision tasks (detection, segmentation, etc.)
- Optimization for latency, accuracy, and memory usage, as motivated by embedded devices
Neural Network Compression
- Model compression (sparsification, binarization, quantization, pruning, thresholding, coding, etc.) for efficient inference with deep networks and other ML models
- Scalable compression techniques that can cope with large amounts of data and/or large neural networks (e.g., not requiring access to complete datasets for hyperparameter tuning and/or retraining)
- Learning binary hash codes
Low-bit Quantized Networks and Hardware Accelerators
- Investigations into the processor architectures (CPU vs GPU vs DSP) that best support mobile applications
- Hardware accelerators to support Computer Vision on mobile and AR/VR platforms
- Low-precision training/inference & acceleration of deep neural networks on mobile devices
Datasets and Benchmarks
- Open datasets and test environments for benchmarking inference with efficient DNN representations
- Metrics for evaluating the performance of efficient DNN representations
- Methods for comparing efficient DNN inference across platforms and tasks
Label/Sample/Feature-Efficient Learning
- Label-efficient feature representation learning methods, e.g., unsupervised learning, domain adaptation, weakly supervised learning, and self-supervised learning approaches
- Sample-efficient feature learning methods, e.g., meta-learning
- Low-shot learning techniques
- New applications, e.g., the medical domain
Mobile and AR/VR Applications
- Novel mobile and AR/VR applications using Computer Vision, such as image processing (e.g., style transfer, body tracking, face tracking) and augmented reality
- Learning efficient deep neural networks under memory and computation constraints for on-device applications
All submissions will be handled electronically via the workshop’s CMT Website. Click the following link to go to the submission site: https://cmt3.research.microsoft.com/EDLCV2020/
Papers should describe original and unpublished work on the topics listed above. Each paper will receive a double-blind review moderated by the workshop chairs. Authors should take the following into account:
- All papers must be written and presented in English.
- All papers must be submitted in PDF format. The workshop paper format guidelines are the same as for the main conference papers.
- The maximum paper length is 8 pages (excluding references). Note that shorter submissions are also welcome.
- Accepted papers will be published in the CVF open access archive as well as in IEEE Xplore.