CVPR 2020 - New Trends in Image Restoration and Enhancement workshop and challenges

NTIRE 2020: the 5th New Trends in Image Restoration and Enhancement workshop and challenges
In conjunction with CVPR 2020
Website: http://www.vision.ee.ethz.ch/ntire20/
Contact: radu.timofte [at] vision.ee.ethz.ch
Scope
Image restoration, enhancement, and manipulation are key computer vision tasks that aim to restore degraded image content, fill in missing information, or transform and/or manipulate an image to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed increased interest from the vision and graphics communities in these fundamental research topics. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.
Each step forward eases the use of images by people or computers for further tasks, as image restoration, enhancement, and manipulation serve as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, consumer electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer further fertile ground for additional applications and faster methods.
This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.
This workshop builds upon the success of the previous NTIRE editions at CVPR 2017, 2018, and 2019, and at ACCV 2016. Moreover, it relies on all the people associated with the CLIC 2018, 2019, and 2020, PIRM 2018, AIM 2019, and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants, and winning teams.
Topics
Papers addressing topics related to image restoration, enhancement and manipulation are invited. The topics include, but are not limited to:
● Image/video inpainting
● Image/video deblurring
● Image/video denoising
● Image/video upsampling and super-resolution
● Image/video filtering
● Image/video de-hazing, de-raining, de-snowing, etc.
● Demosaicing
● Image/video compression
● Removal of artifacts, shadows, glare and reflections, etc.
● Image/video enhancement: brightening, color adjustment, sharpening, etc.
● Style transfer
● Hyperspectral imaging
● Underwater imaging
● Methods robust to changing weather conditions / adverse outdoor conditions
● Image/video restoration, enhancement, manipulation on constrained settings
● Image/video processing on mobile devices
● Visual domain translation
● Multimodal translation
● Perceptual enhancement
● Perceptual manipulation
● Image/video generation and hallucination
● Image/video quality assessment
● Image/video semantic segmentation, depth estimation
● Studies and applications of the above.
Submission
A paper submission must be in English, in PDF format, and at most 8 pages (excluding references) in CVPR style. The paper format must follow the same guidelines as all CVPR submissions.
http://cvpr2020.thecvf.com/submission/main-conference/author-guidelines
The review process is double-blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.
Dual submission is allowed only with the CVPR main conference. If a paper is also submitted to CVPR and accepted, it cannot be published at both CVPR and the workshop.
For the paper submissions, please go to the online submission site
https://cmt3.research.microsoft.com/NTIRE2020
Accepted and presented papers will be published after the conference in the CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the Computer Vision Foundation (www.cv-foundation.org).
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example for detailed formatting instructions. If you use a different document processing system, see the CVPR author instructions page.
Author Kit: http://cvpr2020.thecvf.com/sites/default/files/2019-09/cvpr2020AuthorKit.zip
Workshop Dates
● Submission Deadline: March 15, 2020
● Decisions: April 05, 2020
● Camera Ready Deadline: April 15, 2020
NTIRE 2020 has the following associated groups of challenges (ONGOING!):
Image challenges:
● perceptual extreme super-resolution
● real world super-resolution (processing artifacts and smartphone camera)
● real denoising (rawRGB and sRGB)
● deblurring (on smartphone and on desktop)
● burst demoireing (static and dynamic)
● spectral reconstruction from RGB (clean and real world)
● dehazing (nonhomogeneous haze)
Video challenges:
● quality mapping (supervised and weakly supervised)
● deblurring
Participation
To learn more about the challenges and to participate:
http://www.vision.ee.ethz.ch/ntire20/
Challenges Dates
● Release of train data: December 17, 2019
● Validation server online: January 06, 2020
● Competitions end: March 23, 2020
Organizers
● Radu Timofte, ETH Zurich, Switzerland
● Martin Danelljan, ETH Zurich, Switzerland
● Shuhang Gu, ETH Zurich, Switzerland
● Kai Zhang, ETH Zurich, Switzerland
● Lei Zhang, The Hong Kong Polytechnic University
● Ming-Hsuan Yang, University of California at Merced, US
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland
● Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
● Codruta O. Ancuti, University Politehnica Timisoara, Romania
● Kyoung Mu Lee, Seoul National University, Korea
● Michael S. Brown, York University, Canada
● Eli Shechtman, Adobe Research
● Zhiwu Huang, ETH Zurich, Switzerland
● Seungjun Nah, Seoul National University, Korea
● Abdelrahman Kamel Siddek Abdelhamed, York University, Canada
● Mahmoud Afifi, York University, Canada
● Boaz Arad, Voyage 81, Israel
● Shanxin Yuan, Huawei Noah's Ark Lab, UK
● Gregory Slabaugh, Huawei Noah's Ark Lab, UK
Program Committee (to be updated)
Cosmin Ancuti, Universitatea Politehnica Timisoara, Romania
Nick Barnes, Data61, Australia
Michael S. Brown, York University, Canada
Subhasis Chaudhuri, IIT Bombay, India
Sunghyun Cho, Samsung
Christophe De Vleeschouwer, Université catholique de Louvain (UCL), Belgium
Chao Dong, SenseTime
Weisheng Dong, Xidian University, China
Alexey Dosovitskiy, Intel Labs
Touradj Ebrahimi, EPFL, Switzerland
Michael Elad, Technion, Israel
Corneliu Florea, University Politehnica of Bucharest, Romania
Alessandro Foi, Tampere University of Technology, Finland
Peter Gehler, University of Tübingen, MPI Intelligent Systems, Amazon, Germany
Bastian Goldluecke, University of Konstanz, Germany
Luc Van Gool, ETH Zürich and KU Leuven, Belgium
Shuhang Gu, ETH Zürich, Switzerland
Michael Hirsch, Amazon
Hiroto Honda, DeNA Co., Japan
Jia-Bin Huang, Virginia Tech, US
Michal Irani, Weizmann Institute, Israel
Phillip Isola, UC Berkeley, US
Zhe Hu, Light.co
Sing Bing Kang, Microsoft Research, US
Jan Kautz, NVIDIA Research, US
Seon Joo Kim, Yonsei University, Korea
Vivek Kwatra, Google
In So Kweon, KAIST, Korea
Christian Ledig, Twitter Inc.
Kyoung Mu Lee, Seoul National University, South Korea
Seungyong Lee, POSTECH, South Korea
Stephen Lin, Microsoft Research Asia
Chen Change Loy, Chinese University of Hong Kong
Vladimir Lukin, National Aerospace University, Ukraine
Kai-Kuang Ma, Nanyang Technological University, Singapore
Vasile Manta, Technical University of Iasi, Romania
Yasuyuki Matsushita, Osaka University, Japan
Peyman Milanfar, Google and UCSC, US
Rafael Molina Soriano, University of Granada, Spain
Yusuke Monno, Tokyo Institute of Technology, Japan
Hajime Nagahara, Osaka University, Japan
Vinay P. Namboodiri, IIT Kanpur, India
Sebastian Nowozin, Microsoft Research Cambridge, UK
Federico Perazzi, Disney Research
Aleksandra Pizurica, Ghent University, Belgium
Sylvain Paris, Adobe
Fatih Porikli, Australian National University, NICTA, Australia
Hayder Radha, Michigan State University, US
Tobias Ritschel, University College London, UK
Antonio Robles-Kelly, CSIRO, Australia
Stefan Roth, TU Darmstadt, Germany
Aline Roumy, INRIA, France
Jordi Salvador, Amazon, US
Yoichi Sato, University of Tokyo, Japan
Konrad Schindler, ETH Zurich, Switzerland
Samuel Schulter, NEC Labs America
Nicu Sebe, University of Trento, Italy
Eli Shechtman, Adobe Research, US
Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan
Wenzhe Shi, Twitter Inc.
Alexander Sorkine-Hornung, Disney Research
Sabine Süsstrunk, EPFL, Switzerland
Yu-Wing Tai, Tencent Youtu
Hugues Talbot, Université Paris Est, France
Robby T. Tan, Yale-NUS College, Singapore
Masayuki Tanaka, Tokyo Institute of Technology, Japan
Jean-Philippe Tarel, IFSTTAR, France
Radu Timofte, ETH Zürich, Switzerland
George Toderici, Google, US
Ashok Veeraraghavan, Rice University, US
Jue Wang, Megvii Research, US
Chih-Yuan Yang, UC Merced, US
Jianchao Yang, Snapchat
Ming-Hsuan Yang, University of California at Merced, US
Qingxiong Yang, Didi Chuxing, China
Jong Chul Ye, KAIST, Korea
Jason Yosinski, Uber AI Labs, US
Wenjun Zeng, Microsoft Research
Lei Zhang, The Hong Kong Polytechnic University
Wangmeng Zuo, Harbin Institute of Technology, China
Speakers (TBA)
Sponsors (TBA)
Samsung
Huawei
ETH Zurich / CVL
Contact
Email: radu.timofte [at] vision.ee.ethz.ch
Website: http://www.vision.ee.ethz.ch/ntire20/