IEEE Transactions on Neural Networks and Learning Systems, Special Issue on Effective Feature Fusion in Deep Neural Networks







IEEE Transactions on Neural Networks and Learning Systems Call for Papers



Special Issue on



Effective Feature Fusion in Deep Neural Networks



https://cis.ieee.org/images/files/Documents/call-for-papers/tnnls/SI_EFDNN_TNNLS_CFP.pdf



Submission deadline: Nov. 30, 2020. First notification: Feb. 1, 2021



================================================================================



Owing to their powerful ability to learn hierarchical features, Deep Neural Networks (DNNs) have achieved great success in many intelligent perception systems operating on image and/or point cloud data, and are widely used in autonomous driving, visual surveillance, and human-machine interaction. For example, state-of-the-art performance in image classification, object detection, semantic segmentation, and cross-modal perception is obtained by different kinds of DNNs. To a great degree, the success of DNNs stems from properly fusing hierarchical features that differ in semantic level, resolution/scale, role, sensitivity, and so on. Representative fusion schemes include dense connection, residual learning, skip connection, top-down feature pyramids, and attention-based feature weighting. However, there remains considerable room for developing more effective feature fusion to improve the performance of DNNs so that machine perception can approach or exceed human perception.
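As context for prospective authors, the sketch below is a minimal, purely illustrative PyTorch example of one such scheme, attention-based feature weighting between two feature maps of the same shape. The module name AttentionFusion, the gating design, and all hyperparameters are hypothetical choices for illustration only and are not prescribed by this call.

# Minimal illustrative sketch of attention-based feature fusion (hypothetical).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse two same-shaped feature maps via predicted per-channel weights."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # global context
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                              # weights in (0, 1)
        )

    def forward(self, x_low: torch.Tensor, x_high: torch.Tensor) -> torch.Tensor:
        # Concatenate along channels, predict weights, then blend the two inputs.
        w = self.gate(torch.cat([x_low, x_high], dim=1))
        return w * x_low + (1.0 - w) * x_high

if __name__ == "__main__":
    fuse = AttentionFusion(channels=64)
    a = torch.randn(2, 64, 32, 32)   # e.g. a low-level feature map
    b = torch.randn(2, 64, 32, 32)   # e.g. an upsampled high-level feature map
    print(fuse(a, b).shape)          # torch.Size([2, 64, 32, 32])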



This special issue focuses on investigating the problems and phenomena of existing feature fusion schemes, tackling the challenges of the semantic gap and of perceiving difficult objects and scenarios, and providing new ideas, theories, solutions, and insights for effective feature fusion in DNNs for image and/or point cloud data. The topics of interest include, but are not limited to:



Feature fusion for effective backbones and prediction



Feature fusion for image/video data using deep neural networks



Feature fusion for point cloud data using deep neural networks



Adaptive feature fusion networks



Criteria and loss functions for feature fusion in deep neural networks



Feature fusion for detecting/recognizing small objects



Feature fusion for detecting/recognizing occluded objects



Attention-based feature fusion in deep neural networks



Visualization and interpretation of feature fusion



Feature fusion for semantic segmentation



Feature fusion for object tracking



Feature fusion for cross-modal/domain learning



Feature fusion for 3D object detection



New feature fusion problems and applications



IMPORTANT DATES



- November 30, 2020: Deadline for manuscript submission



- February 1, 2021: Reviewers' comments to authors



- April 1, 2021: Deadline for submission of revised manuscripts



- June 1, 2021: Final decisions to authors



- July 1, 2021: Publication date (early access)



GUEST EDITORS



Yanwei Pang, Tianjin University, China, pyw@tju.edu.cn



Fahad Shahbaz Khan, Inception Institute of Artificial Intelligence, UAE, fahad.khan@liu.se



Xin Lu, Adobe Inc., USA, xinl@adobe.com



Fabio Cuzzolin, Oxford Brookes University, UK, fabio.cuzzolin@brookes.ac.uk



SUBMISSION INSTRUCTIONS



- Read the Information for Authors at http://cis.ieee.org/tnnls.



- Submit your manuscript at the TNNLS webpage (http://mc.manuscriptcentral.com/tnnls) and follow the submission procedure. Please clearly indicate on the first page of the manuscript and in the cover letter that the manuscript is submitted to this special issue. Send an email to the lead guest editor, Prof. Yanwei Pang (pyw@tju.edu.cn), with the subject "TNNLS special issue submission" to notify us of your submission.



- Early submissions are welcome. We will start the review process as soon as we receive your contributions.