Special Issue on Discriminative Learning for Model Optimization and Statistical Inference
Model optimization and statistical inference have played a central role in various applications of computational intelligence, data analytics, and computer vision. Traditional model-centric learning approaches require properly crafted optimization and inference algorithms, as well as carefully tuned parameters. Recently, discriminative learning techniques have demonstrated their power for process-centric learning. The resulting solutions are closely related to a variety of statistical and optimization models, such as sparse representation, structured regression, and conditional random fields, and are empowered by effective computational techniques such as bi-level optimization and partial differential equations (PDEs). Moreover, many deep learning models have been shown to be closely tied to discriminative learning models. For example, a problem-specific deep architecture can be formed by unfolding the model inference as an iterative process, whose parameters can be jointly learned from training data with a discriminative loss. Such a viewpoint motivates the incorporation of domain expertise and problem structures into the design of deep architectures, and helps with the interpretation and performance improvement of deep models.
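The unfolding idea above can be made concrete with sparse coding: each iteration of ISTA (iterative shrinkage-thresholding) has the form of a recurrent network layer, and in a LISTA-style network the per-layer matrices and thresholds become trainable parameters. The following is a minimal NumPy sketch of the untrained unrolled network; the dictionary, regularization weight, and layer count are illustrative assumptions, not part of the call.

```python
import numpy as np

def soft_threshold(v, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(x, D, lam=0.1, n_layers=100):
    """Unfold n_layers ISTA steps for min_z 0.5*||x - D z||^2 + lam*||z||_1.

    Each step z <- soft(W x + S z, theta) is one "layer"; in a learned
    (LISTA-style) architecture, W, S, and theta would be per-layer
    trainable parameters instead of being fixed by the dictionary D.
    """
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    W = D.T / L                              # input-to-layer weight
    S = np.eye(D.shape[1]) - D.T @ D / L     # layer-to-layer (recurrent) weight
    theta = lam / L                          # shrinkage threshold
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):                # one iteration = one network layer
        z = soft_threshold(W @ x + S @ z, theta)
    return z

# Toy example: recover a sparse code from its noiseless measurement.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
z_true = np.zeros(50)
z_true[[3, 17, 31]] = [1.5, -2.0, 1.0]
x = D @ z_true
z_hat = unrolled_ista(x, D, lam=0.05, n_layers=100)
```

Because every operation is differentiable, the same forward pass written in an autodiff framework lets the unrolled parameters be trained end-to-end with a discriminative loss, which is the viewpoint this special issue targets.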
This special issue aims at promoting first-class research along this direction and offers a timely collection of information to benefit researchers and practitioners. We welcome high-quality original submissions addressing both novel theoretical and modeling progress and real-world applications that benefit discriminative learning for model optimization and statistical inference. Topics of interest include, but are not limited to:
- Task-driven learning for model optimization and/or statistical inference.
- Novel architectures and algorithms for bi-level optimization and/or PDEs.
- Problem-specific deep architectures for solving model optimization and statistical inference.
- Integration of optimization-based, statistical learning, and inference models with deep learning models.
- Sparse representation motivated deep architectures.
- Structured regression motivated deep architectures.
- Conditional random field motivated recurrent neural networks.
- Novel interpretative frameworks on the working mechanism of representative deep learning models.
- Theoretical analysis of deep learning models and algorithms: convergence, optimality, generalization, stability, and sensitivity analysis.
- Applications based on the above-described models and algorithms: (1) image enhancement, restoration, and synthesis; (2) optical flow, stereo matching, camera localization, and normal estimation; (3) visual recognition, detection, segmentation, and scene understanding; (4) pattern classification, clustering, and dimensionality reduction; (5) medical image analysis and other novel application domains.
Important Dates:
- 15 July 2017 – Deadline for manuscript submission (no deadline extension)
- 30 September 2017 – Reviewers' comments to authors
- 15 November 2017 – Deadline for submitting revised manuscripts
- 30 December 2017 – Final decision of acceptance to authors
- 30 April 2018 – Tentative publication date
Guest Editors:
- Wangmeng Zuo, Harbin Institute of Technology, China.
- Zhangyang (Atlas) Wang, Texas A&M University, USA.
- Xi Peng, Institute for Infocomm Research, A*STAR, Singapore.
- Ling Shao, University of East Anglia, UK.
- Danil Prokhorov, Toyota Research Institute North America, USA.
- Horst Bischof, Graz University of Technology, Austria.
Submission Instructions:
- Read the Information for Authors at http://cis.ieee.org/tnnls.
- Submit your manuscript at the TNNLS webpage (http://mc.manuscriptcentral.com/tnnls) and follow the submission procedure. Please clearly indicate on the first page of the manuscript and in the cover letter that the manuscript is submitted to this special issue. Send an email to the leading guest editor, Prof. Wangmeng Zuo (firstname.lastname@example.org), with the subject "TNNLS special issue submission" to notify us of your submission.
- Early submissions are welcome. We will start the review process as soon as we receive your contributions.