Research Topic

Deep Learning with Limited Labeled Data for Vision, Audio, and Text

About this Research Topic

Deep learning’s impressive performance on complex classification tasks has made deep neural networks the standard tool for many applications, such as image classification, document summarization, and speaker identification. This performance is commonly achieved with supervised learning, in which deep neural networks are trained on very large labeled datasets; benchmark datasets in these areas typically contain thousands to millions of labeled examples. Consequently, research in deep learning has exploded over the last decade, following the creation of large labeled benchmark datasets such as ImageNet.

However, collecting, cleaning, and labeling training, validation, and test data for potential new applications of deep learning is hard, even when the raw data are plentiful. Manually labeling thousands of samples is often impractical for new applications because labeling is labor-intensive and, in domains such as medicine, defense, and science, samples can only be correctly labeled by a small number of experts. Furthermore, care must be taken when labeling training samples, since label noise can degrade a model's accuracy and generalization.

On the other hand, humans learn to recognize new object types and words (both written and spoken) from only one or a few examples. Designing deep neural networks that can learn from limited labeled examples is an open and active research area. Novel deep learning systems that learn from a few labeled examples to recognize new classes, and that adapt continuously to changing scenarios, would greatly reduce the effort required to develop deep learning systems for new applications and extend the lifetime of production systems.
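As one concrete illustration of the few-shot setting, the sketch below classifies new examples by their distance to class "prototypes" averaged from a handful of labeled embeddings, in the spirit of prototypical networks. The random 64-dimensional vectors stand in for the output of a pretrained encoder; the dimensions, shot counts, and data are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of few-shot classification by nearest class prototype,
# in the spirit of prototypical networks. The random 64-d vectors below
# stand in for the output of a pretrained encoder (an assumption for
# illustration, not a prescribed architecture).
import torch

def prototypes(support: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Average the few labeled "support" embeddings for each class."""
    return torch.stack([support[labels == c].mean(dim=0) for c in range(n_classes)])

def classify(queries: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query embedding to its nearest prototype (Euclidean)."""
    return torch.cdist(queries, protos).argmin(dim=1)

# Toy 3-way, 5-shot episode: 15 support and 6 query embeddings.
torch.manual_seed(0)
support = torch.randn(15, 64)
labels = torch.arange(3).repeat_interleave(5)   # [0,0,0,0,0,1,...,2]
queries = torch.randn(6, 64)
print(classify(queries, prototypes(support, labels, n_classes=3)))
```

Because only the per-class means need to be recomputed, such a classifier can absorb new classes from a few labeled examples without retraining the encoder.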

Furthermore, significant progress has been made in semi-supervised and unsupervised learning. Recent approaches achieve performance comparable to fully supervised training on large labeled datasets, lowering the barrier to applying deep learning in new applications.
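As one example of the semi-supervised recipes behind this progress, the sketch below combines a supervised loss on a small labeled batch with a loss on confident model-assigned pseudo-labels for unlabeled data (FixMatch-style confidence thresholding, without the augmentation pipeline). The linear classifier, the 0.95 threshold, and the toy batches are illustrative assumptions.

```python
# Minimal sketch of confidence-thresholded pseudo-labeling, one common
# semi-supervised recipe (FixMatch-style thresholding, without the
# augmentation pipeline). The linear classifier, the 0.95 threshold, and
# the random toy batches are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(32, 10)                      # stand-in for a real network
opt = torch.optim.SGD(model.parameters(), lr=0.1)
THRESHOLD = 0.95                               # keep only confident pseudo-labels

def semi_supervised_step(x_lab, y_lab, x_unlab):
    # Standard supervised loss on the small labeled batch.
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels: the model's own confident predictions on unlabeled data.
    with torch.no_grad():
        conf, pseudo = F.softmax(model(x_unlab), dim=1).max(dim=1)
        mask = (conf >= THRESHOLD).float()

    # Unsupervised loss, masked so uncertain predictions contribute nothing.
    per_example = F.cross_entropy(model(x_unlab), pseudo, reduction="none")
    unsup_loss = (per_example * mask).mean()

    loss = sup_loss + unsup_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batches: 8 labeled and 64 unlabeled 32-d examples.
x_l, y_l = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(semi_supervised_step(x_l, y_l, torch.randn(64, 32)))
```

The confidence mask is the key design choice: it keeps early, low-confidence predictions from reinforcing their own errors.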

This Research Topic focuses on learning with fewer labels for deep neural networks. Application areas include vision, language processing, multimedia, and speech (e.g., machine translation). Multi-modal tasks come with their own set of challenges and are of particular interest. Topics of interest include (but are not limited to) the following areas:

• Self-supervised and unsupervised learning methods
• Semi-supervised learning methods
• Weakly-supervised learning methods
• New methods for few-/zero-shot learning
• Meta-learning methods
• New applications in vision, text, and speech
• Multi-modal learning with limited labels (e.g., visual question answering, fusion)
• Life-long/continual/incremental learning methods
• Novel domain adaptation methods
• Theoretical understanding of learning with limited labels
• Biologically inspired learning with limited labels
• Novel evaluation metrics


Keywords: self-supervised learning, unsupervised learning, semi-supervised learning, representation learning, machine learning, zero-shot learning, limited labels


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

About Frontiers Research Topics

With their unique mixes of varied contributions from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings and historical advances in a hot research area! Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.

Topic Editors


Submission Deadlines

Abstract: 15 November 2021
Manuscript: 27 December 2021

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:

