About this Research Topic
With the deployment of the Internet of Things (IoT), an increasing number of sensors are connected to the Internet, generating large volumes of streaming, multimodal data. These data exhibit distinct statistical characteristics across time and sensing modalities, which traditional learning methods can hardly capture. Continual and multimodal learning allows the integration, adaptation, and generalization of knowledge learned from heterogeneous experiential data to new situations. Therefore, continual and multimodal learning is an important step toward efficient ubiquitous computing on IoT devices.
The major challenges in combining continual learning and multimodal learning on real-world data include 1) how to fuse and transfer knowledge across multimodal data under constrained computational resources, 2) how to learn continually despite missing, imbalanced, or noisy data under constrained computational resources, 3) how to effectively preserve privacy and maintain security when learning from streaming multimodal data collected by multiple stakeholders, and 4) how to develop large-scale distributed learning systems that learn efficiently from continual and multimodal data.
This Research Topic aims to bring together researchers from different disciplines to tackle these challenges. It focuses on the intersection and combination of continual machine learning and multimodal modeling, with applications to the Internet of Things. It welcomes submissions addressing these challenges in different applications and domains, as well as algorithmic and systems approaches to leveraging continual learning on multimodal data. The Research Topic further seeks to build a community that systematically handles the streaming multimodal data widely available in real-world computing systems. Authors who have presented their work at IJCAI workshops may submit substantially extended manuscripts to this Research Topic. We also strongly encourage submissions from all researchers.
To allow a systematic discussion and study of continual and multimodal learning, themes that submissions can address include, but are not limited to, the following:
• on-device continual and multimodal learning under constrained computational resources
• continual and multimodal federated learning for Internet of Things
• effective model distillation from large-scale multimodal pretraining
• discrete representation learning and hash learning
• meta-learning and lifelong learning
• on-device interactive learning
• effective knowledge transfer and fusion across multimodal data in continual learning
• transfer learning and federated learning with multimodal streaming data
• multi-task learning on multimodal data
• balancing on-device and off-device learning on streaming multimodal data
• managing high-volume data flows from streaming multimodal data
The Research Topic also welcomes continual learning methods that target:
• data distribution changes caused by fast-changing, dynamic physical environments
• missing, imbalanced, or noisy data under multimodal data scenarios
Finally, novel applications and interfaces built on streaming multimodal data are also relevant topics. Example data modalities include natural language, speech, image, video, audio, virtual reality, biochemistry, WiFi, GPS, RFID, vibration, accelerometer, pressure, temperature, and humidity, among others.
The Topic Editors Tong Yu, Handong Zhao, and Ruiyi Zhang are employed by Adobe Research. The rest of the Topic Editors have no competing interests to declare.
Keywords: Continual learning, Multimodal fusion and learning, Internet of Things, IoT, Sensing, Vision and language
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.