About this Research Topic
However, due to the diversity of GSD, there is no single learning model that performs consistently across different tasks and datasets. Designing adaptive methods that learn from GSD in a task-aware and data-specific manner is therefore an important problem. Fortunately, emerging tools from the machine learning community have the potential to address this problem. Examples include automated machine learning, neural architecture search, meta-learning, and learning to learn. These techniques can help build learning methods that generalize well across different datasets and tasks. By proposing this Research Topic, we hope to draw interest from both academia and industry, with the goal of pushing learning methods for GSD to the next level.
In this Research Topic, we are broadly interested in the theory, methodology, new learning algorithms, and applications of GNNs on GSD. The following topics are of particular interest:
• Architecture design and theoretical understanding of GNNs
• Applying, extending, or customizing automated machine learning / learning to learn / meta-learning / neural architecture search methods to GNNs
• Applications or methodologies building on GNNs, including (but not limited to) complex networks, heterogeneous information networks, and knowledge bases
• Hyper-parameter tuning / bi-level optimization for GNNs / optimization techniques for GSD
• Exploration of the behavior of GNNs in novel settings, including (but not limited to) noisy, adversarial, and heterophilous settings
We also acknowledge the contribution of Zhitao (Rex) Ying in preparing and coordinating this project.
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.