Humans’ understanding of the world is highly structured: we perceive and manipulate objects, remember events, construct hierarchies, reason over plans, develop algorithms, and elaborate formal scientific and mathematical theories. However, in the majority of “end-to-end” deep learning research, these structures are only implicitly represented (e.g., in the weights or activations of a neural network). While such research has made impressive strides toward more powerful language and perceptual modeling, the implicit representations employed by these methods may come at a cost to generalization, data efficiency, and interpretability.
This Research Topic focuses on ways to address these limitations in deep learning systems by explicitly incorporating structured knowledge, reasoning, and planning into the design of sequential decision-making models.
We invite researchers to submit articles on the following topics in the context of sequential decision making or planning:
● Graph representation learning
● Program synthesis
● Self-supervised learning
● Relational reinforcement learning
● Probabilistic programming
● Neurosymbolic systems
● Knowledge retrieval & integration
● Object-oriented learning
● Causal reasoning
● Exploration and hypothesis testing in reinforcement learning
● Learning world models
● Leveraging world models for decision making
Submissions should generally go beyond reporting metrics such as prediction error, classification accuracy, or reward on standard benchmarks. Instead, we encourage submissions to emphasize how the work improves on one or more of the following:
● Zero- or few-shot generalization
● Data-efficient transfer to new tasks
● Interpretability
● Modularity
● Other novel evaluation metrics or environments
We also encourage submissions of review, vision, or position pieces on the above topics.