Research Topic

Deep Continuous-Discrete Machine Learning (DeCoDeML)

About this Research Topic

Since the beginnings of machine learning – and indeed already hinted at in Alan Turing’s groundbreaking 1950 paper “Computing Machinery and Intelligence” – two opposing approaches have been pursued: on the one hand, approaches that relate learning to knowledge and mostly use the “discrete” formalisms of formal logic; on the other hand, approaches that, mostly motivated by biological models, investigate learning in artificial neural networks and predominantly use “continuous” methods from numerical optimization and statistics. The recent successes of deep learning can be attributed to the latter, “continuous” approach, and are currently opening up new opportunities for computers to “perceive” the world and to act, with far-reaching consequences for industry, science, and society. This massive success in recognizing “continuous” patterns has catalyzed a new enthusiasm for artificial intelligence methods. However, today’s artificial neural networks are hardly suitable for learning and understanding “discrete” logical structures, and this is one of the major hurdles to further progress.
Accordingly, one of the biggest open problems is to clarify the connection between these two learning approaches (logical-discrete and neural-continuous). In particular, the role and benefits of prior knowledge need to be reassessed and clarified. The role of formal logic in ensuring sound reasoning must be related to perception through deep networks. Further, how prior knowledge can be used to make the results of deep learning more stable, and to explain and justify them, remains to be worked out. The extraction of symbolic knowledge from networks also needs to be reexamined in light of the successes of deep learning. Finally, it is an open question whether and how the principles responsible for the success of deep learning methods can be transferred to symbolic learning.
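To make the point about prior knowledge concrete, one widely used bridge between the two views is to relax a logical rule into a differentiable penalty that is added to the ordinary training loss, so that gradient-based learning is steered toward models that satisfy the rule. The sketch below is a minimal illustration of this idea, not a method prescribed by this call; the example rule, the function names, and the choice of product t-norm and Reichenbach implication are all assumptions made for the example.

```python
# Minimal sketch: relaxing the rule "bird AND NOT penguin -> flies" into a
# differentiable penalty, in the spirit of real-valued (fuzzy) logics.
# Everything here is illustrative; nothing is prescribed by the call itself.

def soft_and(a, b):
    # Product t-norm: a differentiable stand-in for logical conjunction.
    return a * b

def soft_implies(a, b):
    # Reichenbach implication (1 - a + a*b): a differentiable "a implies b".
    return 1.0 - a + a * b

def rule_penalty(p_bird, p_penguin, p_flies):
    # Near 0 when the predicted probabilities satisfy the rule, approaching 1
    # as they violate it. Added (suitably weighted) to a standard data loss,
    # it injects the symbolic prior into continuous, gradient-based training.
    truth = soft_implies(soft_and(p_bird, 1.0 - p_penguin), p_flies)
    return 1.0 - truth

# Probabilities as they might come from a classifier's output layer:
print(rule_penalty(0.9, 0.1, 0.95))  # consistent with the rule -> ~0.04
print(rule_penalty(0.9, 0.1, 0.05))  # violates the rule -> ~0.77
```

In a full system this penalty would be weighted and summed with a standard loss such as cross-entropy over batches of network outputs; it is one concrete, elementary instance of the hybrid discrete-continuous architectures asked about in the questions below.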

This Article Collection calls for papers that address the topic of discrete-continuous learning, including, but not limited to, the following questions:
1. How can continuous neural networks contribute to the learning of logical artifacts, such as formulas, logic programs, database queries, and integrity constraints? How can they be applied to tune deductive systems?
2. How can discrete logical structures, and the formalisms constructed around them, allow learning systems to take advantage of explicitly specified logical rules?
3. How can hybrid discrete-continuous architectures improve learning over purely discrete or purely continuous architectures? What are the trade-offs between logic-based and learning-based methods?


Keywords: machine learning, neural networks, deep learning, continuous learning, discrete learning


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

About Frontiers Research Topics

With their unique mix of varied contributions, from Original Research to Review Articles, Research Topics unite the most influential researchers, the latest key findings, and historical advances in a hot research area. Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.

Topic Editors


Submission Deadlines

Manuscript: 30 November 2020

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:

