Both frequentist and Bayesian inference are powerful tools for modeling random objects, but the latter places greater emphasis on quantifying uncertainty. Bayesian inference offers generality and flexibility in updating model estimation and prediction, moving from prior random measures to posterior inference via MCMC sampling or variational inference as more evidence and information become available. Artificial Intelligence (AI) is an intrinsically data-driven field: it rests on human-machine collaboration for generating data, developing algorithms, and evaluating results to make decisions. Standard training in AI and its various learning paradigms is typically a process of finding the best choice of weights or parameters through an optimization algorithm. From a probabilistic perspective, this process links the approximating role of probability distributions in statistics with that of objective functions in AI, which motivates bringing Bayesian philosophy and methods into AI settings.
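For concreteness, the prior-to-posterior update described above is governed by Bayes' theorem; the symbols below (parameters \(\theta\), observed data \(\mathcal{D}\)) are generic placeholders rather than notation taken from any specific model:

\[
p(\theta \mid \mathcal{D}) \;=\; \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{\int p(\mathcal{D} \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}
\]

MCMC sampling and variational inference are two standard families of methods for approximating this posterior when the normalizing integral is intractable.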
Bayesian inference has found a wide range of applications in AI, including Bayesian networks, Bayesian spatio-temporal models, Bayesian inference and learning for neural networks and deep learning, Bayesian meta-learning, Bayesian reinforcement learning, and Bayesian supervised, semi-supervised, and unsupervised learning. It has also been used to solve real-world problems, for example, Bayesian networks for social media data and Bayesian decoding of brain images. However, there remains substantial room for integrating Bayesian methods within the AI framework. For example, many questions are still open: how to quantify prior knowledge, how to justify the probability distribution underlying the likelihood of the parameters, how to implement Bayesian inference efficiently from posterior distributions, how to measure the effect of minor perturbations to the prior and the data, and how to improve the convergence rates of algorithms that implement Bayesian learning.
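As a minimal sketch of what "implementing Bayesian inference from posterior distributions" involves in practice, the following illustrative Python snippet runs a random-walk Metropolis-Hastings sampler for the success probability of a Bernoulli model under a uniform prior. The data, step size, and iteration counts are hypothetical choices for demonstration only, not part of the call itself.

import numpy as np

# Illustrative sketch: random-walk Metropolis-Hastings for the posterior
# of a Bernoulli success probability theta with a Uniform(0, 1) prior.
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=50)           # hypothetical observations

def log_posterior(theta):
    """Log prior (uniform) plus log likelihood, up to an additive constant."""
    if not 0.0 < theta < 1.0:
        return -np.inf
    k, n = data.sum(), data.size
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

samples, theta = [], 0.5                        # start from the prior mean
for _ in range(20_000):
    proposal = theta + rng.normal(scale=0.1)    # random-walk proposal
    # Accept with the Metropolis ratio; otherwise keep the current state.
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

posterior_draws = np.array(samples[5_000:])     # discard burn-in
print(posterior_draws.mean(), posterior_draws.std())

Even in this toy setting, the open questions listed above appear directly: the choice of prior, the efficiency of the sampler, and its sensitivity to the data and tuning parameters.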
We call for applications of Bayesian concepts in AI under the statistical framework, together with the related Bayesian theory. Topics include (but are not limited to) the following areas:
• Recent developments in Bayesian theory and methodology;
• Applications of Bayesian methods in AI;
• Computing technology for Bayesian inference: algorithms and their implementation;
• Applying Bayesian approaches and AI techniques to solving real-world problems;
• Robustness of Bayesian learning and Bayesian sensitivity analysis in AI.