
Original Research Article, provisionally accepted. The full text will be published soon.

Front. Robot. AI | doi: 10.3389/frobt.2019.00077

Incremental and Parallel Machine Learning Algorithms with Automated Learning Rate Adjustments

  • Department of Computer Science, Meiji University, Japan

Existing machine learning algorithms for minimizing a convex function over a closed convex set suffer from slow convergence because their learning rates must be fixed before the algorithms are run.
This paper proposes two machine learning algorithms incorporating the line search method, which automatically and algorithmically finds appropriate learning rates at run-time.
One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each component of the objective function; the other is based on the parallel subgradient algorithm, which uses the components independently, in parallel.
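As a rough sketch of the two update schemes (not the paper's exact algorithms, which add the line search described below), the following toy example minimizes a sum of nonsmooth components over a box constraint. The problem, the diminishing step rule, and all function names here are illustrative assumptions.

```python
import numpy as np

# Toy nonsmooth convex problem: minimize f(x) = sum_i |x - a_i|
# over the closed convex box [-1, 1]^n; each component f_i(x) = |x - a_i|
# has the subgradient sign(x - a_i).

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection onto the closed convex set [lo, hi]^n."""
    return np.clip(x, lo, hi)

def incremental_subgradient(a, steps=200, lr=0.5):
    """Sequentially and cyclically apply one component's subgradient per inner step."""
    x = np.zeros_like(a[0])
    for k in range(steps):
        for a_i in a:  # one cyclic pass over the components
            g = np.sign(x - a_i)                 # subgradient of f_i at x
            x = project_box(x - lr / (k + 1) * g)
    return x

def parallel_subgradient(a, steps=200, lr=0.5):
    """Use every component's subgradient at the same point, then average."""
    x = np.zeros_like(a[0])
    for k in range(steps):
        # each component update could run on its own worker; vectorized here
        ys = [project_box(x - lr / (k + 1) * np.sign(x - a_i)) for a_i in a]
        x = np.mean(ys, axis=0)
    return x
```

For this problem the minimizer is the median of the points `a_i`, so both schemes should approach it as the diminishing step sizes shrink.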
These algorithms can be applied to the constrained nonsmooth convex optimization problems arising in support vector machine learning without requiring precise tuning of the learning rates.
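To make the SVM connection concrete: the soft-margin training objective is a sum of nonsmooth convex components, one per training example, so it fits the incremental and parallel schemes directly. The component split and the function below are an illustrative assumption, not code from the paper.

```python
import numpy as np

# Hypothetical per-example component of the soft-margin SVM objective:
# f_i(w) = max(0, 1 - y_i * (x_i @ w)) + (lam / 2) * (w @ w)
# (hinge loss plus an L2 regularizer shared across components).

def svm_subgradient(w, x_i, y_i, lam=0.01):
    """A subgradient of the component f_i at w."""
    g = lam * w
    if 1.0 - y_i * (x_i @ w) > 0.0:  # margin violated: hinge term is active
        g = g - y_i * x_i
    return g
```

One such subgradient per example is exactly what the incremental scheme consumes cyclically and the parallel scheme consumes all at once.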
The proposed line search method can determine learning rates satisfying weaker conditions than those used in the existing machine learning algorithms.
This implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems.
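The paper's exact line-search conditions are given in the full text; as a rough, hypothetical stand-in, an Armijo-style backtracking search illustrates the general idea of finding a learning rate algorithmically at run-time rather than fixing it in advance. The sufficient-decrease constant and halving factor below are assumptions.

```python
import numpy as np

# Armijo-style backtracking search (an illustrative stand-in; the paper's
# actual conditions are weaker than classical sufficient-decrease rules).
def backtracking_step(f, x, g, project, lr0=1.0, beta=0.5, sigma=0.1,
                      max_halvings=30):
    """Shrink the trial learning rate until the projected step achieves a
    sufficient decrease of f; return the accepted rate and the new point."""
    lr = lr0
    for _ in range(max_halvings):
        y = project(x - lr * g)
        if f(y) <= f(x) - sigma * lr * float(g @ g):  # sufficient decrease
            return lr, y
        lr *= beta  # halve the trial rate and retry
    return lr, project(x - lr * g)
```

The point is that the accepted rate depends on the current iterate, so no schedule has to be tuned before running the algorithm.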
We show that they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem under certain conditions.
The main contribution of this paper is the provision of three kinds of experiments showing that the two algorithms can solve concrete experimental problems faster than the existing algorithms.
First, we show that the proposed algorithms have performance advantages over the existing ones in solving a test problem.
Second, we compare the proposed algorithms with a different algorithm, Pegasos, which is designed to train support vector machines efficiently, in terms of prediction accuracy, objective function value, and computational time.
Finally, we use one of our algorithms to train a multilayer neural network and discuss its applicability to deep learning.

Keywords: Support Vector Machines, neural networks, Nonsmooth convex optimization, Incremental Subgradient Algorithm, Parallel Subgradient Algorithm, Line search algorithm, Parallel Computing

Received: 03 Dec 2018; Accepted: 08 Aug 2019.

Edited by:

Bologna Guido, Université de Genève, Switzerland

Reviewed by:

Önder Tutsoy, Adana Science and Technology University, Turkey
Narin Petrot, Naresuan University, Thailand  

Copyright: © 2019 Hishinuma and Iiduka. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mr. Kazuhiro Hishinuma, Department of Computer Science, Meiji University, Kawasaki, Japan, kaz@cs.meiji.ac.jp