%A Mhaskar, Hrushikesh N.
%D 2020
%J Frontiers in Applied Mathematics and Statistics
%G English
%K kernel-based approximation, distributed learning, machine learning, inverse problems, probability estimation
%R 10.3389/fams.2020.00030
%N 30
%8 2020-October-20
%9 Original Research
%! Analysis of massive data
%T Kernel-Based Analysis of Massive Data
%U https://www.frontiersin.org/article/10.3389/fams.2020.00030
%V 6
%0 Journal Article
%@ 2297-4687
%X Dealing with massive data is a challenging task for machine learning. An important aspect of machine learning is function approximation. In the context of massive data, some of the commonly used tools for this purpose are sparsity, divide-and-conquer, and distributed learning. In this paper, we develop a very general theory of approximation by networks, which we have called eignets, to achieve local, stratified approximation. The very massive nature of the data allows us to use these eignets to solve inverse problems, such as finding a good approximation to the probability law that governs the data and finding the local smoothness of the target function near different points in the domain. In fact, we develop a wavelet-like representation using our eignets. Our theory is applicable to approximation on a general locally compact metric measure space. Special examples include approximation by periodic basis functions on the torus, zonal function networks on a Euclidean sphere (including smooth ReLU networks), Gaussian networks, and approximation on manifolds. We construct pre-fabricated networks so that no data-based training is required for the approximation.