Regularized Kernel Algorithms for Support Estimation

In the framework of non-parametric support estimation, we study the statistical properties of a set estimator defined by means of Kernel Principal Component Analysis. Under a suitable assumption on the kernel, we prove that the algorithm is strongly consistent with respect to the Hausdorff distance. We also extend the above analysis to a larger class of set estimators defined in terms of a low-pass filter function. Finally, we provide numerical simulations on synthetic data to highlight the role of the hyperparameters that affect the behavior of the algorithm.


INTRODUCTION
A classical issue in statistics is support estimation, i.e., the problem of learning the support of a probability distribution from a set of points independently sampled according to the distribution. For example, the Devroye-Wise algorithm [1] estimates the support with the union of suitable balls centered at the training points. In the last two decades, many algorithms have been proposed and their statistical properties analyzed; see [1-14] and references therein.
An instance of the above setting, which plays an important role in applications, is the problem of novelty/anomaly detection, see Campos et al. [15] for an updated review. In this context, in Hoffmann [16] the author proposed an estimator based on Kernel Principal Component Analysis (KPCA), first introduced in Schölkopf et al. [17] in the context of dimensionality reduction. The algorithm was successfully tested in many applications from computer vision to biochemistry [18][19][20][21][22][23][24]. In many of these examples the data are often represented by high dimensional vectors, but they actually live close to a nonlinear low dimensional submanifold of the original space, and the proposed estimator takes advantage of the fact that KPCA provides an efficient compression/dimensionality reduction of the original data [16,17], whereas many classical set estimators refer to the dimension of the original space, as it happens for the Devroye-Wise algorithm.
In this paper we prove that KPCA is a consistent estimator of the support of the distribution with respect to the Hausdorff distance. The result is based on an intriguing property of the reproducing kernel, called separating condition, first introduced in De Vito et al. [25]. This assumption ensures that any closed subset of the original space is represented in the feature space by a linear subspace. We show that this property remains true if the data are recentered to have zero mean in the feature space. Together with the results in De Vito et al. [25], we conclude that the consistency of KPCA algorithm is preserved by recentering of the data, which can be regarded as a degree of freedom to improve the empirical performance of the algorithm in a specific application.
Our main contribution is sketched in the next subsection together with some basic properties of KPCA and some relevant previous works. In section 2, we describe the mathematical framework and the related notations. Section 3 introduces the spectral support estimator and informally discusses its main features, whereas its statistical properties and the meaning of the separating condition for the kernel are analyzed in section 4. Finally section 5 presents the effective algorithm to compute the decision function and discusses the role of the two meta-parameters based on the previous theoretical analysis. In the Appendix (Supplementary Material), we collect some technical results.

Sketch of the Main Result and Previous Works
In this section we sketch our main result by first recalling the construction of the KPCA estimator introduced in Hoffmann [16]. We have at our disposal a training set {x_1, ..., x_n} ⊂ D ⊂ R^d of n points independently sampled according to some probability distribution P. The input space D is a known compact subset of R^d, but the probability distribution P is unknown and the goal is to estimate the support C of P from the empirical data. We recall that C is the smallest closed subset of D such that P[C] = 1 and we stress that C is in general a proper subset of D, possibly of low dimension.
Classical Principal Component Analysis (PCA) is based on the construction of the vector space V spanned by the first m eigenvectors associated with the largest eigenvalues of the empirical covariance matrix

    Σ_n = (1/n) Σ_{i=1}^n (x_i − x̄)(x_i − x̄)^⊤,

where x̄ = (1/n) Σ_{i=1}^n x_i is the empirical mean. However, if the data do not live on an affine subspace, the set V is not a consistent estimator of the support. In order to take into account non-linear models, following the idea introduced in Schölkopf et al. [17], we consider a feature map Φ from the input space D into the corresponding feature space H, which is assumed to be a Hilbert space, and we replace the empirical covariance matrix with the empirical covariance operator

    T^c_n = (1/n) Σ_{i=1}^n (Φ(x_i) − μ_n) ⊗ (Φ(x_i) − μ_n),

where μ_n = (1/n) Σ_{i=1}^n Φ(x_i) is the empirical mean in the feature space. As in PCA, we consider the subspace V_{m,n} of H spanned by the first m eigenvectors f_1, ..., f_m of T^c_n. According to the proposal in Hoffmann [16], we consider the following estimator of the support of the probability distribution P:

    Ĉ_n = {x ∈ D : ||Φ(x) − Π_{m,n}(Φ(x))|| ≤ τ_n},

where τ_n is a suitable threshold depending on the number of examples and Π_{m,n}(Φ(x)) is the projection of Φ(x) onto the affine subspace μ_n + V_{m,n}. We show that, under a suitable assumption on the feature map, called the separating property, Ĉ_n is a consistent estimator of C with respect to the Hausdorff distance between compact sets, see Theorem 3 of section 4.2.
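The linear-PCA version of this construction is easy to visualize; the following numpy sketch (our own toy example, not one of the paper's experiments) scores each point by its distance from the affine subspace x̄ + V:

```python
import numpy as np

def pca_support_score(X, m):
    """Distance of each row of X from the affine subspace spanned by the
    empirical mean plus the first m eigenvectors of the empirical
    covariance matrix (the linear analogue of the KPCA estimator)."""
    xbar = X.mean(axis=0)
    Xc = X - xbar
    cov = Xc.T @ Xc / len(X)              # empirical covariance matrix
    _, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    V = evecs[:, -m:]                     # top-m eigenvectors
    residual = Xc - Xc @ V @ V.T          # component outside the subspace
    return np.linalg.norm(residual, axis=1)

# points on a line in R^3: one principal direction explains everything,
# so the m = 1 scores vanish and any threshold tau > 0 recovers the data
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=200)
X = np.outer(t, [1.0, 2.0, -1.0])
scores = pca_support_score(X, m=1)
```

As the text observes, this linear estimator fails as soon as the support is not affine, which is what motivates the feature-space construction.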
The separating property was introduced in De Vito et al. [25] and it ensures that the feature space is rich enough to learn any closed subset of D. This assumption plays the same role as the notion of universal kernel [26] in supervised learning.
Moreover, following [25, 27], we extend the KPCA estimator to a class of learning algorithms defined in terms of a low-pass filter function r_m(σ) acting on the spectrum of the covariance operator and depending on a regularization parameter m ∈ N. The projection of Φ(x) − μ_n onto V_{m,n} is replaced by the vector

    r_m(T^c_n)(Φ(x) − μ_n) = Σ_j r_m(σ_j) ⟨Φ(x) − μ_n, f_j⟩ f_j,

where {f_j}_j is the family of eigenvectors of T^c_n and {σ_j}_j is the corresponding family of eigenvalues. The support is then estimated by the set

    Ĉ_n = {x ∈ D : ||(Φ(x) − μ_n) − r_m(T^c_n)(Φ(x) − μ_n)|| ≤ τ_n}.

Note that KPCA corresponds to the choice of the hard cut-off filter, which is equal to 1 on the m largest eigenvalues and 0 otherwise. However, other filter functions can be considered, inspired by the theory of regularization for inverse problems [28] and by supervised learning algorithms [29, 30]. In this paper we show that the explicit computation of these spectral estimators reduces to a finite dimensional problem depending only on the kernel K(x, w) = ⟨Φ(x), Φ(w)⟩ associated with the feature map, as for KPCA. The computational properties of each learning algorithm depend on the choice of the low-pass filter r_m(σ), which can be tuned to improve the performance on a specific data set, see the discussion in Rudi et al. [31].
We conclude this section with two considerations. First, in De Vito et al. [25, 27] a consistency result is proven for a similar estimator, where the subspace V_{m,n} is computed with respect to the non-centered second moment operator in the feature space H, instead of the covariance operator. In this paper we analyze the impact of recentering the data in the feature space H on the support estimation problem, see Theorem 1 below. This point of view is further analyzed in Rudi et al. [32, 33].
Finally, note that our consistency results are based on convergence rates of empirical subspaces to the corresponding subspaces of the covariance operator, see Theorem 2 below. The main difference between our result and the result in Blanchard et al. [34] is that we prove consistency in the case when the dimension m = m_n of the subspace V_{m,n} goes to infinity slowly enough, whereas in their seminal paper [34] the authors analyze the more specific case when the dimension of the projection space is fixed.

MATHEMATICAL ASSUMPTIONS
In this section we introduce the statistical model generating the data, the notion of separating feature map and the properties of the filter function. Furthermore, we show that KPCA can be seen as a filter function and we recall the main properties of the covariance operators.
We assume that the input space D is a bounded closed subset of R^d. However, our results also hold true by replacing D with any compact metric space. We denote by d(x, w) the Euclidean distance |x − w| between two points x, w ∈ R^d and by d_H(A, B) the Hausdorff distance between two compact subsets A, B ⊂ D, explicitly given by

    d_H(A, B) = max { sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) },

where d(x, A) = inf_{a∈A} d(x, a) is the distance of the point x from the set A.
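For finite point clouds the Hausdorff distance can be computed directly from the pairwise distance matrix; a small numpy sketch (our own illustration):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A, B (rows are
    points): max of sup_a d(a, B) and sup_b d(b, A)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
# the farthest "nearest neighbour" pair is (1,1) -> (1,0), at distance 1
print(hausdorff(A, B))  # 1.0
```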

Statistical Model
The statistical model is described by a random vector X taking value in D. We denote by P the probability distribution of X, defined on the Borel σ -algebra of D, and by C the support of P.
Since the probability distribution P is unknown, so is its support. We aim to estimate C from a training set of empirical data, which are described by a family X_1, ..., X_n of random vectors, independent and identically distributed as X. More precisely, we are looking for a closed subset Ĉ_n = Ĉ_{X_1,...,X_n} ⊂ D, depending only on X_1, ..., X_n, but independent of P, such that

    P[ lim_{n→+∞} d_H(Ĉ_n, C) = 0 ] = 1

for all probability distributions P. In the context of regression estimation, the above convergence is usually called universal strong consistency [35].

Mercer Feature Maps and Separating Condition
To define the estimator Ĉ_n we first map the data into a suitable feature space, so that the support C is represented by a linear subspace. We assume that:
(H1) Φ is a map from D into a separable Hilbert space H;
(H2) the map Φ is continuous.
The space H is called the feature space and the map Φ is called a Mercer feature map.
In the following the norm and scalar product of H are denoted by ||·|| and ⟨·, ·⟩, respectively. Assumptions (H1) and (H2) are standard for kernel methods, see Steinwart and Christmann [36]. We now briefly recall some basic consequences. First of all, the function K(x, w) = ⟨Φ(x), Φ(w)⟩ is a Mercer kernel and we denote by H_K the corresponding (separable) reproducing kernel Hilbert space, whose elements are continuous functions on D. Moreover, each element f ∈ H defines a function f̃ ∈ H_K by setting f̃(x) = ⟨f, Φ(x)⟩ for all x ∈ D. Since Φ(D) is total in H, the linear map f ↦ f̃ is an isometry from H onto H_K. In the following, with a slight abuse of notation, we write f instead of f̃, so that the elements f ∈ H are viewed as functions on D satisfying the reproducing property

    f(x) = ⟨f, Φ(x)⟩,  x ∈ D.

Finally, since D is compact and Φ is continuous, it holds that

    sup_{x∈D} ||Φ(x)||² = sup_{x∈D} K(x, x) = R < +∞.

Following De Vito et al. [27], we call Φ a separating Mercer feature map if the following separating property also holds true.
(H3) The map Φ is injective and, for all closed subsets C ⊂ D,

    Φ(C) = Φ(D) ∩ span{Φ(x) | x ∈ C},   (2)

where the closure of the span is taken in H. It states that any closed subset C ⊂ D is mapped by Φ onto the intersection of Φ(D) and the closed subspace span{Φ(x) | x ∈ C} ⊂ H. Examples of kernels satisfying the separating property are, for D ⊂ R^d [27]:
• Sobolev kernels with smoothness index s > d/2;
• the Abel/Laplacian kernel K(x, w) = e^{−γ|x−w|} with γ > 0;
• the ℓ1-kernel K(x, w) = e^{−γ|x−w|_1}, where |·|_1 is the ℓ1-norm and γ > 0.
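The two exponential kernels in the list are straightforward to implement; a minimal numpy sketch (the width γ = 1 here is an arbitrary choice):

```python
import numpy as np

def abel_gram(X, W, gamma=1.0):
    """Abel/Laplacian kernel K(x, w) = exp(-gamma * |x - w|_2)."""
    D = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return np.exp(-gamma * D)

def l1_gram(X, W, gamma=1.0):
    """l1-kernel K(x, w) = exp(-gamma * |x - w|_1)."""
    D = np.abs(X[:, None, :] - W[None, :, :]).sum(axis=2)
    return np.exp(-gamma * D)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
print(abel_gram(X, X))  # off-diagonal entries equal exp(-sqrt(2))
print(l1_gram(X, X))    # off-diagonal entries equal exp(-2)
```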
As shown in De Vito et al. [25], given a closed set C, the equality (2) is equivalent to the condition that, for every x ∉ C, there exists f ∈ H such that

    f(x) ≠ 0 and f(w) = 0 for all w ∈ C.   (3)

Clearly, an arbitrary Mercer feature map is not able to separate all the closed subsets, but only a few of them. To better describe these sets, we introduce the elementary learnable sets, namely the zero level sets

    C_f = {x ∈ D | f(x) = 0},  f ∈ H.

Clearly, C_f is closed and the equality (3) holds true for C = C_f. Furthermore, any set separated by Φ is an intersection of elementary sets: if {f_j}_j is a basis of the orthogonal complement of span{Φ(x) | x ∈ C}, then it is easy to prove that

    C = ∩_j C_{f_j},   (4)

so that any set which is separated by Φ is the (possibly denumerable) intersection of elementary sets. Assumption (H3) is hence the requirement that the family of the elementary learnable sets, labeled by the elements of H, is rich enough to parameterize all the closed subsets of D by means of (4). In section 4.3 we present some examples.
The Gaussian kernel K(x, w) = e^{−γ|x−w|²} is a popular choice in machine learning; however, it is not separating. Indeed, since K is analytic, the elements of the corresponding reproducing kernel Hilbert space are analytic functions, too [36]. It is known that, given an analytic function f ≠ 0, the corresponding elementary learnable set C_f = {x ∈ D | f(x) = 0} is a closed set whose interior is empty. Hence the denumerable intersections also have empty interior, so that K cannot separate a support with non-empty interior. In Figure 1 we compare the decay behavior of the eigenvalues of the Laplacian and the Gaussian kernels.
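The different eigenvalue decay can be checked numerically on a toy curve; the sketch below (points on the unit circle instead of the "8" curve of Figure 1, and illustrative kernel widths) compares the sorted Gram-matrix eigenvalues of the two kernels:

```python
import numpy as np

# 200 points on the unit circle stand in for a low-dimensional support
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
P = np.stack([np.cos(t), np.sin(t)], axis=1)
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)

abel_eigs = np.sort(np.linalg.eigvalsh(np.exp(-D)))[::-1] / 200
gauss_eigs = np.sort(np.linalg.eigvalsh(np.exp(-D ** 2)))[::-1] / 200
# the eigenvalues of the analytic Gaussian kernel fall off much faster
# than the polynomially decaying ones of the Abel/Laplacian kernel
print(abel_eigs[30], gauss_eigs[30])
```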

Filter Function
The second building block is a low-pass filter, which we introduce to prevent the estimator from overfitting the empirical data. Filter functions were first introduced in the context of inverse problems, see Engl et al. [28] and references therein, and in the context of supervised learning, see Lo Gerfo et al. [29] and Blanchard and Mucke [30].
We now fix some notation. For any f ∈ H, we denote by f ⊗ f the rank-one operator (f ⊗ f)g = ⟨g, f⟩ f. We recall that a bounded operator A on H is a Hilbert-Schmidt operator if for some (any) basis {f_j}_j the series ||A||²_2 := Σ_j ||A f_j||² is finite; ||A||_2 is called the Hilbert-Schmidt norm and ||A||_∞ ≤ ||A||_2, where ||·||_∞ is the spectral norm. We denote by S_2 the space of Hilbert-Schmidt operators, which is a separable Hilbert space under the scalar product ⟨A, B⟩_2 = Σ_j ⟨A f_j, B f_j⟩. A family {r_m : [0, R] → [0, 1]}_{m∈N} of functions is called a (low-pass) filter if:
(H4) r_m(0) = 0 and 0 ≤ r_m(σ) ≤ 1 for all σ ∈ [0, R] and m ∈ N;
(H5) lim_{m→+∞} r_m(σ) = 1 for all σ ∈ (0, R];
(H6) for all m ∈ N, there is L_m > 0 such that

    |r_m(σ) − r_m(σ′)| ≤ L_m |σ − σ′|,  σ, σ′ ∈ [0, R],   (5)

i.e., r_m is a Lipschitz function with Lipschitz constant L_m.
For fixed m, r_m is a filter cutting the smallest eigenvalues (high frequencies). Indeed, (H4) and (H6) with σ′ = 0 give

    r_m(σ) ≤ min{1, L_m σ},  σ ∈ [0, R].

On the contrary, if m goes to infinity, by (H5) r_m converges pointwise to the Heaviside function

    θ(σ) = 1 if σ > 0,  θ(0) = 0.

Since r_m(σ) converges to θ(σ), which does not satisfy (5), we have that lim_{m→+∞} L_m = +∞.
We fix the interval [0, R] as the domain of the filter functions r_m since the eigenvalues of the operators we are interested in belong to [0, R], see (23).
Examples of filter functions are:
• the Tikhonov filter r_m(σ) = σ/(σ + λ_m) with λ_m = R/m;
• the Landweber iteration r_m(σ) = 1 − (1 − σ/R)^m.
We recall a technical result, which is based on the functional calculus for compact operators. If A is a positive Hilbert-Schmidt operator, the Hilbert-Schmidt theorem (for compact self-adjoint operators) gives that there exist a basis {f_j}_j of H and a family {σ_j}_j of positive numbers such that

    A = Σ_j σ_j f_j ⊗ f_j.   (6)

If the spectral norm ||A||_∞ ≤ R, then all the eigenvalues σ_j belong to [0, R] and the spectral calculus defines r_m(A) as the operator on H given by

    r_m(A) = Σ_j r_m(σ_j) f_j ⊗ f_j.

With this definition each f_j is still an eigenvector of r_m(A), but the corresponding eigenvalue is shrunk to r_m(σ_j). Proposition 1 in the Appendix in Supplementary Material summarizes the main properties of r_m(A).
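In finite dimensions the spectral calculus r_m(A) amounts to filtering the eigenvalues in an eigendecomposition; a minimal sketch with the Tikhonov filter (the value λ = 0.1 is an arbitrary choice):

```python
import numpy as np

def apply_filter(A, r):
    """Spectral calculus for a symmetric positive matrix A: keep the
    eigenvectors and map each eigenvalue s_j to r(s_j)."""
    s, U = np.linalg.eigh(A)
    return (U * r(s)) @ U.T

lam = 0.1
tikhonov = lambda s: s / (s + lam)

A = np.diag([1.0, 0.5, 0.0])
B = apply_filter(A, tikhonov)
# eigenvalues are shrunk: 1 -> 1/1.1, 0.5 -> 0.5/0.6, and 0 stays 0,
# consistent with r_m(0) = 0 in assumption (H4)
print(np.diag(B))
```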
FIGURE 1 | Eigenvalues in logarithmic scale of the covariance operator when the kernel is Abel (blue) or Gaussian (red) and the distribution is uniformly supported on the "8" curve in Figure 2. Note that the eigenvalues of the first operator decay polynomially, while those of the second decay exponentially.

Kernel Principal Component Analysis
As anticipated in the introduction, the estimators we propose are a generalization of the KPCA algorithm suggested by Hoffmann [16] in the context of novelty detection. In our framework this corresponds to the hard cut-off filter: labeling the distinct eigenvalues of A in decreasing order¹ σ_1 > σ_2 > ... > σ_m > σ_{m+1} > ..., the filter function is

    r_m(σ) = 1 if σ > σ_{m+1},  r_m(σ) = 0 otherwise.

Clearly, r_m satisfies (H4) and (H5), but (H6) does not hold. However, the Lipschitz assumption is needed only to prove the bound (21e) and, for the hard cut-off filter, r_m(A) is simply the orthogonal projector onto the linear space spanned by the eigenvectors whose eigenvalues are bigger than σ_{m+1}. For such projectors, [37] proves an analogous perturbation bound whose constant depends on the spectral gap σ_m − σ_{m+1}. Hence, our results also hold for the hard cut-off filter at the price of a Lipschitz constant L_m depending on the eigenvalues of A.

Covariance Operators
The third building block is made of the eigenvectors of the distribution dependent covariance operator and of its empirical version. The covariance operators are computed by first mapping the data in the feature space H.
As usual, we introduce two random variables Φ(X) and Φ(X) ⊗ Φ(X), taking values in H and in S_2, respectively. Since Φ is continuous and X belongs to the compact subset D, both random variables are bounded. We set

    μ = E[Φ(X)] = ∫_D Φ(x) dP(x),    T = E[Φ(X) ⊗ Φ(X)] = ∫_D Φ(x) ⊗ Φ(x) dP(x),

where the integrals are in the Bochner sense, and we denote by T^c = T − μ ⊗ μ the covariance operator. We denote by μ_n and T_n the empirical means of Φ(X) and Φ(X) ⊗ Φ(X), respectively, and by T^c_n the empirical covariance operator. Explicitly,

    μ_n = (1/n) Σ_{i=1}^n Φ(X_i),    T_n = (1/n) Σ_{i=1}^n Φ(X_i) ⊗ Φ(X_i),    T^c_n = T_n − μ_n ⊗ μ_n.

The main properties of the covariance operator and its empirical version are summarized in Proposition 2 in the Appendix in Supplementary Material.

¹ Here, the labeling is different from the one in (6), where the eigenvalues are repeated according to their multiplicity.
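The empirical covariance operator enters the algorithms only through inner products of centered feature vectors, which can be obtained by double-centering the Gram matrix; a quick numerical sanity check (ours) with the linear feature map Φ(x) = x, where the identity can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
K = X @ X.T                            # Gram matrix K(X_i, X_j)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
Kc = H @ K @ H                         # double-centered Gram matrix
Xc = X - X.mean(axis=0)
# Kc coincides with the Gram matrix of the centered features
print(np.allclose(Kc, Xc @ Xc.T))  # True
```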

THE ESTIMATOR
Now we are ready to construct the estimator, whose computational aspects are discussed in section 5. The set Ĉ_n is defined by the following three steps: a) the points x ∈ D are mapped into the corresponding centered vectors Φ(x) − μ_n ∈ H, where the center is the empirical mean; b) the operator r_m(T^c_n) is applied to each vector Φ(x) − μ_n; c) the point x ∈ D is assigned to Ĉ_n if the distance between r_m(T^c_n)(Φ(x) − μ_n) and Φ(x) − μ_n is smaller than a threshold τ.
Explicitly, we have that

    Ĉ_n = {x ∈ D : ||(I − r_m(T^c_n))(Φ(x) − μ_n)|| ≤ τ},   (9)

where τ = τ_n and m = m_n are chosen as functions of the number n of training data.
With the choice of the hard cut-off filter, this reduces to the KPCA algorithm [16, 17]. Indeed, r_m(T^c_n) is the projection Q_m onto the vector space spanned by the first m eigenvectors. Hence Ĉ_n is the set of points x whose image Φ(x) − μ_n is close to the range of Q_m. For an arbitrary filter function r_m, Q_m is replaced by r_m(T^c_n), which can be interpreted as a smooth version of Q_m. Note that, in general, r_m(T^c_n) is not a projection. In De Vito et al. [27] a different estimator is defined. In that paper the data are mapped into the feature space H without centering the points with respect to the empirical mean and the estimator is given by

    C̃_n = {x ∈ D : ||(I − r_{m_n}(T_n))Φ(x)|| ≤ τ_n},

where the filter function r_m is as in the present work, but r_{m_n}(T_n) is defined in terms of the eigenvectors of the non-centered second moment T_n. To compare the two estimators, note that

    ||(I − r_m(T_n))Φ(x)||² = ||Φ(x)||² − ⟨r*_m(T_n)Φ(x), Φ(x)⟩,

where r*_m(σ) = 2 r_m(σ) − r_m(σ)², which is a filter function too, possibly with a Lipschitz constant L*_m ≤ 2 L_m. Note that for the hard cut-off filter r*_m(σ) = r_m(σ). Though r_{m_n}(T_n) and r_{m_n}(T^c_n) are different, both C̃_n and Ĉ_n converge to the support of the probability distribution P, provided that the separating property (H3) holds true. Hence, one has the freedom to choose whether or not the empirical data are centered in the feature space.
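The operator identity behind the comparison, (I − r_m(A))² = I − r*_m(A) with r*_m(σ) = 2 r_m(σ) − r_m(σ)², can be verified numerically; a sketch with the Tikhonov filter and a random positive semi-definite matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M @ M.T                                 # symmetric PSD matrix
s, U = np.linalg.eigh(A)
r = s / (s + 0.5)                           # Tikhonov filter values
rA = (U * r) @ U.T                          # r_m(A)
rstarA = (U * (2 * r - r ** 2)) @ U.T       # r*_m(A)
I = np.eye(4)
print(np.allclose((I - rA) @ (I - rA), I - rstarA))  # True
```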

MAIN RESULTS
In this section, we prove that the estimator Ĉ_n we introduced is strongly consistent. To state our results, for each n ∈ N, we fix an integer m_n ∈ N and set F_n : D → H to be

    F_n(x) = (I − r_{m_n}(T^c_n))(Φ(x) − μ_n),

so that Equation (9) becomes

    Ĉ_n = {x ∈ D : ||F_n(x)|| ≤ τ_n}.   (10)

Spectral Characterization
First of all, we characterize the support of P by means of Q^c, the orthogonal projector onto the null space of the distribution-dependent covariance operator T^c. The following theorem shows that the centered feature map x ↦ Φ(x) − μ sends the support C onto the intersection of {Φ(x) − μ | x ∈ D} and the closed subspace (I − Q^c)H, i.e.,

    C = {x ∈ D | Q^c(Φ(x) − μ) = 0}.

Theorem 1.
Assume that Φ is a separating Mercer feature map. Then

    C = {x ∈ D | Q^c(Φ(x) − μ) = 0},

where Q^c is the orthogonal projector onto the null space of the covariance operator T^c.
Proof: To prove the result we need some technical lemmas, which we state and prove in the Appendix in Supplementary Material. Assume first that x ∈ D is such that Q^c(Φ(x) − μ) = 0. Denoting by Q the orthogonal projection onto the null space of T, by Lemma 2 QQ^c = Q and Qμ = 0, so that

    QΦ(x) = Q(Φ(x) − μ) = QQ^c(Φ(x) − μ) = 0.

Hence Lemma 1 implies that x ∈ C. Conversely, if x ∈ C, then as above Q(Φ(x) − μ) = 0. By Lemma 2 we have that Q^c(I − Q) = ||Q^cμ||^{−2} Q^cμ ⊗ Q^cμ. Hence it is enough to prove that

    ⟨Φ(x) − μ, Q^cμ⟩ = 0,

which holds true by Lemma 3.

Consistency
Our first result is about the convergence of F n .

Theorem 2.
Assume that Φ is a Mercer feature map. Take the sequence {m_n}_n such that

    lim_{n→∞} m_n = +∞,   (12a)
    lim_{n→∞} L_{m_n} n^{−1/2+κ} = 0 for some constant κ > 0.   (12b)

Then, with probability one,

    lim_{n→∞} sup_{x∈D} ||F_n(x) − F(x)|| = 0,   (13)

where F(x) = Q^c(Φ(x) − μ).

Proof: We first prove Equation (13). Set A_n = I − r_{m_n}(T^c_n). Given x ∈ D, one bounds ||F_n(x) − F(x)|| by a chain of inequalities, where the key estimates are due to (21e), the bound ||A_n||_∞ = sup_{σ∈[0,R]} (1 − r_{m_n}(σ)) ≤ 1, and the fact that both Φ(x) and μ are bounded by √R. By (12b) it follows that the empirical fluctuation terms vanish as n → ∞, so that, taking into account (24a) and (24c), the remaining deterministic term also vanishes uniformly on D. This last limit is a consequence of (21d), observing that {Φ(x) − μ | x ∈ D} is compact since D is compact and Φ is continuous.
We add some comments. Theorem 1 suggests that the consistency depends on the fact that the vector (I − r_{m_n}(T^c_n))(Φ(x) − μ_n) is a good approximation of Q^c(Φ(x) − μ). By the law of large numbers, T_n and μ_n converge to T and μ, respectively, and Equation (21d) implies that, if m is large enough, (I − r_m(T^c))(Φ(x) − μ) is close to Q^c(Φ(x) − μ). Hence, if m_n is large enough, see condition (12a), we expect that r_{m_n}(T^c_n) is close to r_{m_n}(T^c). However, this is true only if m_n goes to infinity slowly enough, see condition (12b). The rate depends on the behavior of the Lipschitz constant L_m, which goes to infinity as m goes to infinity. For example, for the Tikhonov filter a sufficient condition is that m_n ∼ n^{1/2−ε} with ε > 0. With the right choice of m_n, the empirical decision function F_n converges uniformly to the function F(x) = Q^c(Φ(x) − μ), see Equation (13).
If the map Φ is separating, Theorem 1 gives that the zero level set of F is precisely the support C. However, if C is not learnable by Φ, i.e., the equality (2) does not hold, then the zero level set of F is bigger than C. For example, if D is connected, C has non-empty interior, and Φ is the feature map associated with the Gaussian kernel, it is possible to prove that F is an analytic function which is zero on an open set, hence it is zero on the whole space D. We note that, in real applications, the difference between the Gaussian kernel and the Abel kernel, which is separating, is not so big and in our experience the Gaussian kernel provides a reasonable estimator.
From now on we assume that Φ is separating, so that Theorem 1 holds true. However, the uniform convergence of F_n to F does not imply that the zero level sets of F_n converge to C = F^{−1}(0) with respect to the Hausdorff distance. For example, with the Tikhonov filter F_n^{−1}(0) is always the empty set. To overcome the problem, Ĉ_n is defined as the τ_n-neighborhood of the zero level set of F_n, where the threshold τ_n goes to zero slowly enough.
Define the data-dependent parameter τ_n as

    τ_n = max_{1≤i≤n} ||F_n(X_i)||.   (14)

Since ||F_n(x)|| is bounded uniformly in x, τ_n is finite and the set estimator becomes

    Ĉ_n = {x ∈ D : ||F_n(x)|| ≤ τ_n}.

The following result shows that Ĉ_n is a universal strongly consistent estimator of the support of the probability distribution P. Note that for KPCA the consistency is not universal, since the choice of m_n depends on some a-priori information about the decay of the eigenvalues of the covariance operator T^c, which depends on P.

Theorem 3.
Assume that Φ is a separating Mercer feature map. Take the sequence {m_n}_n satisfying (12a)-(12b) and define τ_n by (14). Then, with probability one,

    lim_{n→∞} τ_n = 0,   (15a)
    lim_{n→∞} d_H(Ĉ_n, C) = 0.   (15b)

Proof: Let E be the event on which F_n converges uniformly to F, and F the event such that X_i ∈ C for all i ≥ 1. Theorem 2 shows that P[E] = 1 and, since C is the support, P[F] = 1.
Take ω ∈ E ∩ F and fix ε > 0; then there exists n_0 > 0 (possibly depending on ω and ε) such that, for all n ≥ n_0, ||F_n(x) − F(x)|| ≤ ε for all x ∈ D. By Theorem 1, F(x) = 0 for all x ∈ C and X_1(ω), ..., X_n(ω) ∈ C; it follows that ||F_n(X_i(ω))|| ≤ ε for all 1 ≤ i ≤ n, so that 0 ≤ τ_n(ω) ≤ ε, hence the sequence τ_n(ω) goes to zero. Since P[E ∩ F] = 1, Equation (15a) holds true. We split the proof of Equation (15b) in two steps. We first show that, with probability one, lim_{n→+∞} sup_{x∈Ĉ_n} d(x, C) = 0. On the event E ∩ F, suppose, by contradiction, that the sequence {sup_{x∈Ĉ_n} d(x, C)}_n does not converge to zero. Possibly passing to a subsequence, for all n ∈ N there exists x_n ∈ Ĉ_n such that d(x_n, C) ≥ ε_0 for some fixed ε_0 > 0. Since D is compact, possibly passing to a subsequence, {x_n}_n converges to x_0 ∈ D with d(x_0, C) ≥ ε_0. We claim that x_0 ∈ C. Indeed, x_n ∈ Ĉ_n means that ||F_n(x_n)|| ≤ τ_n. If n goes to infinity, since Φ is continuous and by the definition of E and F, the right side of the above inequality goes to zero, so that Q^c(Φ(x_0) − μ) = 0, i.e., by Theorem 1 we get x_0 ∈ C, which is a contradiction since by construction d(x_0, C) ≥ ε_0 > 0. We now prove that lim_{n→∞} sup_{x∈C} d(x, Ĉ_n) = 0.
For any x ∈ D, set X_{1,n}(x) to be a first neighbor of x in the training set {X_1, ..., X_n}. It is known that, with probability one,

    lim_{n→+∞} d(X_{1,n}(x), x) = 0   (16)

for all x ∈ C, see for example Lemma 6.1 of Györfi et al. [35]. Choose a denumerable family {z_j}_{j∈J} in C which is dense in C. By Equation (16) there exists an event G such that P[G] = 1 and, on G, for all j ∈ J, lim_{n→+∞} d(X_{1,n}(z_j), z_j) = 0.
Fix ω ∈ G; we claim that lim_n sup_{x∈C} d(x, Ĉ_n) = 0. Observe that, by definition of τ_n, X_i ∈ Ĉ_n for all 1 ≤ i ≤ n, so that sup_{x∈C} d(x, Ĉ_n) ≤ sup_{x∈C} d(X_{1,n}(x), x). Fix ε > 0 and choose a finite subset J_ε ⊂ J such that the balls {B(z_j, ε)}_{j∈J_ε} cover C; then

    sup_{x∈C} d(X_{1,n}(x), x) ≤ max_{j∈J_ε} d(X_{1,n}(z_j), z_j) + ε.   (17)

Indeed, fix x ∈ C; there exists an index j ∈ J_ε such that x ∈ B(z_j, ε). By definition of first neighbor, clearly d(X_{1,n}(x), x) ≤ d(X_{1,n}(z_j), x), so that by the triangle inequality we get

    d(X_{1,n}(x), x) ≤ d(X_{1,n}(z_j), z_j) + d(z_j, x) ≤ d(X_{1,n}(z_j), z_j) + ε.

Taking the supremum over C we get the claim. Since ω ∈ G and J_ε is finite, lim_{n→+∞} max_{j∈J_ε} d(X_{1,n}(z_j), z_j) = 0, so that by Equation (17), limsup_{n→+∞} sup_{x∈C} d(X_{1,n}(x), x) ≤ ε.
Since ε is arbitrary, we get lim_{n→+∞} sup_{x∈C} d(X_{1,n}(x), x) = 0, which implies that lim_{n→+∞} sup_{x∈C} d(x, Ĉ_n) = 0. Theorem 3 is an asymptotic result: up to now, we are not able to provide finite sample bounds on d_H(Ĉ_n, C). It is possible to obtain finite sample bounds on ||F_n(x) − Q^c(Φ(x) − μ)||, as in Theorem 7 of De Vito et al. [25], with the same kind of proof.

The Separating Condition
The following two examples clarify the notion of the separating condition.
Example 1. Let D be a compact subset of R², H = R⁶ with the Euclidean scalar product, and Φ : D → R⁶ be the feature map

    Φ((x, y)) = (x², y², √2 xy, √2 x, √2 y, 1),

whose corresponding Mercer kernel is a polynomial kernel of degree two, explicitly given by

    K((x, y), (x′, y′)) = (xx′ + yy′ + 1)².   (18)

Given a vector f = (f_1, ..., f_6)^⊤, the corresponding elementary set is the conic

    C_f = {(x, y) ∈ D | f_1 x² + f_2 y² + √2 f_3 xy + √2 f_4 x + √2 f_5 y + f_6 = 0}.

Conversely, all the conics are elementary sets. The learnable sets are hence the intersections of at most five conics, i.e., the sets whose cartesian equation is a system of at most five equations of the above form.

Example 2. The data are the random vectors in R²

    X_i = (sin(a Θ_i + b), sin(c Θ_i + d)),  i = 1, ..., n,

where a, c ∈ N, b, d ∈ [0, 2π] and Θ_1, ..., Θ_n are independent random variables, each of them uniformly distributed on [0, 2π]. Setting D = [−1, 1]², clearly X_i ∈ D and the support of their common probability distribution is the Lissajous curve

    Lis_{a,b,c,d} = {(sin(at + b), sin(ct + d)) | t ∈ [0, 2π]}.
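The sampling scheme of Example 2 is easy to reproduce; a short sketch (the function name is ours):

```python
import numpy as np

def lissajous_sample(n, a, b, c, d, seed=0):
    """Sample n points (sin(a*t + b), sin(c*t + d)) with t uniform on
    [0, 2*pi], the data model of Example 2."""
    t = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, size=n)
    return np.stack([np.sin(a * t + b), np.sin(c * t + d)], axis=1)

# Lis_{1, 0, 1, pi/2} is (sin t, cos t), i.e. the unit circle
X = lissajous_sample(500, 1, 0, 1, np.pi / 2)
print(np.allclose(np.linalg.norm(X, axis=1), 1.0))  # True
```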

Figure 2 shows two examples of Lissajous curves. As filter function r_m, we fix the hard cut-off filter, where m is the number of eigenvectors corresponding to the highest eigenvalues we keep. We denote by Ĉ^{m,τ}_n the corresponding estimator given by (10). In the first two tests we use the polynomial kernel (18), so that the elementary learnable sets are conics. One can check that the rank of T^c is less than or equal to 5. More precisely, if Lis_{a,b,c,d} is a conic, the rank of T^c is 4 and we need to estimate five parameters, whereas if Lis_{a,b,c,d} is not a conic, Lis_{a,b,c,d} is not a learnable set and the rank of T^c is 5. In the first test the data are sampled from the distribution supported on the circumference Lis_{1,0,1,π/2} (see left panel of Figure 2). In Figure 3 we draw the set Ĉ^{m,τ}_n for different values of m and τ when n varies. In this toy example n = 5 is enough to learn the support exactly; hence for each n = 2, ..., 5 the corresponding values of m_n and τ_n are m_n = 1, 2, 3, 4 and τ_n = 0.01, 0.005, 0.005, 0.002.
In the second test the data are sampled from the distribution supported on the curve Lis_{2,0.11,1,0.3}, which is not a conic (see right panel of Figure 2). In Figure 4 we draw the set Ĉ^{m,τ}_n for n = 10, 20, 50, 100, m = 4, and τ = 0.01. Clearly, Ĉ_n is not able to estimate Lis_{2,0.11,1,0.3}.
We now briefly discuss how to select the parameters m_n and τ_n from the data. The goal of the set-learning problem is to recover the support of the probability distribution generating the data by using the given input observations. Since no output is present, set-learning belongs to the category of unsupervised learning problems, for which there is no general framework accounting for model selection. However, there are some possible strategies (whose analysis is out of the scope of this paper). A first approach, which we used in our simulations, is based on the monotonicity properties of Ĉ^{m,τ}_n with respect to m and τ. More precisely, given f ∈ (0, 1), we select (the smallest) m and (the biggest) τ such that at most nf observed points belong to the estimated set. It is possible to prove that this method is consistent when f tends to 1 as the number of observations increases. Another way to select the parameters consists in transforming the set-learning problem into a supervised one and then performing standard model selection techniques like cross validation. In particular, set-learning can be cast as a classification problem by assigning the observed examples to the class +1 and by defining an auxiliary measure μ (e.g., uniform on a ball of interest in D) associated with the class −1, from which n i.i.d. points are drawn. It is possible to prove that this last method is consistent when μ(supp P) = 0.
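The first strategy amounts to a quantile rule on the empirical decision values; a minimal sketch (function name and toy scores are ours, and we pick τ as the f-quantile so that roughly a fraction f of the training points falls inside the estimated set):

```python
import numpy as np

def select_tau(scores, f):
    """Threshold such that (approximately) a fraction f of the training
    decision values F_n(X_i) falls below it; `scores` are assumed
    nonnegative."""
    return np.quantile(scores, f)

scores = np.array([0.01, 0.02, 0.03, 0.5, 0.9])
tau = select_tau(scores, f=0.6)
print(np.mean(scores <= tau) >= 0.6)  # True
```

The monotonicity of the estimated set in τ makes this rule well behaved: increasing f can only enlarge the selected threshold.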

The Role of the Regularization
We now explain the role of the filter function. Given a training set X_1, ..., X_n of size n, the separating property (3) applied to the support of the empirical distribution gives that

    {X_1, ..., X_n} = {x ∈ D | Q^c_n(Φ(x) − μ_n) = 0},

where μ_n is the empirical mean and I − Q^c_n is the orthogonal projection onto the linear space spanned by the family Φ(X_1) − μ_n, ..., Φ(X_n) − μ_n, i.e., the centered images of the examples. Hence, given a new point x ∈ D, the condition ||Q^c_n(Φ(x) − μ_n)|| ≤ τ with τ ≪ 1 is satisfied if and only if x is close to one of the examples in the training set, so that the naive estimator {x ∈ D | ||Q^c_n(Φ(x) − μ_n)|| ≤ τ} overfits the data. Hence we would like to replace I − Q^c_n with an operator which is close to the identity on the linear subspace spanned by Φ(X_1) − μ_n, ..., Φ(X_n) − μ_n, but which has a small range. To modulate the two requests, one can consider the following optimization problem:

    min_{A∈S_2} (1/n) Σ_{i=1}^n ||(Φ(X_i) − μ_n) − A(Φ(X_i) − μ_n)||² + λ ||A||²_2,  λ > 0.

We note that if A is a projection, its Hilbert-Schmidt norm ||A||_2 is the square root of the dimension of the range of A. Since the objective function is strictly convex, the optimal solution is given by

    A_opt = T^c_n (T^c_n + λ I)^{−1},

i.e., A_opt is precisely the operator r_m(T^c_n) with the Tikhonov filter r_m(σ) = σ/(σ + λ) and λ = R/m. A different choice of the filter function r_m corresponds to a different regularization of the above least-squares problem.
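That A_opt = T^c_n (T^c_n + λI)^{−1} minimizes the penalized objective can be checked numerically; the sketch below (a finite-dimensional toy, not the operator setting) compares the closed-form solution against random perturbations:

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = rng.normal(size=(20, 3))               # centered feature vectors
Phi -= Phi.mean(axis=0)
T = Phi.T @ Phi / len(Phi)                   # empirical covariance
lam = 0.2
A_opt = T @ np.linalg.inv(T + lam * np.eye(3))

def objective(A):
    residual = Phi - Phi @ A.T               # rows are (I - A) phi_i
    return (residual ** 2).sum() / len(Phi) + lam * (A ** 2).sum()

best = objective(A_opt)
# strict convexity: every perturbation strictly increases the objective
print(all(objective(A_opt + 0.01 * rng.normal(size=(3, 3))) > best
          for _ in range(20)))  # True
```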

THE KERNEL MACHINE
In this section we show that the computation of F n (x) , in terms of which is defined the estimator C n , reduces to a finite dimensional problem, depending only on the Mercer kernel K, associated with the feature map. We introduce the centered sampling operator S c n : H → R n (S c n f ) i = f , (X i ) − µ n , whose transpose is given by where v i is the i-th entry of the column vector v ∈ R n . Hence, it holds that where K n is the n × n matrix whose (i, j)-entry is K(X i , X j ) and I n is the identity n × n matrix, so that the (i, j)-entry of S c n S c n ⊤ is Frontiers in Applied Mathematics and Statistics | www.frontiersin.org Denoted by ℓ the rank of S c n S c n ⊤ , take the singular value decomposition of S c n S c n ⊤ /n, i.e., where V is an n × ℓ matrix whose columns v j ∈ R n are the normalized eigenvectors, V ⊤ V = I ℓ , and is a diagonal ℓ × ℓ matrix with the strictly positive eigenvalues on the diagonal, i.e., = diag( σ 1 , . . . , σ ℓ ). Set U = S c n ⊤ V − 1 2 , regarded as operator from R ℓ to H, then a simple calculation shows that where r m ( ) is the diagonal ℓ × ℓ matrix r m ( ) = diag(r m ( σ 1 ), . . . , r m ( σ ℓ )), and the equation for r m ( T c n ) holds true since by assumption r m (0) = 0. Hence where the real number w(x) = (x) − µ n , (x) − µ n is and the n × n-matrix G m is In Algorithm 1 we list the corresponding MatLab Code. The above equations make clear that both F n and C n can be computed in terms of the singular value decomposition (V, ) Algorithm 1 Matlab code for Set Learning.
    -repmat(mu', size(Y,1), 1) + s;
    w = diag(gram(Y,Y,k)) - 2 * sum(W,2)/n + s;
    y = w - diag(vx * Gm * vx');
    end
    %----------------
    % main script
    ... % creation of training set X and test set Y and kernel width c
    [Gm, mu, s] = learnSet(X, @abel(c), @hardcutoff, m);
    y = testSet(X, Gm, mu, s, Y);
    y <= tau; % membership of test data

of the $n\times n$ Gram matrix $K_n$ and of the filter function $r_m$, so that $F_n$ belongs to the class of kernel methods and $C_n$ is a plug-in estimator. For the hard cut-off filter, one simply has

$$G_m = \frac{1}{n}\,V_m\,\Sigma_m^{-1}\,V_m^\top,$$

where $V_m$ and $\Sigma_m$ retain only the $m$ largest eigenvalues and the corresponding eigenvectors. For real applications, a delicate issue is the choice of the parameters $m$ and $\tau$; we refer to Rudi et al. [31] for a detailed discussion. Here we add some simple remarks. We first discuss the role of $\tau$. According to (10), $\hat C_n^{m,\tau} \subseteq \hat C_n^{m,\tau'}$ whenever $\tau < \tau'$. We exemplify this behavior with the dataset of Example 2. The training set is sampled from the distribution supported on the curve $\mathrm{Lis}_{2,0.11,1,0.3}$ (see the right panel of Figure 2) and we compute $C_n$ with the Abel kernel, $n = 100$ and $m$ ranging over $5, 10, 20, 50$. Figure 6 shows the nested sets when $\tau$ runs over the associated color bar.
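As a cross-check of the formulas above, the following pure-Python sketch (a hypothetical implementation of ours, not the paper's MATLAB routine; the Gaussian kernel, the function names, and the toy sample are our choices) builds the centered Gram matrix on a small one-dimensional sample, diagonalizes it with a basic Jacobi eigensolver, forms $G_m$ for the hard cut-off filter keeping every strictly positive eigenvalue ($m=\ell$), and evaluates $F_n$. In that case $r_m(T_n^c)$ is the projection onto the span of the centered images, so $F_n$ vanishes at the training points while a point far from the sample receives a large score.

```python
import math

def kernel(x, y):
    # Gaussian-type kernel; any Mercer kernel could be plugged in here
    return math.exp(-(x - y) ** 2)

def jacobi_eig(A, max_sweeps=100, tol=1e-20):
    # Eigendecomposition of a small symmetric matrix by cyclic Jacobi rotations.
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    for _ in range(max_sweeps):
        off = sum(A[p][q] ** 2 for p in range(n) for q in range(n) if p != q)
        if off < tol:
            break
        for p in range(n):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-18:
                    continue
                theta = 0.5 * math.atan2(2 * A[p][q], A[p][p] - A[q][q])
                c, s = math.cos(theta), math.sin(theta)
                J = [[float(i == j) for j in range(n)] for i in range(n)]
                J[p][p], J[p][q], J[q][p], J[q][q] = c, -s, s, c
                Jt = [[J[j][i] for j in range(n)] for i in range(n)]
                A = matmul(Jt, matmul(A, J))
                V = matmul(V, J)
    eigvals = [A[j][j] for j in range(n)]
    eigvecs = [[V[i][j] for i in range(n)] for j in range(n)]  # j-th eigenvector
    return eigvals, eigvecs

def learn_set(X):
    # Return G_m (hard cut-off with m = rank), the Gram row means and overall mean.
    n = len(X)
    K = [[kernel(a, b) for b in X] for a in X]
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    # double centering: Kc = H K H with H = I - (1/n) 1 1^T
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]
    sig, vecs = jacobi_eig([[Kc[i][j] / n for j in range(n)] for i in range(n)])
    # hard cut-off keeping every strictly positive eigenvalue:
    # G = (1/n) * sum_j sigma_j^{-1} v_j v_j^T
    G = [[0.0] * n for _ in range(n)]
    for s_val, v in zip(sig, vecs):
        if s_val > 1e-9:
            for i in range(n):
                for j in range(n):
                    G[i][j] += v[i] * v[j] / (n * s_val)
    return G, row, tot

def F_n(x, X, G, row, tot):
    # Decision function F_n(x) = w(x) - (k_x^c)^T G_m k_x^c
    n = len(X)
    kx = [kernel(x, a) for a in X]
    kx_mean = sum(kx) / n
    w = kernel(x, x) - 2 * kx_mean + tot
    kc = [kx[i] - kx_mean - row[i] + tot for i in range(n)]
    return w - sum(kc[i] * G[i][j] * kc[j] for i in range(n) for j in range(n))

X_train = [0.0, 0.4, 0.8, 1.2, 1.6]   # toy one-dimensional sample
G, row, tot = learn_set(X_train)
scores_train = [F_n(x, X_train, G, row, tot) for x in X_train]
score_far = F_n(10.0, X_train, G, row, tot)
print(scores_train, score_far)
```

The membership rule is then $x \in \hat C_n^{m,\tau}$ iff $F_n(x) \le \tau$, mirroring the `y <= tau` test in Algorithm 1.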
Analyzing the role of $m$, we now show that, for the hard cut-off filter, $\hat C_n^{m',\tau} \subseteq \hat C_n^{m,\tau}$ whenever $m' \le m$. Indeed, this filter satisfies $r_{m'}(\sigma)\le r_m(\sigma)$ and, since $0\le r_m(\sigma)\le 1$, one has $(1-r_m(\sigma))^2 \le (1-r_{m'}(\sigma))^2$. Hence, denoting by $(u_j)_j$ a basis of eigenvectors of $T_n^c$, it holds that

$$F_n^m(x) = \sum_j \big(1-r_m(\sigma_j)\big)^2\,\langle \Phi(x)-\mu_n, u_j\rangle^2 \;\le\; \sum_j \big(1-r_{m'}(\sigma_j)\big)^2\,\langle \Phi(x)-\mu_n, u_j\rangle^2 = F_n^{m'}(x).$$

Hence, for any point $x \in \hat C_n^{m',\tau}$, $F_n^m(x) \le F_n^{m'}(x) \le \tau$, so that $x \in \hat C_n^{m,\tau}$. As above, we illustrate the different choices of $m$ with the data sampled from the curve $\mathrm{Lis}_{2,0.11,1,0.3}$ and the Abel kernel, where $n = 100$ and $\tau$ ranges over $0.25, 0.3, 0.4, 0.5$. Figure 7 shows the nested sets when $m$ runs over the associated color bar.
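The nesting argument can be checked numerically. The sketch below (illustrative Python with synthetic eigenvalues and expansion coefficients, not tied to the paper's data) evaluates $F_n^m(x) = \sum_j (1-r_m(\sigma_j))^2 \langle \Phi(x)-\mu_n, u_j\rangle^2$ for the hard cut-off filter and verifies that the score decreases as $m$ grows, which is exactly the inclusion $\hat C_n^{m',\tau} \subseteq \hat C_n^{m,\tau}$ for $m' \le m$.

```python
# Synthetic spectrum (sorted decreasingly) and expansion coefficients of
# Phi(x) - mu_n in the eigenbasis of T_n^c (hypothetical values).
sigmas = [0.9, 0.5, 0.2, 0.05, 0.01, 0.002]
coeffs = [0.3, -0.4, 0.25, 0.1, -0.2, 0.15]

def hard_cutoff_score(m):
    # F_n^m(x) = sum_j (1 - r_m(sigma_j))^2 <Phi(x) - mu_n, u_j>^2,
    # with r_m = 1 on the m largest eigenvalues and 0 otherwise,
    # so (1 - r_m)^2 is 0 on retained components and 1 on discarded ones
    return sum((0.0 if j < m else 1.0) * c * c for j, c in enumerate(coeffs))

scores = [hard_cutoff_score(m) for m in range(len(sigmas) + 1)]
# larger m => smaller score => larger estimated set C_n^{m,tau}
assert all(scores[m + 1] <= scores[m] for m in range(len(sigmas)))
print(scores)
```

With $m = \ell$ every component is retained and the score drops to zero, recovering the naive (overfitting) estimator discussed earlier.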

DISCUSSION
We presented a new class of set estimators able to learn the support of an unknown probability distribution from a training set of random data. The set estimator is defined through a decision function, which can be seen as a novelty/anomaly detection algorithm as in Schölkopf et al. [6].
The decision function we defined is a kernel machine: it is computed from the singular value decomposition of the empirical (kernel) covariance matrix and a low-pass filter. An example of filter is the hard cut-off function, for which the corresponding decision function reduces to the KPCA algorithm for novelty detection first introduced by Hoffmann [16]. However, we showed that it is possible to choose other low-pass filters, as was done for a class of supervised algorithms in the regression/classification setting [38].
Under some weak assumptions on the low-pass filter, we proved that the corresponding set estimator is strongly consistent with respect to the Hausdorff distance, provided that the kernel satisfies a suitable separating condition, as happens, for example, for the Abel kernel. Furthermore, by comparing Theorem 2 with a similar consistency result in De Vito et al. [27], it is clear that the algorithm correctly learns the support both when the data have zero mean, as in our paper, and when the data are not centered, as in De Vito et al. [27]. On the contrary, if the separating property does not hold, the algorithm learns only those supports that are mapped into linear subspaces by the feature map defined by the kernel. The set estimator we introduced depends on two parameters: the effective number m of eigenvectors defining the decision function and the thickness τ of the region estimating the support. The role of these parameters and of the separating property was briefly discussed through a few tests on toy data.
We finally observe that our class of set learning algorithms is very similar to classical kernel machines in supervised learning. Hence, in order to reduce both the computational cost and the memory requirements, it should be possible to successfully implement some recent approximation techniques for which theoretical guarantees exist in the statistical learning setting, for example random features [39, 40], Nyström projections [41, 42], or mixed approaches combining iterative regularization and preconditioning [43, 44].
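As an illustration of the first of these techniques, the sketch below (a minimal Python example of ours, not code from [39, 40]) approximates a Gaussian kernel $K(x,y)=\exp(-(x-y)^2/2)$ on scalar inputs by random Fourier features: with $D$ random features, the inner product of the feature vectors concentrates around the true kernel value at rate $O(1/\sqrt D)$, which is what makes such approximations attractive for reducing cost and memory.

```python
import math, random

random.seed(0)

def gaussian_kernel(x, y):
    return math.exp(-(x - y) ** 2 / 2.0)

def random_fourier_features(x, ws, bs):
    # z(x) = sqrt(2/D) * (cos(w_1 x + b_1), ..., cos(w_D x + b_D)),
    # with w_j ~ N(0, 1) and b_j ~ Uniform[0, 2*pi); then
    # E[z(x) . z(y)] = exp(-(x - y)^2 / 2)
    D = len(ws)
    return [math.sqrt(2.0 / D) * math.cos(w * x + b) for w, b in zip(ws, bs)]

D = 20000
ws = [random.gauss(0.0, 1.0) for _ in range(D)]
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = 0.3, 1.1
zx = random_fourier_features(x, ws, bs)
zy = random_fourier_features(y, ws, bs)
approx = sum(a * b for a, b in zip(zx, zy))
exact = gaussian_kernel(x, y)
print(approx, exact)   # the two values agree up to a fluctuation of order 1/sqrt(D)
```

In the support estimation setting, such a feature map would replace the exact Gram matrix $K_n$ by a low-dimensional surrogate before the spectral decomposition is computed.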