A Block-Sparse Tensor Train Format for Sample-Efficient High-Dimensional Polynomial Regression

Low-rank tensors are an established framework for high-dimensional least-squares problems. We propose to extend this framework by including the concept of block-sparsity. In the context of polynomial regression each sparsity pattern corresponds to some subspace of homogeneous multivariate polynomials. This allows us to adapt the ansatz space to align better with known sample complexity results. The resulting method is tested in numerical experiments and demonstrates improved computational resource utilization and sample efficiency.


Notation
In our opinion, using a graphical notation for the contractions involved in a tensor network drastically simplifies the expressions and makes the whole setup more approachable. This section introduces this graphical notation for tensor networks, the spaces that will be used in the remainder of this work, and the regression framework.

Tensors and indices
Definition 2.1. Let d ∈ N_{>0}. Then n = (n_1, ..., n_d) ∈ N^d is called a dimension tuple of order d and x ∈ R^{n_1×···×n_d} =: R^n is called a tensor of order d and dimension n. Let N_n = {1, ..., n}. Then a tuple (l_1, ..., l_d) ∈ N_{n_1} × ··· × N_{n_d} =: N_n is called a multi-index and the corresponding entry of x is denoted by x(l_1, ..., l_d). The positions 1, ..., d of the indices l_1, ..., l_d in the expression x(l_1, ..., l_d) are called modes of x.
To define further operations on tensors it is often useful to associate each mode with a symbolic index.
Definition 2.2. A symbolic index i of dimension n is a placeholder for an arbitrary but fixed natural number between 1 and n. For a dimension tuple n of order d and a tensor x ∈ R^n we may write x(i_1, ..., i_d) and tacitly assume that i_k is an index of dimension n_k for each k = 1, ..., d. When standing for itself this notation means x(i_1, ..., i_d) = x ∈ R^n, and it may be used to slice the tensor: x(i_1, l_2, ..., l_d) ∈ R^{n_1}, where l_k ∈ N_{n_k} are fixed indices for all k = 2, ..., d. For any dimension tuple n of order d we define the symbolic multi-index i_n = (i_1, ..., i_d) of dimension n, where i_k is a symbolic index of dimension n_k for all k = 1, ..., d.
Remark 2.3. We use the letters i and j (with appropriate subscripts) for symbolic indices while reserving the letters k, l and m for ordinary indices.
Example 2.4. Let x be an order-2 tensor with mode dimensions n_1 and n_2, i.e. an n_1-by-n_2 matrix. Then x(l_1, j) denotes the l_1-th row of x and x(i, l_2) denotes the l_2-th column of x.
Inspired by Einstein notation we use the concept of symbolic indices to define different operations on tensors.
Definition 2.5. Let i_1 and i_2 be (symbolic) indices of dimension n_1 and n_2, respectively, and let ϕ : N_{n_1} × N_{n_2} → N_{n_1 n_2} be a bijection. We then define the product of indices with respect to ϕ as j = ϕ(i_1, i_2), where j is a (symbolic) index of dimension n_1 n_2. In most cases the choice of bijection is not important and we will write i_1 · i_2 := ϕ(i_1, i_2) for an arbitrary but fixed bijection ϕ. For a tensor x of dimension (n_1, n_2) the expression y(i_1 · i_2) = x(i_1, i_2) defines the tensor y of dimension (n_1 n_2), while the expression x(i_1, i_2) = y(i_1 · i_2) defines x ∈ R^{n_1×n_2} from y ∈ R^{n_1 n_2}.
Definition 2.6. Consider the tensors x ∈ R^{n_1×a×n_2} and y ∈ R^{n_3×b×n_4}. Then the expression

    z(i_{n_1}, i_{n_2}, j_1, j_2, i_{n_3}, i_{n_4}) = x(i_{n_1}, j_1, i_{n_2}) · y(i_{n_3}, j_2, i_{n_4})    (1)

defines the tensor z ∈ R^{n_1×n_2×a×b×n_3×n_4} in the obvious way. Similarly, for a = b the expression

    z(i_{n_1}, i_{n_2}, j, i_{n_3}, i_{n_4}) = x(i_{n_1}, j, i_{n_2}) · y(i_{n_3}, j, i_{n_4})    (2)

defines the tensor z ∈ R^{n_1×n_2×a×n_3×n_4}. Finally, also for a = b, the expression

    z(i_{n_1}, i_{n_2}, i_{n_3}, i_{n_4}) = Σ_{k=1}^{a} x(i_{n_1}, k, i_{n_2}) · y(i_{n_3}, k, i_{n_4})    (3)

defines the tensor z ∈ R^{n_1×n_2×n_3×n_4}.
We choose this description mainly because of its simplicity and how it relates to the implementation of these operations in the numeric libraries numpy [Oli06] and xerus [HW14].
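To make the correspondence concrete, the three operations of Definition 2.6 map directly onto `numpy.einsum`: a repeated label that appears in the output is a batched product, a repeated label absent from the output is summed. The following sketch is our own illustration (all names and shapes are chosen for this example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a = b = 4
x = rng.standard_normal((2, a, 3))  # x with mode group sizes (n1, a, n2)
y = rng.standard_normal((5, b, 6))  # y with mode group sizes (n3, b, n4)

# Operation (1): distinct indices j1, j2 -> tensor (outer) product of the modes
z1 = np.einsum("iaj,kbl->ijabkl", x, y)   # shape (2, 3, 4, 4, 5, 6)

# Operation (2): shared index kept in the output -> batched product
z2 = np.einsum("iaj,kal->ijakl", x, y)    # shape (2, 3, 4, 5, 6)

# Operation (3): shared index summed over -> contraction
z3 = np.einsum("iaj,kal->ijkl", x, y)     # shape (2, 3, 5, 6)
```

Summing Operation (2) over the shared mode recovers Operation (3), which is exactly the relation between the two definitions.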

Graphical notation and tensor networks
This section will introduce the concept of tensor networks [EHHS11] and a graphical notation for certain operations which will simplify working with these structures. To this end we reformulate the operations introduced in the last section in terms of nodes, edges and half edges.
Definition 2.7. For a dimension tuple n of order d and a tensor x ∈ R^n, the graphical representation of x is given by a node with d half edges, where the node represents the tensor and the half edges represent the d different modes of the tensor, illustrated by the symbolic indices i_1, ..., i_d.
With this definition the reshapings of Definition 2.5 can be written simply by merging or splitting half edges, and the binary operations of Definition 2.6 simplify as well.
Definition 2.8. Let x ∈ R^{n_1×a×n_2} and y ∈ R^{n_3×b×n_4} be two tensors. Operation (1) is represented by drawing the two nodes next to each other and defines z ∈ R^{···×a×b×···}. For a = b, Operation (2) is represented by joining the two half edges and defines z ∈ R^{···×a×···}, and Operation (3), represented by an edge connecting the two nodes, defines z ∈ R^{···×···}.
With these definitions we can compose entire networks of multiple tensors, which are called tensor networks.

The Tensor Train Format
A prominent example of a tensor network is the tensor train (TT) [Ose11b, HRS12a], which is the main tensor network used throughout this work. This network is discussed in the following subsection.
Definition 2.9. Let n be a dimension tuple of order d. The TT format decomposes an order-d tensor x ∈ R^n into d component tensors x_k ∈ R^{r_{k−1}×n_k×r_k} for k = 1, ..., d with r_0 = r_d = 1. In index notation this reads

    x(l_1, ..., l_d) = Σ_{k_1, ..., k_{d−1}} x_1(1, l_1, k_1) x_2(k_1, l_2, k_2) ··· x_d(k_{d−1}, l_d, 1).

The tuple (r_1, ..., r_{d−1}) is called the representation rank of this representation.
In graphical notation this is a chain of d nodes, consecutive nodes being connected by edges of dimensions r_1, ..., r_{d−1}.
Remark 2.10. Note that this representation is not unique. For any pair of matrices (A, B) that satisfies AB = Id we can replace x_k by x_k(i_1, i_2, j) · A(j, i_3) and x_{k+1} by B(i_1, j) · x_{k+1}(j, i_2, i_3) without changing the tensor x.
The representation rank of x is therefore dependent on the specific representation of x as a TT, hence the name. Analogous to the concept of matrix rank we can define a minimal necessary rank that is required to represent a tensor x in the TT format.
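A minimal sketch of Definition 2.9 and Remark 2.10 (the helper `tt_to_full` and all names are our own, for illustration only): contracting the components reproduces the full tensor, and a gauge transformation with AB = Id leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 4, 3, 2  # order, mode dimension, representation rank
# component tensors x_k of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1
cores = [rng.standard_normal((1 if k == 0 else r, n, 1 if k == d - 1 else r))
         for k in range(d)]

def tt_to_full(cores):
    """Contract the TT components into the full order-d tensor."""
    x = cores[0]                              # shape (1, n, r)
    for c in cores[1:]:
        x = np.einsum("...i,ijk->...jk", x, c)
    return x.squeeze(axis=(0, -1))            # drop the dummy boundary modes

x = tt_to_full(cores)

# gauge transformation (Remark 2.10): insert A B = Id between cores 1 and 2
A = rng.standard_normal((r, r))
B = np.linalg.inv(A)
gauged = [c.copy() for c in cores]
gauged[0] = np.einsum("inj,jk->ink", gauged[0], A)
gauged[1] = np.einsum("ij,jnk->ink", B, gauged[1])
```

Both representations have the same representation rank but yield the same tensor, illustrating why the representation rank is a property of the representation, not of x.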
Definition 2.11. The tensor train rank of a tensor x ∈ R^n with tensor train components x_1 ∈ R^{n_1×r_1}, x_k ∈ R^{r_{k−1}×n_k×r_k} for k = 2, ..., d − 1 and x_d ∈ R^{r_{d−1}×n_d} is the tuple TT-rank(x) = (r_1, ..., r_{d−1}) of minimal r_k's such that the x_k compose x.
In [HRS12b, Theorem 1a] it is shown that the TT-rank can be computed by simple matrix operations. Namely, r_k can be computed by joining the first k indices and the remaining d − k indices and computing the rank of the resulting matrix. At last, we need to introduce the concept of left and right orthogonality for the tensor train format.
Definition 2.12. Let x ∈ R^{m×n} be a tensor of order d + 1, where m is a dimension tuple of order d. We call x left orthogonal if

    x(i_m, j_1) · x(i_m, j_2) = Id(j_1, j_2),

i.e. if the matricization that joins the first d modes has orthonormal columns. Similarly, we call a tensor x ∈ R^{m×n} of order d + 1, where n is a dimension tuple of order d, right orthogonal if

    x(j_1, i_n) · x(j_2, i_n) = Id(j_1, j_2).

For technical purposes it is also useful to define the so-called interface tensors, which are based on left and right orthogonal decompositions.
Definition 2.13. Let x be a tensor train of order d with rank tuple r. For every k = 1, ..., d and ℓ = 1, ..., r_k, the ℓ-th left interface vector is the contraction of the first k components,

    τ_{≤k,ℓ}(x)(i_1, ..., i_k) = Σ_{j_1, ..., j_{k−1}} x_1(1, i_1, j_1) ··· x_k(j_{k−1}, i_k, ℓ),

where x is assumed to be left orthogonal. The ℓ-th right interface vector is the contraction of the last d − k components,

    τ_{≥k+1,ℓ}(x)(i_{k+1}, ..., i_d) = Σ_{j_{k+1}, ..., j_{d−1}} x_{k+1}(ℓ, i_{k+1}, j_{k+1}) ··· x_d(j_{d−1}, i_d, 1),

where x is assumed to be right orthogonal.
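The characterization of the TT-rank via unfoldings translates directly into a few lines of code. This is our own sketch (the helper name `tt_rank` is ours), computing the matrix rank of each unfolding that joins the first k modes:

```python
import numpy as np

def tt_rank(x):
    """TT-rank via matrix ranks of the unfoldings (cf. [HRS12b, Theorem 1a])."""
    n = x.shape
    ranks = []
    for k in range(1, len(n)):
        # join the first k modes and the remaining d - k modes
        unfolding = x.reshape(int(np.prod(n[:k])), -1)
        ranks.append(int(np.linalg.matrix_rank(unfolding)))
    return tuple(ranks)

# a rank-1 example: the outer product of d vectors has TT-rank (1, ..., 1)
vs = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([3.0, 1.0])]
x = np.einsum("i,j,k->ijk", *vs)
```

For this rank-1 tensor every unfolding is an outer product of two vectors, so `tt_rank(x)` returns `(1, 1)`.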

Sets of Polynomials
In this section we specify the setup for our method and define the majority of the different sets of polynomials that are used. We start by defining dictionaries of one-dimensional functions, which we then use to construct the different sets of high-dimensional functions.
Definition 2.14. Let p ∈ N be given. A function dictionary of size p is a vector-valued function Ψ : R → R^p.
Example 2.15. Two simple examples of a function dictionary that we use in this work are given by the monomial basis of dimension p, i.e.

    Ψ_monomial(x) = (1, x, x², ..., x^{p−1}),    (4)

and by the basis of the first p Legendre polynomials, i.e.

    Ψ_Legendre(x) = (P_0(x), P_1(x), ..., P_{p−1}(x)),    (5)

where P_l denotes the l-th Legendre polynomial, normalized with respect to the uniform measure on [−1, 1].
Using function dictionaries we can define the following high-dimensional space of multivariate functions. Let Ψ be a function dictionary of size p ∈ N. The d-th order product space that corresponds to the function dictionary Ψ is

    V_p^d := span{ Ψ(x_1)(l_1) ··· Ψ(x_d)(l_d) : (l_1, ..., l_d) ∈ N_p },    (6)

where p = (p, ..., p) is a dimension tuple of order d. This means that every function u ∈ V_p^d can be written as

    u(x) = c(i_1, ..., i_d) Ψ(x_1)(i_1) ··· Ψ(x_d)(i_d)    (7)

with a coefficient tensor c ∈ R^p. Note that equation (7) uses the index notation from Definition 2.6 with arbitrary but fixed x_k's. Since R^p is an intractably large space, it makes sense for numerical purposes to consider the subset

    T_r(V_p^d) := { u ∈ V_p^d : TT-rank(c) ≤ r }    (8)

where the TT rank of the coefficient is bounded. Every u ∈ T_r(V_p^d) can thus be represented graphically as the tensor network

    u(x) = C_1(i_0, j_1, i_1) Ψ(x_1)(j_1) C_2(i_1, j_2, i_2) Ψ(x_2)(j_2) ··· C_d(i_{d−1}, j_d, i_d) Ψ(x_d)(j_d),    (9)

where the C_k's are the components of the tensor train representation of the coefficient tensor c ∈ R^p of u ∈ V_p^d.
Remark 2.16. In this way every tensor c ∈ R^p (in the tensor train format) corresponds one-to-one to a function u ∈ V_p^d.
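The evaluation of such a function proceeds core by core: contract the dictionary vector Ψ(x_k) into C_k and absorb the result into a running interface vector. A minimal sketch with the monomial dictionary (the helper names `psi_monomial` and `evaluate` are ours, for illustration only):

```python
import numpy as np

def psi_monomial(x, p):
    """Monomial dictionary: Psi(x) = (1, x, ..., x^(p-1))."""
    return x ** np.arange(p)

def evaluate(cores, point):
    """Evaluate u(x) = c(i_1,...,i_d) Psi(x_1)(i_1) ... Psi(x_d)(i_d)
    for a coefficient tensor c given by its TT components."""
    p = cores[0].shape[1]
    v = np.ones(1)
    for x_k, core in zip(point, cores):
        # absorb the dictionary vector and the left interface into the core
        v = np.einsum("i,ijk,j->k", v, core, psi_monomial(x_k, p))
    return v.item()

rng = np.random.default_rng(0)
d, p, r = 3, 3, 2
cores = [rng.standard_normal((1 if k == 0 else r, p, 1 if k == d - 1 else r))
         for k in range(d)]
value = evaluate(cores, [0.5, -1.0, 2.0])
```

The cost is linear in d, whereas naively summing over all p^d multi-indices is exponential in d.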
An important subspace of V_p^d is the space of homogeneous polynomials. For the purpose of this paper we define the subspace of homogeneous polynomials of degree g as the space

    W_g^d := span{ Ψ(x_1)(l_1) ··· Ψ(x_d)(l_d) : (l_1 − 1) + ··· + (l_d − 1) = g }.    (10)

From this definition it is easy to see that a homogeneous polynomial of degree g can be represented as an element of V_p^d where the coefficient tensor c satisfies c(l_1, ..., l_d) = 0 whenever Σ_k (l_k − 1) ≠ g. In Section 3 we will introduce an efficient representation of such coefficient tensors c in a block sparse tensor format.
Using W_g^d we can also define the space of polynomials of degree at most g by

    S_g^d := ⊕_{g̃=0}^{g} W_{g̃}^d.    (11)

Based on this characterization we will define a block-sparse tensor train version of this space in Section 3.

Parametrizing homogeneous polynomials by symmetric tensors
In algebraic geometry the space W_g^d is considered classically only for the dictionary Ψ_monomial of monomials and is typically parameterized by a symmetric tensor,

    u(x) = B(n_1, ..., n_g) x_{n_1} ··· x_{n_g},    (12)

where d = (d, ..., d) is a dimension tuple of order g and B ∈ R^d satisfies B(n_1, ..., n_g) = B(σ(n_1, ..., n_g)) for every permutation σ in the symmetric group S_g. We conclude this section by showing how the representation (7) can be calculated from the symmetric tensor representation (12), and vice versa. By equating coefficients we find that for every (n_1, ..., n_g) ∈ N_d^g

    c(m_1, ..., m_d) = Σ_{{σ(n) : σ ∈ S_g}} B(σ(n_1, ..., n_g)),

where m_k − 1 counts how often the value k appears in (n_1, ..., n_g). Since B is symmetric the sum simplifies to

    c(m_1, ..., m_d) = (g; m_1 − 1, ..., m_d − 1) B(n_1, ..., n_g),

where (g; m_1 − 1, ..., m_d − 1) denotes the multinomial coefficient. From this it follows that for (n_1, ..., n_g) ∈ N_d^g

    B(n_1, ..., n_g) = (g; m_1 − 1, ..., m_d − 1)^{−1} c(m_1, ..., m_d),   where m_k = 1 + Σ_{j=1}^{g} δ_{k,n_j} for all k = 1, ..., d,

and δ_{k,l} denotes the Kronecker delta. This demonstrates how our approach can alleviate, in a very simple fashion, the difficulties that arise when symmetric tensors are represented in the hierarchical Tucker format [Hac16].

Least Squares
Let in the following V_p^d be the product space of a function dictionary Ψ such that V_p^d ⊆ L²(Ω). Consider a high-dimensional function f ∈ L²(Ω) on some domain Ω ⊂ R^d and assume that the point-wise evaluation f(x) is well-defined for x ∈ Ω. In practice it is often possible to choose Ω as a product domain Ω = Ω_1 × Ω_2 × ··· × Ω_d by extending f accordingly. To find the best approximation u_W of f in the space W ⊆ V_p^d we then need to solve the problem

    u_W := argmin_{u ∈ W} ‖f − u‖_{L²(Ω)}.    (13)

A practical problem that often arises when computing u_W is that computing the L²(Ω)-norm is intractable for large d. Instead of using classical quadrature rules one often resorts to a Monte Carlo estimation of the high-dimensional integral. This means one draws M random samples {x^{(m)}}_{m=1,...,M} from Ω and estimates

    ‖f − u‖²_{L²(Ω)} ≈ (1/M) Σ_{m=1}^{M} (f(x^{(m)}) − u(x^{(m)}))².

With this approximation we can define an empirical version of u_W as

    u_{W,M} := argmin_{u ∈ W} (1/M) Σ_{m=1}^{M} (f(x^{(m)}) − u(x^{(m)}))².    (14)

For a linear space W, computing u_{W,M} amounts to solving a linear system and does not pose an algorithmic problem. We use the remainder of this section to comment on the minimization problem (14) when a set of tensor trains is used instead. Recall the representation (7). If the coefficient tensor c of u can be represented in the TT format then we can use equation (9) to perform this evaluation efficiently for all samples (x^{(m)})_{m=1,...,M} at once. For this we introduce for each k = 1, ..., d the matrix

    Ξ_k(m, i_k) := Ψ(x_k^{(m)})(i_k) ∈ R^{M×p}.    (15)

Then the M-dimensional vector of evaluations of u at all given sample points is given by

    u(x^{(m)}) = C_1(i_0, j_1, i_1) Ξ_1(m, j_1) ··· C_d(i_{d−1}, j_d, i_d) Ξ_d(m, j_d),    (16)

where we use Operation (2) to join the different M-dimensional indices. The alternating least-squares algorithm cyclically updates each component tensor C_k by minimizing the residual corresponding to this contraction. To formalize this we define the operator Φ_k ∈ R^{M×r_{k−1}×n_k×r_k} as the contraction of all terms in (16) except C_k. Then the update for C_k is given by a minimal residual solution of the linear system

    Φ_k(j, i_1, i_2, i_3) C_k(i_1, i_2, i_3) = F(j),

where F(m) := y^{(m)} := f(x^{(m)}) and i_1, i_2, i_3, j are symbolic indices of dimensions r_{k−1}, n_k, r_k, M, respectively.
The particular algorithm that is used for this minimization may be adapted to the problem at hand. These contractions are the basis for our algorithms in Section 4. We refer to [HRS12a] for more details on the ALS algorithm. Note that it is possible to reuse parts of the contractions in Φ_k through so-called stacks, so that the entire contraction does not have to be recomputed for every k.
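The ALS update described above can be sketched in a few lines. This is our own simplified illustration (the names `microstep`, `Xi` and the left/right stack bookkeeping are ours; a production implementation would cache the stacks instead of rebuilding them):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, r, M = 4, 3, 2, 50
X = rng.uniform(-1, 1, size=(M, d))
# dictionary matrices: Xi[k][m, i] = Psi(x_k^(m))(i), monomial dictionary
Xi = [X[:, k:k + 1] ** np.arange(p) for k in range(d)]
y = rng.standard_normal(M)  # targets y^(m) = f(x^(m))

cores = [rng.standard_normal((1 if k == 0 else r, p, 1 if k == d - 1 else r))
         for k in range(d)]

def predict(cores):
    """Evaluate u at all sample points at once."""
    v = np.ones((M, 1))
    for k in range(d):
        v = np.einsum("ma,mi,aib->mb", v, Xi[k], cores[k])
    return v[:, 0]

def microstep(cores, k):
    """One ALS microstep: rebuild core k as the least-squares solution of
    Phi_k C_k = y, where Phi_k contracts all other cores with the
    dictionary matrices (the left and right 'stacks')."""
    left = np.ones((M, 1))
    for j in range(k):                  # left stack
        left = np.einsum("ma,mi,aib->mb", left, Xi[j], cores[j])
    right = np.ones((M, 1))
    for j in range(d - 1, k, -1):       # right stack
        right = np.einsum("aib,mi,mb->ma", cores[j], Xi[j], right)
    phi = np.einsum("ma,mi,mb->maib", left, Xi[k], right)
    sol, *_ = np.linalg.lstsq(phi.reshape(M, -1), y, rcond=None)
    cores[k] = sol.reshape(cores[k].shape)

res_before = np.linalg.norm(predict(cores) - y)
for k in range(d):                      # one ALS sweep
    microstep(cores, k)
res_after = np.linalg.norm(predict(cores) - y)
```

Since each microstep solves its subproblem exactly, the residual is non-increasing over the sweep.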

Sample Complexity
The quality of the solution u_{W,M} of (14) in relation to u_W is subject to tremendous interest on the part of the mathematics community. Two particular papers that consider this problem are [CM17] and [EST20]. While the former provides sharper error bounds for the case of linear ansatz spaces, the latter generalizes the work and is applicable to tensor network spaces. We now recall the relevant result for convenience.

Proposition 3.1 ([EST20]). The error of the empirical best approximation u_{W,M} can be bounded in terms of the number of samples M and a variation constant K(W) of the ansatz space W.
Note that the value of K(W) depends only on f and on the set W but not on the particular choice of representation of W. However, the variation constant of spaces like V_p^d still depends on the underlying dictionary Ψ. Although the proposition indicates that a low value of K(W) is necessary to achieve fast convergence, the tensor product spaces V_p^d considered thus far do not exhibit a small variation constant. The consequence of Proposition 3.1 is that elements of this space are hard to learn in general and may require an infeasible number of samples. To see this, consider Ω = [−1, 1]^d and the function dictionary Ψ_Legendre of Legendre polynomials (5) and let {P_ℓ}_{ℓ∈L} be an orthonormal basis for some linear subspace V ⊆ V_p^d. Then we can show that

    K(V) = ‖ Σ_{ℓ∈L} P_ℓ² ‖_∞ = Σ_{ℓ∈L} Π_{k=1}^{d} (2ℓ_k + 1)    (17)

by using techniques from [EST20, Section 3.1] and the fact that each P_ℓ attains its maximum at 1. Using the product structure of L = N_p^d we can interchange the sum and product in (17) and conclude that K(V_p^d) = p^{2d}. This means that we have to restrict the space V_p^d to obtain an admissible variation constant. We propose to use the space W_g^d of homogeneous polynomials of degree g. Spaces like this are commonly used in practical applications. Their dimension is comparably low, yet their direct sum S_g^d allows for a good approximation of numerous highly regular functions given a sufficiently large polynomial degree g. We can employ (17) with L = {ℓ : |ℓ| = g} to obtain the upper bound

    K(W_g^d) ≤ Σ_{|ℓ|=g} Π_{k=1}^{d} (2ℓ_k + 1),

where the maximal summand is estimated by observing that (2(ℓ_1 + 1) + 1)(2ℓ_2 + 1) ≤ (2ℓ_1 + 1)(2(ℓ_2 + 1) + 1) whenever ℓ_2 ≤ ℓ_1. This improves the variation constant substantially compared to the bound K(V_p^d) ≤ p^{2d}. The bound for the dictionary of monomials Ψ_monomial is more involved but can in principle be computed in the same way. By drawing samples from an adapted sampling measure [CM17], the theory in [EST20] ensures that K(V) = dim(V) for all linear spaces V, independent of the underlying dictionary Ψ.
Using such an optimally weighted least-squares method thus leads to the bounds K(W_g^d) = dim(W_g^d) and K(S_g^d) = dim(S_g^d).
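The identity K(V_p^d) = p^{2d} and the reduction achieved by restricting to homogeneous multi-indices can be checked numerically from (17). This is our own sanity check, assuming Legendre polynomials normalized so that P_ℓ(1)² = 2ℓ + 1:

```python
import math
from itertools import product

p, d, g = 3, 4, 2

def variation_constant(index_set):
    # K = sum over multi-indices of prod_k (2 l_k + 1), cf. (17)
    return sum(math.prod(2 * l + 1 for l in idx) for idx in index_set)

# full tensor product space: all multi-indices in {0, ..., p-1}^d
K_full = variation_constant(product(range(p), repeat=d))

# homogeneous subspace: only multi-indices with |l| = g
K_hom = variation_constant(idx for idx in product(range(p), repeat=d)
                           if sum(idx) == g)
```

For p = 3, d = 4 this gives K_full = 3^8 = 6561, while the degree-2 homogeneous restriction yields only 74, illustrating the drastic improvement in the variation constant.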

Block Sparse Tensor Trains
Now that we have seen that it is advantageous to restrict ourselves to the space W_g^d, we need to find a way to do so without losing the advantages of the tensor train format. In [BGP21] it was rediscovered that if a tensor train is an eigenvector of certain Laplace-like operators it admits a block sparse structure. This means that for such a tensor train c the components C_k have zero blocks. Furthermore, this block sparse structure is preserved under key operations, like e.g. the TT-SVD. One possible operator which introduces such a structure is the Laplace-like operator

    L = Σ_{k=1}^{d} I ⊗ ··· ⊗ I ⊗ Δ ⊗ I ⊗ ··· ⊗ I,   Δ := diag(0, 1, ..., p − 1),

where Δ acts on the k-th mode. This is the operator mentioned in the introduction encoding a quantum symmetry. In the context of quantum mechanics this operator is known as the bosonic particle number operator but we simply call it the degree operator. The reason for this is that for the function dictionary of monomials Ψ_monomial the eigenspaces of L for eigenvalue g are associated with homogeneous polynomials of degree g. Simply put, if the coefficient tensor c for the multivariate polynomial u ∈ V_p^d is an eigenvector of L with eigenvalue g, then u is homogeneous and the degree of u is g. In general there are polynomials in V_p^d with degree up to (p − 1)d. To state the results on the block-sparse representation of the coefficient tensor we need the partial operators L_{≤k} and L_{≥k+1}, acting on the first k and the last d − k modes, respectively, for which we have

    L = L_{≤k} ⊗ I^{⊗(d−k)} + I^{⊗k} ⊗ L_{≥k+1}.

In the following we adopt the notation x = Lc to abbreviate the equation x(i_1, ..., i_d) = L(i_1, ..., i_d, j_1, ..., j_d) c(j_1, ..., j_d), where L is a tensor operator acting on a tensor c with result x. Recall that by Remark 2.16 every TT corresponds to a polynomial by multiplying function dictionaries onto the cores. This means that for every ℓ = 1, ..., r_k the TT τ_{≤k,ℓ}(c) corresponds to a polynomial in the variables x_1, ..., x_k and the TT τ_{≥k+1,ℓ}(c) corresponds to a polynomial in the variables x_{k+1}, ..., x_d. In general these polynomials are not homogeneous, i.e. they are not eigenvectors of the degree operators L_{≤k} and L_{≥k+1}. But since TTs are not uniquely defined (cf. Remark 2.10) it is possible to find transformations of the component tensors C_k and C_{k+1} that do not change the tensor c or the rank r but result in a representation where each τ_{≤k,ℓ}(c) and each τ_{≥k+1,ℓ}(c) correspond to a homogeneous polynomial. Thus, if c represents a homogeneous polynomial of degree g and τ_{≤k,ℓ}(c) is homogeneous with deg(τ_{≤k,ℓ}(c)) = g̃, then τ_{≥k+1,ℓ}(c) must be homogeneous with deg(τ_{≥k+1,ℓ}(c)) = g − g̃. This is put rigorously in the first assertion of the subsequent Theorem 3.2. There S_{k,g̃} contains all the indices ℓ for which the reduced basis polynomials satisfy deg(τ_{≤k,ℓ}(c)) = g̃. Equivalently, it groups the basis functions τ_{≥k+1,ℓ}(c) into functions of degree g − g̃. The second assertion in Theorem 3.2 states that we can only obtain a homogeneous polynomial of degree g̃ + m − 1 in the variables x_1, ..., x_k by multiplying a homogeneous polynomial of degree g̃ in the variables x_1, ..., x_{k−1} with a univariate polynomial of degree m − 1 in the variable x_k. This provides a constructive argument for the proof and can be used to ensure block-sparsity in the implementation. Note that this condition forces entire blocks in the component tensor C_k to be zero and thus decreases the degrees of freedom.
Theorem 3.2 ([BGP21, Theorem 1]). Let p = (p, ..., p) be a dimension tuple of order d and c ∈ R^p \ {0} be a tensor train of rank r = (r_1, ..., r_{d−1}). Then Lc = gc if and only if c has a representation with component tensors C_k ∈ R^{r_{k−1}×p×r_k} that satisfies the following two properties.
1. For every k = 1, ..., d − 1 the index set N_{r_k} can be partitioned into sets S_{k,g̃}, g̃ = 0, ..., g, such that τ_{≤k,ℓ}(c) is homogeneous of degree g̃ for all ℓ ∈ S_{k,g̃}.
2. The component tensors satisfy a block structure with respect to the sets S_{k,g̃}: the entry C_k(ℓ', m, ℓ) can only be non-zero if ℓ' ∈ S_{k−1,g̃} and ℓ ∈ S_{k,g̃+m−1} for some g̃, for m = 1, ..., p, where we set S_{0,0} = S_{d,g} = {1}.
Note that this generalizes to other dictionaries and is not restricted to monomials.
Remark 3.1. The rank bounds presented in this section do not only hold for the monomial dictionary Ψ_monomial but for all polynomial dictionaries Ψ that satisfy deg(Ψ_k) = k − 1 for all k = 1, ..., p. When we speak of homogeneous polynomials of degree g in the following we mean the space of functions in V_p^d whose coefficient tensor satisfies Lc = gc. For the dictionary of monomials Ψ_monomial the space W_g^d contains only homogeneous polynomials in the classical sense. However, when the basis of Legendre polynomials Ψ_Legendre is used one obtains a space in which the functions are not homogeneous in this sense. Note that we use polynomials since they have been applied successfully in practice, but other function dictionaries can be used as well. Also note that the theory is much more general, as shown in [BGP21], and is not restricted to the degree counting operator.
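A small self-contained check of the eigenvector property (our own illustration, for the monomial dictionary): build L = Σ_k I ⊗ ··· ⊗ Δ ⊗ ··· ⊗ I with Δ = diag(0, ..., p − 1) and verify that the coefficient tensor of a homogeneous polynomial satisfies Lc = gc.

```python
import numpy as np

d, p, g = 3, 3, 2
Delta = np.diag(np.arange(p, dtype=float))  # degree operator on one mode
eye = np.eye(p)

def kron_all(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# L = sum_k I x ... x Delta x ... x I, acting on the vectorized tensor c
L = sum(kron_all([Delta if j == k else eye for j in range(d)])
        for k in range(d))

# coefficient tensor of u(x) = x1*x2 + x3^2, homogeneous of degree 2
# in the monomial dictionary Psi(x) = (1, x, x^2)
c = np.zeros((p, p, p))
c[1, 1, 0] = 1.0   # x1 * x2
c[0, 0, 2] = 1.0   # x3^2
```

Since every non-zero entry of c sits at a multi-index whose offsets sum to 2, c is an eigenvector of L with eigenvalue g = 2.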
Although block sparsity also appears for g + 1 < p, we restrict ourselves to the case g + 1 = p in this work. Note that then the eigenspace of L for the eigenvalue g has dimension equal to that of the space of homogeneous polynomials, namely binom(d+g−1, d−1), and for ρ_{k,g̃} = |S_{k,g̃}| we get the following rank bounds.
Theorem 3.3. Assume that g + 1 = p. Then the block sizes ρ_{k,g̃} from Theorem 3.2 are bounded by

    ρ_{k,g̃} ≤ min{ binom(k−1+g̃, g̃), binom(d−k−1+g−g̃, g−g̃) }

for all k = 1, ..., d − 1 and g̃ = 0, ..., g, and ρ_{k,0} = ρ_{k,g} = 1.
The proof of this theorem is based on a simple combinatorial argument. For every k consider the sizes of the groups ρ_{k−1,ḡ} for ḡ ≤ g̃. Then ρ_{k,g̃} can not exceed the sum of these sizes. Similarly, ρ_{k,g̃} can not exceed Σ_{ḡ≥g̃} ρ_{k+1,ḡ}. Solving these recurrence relations yields the bound. The block structure results from sorting the indices in such a way that max S_{k,g̃} + 1 = min S_{k,g̃+1} for every g̃. For example, for g = 3 the maximal block sizes ρ_{k,g̃} for k = 1, ..., d − 1 are given by ρ_{k,0} = 1, ρ_{k,1} = min{k, d − k}, ρ_{k,2} = min{k, d − k}, ρ_{k,3} = 1.
As one can see from Theorem 3.3, the block sizes ρ_{k,g̃} can still be quite large. The expressive power of the tensor train parametrization can be understood through different concepts such as, for example, locality or self-similarity. In what follows, we state a result that addresses locality and leads to d-independent rank bounds. For this we need to introduce a workable notion of locality.
Definition 3.3. Let u ∈ W_g^d be a homogeneous polynomial and B be the symmetric coefficient tensor introduced in Subsection 2.4. We say that u has a variable locality of K_loc if B(n_1, ..., n_g) = 0 for all (n_1, ..., n_g) ∈ N_d^g with max{|n_{m_1} − n_{m_2}| : m_1, m_2 = 1, ..., g} > K_loc.
Example 3.4. Let u be a homogeneous polynomial of degree 2 with variable locality K_loc. Then the symmetric matrix B (cf. (12)) is K_loc-banded. For K_loc = 0 this means that B is diagonal and that u takes the form

    u(x) = Σ_{n=1}^{d} B(n, n) x_n².

This shows that variable locality removes mixed terms.
Proof. For fixed g > 0 and a fixed component C_k recall that for each ℓ the tensor τ_{≤k,ℓ}(c) corresponds to a reduced basis function v_ℓ in the variables x_1, ..., x_k and that for each ℓ the tensor τ_{≥k+1,ℓ}(c) corresponds to a reduced basis function w_ℓ in the variables x_{k+1}, ..., x_d. Further recall that the sets S_{k,g̃} group these v_ℓ and w_ℓ. For all ℓ ∈ S_{k,g̃} it holds that deg(v_ℓ) = g̃ and deg(w_ℓ) = g − g̃. For g̃ = 0 and g̃ = g we know from Theorem 3.3 that ρ_{k,g̃} = 1. Now fix any 0 < g̃ < g and arrange all the polynomials v_ℓ of degree g̃ in a vector v and all polynomials w_ℓ of degree g − g̃ in a vector w. Then every polynomial of the form vᵀQw for some matrix Q satisfies the degree constraint, and the maximal possible rank of Q provides an upper bound for the block size ρ_{k,g̃}. However, due to the locality constraint we know that certain entries of Q have to be zero. We call a variable of a polynomial inactive if the polynomial is constant with respect to changes in this variable and active otherwise. Assume that the polynomials in v are ordered (ascendingly) according to the smallest index of their active variables and that the polynomials in w are ordered (ascendingly) according to the largest index of their active variables. With this ordering Q takes a block structure with blocks Q_ℓ for ℓ = 1, ..., K_loc, where each block Q_ℓ matches a polynomial of degree g̃ in the variables x_{k−K_loc+ℓ}, ..., x_k with a polynomial of degree g − g̃ in the variables x_{k+1}, ..., x_{k+ℓ}. Observe that the number of rows of Q_ℓ decreases while the number of columns increases with ℓ. This means that we can subdivide Q into Q_C and Q_R, where Q_C contains the blocks Q_ℓ with more rows than columns (i.e. full column rank) and Q_R contains the blocks Q_ℓ with more columns than rows (i.e. full row rank). So Q_C is a tall-and-skinny matrix while Q_R is a short-and-wide matrix, and the rank of a general Q is bounded by the sum of the column sizes of the Q_ℓ in Q_C plus the sum of the row sizes of the Q_ℓ in Q_R. To conclude the proof it remains to compute the row and column sizes of Q_ℓ. Recall that the number of rows of Q_ℓ equals the number of polynomials u of degree g̃ in the variables x_{k−K_loc+ℓ}, ..., x_k that can be represented as u(x_{k−K_loc+ℓ}, ..., x_k) = x_{k−K_loc+ℓ} ũ(x_{k−K_loc+ℓ}, ..., x_k). This corresponds to all possible ũ of degree g̃ − 1 in the K_loc − ℓ + 1 variables x_{k−K_loc+ℓ}, ..., x_k, and a similar argument yields the number of columns of Q_ℓ. This concludes the proof.
This lemma demonstrates how the combination of the model space W d g with a tensor network space can improve the space complexity by incorporating locality.
Remark 3.5. The rank bound in Theorem 3.4 is only sharp for the highest possible rank. At the ends of the tensor train the ranks can be much lower, but the precise bounds are quite technical to write down, which is why we omit them. One sees that the bound only depends on g and K_loc and is therefore d-independent.
With Theorem 3.4 it is possible to formulate situations in which a block sparse tensor train representation performs exceptionally well. Let u be a homogeneous polynomial with symmetric coefficient tensor B (cf. (12)) and let B|_{K_loc} be the restriction of B onto the coefficients that satisfy the variable locality constraint K_loc. If we can choose K_loc such that the error of this restriction is small, then u can be well approximated by a block sparse tensor train satisfying the rank bounds of Theorem 3.4.

Method Description
In this section we utilize the insights of Section 3 to refine the approximation spaces W_g^d and S_g^d and adapt the alternating least-squares (ALS) method to solve the related least-squares problems. First, we define the subset

    B_ρ(W_g^d) := { u ∈ W_g^d : the block sizes of the representation of c satisfy ρ_{k,g̃} ≤ ρ }    (23)

and provide an algorithm for the related least-squares problem in Algorithm 1, which is a slightly modified version of the classical ALS [HRS12a]. With this definition a straight-forward application of the concept of block-sparsity to the space S_g^d is given by

    S_{g,ρ}^d := { u = Σ_{g̃=0}^{g} u^{(g̃)} : u^{(g̃)} ∈ B_ρ(W_{g̃}^d) }.    (24)

This means that every polynomial in S_{g,ρ}^d can be represented by a sum of orthogonal coefficient tensors c^{(0)}, ..., c^{(g)}. There is however another, more compact, way to represent this function. Instead of storing g + 1 different tensors c^{(0)}, ..., c^{(g)} of order d, we can merge them into a single tensor c of order d + 1 such that c(i_d, g̃) = c^{(g̃)}(i_d). The summation over g̃ can then be represented by a contraction of a vector of 1's to the (d + 1)-th mode. To retain the block-sparse representation we can view the (d + 1)-th component as an artificial component representing a shadow variable x_{d+1}.
Remark 4.1. The introduction of the shadow variable x_{d+1} contradicts the locality assumptions of Theorem 3.4 and implies that the worst case rank bounds must increase. This can be problematic since the block size contributes quadratically to the number of parameters. However, a similar argument as in the proof of Theorem 3.4 can be made in this setting and one can show that the bounds remain independent of d.
We denote the set of polynomials that results from this augmented block-sparse tensor train representation as S_{g,ρ}^{d,aug}, where again ρ provides a bound for the block sizes in the representation. Since S_{g,ρ}^{d,aug} is defined analogously to B_ρ(W_g^d), we can use Algorithm 1 to solve the related least-squares problem by extending the contraction (16) with the additional (d + 1)-th component.
To optimize the coefficient tensors c^{(0)}, ..., c^{(g)} in the space S_{g,ρ}^d we resort to an alternating scheme. Since the coefficient tensors are mutually orthogonal we propose to optimize each c^{(g̃)} individually while keeping the other summands {c^{(k)}}_{k≠g̃} fixed. This means that we solve the problem

    minimize_{u^{(g̃)} ∈ B_ρ(W_{g̃}^d)}  (1/M) Σ_{m=1}^{M} ( y^{(m)} − Σ_{k≠g̃} u^{(k)}(x^{(m)}) − u^{(g̃)}(x^{(m)}) )²,

which can be solved using Algorithm 1. The original problem (14) is then solved by alternating over g̃ until a suitable convergence criterion is met. The complete algorithm is summarized in Algorithm 2. The proposed representation has several advantages. The optimization with the tensor train structure is computationally less demanding than solving directly in S_g^d. Let D = dim(S_g^d) = binom(d+g, d). Then a reconstruction in S_g^d requires solving a linear system of size M × D, while a microstep in an ALS sweep only requires the solution of systems of size less than M × pr² (depending on the block size). Moreover, the stack contractions also benefit from the block sparse structure. Note also that the number of parameters of a full rank-r tensor train can be much higher than the number of parameters of several c^{(g̃)}'s which individually have ranks that are even larger than r.
Remark 4.2. We expect that solving the least-squares problem for S_{g,ρ}^{d,aug} will be faster than for S_{g,ρ}^d since it is computationally more efficient to optimize all polynomials simultaneously than every degree individually in an alternating fashion. On the other hand, the hierarchical scheme of the summation approach may allow one to utilize multi-level Monte Carlo approaches. Together with the fact that every degree g̃ possesses a different optimal sampling density, this may result in a drastically improved best case sample efficiency for the direct method. Additionally, with S_{g,ρ}^d it is easy to extend the ansatz space simply by increasing g, which is not so straightforward for S_{g,ρ}^{d,aug}. Which approach is superior depends on the problem at hand.

Numerical Results
In this section we illustrate the numerical viability of the proposed framework on some simple but common problems. We estimate the relative errors on test sets with respect to the sought function f. Our implementation is meant only as a proof of concept and does not place any emphasis on efficiency. The termination conditions and the rank selection in particular are naïvely implemented and rank adaptivity is missing altogether. It is, however, straightforward to apply SALSA as described in Section 4 for rank adaptivity, which we consider to be state of the art for these kinds of problems. But for our experiments we are more interested in the required sample sizes leading to recovery. In the following we always assume p = g + 1. We also restrict the group sizes to be bounded by the parameter ρ_max. For every sample size the error plots show the distribution of the errors between the 0.15 and 0.85 quantile. The code for all experiments has been made publicly available at https://github.com/ptrunschke/block_sparse_tt.

Algorithm 1: Extended ALS for the least-squares problem on B_ρ(W_g^d)
input: Data pairs (x^{(m)}, y^{(m)}) ∈ R^d × R for m = 1, ..., M, a function dictionary Ψ, a maximal degree g, and a maximal block size ρ.
output: Coefficient tensor c of a function u ∈ B_ρ(W_g^d) that approximates the data.
For k = 1, ..., d compute Ξ_k according to Equation (15).

After a spatial discretization of the heat equation with finite differences we obtain a d-dimensional control problem of the form

    minimize_u ∫_0^∞ y(t)ᵀ Q y(t) + λ u(t)² dt   subject to   ẏ = Ay + Bu  and  y(0) = y_0.
It is well known [CZ95] that the value function for this problem takes the form v(y_0) = y_0ᵀ P y_0, where P can be computed by solving the algebraic Riccati equation (ARE). It is therefore a homogeneous polynomial of degree 2. This function is a perfect example of a function that can be well-approximated in the space W_2^d. We approximate the value function on the domain Ω = [−1, 1]^d for d = 8 with the parameters g = 2 and ρ_max = 4.
In this experiment we use the dictionary of monomials Ψ = Ψ_monomial (cf. equation (4)) and compare the ansatz spaces W_2^8, B_4(W_2^8), T_6(V_3^8) and V_3^8. Since the function v is a general homogeneous polynomial of degree 2, we use Theorem 3.3 to calculate the maximal block size 4. This guarantees perfect reconstruction since B_4(W_2^8) = W_2^8. The rank bound 6 is chosen such that B_4(W_2^8) ⊆ T_6(V_3^8). The degrees of freedom of all used spaces are listed in Table 1. In Figure 1 we compare the relative errors of the respective ansatz spaces. It can be seen that the block sparse ansatz space recovers almost as well as the sparse approach. As expected, the dense TT format is less favorable with respect to the sample size.
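The following sketch illustrates why the space W_2^d suffices for quadratic value functions: any v(y) = yᵀPy is recovered exactly from the dim(W_2^8) = 36 degree-2 monomial features by linear least squares. Note that P here is a random symmetric stand-in, not the actual Riccati solution, and all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
A = rng.standard_normal((d, d))
P = A @ A.T / d          # random symmetric stand-in for the ARE solution

def v(y):                # quadratic value function v(y) = y^T P y
    return y @ P @ y

# degree-2 homogeneous monomial features y_i * y_j for i <= j
pairs = [(i, j) for i in range(d) for j in range(i, d)]
M = 2 * len(pairs)       # twice dim(W_2^8) samples
Y = rng.uniform(-1, 1, size=(M, d))
features = np.stack([Y[:, i] * Y[:, j] for i, j in pairs], axis=1)
targets = np.array([v(y) for y in Y])
coef, *_ = np.linalg.lstsq(features, targets, rcond=None)

# reassemble the symmetric matrix from the recovered coefficients
P_rec = np.zeros((d, d))
for (i, j), c in zip(pairs, coef):
    P_rec[i, j] = c / (1 if i == j else 2)
    P_rec[j, i] = P_rec[i, j]
```

Since the targets lie exactly in the span of the 36 features, the least-squares fit recovers P up to numerical precision, which is the behavior the sparse ansatz W_2^8 exhibits in Figure 1.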
A clever change of basis, given by the diagonalization of Q, can reduce the required block size from 4 to 1. This allows us to extend the presented approach to higher-dimensional problems. The advantage over the classical Riccati approach becomes clear when considering non-linear versions of the control problem that do not admit a Riccati solution. This is done in [OSS20, DKK21] using the dense TT format T_r(V_p^d).

Gaussian density
As a second example we consider the reconstruction of an unnormalized Gaussian density, again on the domain Ω = [−1, 1]^d, with d = 6. For the dictionary Ψ = Ψ_Legendre (cf. Equation (5)) we chose g = 7, ρ_max = 1 and r = 8 and compare the reconstruction with respect to S_g^d, S_{g,ρ_max}^d and T_r(V_p^d), defined in (11), (24) and (8). The degrees of freedom resulting from these different discretizations are compared in Table 2. This example is interesting because here the roles of the spaces are reversed. The function has the product structure f(x) = exp(−x_1^2) ⋯ exp(−x_d^2) and can therefore be approximated well by a rank-1 tensor train in which each component C_k is just a best approximation of exp(−x_k^2) in the used function dictionary. Therefore, we expect the higher-degree polynomials to be important. A comparison of the relative errors to the exact solution is depicted in Figure 2. This example demonstrates the limitations of the ansatz space S_7^6, which is not able to exploit the low-rank structure of the function f. Using S_{7,1}^6 can partially remedy this problem, as can be seen by the improved sample efficiency. But since S_{7,1}^6 ⊆ S_7^6, the final approximation error of S_{7,1}^6 cannot fall below that of S_7^6. One can see that the dense format T_1(V_8^6) produces the best results but is quite unstable compared to the other ansatz classes. This instability is a result of the non-convexity of the set T_r(V_p^d), and we observe that the chance of getting stuck in a local minimum increases when the rank r is reduced from 8 to 1. Finally, we want to address the peaks that are observable at M ≈ 500 samples for T_8(V_8^6) and M ≈ 1716 samples for S_7^6. For this, recall that the approximation in S_7^6 amounts to solving a linear system which is underdetermined for M < 1716 samples and overdetermined for M > 1716 samples. In the underdetermined case we compute the minimum-norm solution and in the overdetermined case we compute the least-squares solution.
It is well known that the solution of such a reconstruction problem is particularly unstable in the vicinity of this transition [CM17]. Although the set S_{7,1}^6 is non-linear, we take the peak at M ≈ 500 as evidence of a similar effect, produced by the similar linear systems that are solved in the micro-steps of the ALS.
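These micro-step linear systems can be made concrete with a small self-contained sketch of a plain (dense, non-block-sparse) ALS sweep for TT least-squares with a monomial dictionary. All dimensions, ranks, data and initialization below are illustrative choices, not the paper's setup; each micro step fixes all components but one and solves an ordinary least-squares problem for that component:

```python
import numpy as np

def features(X, p):
    # monomial dictionary: Phi[m, k, j] = (x_k^(m))^j for j = 0, ..., p-1
    return np.stack([X**j for j in range(p)], axis=2)

def evaluate(cores, Phi):
    # contract the TT with the per-mode feature vectors of every sample
    v = np.ones((Phi.shape[0], 1))
    for k, C in enumerate(cores):
        v = np.einsum("ma,ajb,mj->mb", v, C, Phi[:, k, :])
    return v[:, 0]

def als_sweep(cores, Phi, y):
    # one left-to-right sweep; each micro step is a linear least-squares
    # problem in a single component tensor
    M, d, p = Phi.shape
    for k in range(d):
        L = np.ones((M, 1))                        # left contraction stack
        for l in range(k):
            L = np.einsum("ma,ajb,mj->mb", L, cores[l], Phi[:, l, :])
        R = np.ones((M, 1))                        # right contraction stack
        for l in range(d - 1, k, -1):
            R = np.einsum("mb,ajb,mj->ma", R, cores[l], Phi[:, l, :])
        # design matrix of the micro-step linear system
        A = np.einsum("ma,mj,mb->majb", L, Phi[:, k, :], R).reshape(M, -1)
        c = np.linalg.lstsq(A, y, rcond=None)[0]
        cores[k] = c.reshape(cores[k].shape)
    return cores

# illustrative experiment: d = 3 variables, degree p - 1 = 2, TT rank 2
rng = np.random.default_rng(0)
M, d, p, r = 200, 3, 3, 2
X = rng.uniform(-1, 1, (M, d))
y = X[:, 0] * X[:, 1] + X[:, 2]**2
Phi = features(X, p)
cores = [rng.standard_normal((1, p, r)), rng.standard_normal((r, p, r)),
         rng.standard_normal((r, p, 1))]
residuals = [np.linalg.norm(evaluate(cores, Phi) - y)]
for _ in range(5):
    cores = als_sweep(cores, Phi, y)
    residuals.append(np.linalg.norm(evaluate(cores, Phi) - y))
```

Since every micro step solves its least-squares problem exactly, the training residual is non-increasing over the sweeps; for M close to the number of parameters of a micro step, the same near-singular transition as in the linear case can occur.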
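Returning to the product structure of the Gaussian target: that a rank-1 tensor train suffices can be checked numerically with a univariate degree-7 Legendre fit of exp(−t²) reused in every mode. Grid sizes and sample counts below are ad-hoc choices for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

d, g = 6, 7
t = np.linspace(-1.0, 1.0, 200)
# univariate least-squares approximation of exp(-t^2) in the Legendre basis;
# the same component is reused in every mode, i.e. TT rank 1
comp = Legendre.fit(t, np.exp(-t**2), deg=g, domain=[-1, 1])

def f(X):
    # target: unnormalized Gaussian density on [-1, 1]^d
    return np.exp(-np.sum(X**2, axis=1))

def rank_one(X):
    # product of the univariate components, one factor per mode
    return np.prod([comp(X[:, k]) for k in range(d)], axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, d))
rel_err = np.linalg.norm(rank_one(X) - f(X)) / np.linalg.norm(f(X))
```

The relative error of this rank-1 surrogate is small, illustrating why the low-rank formats need far fewer parameters than the sparse space for this target.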

Quantities of Interest
The next problem we consider often arises when computing quantities of interest from random partial differential equations. We consider the stationary diffusion equation

    −div(a(x, y) ∇u(x, y)) = f(x)

with an affine diffusion coefficient of the form

    a(x, y) = 1 + Σ_{k=1}^d k^{−2} sin(κ̂_k x_1) sin(κ̌_k x_2) y_k,  where κ̂_k = π⌊k/2⌋ and κ̌_k = π⌈k/2⌉.

The solution u often measures the concentration of some substance in the domain Ω, and one is interested in the total amount of this substance in the entire domain, M(y) := ∫_Ω u(x, y) dx.
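Assuming the affine form of the coefficient above (the frequencies κ̂_k, κ̌_k are our reading of the formula and should be treated as an assumption), a(x, y) can be evaluated with a few lines of NumPy:

```python
import numpy as np

def diffusion_coefficient(x1, x2, y):
    # a(x, y) = 1 + sum_{k=1}^d k^{-2} sin(kappa1_k x1) sin(kappa2_k x2) y_k
    # with kappa1_k = pi * floor(k/2) and kappa2_k = pi * ceil(k/2)
    # (reconstructed form; the exact frequencies are an assumption)
    a = np.ones(np.broadcast(x1, x2).shape)
    for k, y_k in enumerate(np.atleast_1d(y), start=1):
        a = a + k**-2.0 * np.sin(np.pi * (k // 2) * x1) \
                        * np.sin(np.pi * ((k + 1) // 2) * x2) * y_k
    return a
```

Each sample y drawn for the regression then determines one realization of the coefficient field, and hence one evaluation of the quantity of interest M(y).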
An important result proven in [HS12] ensures the ℓ^p-summability, for some 0 < p ≤ 1, of the polynomial coefficients of the solution of this equation when Ψ is the dictionary of Chebyshev polynomials. This means that the function is very regular, and we presume that it can be approximated well in S_g^d for the dictionary of Legendre polynomials Ψ_Legendre.
For our numerical experiments we chose d = 10, g = 5 and ρ_max = 3 and again compare the reconstruction with respect to S_g^d, the block-sparse TT representations of S_{g,ρ_max}^d and S_{g,ρ_max}^{d,aug}, and a dense TT representation T_r(V_p^d) with rank r ≤ 14. Admittedly, the choice d = 10 is relatively small for this problem but was necessary since the computation on S_g^d took prohibitively long for larger values. A comparison of the degrees of freedom for the different ansatz spaces is given in Table 3; the relative errors to the exact solution are depicted in Figure 3. In this plot we can recognize the general pattern that a lower number of parameters can be associated with an improved sample efficiency. However, we also observe that for small M the relative error for S_{g,ρ}^d is smaller than for S_{g,ρ}^{d,aug}. We interpret this as a consequence of the regularity of u, since the alternating scheme for the optimization in S_{g,ρ}^d favors lower-degree polynomials by construction. In spite of this success, we have to point out that optimizing over S_{g,ρ}^d took about 10 times longer than optimizing over S_{g,ρ}^{d,aug}. Finally, we observe that the recovery in T_14(V_6^10) produces unexpectedly large relative errors when compared to previous results in [ENSW19]. This implies that the rank-adaptive algorithm from [ENSW19] must have a strong regularizing effect that improves the sample efficiency. The experiment for T_14(V_6^10) was stopped early at M = 1200 due to its prohibitive computational demand and because the expected behaviour is already observable.

Conclusion
We discuss the problem of function identification from data for tensor train based ansatz spaces and give some insights into when these ansatz spaces can be used efficiently. For this we combine recent results on sample complexity [EST20] and block sparsity of tensor train networks [BGP21] to motivate a novel algorithm for the problem at hand. We then demonstrate the applicability of this algorithm to different problems. Up until now only dense tensor trains were used for these recovery tasks. The numerical examples, however, demonstrate that this format cannot compete with our novel block-sparse approach. We observe that the sample complexity can be much more favorable for successful system identification with block-sparse tensor trains than with dense tensor trains or purely sparse representations. We expect that the inclusion of rank adaptivity using techniques from SALSA or bASD is straightforward and consider it an interesting direction for forthcoming papers. We expect that this would improve the numerical results even further. The introduction of rank adaptivity would moreover alleviate the problem of having to choose a block size a priori. Finally, we want to reiterate that the spaces of homogeneous polynomials are predestined for the application of least-squares recovery with an optimal sampling density (cf. [CM17]), which offers opportunities for further improvement of the sample efficiency. This leads us to the conclusion that the proposed algorithm can be applied successfully to other high-dimensional problems in which the sought function exhibits sufficient regularity.