
CORRECTION article

Front. Appl. Math. Stat., 01 July 2025

Sec. Optimization

Volume 11 - 2025 | https://doi.org/10.3389/fams.2025.1629658

This article is part of the Research Topic: Large Tensor Analysis and Applications

Corrigendum: Expectation-maximization alternating least squares for tensor network logistic regression


Naoya Yamauchi1, Hidekata Hontani1 and Tatsuya Yokota1,2*
  • 1Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan
  • 2RIKEN Center for Advanced Intelligence Project, Tokyo, Japan

A Corrigendum on
Expectation-maximization alternating least squares for tensor network logistic regression

by Yamauchi, N., Hontani, H., and Yokota, T. (2025). Front. Appl. Math. Stat. 11:1593680. doi: 10.3389/fams.2025.1593680

In the original published article, there were typographical errors in mathematical formulas (Equations 58, 59, 73, and 74). The equations were derived and implemented correctly in the computer program; however, mistakes occurred during the writing of the paper. Corrections have been made to Equations 58, 59 in Section 4.2.2 EM-ALS algorithm and Equations 73, 74 in Section 4.3.2 EM-ALS for learning multi-class TN classifiers.

Equations 58, 59, 73, and 74 previously stated:

$\hat{\beta}_m = (Z_m^\top \Omega_t Z_m)^{-1} Z_m^\top (\kappa \oslash \omega_t)$,    (58)
$\hat{\beta}_m(\epsilon) = (Z_m^\top \Omega_t Z_m + \epsilon I)^{-1} Z_m^\top (\kappa \oslash \omega_t)$,    (59)
$\hat{\beta}_m = (Z_m^\top \Omega_t Z_m)^{-1} Z_m^\top (\kappa \oslash \omega_t)$,    (73)
$\hat{\beta}_m(\epsilon) = (Z_m^\top \Omega_t Z_m + \epsilon I)^{-1} Z_m^\top (\kappa \oslash \omega)$.    (74)

The corrected Equations appear below:

$\hat{\beta}_m = (Z_m^\top \Omega_t Z_m)^{-1} Z_m^\top \kappa$,    (58)
$\hat{\beta}_m(\epsilon) = (Z_m^\top \Omega_t Z_m + \epsilon I)^{-1} Z_m^\top \kappa$,    (59)
$\hat{\beta}_m = (Z_m^\top \Omega_t Z_m)^{-1} Z_m^\top \kappa$,    (73)
$\hat{\beta}_m(\epsilon) = (Z_m^\top \Omega_t Z_m + \epsilon I)^{-1} Z_m^\top \kappa$.    (74)
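As a numerical sanity check (not part of the original article), the sketch below compares the erroneous and corrected versions of Equation 58 on random data. The matrix Z, weights ω, and working response κ are random stand-ins for the quantities defined in the paper; the corrected estimate should never attain a higher weighted least squares cost than the erroneous one.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 6                       # illustrative sizes: N samples, P parameters
Z = rng.standard_normal((N, P))    # stand-in for Z_m
omega = rng.uniform(0.5, 2.0, N)   # positive weights, stand-in for omega_hat^(t)
kappa = rng.standard_normal(N)     # stand-in for kappa
Omega = np.diag(omega)

def cost(beta):
    """Weighted LS objective || Omega^{1/2} (kappa ./ omega - Z beta) ||^2."""
    r = kappa / omega - Z @ beta
    return r @ (Omega @ r)

G = Z.T @ Omega @ Z
beta_correct = np.linalg.solve(G, Z.T @ kappa)          # corrected Eq. 58
beta_wrong = np.linalg.solve(G, Z.T @ (kappa / omega))  # erroneous Eq. 58

# beta_correct minimizes the quadratic cost; beta_wrong generally does not.
cost_correct, cost_wrong = cost(beta_correct), cost(beta_wrong)
```

With non-constant weights ω the two estimates differ, which is why the typographical error mattered for the printed formulas even though the implementation was correct.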

1 Derivation of corrections

Here we show only the derivation of Equation (58). The remaining three corrections can be derived in a similar manner.

First, Equation 58 is the weighted least squares solution for

$g(A_m, v \mid \theta^{(t)}) = \left\| \Omega_t^{1/2} (\kappa \oslash \omega_t) - \Omega_t^{1/2} (L^{(m)} \Phi^{(m)} R^{(m)}) \mathrm{vec}(A_m) - v \, \Omega_t^{1/2} \mathbf{1} \right\|_2^2,$    (57)

where we put $\Omega_t = \mathrm{diag}(\hat{\omega}_1^{(t)}, \hat{\omega}_2^{(t)}, \ldots, \hat{\omega}_N^{(t)}) \in \mathbb{R}^{N \times N}$, $\omega_t = [\hat{\omega}_1^{(t)}, \hat{\omega}_2^{(t)}, \ldots, \hat{\omega}_N^{(t)}]^\top \in \mathbb{R}^N$, $\kappa = [\kappa_1, \kappa_2, \ldots, \kappa_N]^\top \in \mathbb{R}^N$, and $\oslash$ stands for entry-wise division. Putting $\beta_m = [\mathrm{vec}(A_m)^\top, v]^\top$ and $Z_m = [(L^{(m)} \Phi^{(m)} R^{(m)}), \mathbf{1}]$, the cost function can be rewritten as

$$\begin{aligned}
&\left\| \Omega_t^{1/2} (\kappa \oslash \omega_t) - \Omega_t^{1/2} (L^{(m)} \Phi^{(m)} R^{(m)}) \mathrm{vec}(A_m) - v \, \Omega_t^{1/2} \mathbf{1} \right\|_2^2 \\
&\quad= \left\| \Omega_t^{1/2} (\kappa \oslash \omega_t) - \Omega_t^{1/2} Z_m \beta_m \right\|_2^2 \\
&\quad= \left\| \Omega_t^{1/2} (Z_m \beta_m - \kappa \oslash \omega_t) \right\|_2^2 \\
&\quad= \left\| \Omega_t^{1/2} (Z_m \beta_m - \Omega_t^{-1} \kappa) \right\|_2^2 \\
&\quad= (Z_m \beta_m - \Omega_t^{-1} \kappa)^\top \Omega_t (Z_m \beta_m - \Omega_t^{-1} \kappa) \\
&\quad= \beta_m^\top Z_m^\top \Omega_t Z_m \beta_m - 2 \beta_m^\top Z_m^\top \kappa + \kappa^\top \Omega_t^{-1} \kappa.
\end{aligned}$$

Note that $\kappa \oslash \omega_t = \Omega_t^{-1} \kappa$. The optimality condition is given by

$$\left. \frac{\partial g}{\partial \beta_m} \right|_{\beta_m = \hat{\beta}_m} = 2 Z_m^\top \Omega_t Z_m \hat{\beta}_m - 2 Z_m^\top \kappa = 0.$$    (1)
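The gradient above can be checked against finite differences of the quadratic cost. This is an illustrative sketch, not code from the article; Z, ω, and κ are random stand-ins for the paper's quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 40, 5
Z = rng.standard_normal((N, P))
omega = rng.uniform(0.5, 2.0, N)   # positive weights
kappa = rng.standard_normal(N)
Omega = np.diag(omega)

def g(beta):
    """g(beta) = (Z beta - Omega^{-1} kappa)^T Omega (Z beta - Omega^{-1} kappa)."""
    r = Z @ beta - kappa / omega
    return r @ (Omega @ r)

beta = rng.standard_normal(P)

# Analytic gradient from the optimality condition: 2 Z^T Omega Z beta - 2 Z^T kappa.
grad_analytic = 2 * Z.T @ Omega @ Z @ beta - 2 * Z.T @ kappa

# Central finite differences along each coordinate direction.
eps = 1e-6
grad_fd = np.array([
    (g(beta + eps * e) - g(beta - eps * e)) / (2 * eps)
    for e in np.eye(P)
])
err = np.max(np.abs(grad_analytic - grad_fd))
```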

Finally, the minimizer is given in closed form as

$$\hat{\beta}_m = (Z_m^\top \Omega_t Z_m)^{-1} Z_m^\top \kappa.$$    (58)
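The closed form can also be verified against a generic least squares solver: since $\Omega_t^{1/2}(Z_m \beta_m - \Omega_t^{-1}\kappa) = \Omega_t^{1/2} Z_m \beta_m - \Omega_t^{-1/2}\kappa$, the problem is an ordinary least squares fit of $\Omega_t^{1/2} Z_m \beta \approx \Omega_t^{-1/2} \kappa$. A minimal sketch with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 60, 4
Z = rng.standard_normal((N, P))
omega = rng.uniform(0.5, 2.0, N)
kappa = rng.standard_normal(N)
sqrt_w = np.sqrt(omega)

# Closed form (corrected Eq. 58): (Z^T Omega Z)^{-1} Z^T kappa.
beta_closed = np.linalg.solve(Z.T @ (omega[:, None] * Z), Z.T @ kappa)

# Same problem posed as plain LS:  Omega^{1/2} Z beta ≈ Omega^{-1/2} kappa.
beta_lstsq, *_ = np.linalg.lstsq(sqrt_w[:, None] * Z, kappa / sqrt_w, rcond=None)

diff = np.max(np.abs(beta_closed - beta_lstsq))
```

Both routes give the same minimizer up to floating-point error, confirming that the weight vector must not divide κ a second time in the closed form.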

The original article has been updated.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: expectation-maximization (EM), majorization-minimization (MM), alternating least squares (ALS), tensor networks, tensor train, logistic regression, Pólya-Gamma (PG) augmentation

Citation: Yamauchi N, Hontani H and Yokota T (2025) Corrigendum: Expectation-maximization alternating least squares for tensor network logistic regression. Front. Appl. Math. Stat. 11:1629658. doi: 10.3389/fams.2025.1629658

Received: 16 May 2025; Accepted: 03 June 2025;
Published: 01 July 2025.

Edited and reviewed by: Yannan Chen, South China Normal University, China

Copyright © 2025 Yamauchi, Hontani and Yokota. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tatsuya Yokota, t.yokota@nitech.ac.jp
