Abstract
Accurate prediction of Quality of Service (QoS) plays a crucial role in service recommendation and selection across large-scale distributed environments. Latent factor (LF) models have become a mainstream solution for QoS prediction owing to their simplicity and scalability, yet typical formulations struggle to capture complex latent interactions and usually rely on manually tuned regularization, which often limits prediction accuracy. To address these challenges, we propose an Adaptive Core-Enhanced Latent Factor (ACELF) model that integrates a learnable core interaction mechanism with an incremental Proportional-Integral-Derivative (PID)-driven adaptive regularization strategy. Specifically, a learnable core interaction matrix is introduced to model interactions between latent user and service factors, enabling richer representation learning beyond standard bilinear assumptions. To further enhance robustness, we design an incremental PID controller that dynamically adjusts the regularization coefficient of the core interaction matrix according to the training dynamics, allowing the optimization process to automatically balance model expressiveness and overfitting. Extensive experiments on real-world QoS datasets demonstrate that ACELF consistently outperforms several state-of-the-art methods in terms of prediction accuracy.
1 Introduction
With the rapid proliferation of service-oriented and cloud-based applications (Syed et al., 2025; Sadat and Dai, 2025; Boiko et al., 2024; Saad et al., 2024; Mohamed Hadjkouider et al., 2024; Jia et al., 2024), users increasingly rely on distributed service platforms to select and invoke online services. In such ecosystems, Quality of Service (QoS) information—such as response time, throughput, and availability—plays a fundamental role in determining user satisfaction and system performance (Chen et al., 2025; Yang et al., 2025; Ghafouri et al., 2022; Zhang et al., 2024; Cao et al., 2024; Wang et al., 2019). However, directly obtaining complete QoS measurements is often infeasible due to the dynamic, heterogeneous, and large-scale nature of service environments (Gnanasekaran et al., 2022; Huang et al., 2022; Jia et al., 2023). Consequently, accurate QoS prediction has become a critical research problem and has attracted significant attention in both industry and academia (Syu and Wang, 2021; Kimbugwe et al., 2021; Zheng et al., 2022).
Among various QoS prediction techniques, latent factor (LF) models have emerged as one of the most effective and scalable paradigms (Wu et al., 2023b; Uta et al., 2024; Zhang et al., 2022; Merabet and Benmerzoug, 2022; Chen et al., 2022b, 2024b; Luo et al., 2021; Wu et al., 2023a; Chen et al., 2022a; Wu et al., 2022; Qin et al., 2024). Figure 1 illustrates the basic idea of LF models for QoS prediction. By embedding users and services into a shared low-dimensional latent space, LF models learn compact representations that enable efficient QoS estimation even under sparse observations (Jawabreh and Taweel, 2024). Wu et al. (2023b) proposed D2E-LF, an ensemble LF model that combines inner-product and distance spaces with both L1 and L2 losses. Merabet and Benmerzoug (2022) proposed an Auto-NF framework that reduces sparsity via neighbor clustering and mitigates overfitting through an autoencoder-based LF selection mechanism. Qin et al. (2024) proposed a series of adaptively accelerated LF models. Chen et al. (2022a) proposed a context-aware LF model that captures low- and high-order interactions among users, services, and contextual features. Luo et al. (2021) proposed multiple extended-stochastic-gradient-based optimizers to enhance the convergence performance of LF models. Chen et al. (2024b) proposed a non-negative LF model enhanced with generalized Nesterov acceleration and particle-swarm-based hyperparameter adaptation. Xu et al. (2023) proposed an extended-linear-biases LF model with self-adaptive bias scaling. Despite their effectiveness, conventional LF formulations generally employ a simple bilinear interaction between latent vectors (Jawabreh and Taweel, 2024; Ahmadian et al., 2025). This constraint limits their ability to capture more complex, higher-order dependencies that often exist in real-world QoS data. Recent advances in graph learning have highlighted the importance of modeling complex interactions and dependencies in sparse and high-dimensional data (Bi et al., 2025a; Zhou et al., 2024; He et al., 2021, 2024; Bi et al., 2025b). For example, Bi et al. (2025a) proposed a dynamic graph mixer framework to capture coupled interactions in nonstandard tensor data. Zhou et al. (2024) developed a general representation learning framework that emphasizes structured interaction modeling. He et al. (2021, 2024) introduced advanced interaction operators for graph-based representation learning, enabling more expressive modeling of complex relational patterns. While these approaches are highly expressive, they rely on complex architectures and incur substantial computational overhead, which limits their scalability for large-scale QoS prediction.
Figure 1
In this work, we propose an Adaptive Core-Enhanced Latent Factor (ACELF) model to directly address the above challenges by jointly enriching interaction modeling and enabling dynamic regularization control. ACELF introduces a learnable core interaction matrix to modulate the interactions between latent user and service factors. This design greatly increases the flexibility of the representation space and allows the model to capture non-bilinear relationships that cannot be expressed by traditional LF methods. To regulate this expressive core while preventing overfitting, we develop an incremental Proportional–Integral–Derivative (PID) controller that automatically adjusts the regularization coefficient of the core interaction matrix during training. Unlike fixed or manually tuned regularization, the PID mechanism derives its adjustment from the optimization dynamics, providing a principled and responsive way to maintain training stability and model generalization.
The main contributions of this work can be summarized as follows:
We propose ACELF, a novel latent factor model enhanced with a learnable and lightweight core interaction matrix to capture complex user–service interaction patterns.
We introduce an incremental PID-based adaptive regularization strategy that dynamically tunes the core interaction matrix's regularization strength based on training feedback.
We provide comprehensive empirical evaluations demonstrating that ACELF achieves superior accuracy and stability compared with state-of-the-art LF models.
The remainder of this paper is organized as follows. Section 2 provides preliminaries, including the LF model for QoS prediction, SGD-based learning of LF models, and the PID controller. Section 3 presents the proposed ACELF model and its optimization procedure. Section 4 reports experimental results and analysis. Section 5 concludes the paper.
2 Preliminaries
In this section, we briefly introduce the basic LF model for QoS prediction, its stochastic gradient descent (SGD) based learning procedure, and the PID controller. Table 1 summarizes the main notations used in this paper.
Table 1
| Symbol | Description |
|---|---|
| 𝒰, 𝒮 | Sets of users and services |
| M, N | Numbers of users and services |
| R | Partially observed QoS matrix |
| rus | Observed QoS value of user u on service s |
| r̂us | Predicted QoS value of user u on service s |
| Ω | Index set of observed entries in R |
| Ωtr, Ωval, Ωte | Training, validation, and test index sets |
| d | Latent dimension |
| pu, qs | Latent vectors of user u and service s |
| P, Q | Stacked user/service latent factor matrices |
| G | Core interaction matrix modeling cross-dimension interactions |
| μ, bu, cs | Global mean, user bias, and service bias |
| ε | Objective function |
| λ | Regularization coefficient in standard LF models |
| λP, λQ | Regularization coefficients for P and Q |
| λG | Regularization coefficient for G |
| λG(k) | Value of λG at epoch k |
| λmin, λmax | Lower and upper bounds for clipping λG |
| eus | Per-entry prediction residual |
| Jk | Validation performance indicator at epoch k |
| J* | Reference value for PID control |
| ek | Control error at epoch k |
| Kp, Ki, Kd | PID proportional, integral, and derivative gains |
| ΔλG(k) | Incremental PID update of λG at epoch k |
| η, ηP, ηQ, ηG | Learning rates (general / for P, Q, G) |
| ∥·∥2, ∥·∥F | Euclidean norm and Frobenius norm |
| 〈·, ·〉F | Frobenius inner product |
Summary of main notations used in this paper.
2.1 LF Model for QoS prediction
Let 𝒰 = {1, 2, …, M} denote the set of users and 𝒮 = {1, 2, …, N} denote the set of services. We use R ∈ ℝM×N to represent the partially observed QoS matrix, where rus is the observed QoS value (e.g., response time or throughput) of user u on service s. The index set of observed entries is denoted by

Ω = {(u, s) : rus is observed}.
LF models assume that each user u and each service s can be embedded into a shared d-dimensional latent space (Ahmadian et al., 2025; Lin et al., 2025b). Specifically, we associate user u with a latent vector pu ∈ ℝd and service s with a latent vector qs ∈ ℝd. The predicted QoS value is then given by a bilinear interaction:

r̂us = puᵀqs,    (1)

where r̂us denotes the predicted QoS of user u on service s.
Let matrices P ∈ ℝM×d and Q ∈ ℝN×d collect all user and service latent factors, respectively. A common learning approach is to minimize the following regularized squared loss:

ε(P, Q) = Σ(u,s)∈Ω (rus − puᵀqs)² + λ(∥P∥F² + ∥Q∥F²),    (2)
where λ > 0 is a regularization coefficient that helps prevent overfitting by penalizing large latent factor norms.
2.2 SGD-based learning of latent factor models
SGD is a standard optimization method for large-scale machine learning problems (Wang and Joshi, 2021; Tian et al., 2023). Consider an objective function of the form

F(θ) = (1/n) Σi=1…n ℓ(θ; ξi),

where θ denotes the model parameters and ℓ(θ; ξi) is the loss associated with a single training sample ξi. Instead of computing the full gradient over all samples, SGD updates the parameters using one randomly sampled training instance at each iteration. The basic SGD update rule is

θt+1 = θt − ηt∇θℓ(θt; ξt),
where ηt > 0 is the learning rate at iteration t, and ξt is a randomly chosen sample at iteration t.
For the LF objective in (2), we focus on a single observed entry (u, s) ∈ Ω. Define the prediction error

eus = rus − puᵀqs.

The gradients of (2) with respect to pu and qs contributed by this entry are

∂ε/∂pu = −2(eusqs − λpu),    (6)
∂ε/∂qs = −2(euspu − λqs).    (7)

Applying the general SGD rule to (6)–(7), the parameter updates for a sampled pair (u, s) ∈ Ω with learning rate η > 0 become

pu ← pu + η(eusqs − λpu),
qs ← qs + η(euspu − λqs),

where the constant factor 2 is absorbed into η.
By iterating these updates over multiple passes through the observed entries, the latent factors P and Q are learned in a scalable and efficient manner.
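To make the procedure concrete, the following minimal NumPy sketch performs one such pass over the observed entries. It is illustrative only: the function name `sgd_epoch_lf` and the entry format are our own choices, and the constant factor 2 from the squared-loss gradient is absorbed into the learning rate, as above.

```python
import numpy as np

def sgd_epoch_lf(entries, P, Q, eta=0.01, lam=0.05, rng=None):
    """One SGD pass over observed entries of the bilinear LF model.

    entries: list of (u, s, r_us) triples, the observed QoS values.
    P: (M, d) user factors; Q: (N, d) service factors, updated in place.
    """
    rng = rng or np.random.default_rng()
    for idx in rng.permutation(len(entries)):
        u, s, r = entries[idx]
        pu, qs = P[u].copy(), Q[s].copy()
        e = r - pu @ qs                         # residual e_us
        P[u] = pu + eta * (e * qs - lam * pu)   # update p_u
        Q[s] = qs + eta * (e * pu - lam * qs)   # update q_s
```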
2.3 PID controller
The PID controller is a classical and widely used feedback control mechanism in automatic control systems (Jamil et al., 2022; Borase et al., 2021; Amertet et al., 2024). Its goal is to regulate a process variable by minimizing the deviation between a desired setpoint and the actual system output through three components: proportional (P), integral (I), and derivative (D).
Let y(t) denote the measured process variable at time t, and let y* denote the desired setpoint. The control error is defined as

e(t) = y* − y(t).

The continuous-time PID control law is given by

u(t) = Kpe(t) + Ki∫0t e(τ)dτ + Kd(de(t)/dt),
where u(t) is the control signal, and Kp, Ki, and Kd are the proportional, integral, and derivative gains, respectively. The proportional term reacts to the current error, the integral term accumulates past errors to eliminate steady-state bias, and the derivative term anticipates future trends by considering the rate of change of the error.
In discrete-time settings with sampling index k, the error at step k is denoted by ek. A commonly used incremental form of the discrete PID controller updates the control signal via

Δuk = Kp(ek − ek−1) + Kiek + Kd(ek − 2ek−1 + ek−2),

where uk is the control signal at step k, and Δuk denotes its increment. The new control signal is then obtained by

uk = uk−1 + Δuk.
PID controllers have been extensively applied in various engineering domains, such as process control, robotics, and industrial automation, due to their simplicity, robustness, and ease of implementation.
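For reference, the incremental form above takes only a few lines to implement. The sketch below is a generic helper (the class name `IncrementalPID` is our own choice), with the historical error terms initialized to zero; it is reused in the λG update of Section 3.2.

```python
class IncrementalPID:
    """Incremental (velocity-form) discrete PID controller.

    Produces the increment
        du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}),
    so the control signal is updated as u_k = u_{k-1} + du_k.
    """

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_prev = 0.0    # e_{k-1}, initialized to zero
        self.e_prev2 = 0.0   # e_{k-2}, initialized to zero

    def increment(self, e_k):
        du = (self.kp * (e_k - self.e_prev)
              + self.ki * e_k
              + self.kd * (e_k - 2.0 * self.e_prev + self.e_prev2))
        self.e_prev2, self.e_prev = self.e_prev, e_k
        return du
```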
3 Proposed method
In this section, we present the proposed Adaptive Core-Enhanced Latent Factor (ACELF) model for QoS prediction. Figure 2 illustrates the overall framework of ACELF. We first introduce the core-enhanced latent factor formulation that incorporates a learnable interaction structure between users and services, and then design an incremental PID–based strategy to adaptively tune the regularization strength of the core interaction matrix. Finally, we derive the optimization algorithm based on SGD and analyze its computational properties.
Figure 2
3.1 Core-enhanced latent factor model
Traditional LF models predict the QoS value between user u and service s by (1), which assumes that the interaction across latent dimensions is strictly one-to-one: the i-th dimension of pu only interacts with the i-th dimension of qs (Wu et al., 2025a). Such a restriction may be too rigid to model complex user–service relationships (Wu et al., 2024).
To enhance the expressiveness of the interaction mechanism, we introduce a learnable core interaction matrix

G ∈ ℝd×d,

and define the prediction as

r̂us = puᵀGqs.    (15)

Equivalently, (15) can be written in component-wise form as

r̂us = Σi=1…d Σj=1…d pu,iGijqs,j,    (16)

where pu,i and qs,j denote the i-th and j-th components of pu and qs, respectively, and Gij is the (i, j)-th entry of G.
From (16), we see that G explicitly models cross-dimension interactions: the i-th latent dimension of the user can interact with the j-th latent dimension of the service with strength Gij. This generalizes the standard LF model in the following sense:
If G = Id (identity matrix), then

r̂us = puᵀIdqs = puᵀqs,
and ACELF reduces exactly to the conventional LF model.
If G is restricted to be diagonal, i.e., G = diag(γ) with γ ∈ ℝd, then

r̂us = Σi=1…d γipu,iqs,i,

which corresponds to an LF model with dimension-wise reweighting of latent interactions.
With a full matrix G, ACELF can capture arbitrary linear mixing between latent dimensions, substantially increasing representational capacity compared with standard LF models.
By stacking user and service latent factors into matrices

P = [p1, …, pM]ᵀ ∈ ℝM×d,  Q = [q1, …, qN]ᵀ ∈ ℝN×d,

the predictions for all user–service pairs can be written in a compact matrix form as

R̂ = PGQᵀ,    (20)

where R̂ ∈ ℝM×N contains predicted QoS values. Viewed as a factorization with modes corresponding to users, services, and an (implicit) interaction mode, (20) can be interpreted as a degenerate Tucker-style factorization (Song et al., 2020; Wu et al., 2025c), where P and Q play the role of factor matrices and G serves as the core tensor along the latent dimension.
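As a quick numerical illustration, the compact form and its special cases can be checked directly; the sketch below uses toy dimensions of our own choosing and is not tied to the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)
M, N, d = 4, 6, 3                       # toy sizes, for illustration only
P = rng.normal(size=(M, d))             # user latent factors
Q = rng.normal(size=(N, d))             # service latent factors
G = rng.normal(size=(d, d))             # core interaction matrix

R_hat = P @ G @ Q.T                     # compact form: R_hat = P G Q^T

# The entry-wise form agrees with the compact form: r_hat_us = p_u^T G q_s.
u, s = 1, 4
assert np.isclose(R_hat[u, s], P[u] @ G @ Q[s])

# With G = I_d, the model reduces to the standard bilinear LF prediction.
assert np.allclose(P @ np.eye(d) @ Q.T, P @ Q.T)
```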
In many practical implementations, one may also include bias terms, such as a global mean μ, user bias bu, and service bias cs, leading to

r̂us = μ + bu + cs + puᵀGqs.

For clarity of exposition, we focus on the interaction term and omit bias terms in the following derivations; they can be incorporated straightforwardly if needed.
Objective Function. Given the prediction model in (15), we define the following loss over the observed entries:

ε(P, Q, G) = Σ(u,s)∈Ω (rus − puᵀGqs)² + λP∥P∥F² + λQ∥Q∥F² + λG∥G∥F²,    (22)

where λP, λQ, and λG are nonnegative regularization coefficients. The terms ∥P∥F² and ∥Q∥F² constrain the magnitude of user and service latent factors, similar to standard LF models. The term ∥G∥F² controls the complexity of the interaction structure. A large λG encourages G toward small-norm solutions (e.g., close to the zero matrix), effectively simplifying the interaction pattern, whereas a small λG allows more complex cross-dimension interactions at the risk of overfitting.
The introduction of G brings about an intrinsic trade-off between model capacity and generalization. Compared with traditional LF (which corresponds to the special case G = Id with no additional parameters), ACELF introduces d2 extra parameters in G. Properly regularizing G is thus crucial for avoiding overfitting, especially when d is moderate or large. This observation motivates the adaptive treatment of λG, rather than fixing it manually.
Interpretability. Using the Frobenius inner product 〈·, ·〉F, the core-enhanced prediction in (16) can be rewritten as

r̂us = 〈G, puqsᵀ〉F,
which indicates that the core interaction matrix G acts as a global interaction kernel over the outer products of user and service latent representations.
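This identity is easy to verify numerically; a minimal sketch (toy values of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
p_u, q_s = rng.normal(size=d), rng.normal(size=d)
G = rng.normal(size=(d, d))

bilinear = p_u @ G @ q_s                    # p_u^T G q_s
frobenius = np.sum(G * np.outer(p_u, q_s))  # <G, p_u q_s^T>_F
assert np.isclose(bilinear, frobenius)
```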
From an interpretability perspective, each entry of G can be viewed as an interaction weight between latent dimensions of users and services. A large positive value of Gij suggests that a strong preference of a user on the i-th latent dimension, together with a strong attribute of a service on the j-th latent dimension, is likely to yield a higher QoS value. In contrast, negative values may indicate antagonistic interactions between the corresponding latent factors. Such patterns help reveal which latent dimensions cooperate or conflict in shaping QoS outcomes.
From a practical standpoint, the learned core interaction matrix provides meaningful insights for QoS analysis and service recommendation. By examining rows or columns of G, one can identify dominant latent dimensions and understand how different user preferences and service characteristics jointly influence QoS performance at a global level. Latent dimensions associated with consistently large interaction weights may correspond to critical service attributes or user sensitivity patterns that strongly affect QoS. Compared with standard bilinear LF models, whose interpretability is largely limited to individual latent factors, the core-enhanced formulation enables analysis of cross-dimension interactions. This allows not only identifying important latent factors but also understanding how combinations of user and service properties interact to impact QoS. Such interpretability can support QoS-aware service recommendation, system diagnosis, and service optimization, providing insights beyond pure prediction accuracy.
3.2 Incremental PID-based adaptive regularization
As discussed above, the regularization coefficient λG plays a central role in controlling the complexity of the core interaction structure. A small λG may lead to overfitting, while an overly large λG may underuse the expressive power of G. Fixing λG to a constant value chosen by grid search does not exploit the dynamic feedback available during training. A straightforward alternative is to employ heuristic or predefined regularization schedules, such as monotonically decreasing or piecewise constant rules. However, such open-loop strategies are fixed before training and cannot respond to the evolving optimization dynamics. In practice, the tendency toward overfitting or underfitting may vary across different training stages and datasets, making manually designed schedules suboptimal and sensitive to hyperparameter choices.
To address this issue, we treat λG as a time-varying control variable that is adjusted according to the training dynamics. Let the training process be indexed by discrete steps or epochs k = 1, 2, …. At each step k, we compute a scalar performance indicator Jk of the current model. A typical choice is the average validation loss over a held-out set:

Jk = (1/|Ωval|) Σ(u,s)∈Ωval (rus − r̂us(k))²,

where Ωval is a disjoint validation index set, and r̂us(k) denotes the prediction for entry (u, s) at epoch k.

We set a reference value J* (commonly zero) and define the control error

ek = Jk − J*.
We then employ the incremental PID controller introduced in the preliminaries to update the regularization coefficient λG.
Specifically, we regard λG(k) as the control signal at step k and write the incremental PID update as

ΔλG(k) = Kp(ek − ek−1) + Kiek + Kd(ek − 2ek−1 + ek−2),    (26)

where Kp, Ki, and Kd are the proportional, integral, and derivative gains, respectively. At the beginning of training, the historical error terms for the PID controller are initialized to zero. The new regularization coefficient is then obtained as

λG(k) = λG(k−1) + ΔλG(k).

To keep λG(k) within a reasonable range and maintain numerical stability, we apply clipping:

λG(k) ← min(max(λG(k), λmin), λmax),
where λmin ≥ 0 and λmax > λmin are predefined bounds.
The three terms in (26) have complementary effects:
The proportional term Kp(ek − ek−1) reacts to the change in error and provides a direct correction proportional to the most recent deviation.
The integral term Kiek accumulates the error and reduces long-term bias in λG, preventing it from staying in a region that yields persistently poor validation performance.
The derivative term Kd(ek − 2ek−1 + ek−2) captures the curvature of the error trajectory and can damp rapid oscillations in λG, improving the stability of the training process.
By embedding this closed-loop control into the learning procedure, ACELF can automatically adapt λG to the current stage of optimization. When the model tends to overfit (e.g., validation loss increases), the controller can increase λG; when the model is too rigid (e.g., validation loss decreases slowly), the controller can relax the regularization to allow more expressive interactions.
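A minimal sketch of this closed-loop update is given below; it reuses the `IncrementalPID` helper from Section 2.3, and the function name `update_lambda_G` is our own illustrative choice.

```python
def update_lambda_G(pid, lam_G, J_k, J_star=0.0, lam_min=0.0, lam_max=1.0):
    """One closed-loop update of the core regularization coefficient.

    pid: an IncrementalPID instance (see the sketch in Section 2.3).
    J_k: validation indicator at epoch k (e.g., average validation loss).
    Returns the new lambda_G, clipped to [lam_min, lam_max].
    """
    e_k = J_k - J_star                  # control error e_k = J_k - J*
    lam_G = lam_G + pid.increment(e_k)  # incremental PID update
    return min(max(lam_G, lam_min), lam_max)
```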
3.3 Optimization algorithm
Gradient Derivation. We now derive the SGD updates for ACELF with the objective function in (22). For a single observed entry (u, s) ∈ Ω, the prediction error under the core-enhanced model is

eus = rus − puᵀGqs.

The contribution of (u, s) to the data-fitting term is

εus = (rus − puᵀGqs)² = eus².

Taking derivatives, we obtain the gradients of (22) w.r.t. pu, qs, and G as

∂ε/∂pu = −2eusGqs + 2λPpu,    (31)
∂ε/∂qs = −2eusGᵀpu + 2λQqs,    (32)
∂ε/∂G = −2euspuqsᵀ + 2λGG.    (33)
The first terms on the right-hand side correspond to the gradient of the squared loss, and the second terms arise from the ℓ2 regularization.
SGD-based Learning Scheme. Let ηP, ηQ, and ηG denote the learning rates for pu, qs, and G, respectively. Using the general SGD rule on (31)–(33), the parameter updates for a sampled observed entry (u, s) ∈ Ω are

pu ← pu + ηP(eusGqs − λPpu),    (34)
qs ← qs + ηQ(eusGᵀpu − λQqs),    (35)
G ← G + ηG(euspuqsᵀ − λGG),    (36)

where the constant factor 2 is absorbed into the learning rates, and λG in (36) is the current value λG(k) provided by the PID controller at epoch k.
The complete training algorithm of ACELF is summarized in Algorithm 1. The inner loop performs SGD updates over the observed entries, while the outer loop updates the regularization coefficient λG using the incremental PID controller.
Algorithm 1
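To complement Algorithm 1, the following minimal sketch outlines the nested structure of the training loop. It is our own illustrative code, reusing the `IncrementalPID` and `update_lambda_G` helpers from earlier sections; initialization choices such as starting G at the identity are assumptions, not the paper's prescription.

```python
import numpy as np

def train_acelf(entries_tr, entries_val, M, N, d=20,
                eta_P=1e-3, eta_Q=1e-3, eta_G=1e-3,
                lam_P=5e-2, lam_Q=5e-2, lam_G=5e-2,
                K_max=500, pid=None, seed=0):
    """Sketch of ACELF training: inner SGD loop, outer PID update of lambda_G.

    entries_tr / entries_val: lists of (u, s, r_us) triples.
    Constant gradient factors are absorbed into the learning rates.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.normal(size=(M, d))
    Q = 0.1 * rng.normal(size=(N, d))
    G = np.eye(d)  # start from the bilinear special case (our assumption)

    for k in range(K_max):
        # Inner loop: SGD over observed training entries, Eqs. (34)-(36).
        for idx in rng.permutation(len(entries_tr)):
            u, s, r = entries_tr[idx]
            pu, qs = P[u].copy(), Q[s].copy()
            e = r - pu @ G @ qs                              # residual e_us
            P[u] = pu + eta_P * (e * (G @ qs) - lam_P * pu)
            Q[s] = qs + eta_Q * (e * (G.T @ pu) - lam_Q * qs)
            G += eta_G * (e * np.outer(pu, qs) - lam_G * G)

        # Outer loop: PID-driven update of lambda_G from validation loss.
        J_k = np.mean([(r - P[u] @ G @ Q[s]) ** 2 for u, s, r in entries_val])
        if pid is not None:
            lam_G = update_lambda_G(pid, lam_G, J_k)
    return P, Q, G
```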

3.4 Computational and modeling analysis
We briefly analyze both the computational complexity and modeling characteristics of ACELF.
Computational Complexity. For a single observed entry (u, s), computing r̂us = puᵀGqs requires forming either Gqs or Gᵀpu, both costing O(d²) operations, followed by one inner product of cost O(d). The gradient computations and updates in (34)–(36) are also dominated by forming and scaling G, which are O(d²). Therefore, the per-entry complexity is O(d²).
Let |Ω| denote the number of observed QoS entries. A full pass (epoch) over the data costs

O(|Ω|d²).
The additional overhead for the PID update of λG is O(1) per epoch, which is negligible compared to the SGD updates. In contrast, a standard LF model with bilinear interactions has per-entry complexity O(d); ACELF thus trades a moderate increase in computational cost for stronger representational power.
Memory Complexity. The memory cost is dominated by storing P ∈ ℝM×d, Q ∈ ℝN×d, and G ∈ ℝd×d, resulting in

O((M + N)d + d²)

space. When d is moderate, the additional d² parameters for G are typically affordable in modern QoS prediction scenarios.
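As a concrete example under the experimental setting of Section 4 (M = 339 users, N = 5,825 services, d = 20), ACELF stores (339 + 5,825) × 20 = 123,280 latent-factor parameters plus only 20² = 400 core parameters, so the overhead introduced by G is below 0.5% of the total model size.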
Modeling Perspective. From a modeling standpoint, ACELF strictly generalizes standard LF and provides a continuous spectrum of models controlled by λG. When λG is very large, the regularization term forces G toward a small-norm matrix (e.g., close to the zero matrix or a scaled identity), and the model behaves similarly to a low-capacity LF model. When λG is very small, G can deviate significantly from identity and learn complex cross-dimension interactions, which increases expressiveness but may overfit the training data. By coupling ACELF with an incremental PID controller, the model can automatically adjust λG according to validation feedback, aiming to achieve a favorable balance between these two extremes during training.
4 Experiments
In this section, we conduct empirical studies to evaluate the effectiveness of the proposed ACELF model. We first describe the experimental settings, including datasets, evaluation metrics, baselines, and implementation details. Then we report and analyze the experimental results.
4.1 Experimental settings
4.1.1 Datasets
We evaluate ACELF on the WS-DREAM dataset, which contains QoS records collected from 339 users invoking 5,825 Web services (Lebib and Kichou, 2024). The QoS values include response time (RT) and throughput (TP), and thus the original user–service interactions can be decomposed into two matrices: RTData and TPData.
RTData contains 1,873,838 observed entries, and TPData contains 1,831,253 observed entries. To reduce the influence of heavy-tailed distributions and improve numerical stability, we apply a logarithmic scaling to the QoS values in both RTData and TPData before model training and evaluation.
To study the performance under different sparsity levels, we randomly sample observed entries from RTData and TPData at three sampling ratios (0.05, 0.075, and 0.10), forming six dataset subsets in total: RT-0.05, RT-0.075, RT-0.10, TP-0.05, TP-0.075, and TP-0.10. The statistics of these subsets are summarized in Table 2.
Table 2
| Dataset | #Users M | #Services N | #Observations |Ω| | Sparsity (%) |
|---|---|---|---|---|
| RT-0.05 | 339 | 5825 | 93,692 | 95.26 |
| RT-0.075 | 339 | 5825 | 140,538 | 92.88 |
| RT-0.10 | 339 | 5825 | 187,384 | 90.51 |
| TP-0.05 | 339 | 5825 | 91,563 | 95.36 |
| TP-0.075 | 339 | 5825 | 137,344 | 93.04 |
| TP-0.10 | 339 | 5825 | 183,125 | 90.73 |
Statistics of the QoS datasets used in our experiments.
4.1.2 Train–validation–test split
To evaluate the generalization performance, we randomly split the observed QoS entries Ω of each subset into three disjoint sets: a training set Ωtr used to learn model parameters, a validation set Ωval used to tune hyperparameters and drive the PID controller, and a test set Ωte used only for final performance reporting.
To reduce randomness, we repeat the random splitting process multiple times and report the average results over all runs.
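A minimal sketch of this splitting protocol is shown below; the helper name `split_entries` is our own, and the `ratios` default is a placeholder rather than the paper's exact split.

```python
import numpy as np

def split_entries(entries, ratios=(0.7, 0.1, 0.2), seed=0):
    """Randomly split observed QoS entries into disjoint train/val/test sets.

    ratios: (train, validation, test) fractions; placeholder values only.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(entries))
    n_tr = int(ratios[0] * len(entries))
    n_val = int(ratios[1] * len(entries))
    train = [entries[i] for i in idx[:n_tr]]
    val = [entries[i] for i in idx[n_tr:n_tr + n_val]]
    test = [entries[i] for i in idx[n_tr + n_val:]]
    return train, val, test
```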
4.1.3 Evaluation metrics
We use standard regression metrics to assess the prediction quality of QoS values (Wu et al., 2025b; Sun and Liu, 2025; Lei et al., 2024; Lyu et al., 2026; Lin et al., 2025a; Liao et al., 2025). Given the ground-truth QoS rus and the corresponding prediction r̂us on the test set Ωte, we compute:

Mean Absolute Error (MAE):

MAE = (1/|Ωte|) Σ(u,s)∈Ωte |rus − r̂us|.

Root Mean Squared Error (RMSE):

RMSE = √( (1/|Ωte|) Σ(u,s)∈Ωte (rus − r̂us)² ).

Mean Squared Error (MSE):

MSE = (1/|Ωte|) Σ(u,s)∈Ωte (rus − r̂us)².
Lower values of these metrics indicate better predictive performance.
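These three metrics can be computed in a few lines; a minimal NumPy sketch (the helper name `qos_metrics` is our own):

```python
import numpy as np

def qos_metrics(y_true, y_pred):
    """Compute MAE, RMSE, and MSE over test-set predictions."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    return mae, rmse, mse
```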
4.1.4 Implementation details
We set the latent dimension to d = 20 for all experiments considering the computational complexity and model capacity (Yuan et al., 2025; Chen et al., 2024a). The PID gains are empirically set to Kp = 0.01, Ki = 0.005, and Kd = 0.001 (Li et al., 2025).
We determine the initial learning rate and regularization strength for G via grid search over a predefined candidate set. The best hyperparameters on TPData subsets are η = 10−2 and λ = 5 × 10−2, while on RTData subsets they are η = 10−3 and λ = 5 × 10−2. In ACELF, we apply λ to the latent factors (i.e., λP = λQ = λ), and we initialize the core regularization as λG(0) = λ.
During training, λG is adaptively updated by the incremental PID controller, and we clip it to a predefined range [λmin, λmax] for stability.

We run each method for at most Kmax epochs. To ensure stable convergence and avoid overfitting, we adopt an early-stopping strategy based on the validation indicator Jk (the average validation loss). Training stops if Jk does not improve for P consecutive epochs, or if the relative improvement of Jk falls below a small threshold ϵ, i.e.,

(Jk−1 − Jk)/Jk−1 < ϵ.
In our experiments, we set Kmax = 500, P = 5, and ϵ = 10−4.
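A minimal sketch of this stopping rule (the helper name `should_stop` and the history bookkeeping are our own illustrative choices):

```python
def should_stop(J_history, patience=5, eps=1e-4):
    """Early-stopping check on the validation indicator J_k.

    Stop when the relative improvement of J_k falls below eps, or when
    J_k has not improved for `patience` consecutive epochs.
    """
    if len(J_history) < 2:
        return False
    prev, curr = J_history[-2], J_history[-1]
    if (prev - curr) / abs(prev) < eps:    # relative improvement too small
        return True
    if len(J_history) > patience:
        best_before = min(J_history[:-patience])
        if min(J_history[-patience:]) >= best_before:
            return True                    # no improvement for `patience` epochs
    return False
```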
4.1.5 Baselines
We compare ACELF with five representative LF-based QoS prediction baselines, covering adaptive hyperparameter tuning, PID-driven adaptation, accelerated optimization, and constrained factorization:
PSLF (PSO-Adaptive Latent Factor) (Qin et al., 2024): An adaptive LF model that employs particle swarm optimization (PSO) to automatically tune key hyperparameters, including the learning rate and the regularization coefficient, during training.
MLF (Momentum-Accelerated Latent Factor) (Luo et al., 2021): A momentum-enhanced LF model that accelerates convergence by incorporating classical momentum into the SGD-style updates.
NLF (Nesterov-Accelerated Latent Factor) (Luo et al., 2021): An LF model optimized with Nesterov's accelerated gradient, which performs a look-ahead update along the momentum direction to achieve faster convergence.
PLF (PID-Adaptive Latent Factor) (Li et al., 2025): An adaptive LF model that incorporates a PID controller to regulate the training process at the sample level, where the instantaneous per-entry prediction residual is used as the control input, and the PID-adjusted error signal replaces the original residual in the SGD updates of the standard bilinear LF model.
ANLF (Adaptive Non-negative Latent Factor) (Li et al., 2023): A non-negative LF model that enforces non-negativity constraints on latent factors and adopts an adaptive learning scheme to improve robustness.
All baselines are implemented with the same latent dimension d = 20 and optimized using the same training protocol and early-stopping strategy as ACELF for fair comparison.
4.2 Comparison results and analysis
This section presents comprehensive experimental results on six WS-DREAM subsets under different sparsity levels. Figures 3–6 visualize the RMSE, MAE, MSE, and overall Avg error results reported in Table 3, where Avg is defined as (RMSE+MAE+MSE)/3 for each dataset. Table 4 reports the per-dataset improvement of ACELF over each baseline measured by Avg error. Finally, Table 5 summarizes average performance across all datasets for each metric, and Table 6 further shows the average improvement of ACELF over baselines.
Figure 3
Table 3
| Method | Metric↓ | RT-0.05 | RT-0.075 | RT-0.10 | TP-0.05 | TP-0.075 | TP-0.10 |
|---|---|---|---|---|---|---|---|
| ACELF | RMSE | **0.148353** | **0.141291** | **0.133584** | **0.273801** | **0.250409** | **0.226591** |
| | MAE | **0.083069** | **0.078701** | **0.072036** | **0.188475** | **0.165541** | **0.148113** |
| | MSE | **0.022009** | **0.019963** | **0.017845** | **0.074967** | **0.062705** | **0.051343** |
| | Avg | **0.084477** | **0.079985** | **0.074488** | **0.179081** | **0.159552** | **0.142016** |
| PSLF | RMSE | 0.163019 | 0.156700 | 0.152960 | 0.296646 | 0.259275 | 0.243900 |
| | MAE | 0.091939 | 0.089515 | 0.086507 | 0.208619 | 0.176362 | 0.164926 |
| | MSE | 0.026575 | 0.024576 | 0.023397 | 0.087999 | 0.067223 | 0.059487 |
| | Avg | 0.093844 | 0.090264 | 0.087621 | 0.197755 | 0.167620 | 0.156104 |
| MLF | RMSE | 0.166271 | 0.147735 | 0.134903 | 0.312377 | 0.261697 | 0.238551 |
| | MAE | 0.093390 | 0.082163 | 0.073270 | 0.228292 | 0.178355 | 0.158650 |
| | MSE | 0.027646 | 0.021826 | 0.018199 | 0.097579 | 0.068486 | 0.056906 |
| | Avg | 0.095769 | 0.083908 | 0.075457 | 0.212749 | 0.169513 | 0.151369 |
| NLF | RMSE | 0.161597 | 0.155093 | 0.151201 | 0.299772 | 0.278093 | 0.317071 |
| | MAE | 0.090405 | 0.088386 | 0.085256 | 0.213895 | 0.199752 | 0.233275 |
| | MSE | 0.026114 | 0.024054 | 0.022862 | 0.089863 | 0.077336 | 0.100534 |
| | Avg | 0.092705 | 0.089178 | 0.086440 | 0.201177 | 0.185060 | 0.216960 |
| PLF | RMSE | 0.159218 | 0.145776 | 0.136415 | 0.278435 | 0.255140 | 0.234867 |
| | MAE | 0.093233 | 0.085242 | 0.078725 | 0.191721 | 0.171740 | 0.154184 |
| | MSE | 0.025350 | 0.021423 | 0.018609 | 0.077526 | 0.064752 | 0.055178 |
| | Avg | 0.092600 | 0.084147 | 0.077916 | 0.182561 | 0.163877 | 0.148076 |
| ANLF | RMSE | 0.154856 | 0.150690 | 0.149538 | 0.333552 | 0.324673 | 0.288066 |
| | MAE | 0.090249 | 0.088172 | 0.088414 | 0.249664 | 0.226652 | 0.195575 |
| | MSE | 0.023980 | 0.022707 | 0.022362 | 0.111257 | 0.105413 | 0.082982 |
| | Avg | 0.089695 | 0.087190 | 0.086771 | 0.231491 | 0.218913 | 0.188874 |
Performance comparison (lower is better).
Avg is computed as (RMSE+MAE+MSE)/3 for each dataset. Bold values indicate the best results.
Table 4
| Baseline | RT-0.05 | RT-0.075 | RT-0.1 | TP-0.05 | TP-0.075 | TP-0.1 |
|---|---|---|---|---|---|---|
| PSLF (Δ) | 0.009367 | 0.010279 | 0.013133 | 0.018674 | 0.008068 | 0.014089 |
| PSLF (%) | 9.98% | 11.39% | 14.99% | 9.44% | 4.81% | 9.03% |
| MLF (Δ) | 0.011292 | 0.003923 | 0.000969 | 0.033668 | 0.009961 | 0.009353 |
| MLF (%) | 11.79% | 4.68% | 1.28% | 15.83% | 5.88% | 6.18% |
| NLF (Δ) | 0.008228 | 0.009193 | 0.011951 | 0.022096 | 0.025509 | 0.074944 |
| NLF (%) | 8.88% | 10.31% | 13.83% | 10.98% | 13.78% | 34.54% |
| PLF (Δ) | 0.008123 | 0.004162 | 0.003428 | 0.003480 | 0.004326 | 0.006061 |
| PLF (%) | 8.77% | 4.95% | 4.40% | 1.91% | 2.64% | 4.09% |
| ANLF (Δ) | 0.005218 | 0.007205 | 0.012283 | 0.052410 | 0.059361 | 0.046859 |
| ANLF (%) | 5.82% | 8.26% | 14.16% | 22.64% | 27.12% | 24.81% |
Per-dataset improvement of our method over each baseline based on Avg error.
Table 5
| Method | Avg RMSE | Avg MAE | Avg MSE |
|---|---|---|---|
| ACELF | **0.195671** | **0.122656** | **0.041472** |
| PSLF | 0.212083 | 0.136311 | 0.048210 |
| MLF | 0.210256 | 0.135687 | 0.048440 |
| NLF | 0.227138 | 0.151828 | 0.056794 |
| PLF | 0.201642 | 0.129141 | 0.043806 |
| ANLF | 0.233563 | 0.156454 | 0.061450 |
Average performance of different methods across all datasets (lower is better).
Bold values indicate the best results.
Table 6
| Baseline | ΔRMSE | ΔMAE | ΔMSE |
|---|---|---|---|
| PSLF | 0.016412 (9.14%) | 0.013655 (12.18%) | 0.006738 (19.24%) |
| MLF | 0.014584 (6.92%) | 0.013031 (9.09%) | 0.006968 (14.52%) |
| NLF | 0.031466 (15.39%) | 0.029172 (21.86%) | 0.015322 (34.38%) |
| PLF | 0.005970 (3.31%) | 0.006485 (6.57%) | 0.002334 (6.82%) |
| ANLF | **0.037891 (16.93%)** | **0.033798 (24.14%)** | **0.019978 (37.69%)** |
Average performance improvement of our method over different baselines.
Bold values indicate the largest improvement over the baseline.
4.2.1 Overall performance on individual metrics
From Table 3 and Figures 3–5, ACELF consistently achieves the best performance across all six subsets and all three metrics. For instance, on the most sparse RT subset (RT-0.05), ACELF achieves RMSE = 0.148353, which is lower than PSLF (0.163019), MLF (0.166271), NLF (0.161597), PLF (0.159218), and ANLF (0.154856). This trend is consistent on MAE and MSE as well: on RT-0.05, ACELF obtains MAE = 0.083069 and MSE = 0.022009, both being the lowest among all methods.
Figure 4
Figure 5
A similar conclusion holds for TP subsets. On TP-0.075, ACELF achieves RMSE = 0.250409, outperforming PSLF (0.259275), MLF (0.261697), NLF (0.278093), PLF (0.255140), and ANLF (0.324673). The margin becomes more evident when considering MSE, where ACELF obtains 0.062705 on TP-0.075, compared with 0.067223 (PSLF), 0.068486 (MLF), 0.077336 (NLF), 0.064752 (PLF), and 0.105413 (ANLF). These results show that ACELF improves both average error level (MAE) and large-error sensitivity (RMSE/MSE), indicating superior robustness.
4.2.2 Overall Avg error comparison and stability across sparsity levels
The overall Avg error provides a unified view by aggregating RMSE, MAE, and MSE into one score for each dataset. As shown in Figure 6 and the “Avg” rows in Table 3, ACELF achieves the lowest Avg error on all six subsets. For example, on RT-0.075, ACELF yields Avg = 0.079985, while PSLF, MLF, NLF, PLF, and ANLF achieve 0.090264, 0.083908, 0.089178, 0.084147, and 0.087190, respectively. On TP-0.10, ACELF further reduces Avg to 0.142016, compared with 0.156104 (PSLF), 0.151369 (MLF), 0.216960 (NLF), 0.148076 (PLF), and 0.188874 (ANLF).
Figure 6
Importantly, ACELF shows stable superiority across different sparsity levels. On the RT side, Avg decreases monotonically as the sampling ratio increases (from 0.084477 on RT-0.05 to 0.074488 on RT-0.10), and ACELF remains the best in all cases. On the TP side, ACELF also maintains the best Avg, demonstrating that the proposed approach is robust under both sparse and relatively dense observation settings.
4.2.3 Per-dataset improvements over baselines
Table 4 quantifies how much ACELF improves over each baseline on each dataset using Avg error. Several observations can be drawn.
ACELF consistently improves over all baselines on all subsets (all Δ > 0), showing that the gain is not limited to a specific competitor or a specific sparsity level. For example, compared with PSLF, ACELF reduces Avg error by 0.009367 (9.98%) on RT-0.05 and by 0.018674 (9.44%) on TP-0.05.
The relative improvements are more pronounced on TP subsets than on RT subsets, as reflected by the Avg-error reductions in Table 4. For instance, compared with ANLF, ACELF reduces the Avg error by 0.059361 (27.12%) on TP-0.075 and 0.046859 (24.81%) on TP-0.10, whereas the corresponding reduction on RT subsets is smaller (e.g., 0.005218 (5.82%) on RT-0.05). We attribute this phenomenon to the differences in data scale and error distribution between RT and TP after preprocessing, which lead to different sensitivity of the aggregated Avg metric. Nevertheless, ACELF consistently achieves the best performance on both RT and TP subsets, demonstrating the effectiveness and robustness of the proposed core-enhanced modeling and PID-driven adaptive regularization across different QoS types.
Among the compared baselines, PLF tends to be the closest competitor to ACELF on TP subsets (e.g., TP-0.05 Avg: 0.182561 vs. 0.179081), indicating that PID-based adaptation already provides some benefit. However, ACELF still achieves consistent gains, e.g., 0.003480 (1.91%) on TP-0.05 and 0.006061 (4.09%) on TP-0.10. This demonstrates that combining a more expressive core-enhanced interaction with PID-driven adaptive regularization yields additional improvements beyond controlling per-entry errors alone.
4.2.4 Average performance across datasets
Table 5 summarizes the average metric values across all six datasets. ACELF achieves Avg RMSE = 0.195671, Avg MAE = 0.122656, and Avg MSE = 0.041472, which are the best among all methods. Compared with PLF, which is the second-best method in terms of average RMSE (0.201642) and average MSE (0.043806), ACELF still yields lower errors, showing that the proposed core-enhanced modeling provides consistent global benefits beyond adaptive control.
In contrast, acceleration-only baselines (MLF and NLF) are less competitive on average. For example, NLF has Avg RMSE = 0.227138 and Avg MSE = 0.056794, substantially worse than ACELF. This indicates that faster optimization alone does not guarantee better generalization in QoS prediction, while ACELF improves both optimization stability and model expressiveness.
4.2.5 Average improvement over baselines
Table 6 provides an aggregated view of improvements of ACELF over baselines. ACELF consistently reduces all three metrics compared with every baseline. For example, relative to PSLF, ACELF reduces RMSE by 0.016412 (9.14%), MAE by 0.013655 (12.18%), and MSE by 0.006738 (19.24%). Relative to ANLF, the reductions are even larger: RMSE decreases by 0.037891 (16.93%), MAE by 0.033798 (24.14%), and MSE by 0.019978 (37.69%). These reductions indicate that ACELF is particularly effective at suppressing prediction errors, which is crucial in QoS prediction scenarios where occasional extreme deviations can severely affect service selection.
4.2.6 Discussion
Overall, the superior results can be attributed to two complementary factors. First, the learnable core interaction in ACELF models cross-dimension dependencies between latent user and service factors, overcoming the rigid one-to-one interaction assumption of standard LF variants. Second, the PID-driven adaptive regularization dynamically adjusts the complexity of the core interaction during training, which helps prevent overfitting under sparse observations while preserving expressiveness when more complexity is needed. As evidenced by the consistent improvements across all metrics, datasets, and sparsity levels, ACELF achieves a favorable balance between representation capacity, robustness, and generalization, making it a reliable solution for QoS prediction in large-scale service environments.
5 Conclusions
This paper proposed ACELF, an adaptive core-enhanced latent factor model for QoS prediction. By introducing a learnable core interaction matrix, ACELF captures cross-dimension user-service interactions beyond the standard bilinear assumption, while an incremental PID mechanism adaptively adjusts the regularization strength of the core during training to balance model expressiveness and generalization. Extensive experiments on the QoS datasets under multiple sparsity settings demonstrate that ACELF consistently outperforms strong LF-based baselines in terms of RMSE, MAE, and MSE, achieving the best overall performance. Despite its effectiveness, ACELF has several limitations. First, the introduction of a full core interaction matrix increases computational cost, especially when the latent dimension grows. Second, the PID gains are empirically set and remain fixed during training, which may limit adaptability across different datasets. In future work, we plan to investigate more efficient core parameterizations, such as structured or low-rank cores, to reduce computational overhead. We also aim to explore adaptive or data-driven tuning strategies for PID parameters and to extend ACELF by incorporating richer contextual QoS information.
Statements
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
SA: Methodology, Formal analysis, Software, Writing – original draft. PL: Writing – original draft, Visualization, Validation. HF: Investigation, Writing – original draft, Data curation. YX: Supervision, Writing – review & editing, Conceptualization.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the New Chongqing Youth Innovation Talent Project under Grant CSTB2024NSCQ-QCXMX0035.
Conflict of interest
SA was employed by the Beijing Mybull Technology Co., Ltd.
The remaining author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Ahmadian, S., Berahmand, K., Rostami, M., Forouzandeh, S., Moradi, P., and Jalili, M. (2025). Recommender systems based on nonnegative matrix factorization: a survey. IEEE Trans. Artif. Intell. 6, 2554–2574. doi: 10.1109/TAI.2025.3559053
Amertet, S., Gebresenbet, G., and Alwan, H. M. (2024). Modeling of unmanned aerial vehicles for smart agriculture systems using hybrid fuzzy PID controllers. Appl. Sci. 14:3458. doi: 10.3390/app14083458
Bi, F., He, T., Ong, Y.-S., and Luo, X. (2025a). Discovering spatiotemporal-individual coupled features from nonstandard tensors–a novel dynamic graph mixer approach. IEEE Trans. Neural Netw. Learn. Syst. 36, 19834–19848. doi: 10.1109/TNNLS.2025.3592692
Bi, F., He, T., Ong, Y.-S., and Luo, X. (2025b). Graph linear convolution pooling for learning in incomplete high-dimensional data. IEEE Trans. Knowl. Data Eng. 37, 1838–1852. doi: 10.1109/TKDE.2024.3524627
Boiko, O., Komin, A., Malekian, R., and Davidsson, P. (2024). Edge-cloud architectures for hybrid energy management systems: a comprehensive review. IEEE Sensors J. 24, 15748–15772. doi: 10.1109/JSEN.2024.3382390
Borase, R. P., Maghade, D., Sondkar, S., and Pawar, S. (2021). A review of PID control, tuning methods and applications. Int. J. Dyn. Control 9, 818–827. doi: 10.1007/s40435-020-00665-4
Cao, B., Peng, Q., Xie, X., Peng, Z., Liu, J., and Zheng, Z. (2024). Web service recommendation via combining topic-aware heterogeneous graph representation and interactive semantic enhancement. IEEE Trans. Serv. Comput. 17, 4451–4466. doi: 10.1109/TSC.2024.3418328
Chen, J., Liu, K., Luo, X., Yuan, Y., Sedraoui, K., Al-Turki, Y., et al. (2024a). A state-migration particle swarm optimizer for adaptive latent factor analysis of high-dimensional and incomplete data. IEEE/CAA J. Autom. Sin. 11, 2220–2235. doi: 10.1109/JAS.2024.124575
Chen, M., Wang, R., Qiao, Y., and Luo, X. (2024b). A generalized Nesterov's accelerated gradient-incorporated non-negative latent-factorization-of-tensors model for efficient representation to dynamic QoS data. IEEE Trans. Emerg. Top. Comput. Intell. 8, 2386–2400. doi: 10.1109/TETCI.2024.3360338
Chen, X., Du, Y., Han, Y., Huang, J., and Qian, Z. (2025). Hybrid reputation fusion and mutual information maximization for web services QoS prediction. IEEE Trans. Serv. Comput. 18, 3878–3891. doi: 10.1109/TSC.2025.3623459
Chen, Y., Yu, P., Zheng, Z., Shen, J., and Guo, M. (2022a). Modeling feature interactions for context-aware QoS prediction of IoT services. Fut. Gen. Comput. Syst. 137, 173–185. doi: 10.1016/j.future.2022.07.017
Chen, Y., Zhang, Y., Xia, H., Gao, C., Wang, Z., Wang, F., et al. (2022b). A hybrid tensor factorization approach for QoS prediction in time-aware mobile edge computing. Appl. Intell. 52, 8056–8072. doi: 10.1007/s10489-021-02851-z
Ghafouri, S. H., Hashemi, S. M., and Hung, P. C. K. (2022). A survey on web service QoS prediction methods. IEEE Trans. Serv. Comput. 15, 2439–2454. doi: 10.1109/TSC.2020.2980793
Gnanasekaran, A., Chinnasamy, A. A., and Parasuraman, E. (2022). Analyzing the QoS prediction for web service recommendation using time series forecasting with deep learning techniques. Concurr. Comput.: Pract. Exp. 34:e7356. doi: 10.1002/cpe.7356
He, T., Liu, Y., Ong, Y.-S., Wu, X., and Luo, X. (2024). Polarized message-passing in graph neural networks. Artif. Intell. 331:104129. doi: 10.1016/j.artint.2024.104129
He, T., Ong, Y. S., and Bai, L. (2021). Learning conjoint attentions for graph neural nets. Adv. Neural Inform. Process. Syst. 34, 2641–2653.
Huang, W., Zhang, P., Chen, Y., Zhou, M., Al-Turki, Y., and Abusorrah, A. (2022). QoS prediction model of cloud services based on deep learning. IEEE/CAA J. Autom. Sin. 9, 564–566. doi: 10.1109/JAS.2021.1004392
Jamil, A. A., Tu, W. F., Ali, S. W., Terriche, Y., and Guerrero, J. M. (2022). Fractional-order PID controllers for temperature control: a review. Energies 15:3800. doi: 10.3390/en15103800
Jawabreh, E., and Taweel, A. (2024). QoS-based web service selection using time-aware collaborative filtering: a literature review. Computing 106, 2033–2058. doi: 10.1007/s00607-024-01283-0
Jia, M., Wu, J., Guo, Q., and Yang, Y. (2024). Service-oriented SAGIN with pervasive intelligence for resource-constrained users. IEEE Netw. 38, 79–86. doi: 10.1109/MNET.2024.3353414
Jia, Z., Jin, L., Zhang, Y., Liu, C., Li, K., and Yang, Y. (2023). Location-aware web service QoS prediction via deep collaborative filtering. IEEE Trans. Comput. Soc. Syst. 10, 3524–3535. doi: 10.1109/TCSS.2022.3217277
Kimbugwe, N., Pei, T., and Kyebambe, M. N. (2021). Application of deep learning for quality of service enhancement in internet of things: a review. Energies 14:6384. doi: 10.3390/en14196384
Lebib, F. Z., and Kichou, S. (2024). Recommending cloud services based on social trust: an overview. Concurr. Comput.: Pract. Exp. 36:e8262. doi: 10.1002/cpe.8262
Lei, Y., Li, H., and Li, G. (2024). "PRSAMF: personalized recommendation based on sentiment analysis and matrix factorization," in 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc. (IEEE)), 6553–6560. doi: 10.1109/BIBM62325.2024.10822471
Li, C., Che, H., Leung, M.-F., Liu, C., and Yan, Z. (2023). Robust multi-view non-negative matrix factorization with adaptive graph and diversity constraints. Inform. Sci. 634, 587–607. doi: 10.1016/j.ins.2023.03.119
Li, J., Yuan, Y., and Luo, X. (2025). Learning error refinement in stochastic gradient descent-based latent factor analysis via diversified PID controllers. IEEE Trans. Emerg. Top. Comput. Intell. 9, 3582–3597. doi: 10.1109/TETCI.2025.3547854
Liao, X., Wu, H., He, T., and Luo, X. (2025). A proximal-ADMM-incorporated nonnegative latent-factorization-of-tensors model for representing dynamic cryptocurrency transaction network. IEEE Trans. Syst. Man Cybernet.: Syst. 55, 8387–8401. doi: 10.1109/TSMC.2025.3605054
Lin, M., Lin, X., Xu, X., Xu, Z., and Luo, X. (2025a). Neural networks-incorporated latent factor analysis for high-dimensional and incomplete data. IEEE Trans. Syst. Man Cybernet.: Syst. 55, 7302–7314. doi: 10.1109/TSMC.2025.3583919
Lin, X., Yu, S., Lin, M., Xu, X., Lin, J., and Xu, Z. (2025b). An incremental nonlinear co-latent factor analysis model for large-scale student performance prediction. IEEE Trans. Serv. Comput. 18, 3463–3476. doi: 10.1109/TSC.2025.3621687
Luo, X., Wang, D., Zhou, M., and Yuan, H. (2021). Latent factor-based recommenders relying on extended stochastic gradient descent algorithms. IEEE Trans. Syst. Man Cybernet.: Syst. 51, 916–926. doi: 10.1109/TSMC.2018.2884191
Lyu, C., Ma, Z., Luo, X., and Shi, Y. (2026). Dynamic stochastic reorientation particle swarm optimization for adaptive latent factor analysis in high-dimensional sparse matrices. IEEE Trans. Knowl. Data Eng. 38, 222–234. doi: 10.1109/TKDE.2025.3621469
Merabet, F. Z., and Benmerzoug, D. (2022). QoS prediction for service selection and recommendation with a deep latent features autoencoder. Comput. Sci. Inform. Syst. 19, 709–733. doi: 10.2298/CSIS210518054M
Mohamed Hadjkouider, A., Kerrache, C. A., Korichi, A., Sahraoui, Y., Calafate, C. T., Dhelim, S., et al. (2024). A review of service selection strategies in mobile IoT networks. IEEE Open J. Commun. Soc. 5, 3229–3244. doi: 10.1109/OJCOMS.2024.3400981
Qin, W., Luo, X., Li, S., and Zhou, M. (2024). Parallel adaptive stochastic gradient descent algorithms for latent factor analysis of high-dimensional and incomplete industrial data. IEEE Trans. Autom. Sci. Eng. 21, 2716–2729. doi: 10.1109/TASE.2023.3267609
Saad, M., Enam, R. N., and Qureshi, R. (2024). Optimizing multi-objective task scheduling in fog computing with GA-PSO algorithm for big data application. Front. Big Data 7:1358486. doi: 10.3389/fdata.2024.1358486
Sadat, N., and Dai, R. (2025). A survey of quality-of-service and quality-of-experience provisioning in information-centric networks. Network 5:10. doi: 10.3390/network5020010
Song, Y., Li, M., Luo, X., Yang, G., and Wang, C. (2020). Improved symmetric and nonnegative matrix factorization models for undirected, sparse and large-scaled networks: a triple factorization-based approach. IEEE Trans. Indus. Inform. 16, 3006–3017. doi: 10.1109/TII.2019.2908958
Sun, Y., and Liu, Q. (2025). Collaborative filtering recommendation based on k-nearest neighbor and non-negative matrix factorization algorithm. J. Supercomput. 81:79. doi: 10.1007/s11227-024-06537-4
Syed, N., Anwar, A., Baig, Z., and Zeadally, S. (2025). Artificial intelligence as a service (AIaaS) for cloud, fog and the edge: state-of-the-art practices. ACM Comput. Surveys 57, 1–36. doi: 10.1145/3712016
Syu, Y., and Wang, C.-M. (2021). QoS time series modeling and forecasting for web services: a comprehensive survey. IEEE Trans. Netw. Serv. Manag. 18, 926–944. doi: 10.1109/TNSM.2021.3056399
Tian, Y., Zhang, Y., and Zhang, H. (2023). Recent advances in stochastic gradient descent in deep learning. Mathematics 11:682. doi: 10.3390/math11030682
Uta, M., Felfernig, A., Le, V.-M., Tran, T. N. T., Garber, D., Lubos, S., et al. (2024). Knowledge-based recommender systems: overview and research directions. Front. Big Data 7:1304439. doi: 10.3389/fdata.2024.1304439
Wang, J., and Joshi, G. (2021). Cooperative SGD: a unified framework for the design and analysis of local-update SGD algorithms. J. Mach. Learn. Res. 22, 1–50.
Wang, S., Ma, Y., Cheng, B., Yang, F., and Chang, R. N. (2019). Multi-dimensional QoS prediction for service recommendations. IEEE Trans. Serv. Comput. 12, 47–57. doi: 10.1109/TSC.2016.2584058
Wu, D., He, Y., and Luo, X. (2023a). A graph-incorporated latent factor analysis model for high-dimensional and sparse data. IEEE Trans. Emerg. Top. Comput. 11, 907–917. doi: 10.1109/TETC.2023.3292866
Wu, D., Hu, Y., Liu, K., Li, J., Wang, X., Deng, S., et al. (2025a). An outlier-resilient autoencoder for representing high-dimensional and incomplete data. IEEE Trans. Emerg. Top. Comput. Intell. 9, 1379–1391. doi: 10.1109/TETCI.2024.3437370
Wu, D., Li, Z., Yu, Z., He, Y., and Luo, X. (2025b). Robust low-rank latent feature analysis for spatiotemporal signal recovery. IEEE Trans. Neural Netw. Learn. Syst. 36, 2829–2842. doi: 10.1109/TNNLS.2023.3339786
Wu, D., Luo, X., Shang, M., He, Y., Wang, G., and Wu, X. (2022). A data-characteristic-aware latent factor model for web services QoS prediction. IEEE Trans. Knowl. Data Eng. 34, 2525–2538. doi: 10.1109/TKDE.2020.3014302
Wu, D., Zhang, P., He, Y., and Luo, X. (2023b). A double-space and double-norm ensembled latent factor model for highly accurate web service QoS prediction. IEEE Trans. Serv. Comput. 16, 802–814. doi: 10.1109/TSC.2022.3178543
Wu, D., Zhang, P., He, Y., and Luo, X. (2024). MMLF: multi-metric latent feature analysis for high-dimensional and incomplete data. IEEE Trans. Serv. Comput. 17, 575–588. doi: 10.1109/TSC.2023.3331570
Wu, H., Wang, Q., Luo, X., and Wang, Z. (2025c). Learning accurate representation to nonstandard tensors via a mode-aware Tucker network. IEEE Trans. Knowl. Data Eng. 37, 7272–7285. doi: 10.1109/TKDE.2025.3617894
Xu, X., Lin, M., Li, W., Zhang, J., and Wu, H. (2023). "Time-varying QoS estimation via non-negative latent factorization of tensors with extended linear biases," in 2023 IEEE International Conference on Big Data (BigData) (Piscataway, NJ: Institute of Electrical and Electronics Engineers, Inc. (IEEE)), 86–95. doi: 10.1109/BigData59044.2023.10386709
Yang, J., Wu, Q., Feng, Z., Zhou, Z., Guo, D., and Chen, X. (2025). Quality-of-service aware LLM routing for edge computing with multiple experts. IEEE Trans. Mobile Comput. 24, 13648–13662. doi: 10.1109/TMC.2025.3590969
Yuan, Y., Lu, S., and Luo, X. (2025). A proportional integral controller-enhanced non-negative latent factor analysis model. IEEE/CAA J. Autom. Sin. 12, 1246–1259. doi: 10.1109/JAS.2024.125055
Zhang, P., Ren, J., Huang, W., Chen, Y., Zhao, Q., and Zhu, H. (2024). A deep-learning model for service QoS prediction based on feature mapping and inference. IEEE Trans. Serv. Comput. 17, 1311–1325. doi: 10.1109/TSC.2023.3326208
Zhang, Z., Chen, L., Jiang, T., Li, Y., and Li, L. (2022). Effects of feature-based explanation and its output modality on user satisfaction with service recommender systems. Front. Big Data 5:897381. doi: 10.3389/fdata.2022.897381
Zheng, Z., Li, X., Tang, M., Xie, F., and Lyu, M. R. (2022). Web service QoS prediction via collaborative filtering: a survey. IEEE Trans. Serv. Comput. 15, 2455–2472. doi: 10.1109/TSC.2020.2995571
Zhou, H., Huang, W., Chen, Y., He, T., Cong, G., and Ong, Y.-S. (2024). "Road network representation learning with the third law of geography," in Advances in Neural Information Processing Systems, vol. 37, eds. A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, et al. (Red Hook, NY: Curran Associates, Inc.), 11789–11813. doi: 10.52202/079017-0376
Keywords
adaptive regularization, latent factor (LF) model, Proportional-Integral-Derivative (PID) control, QoS prediction, quality of service (QoS), representation learning
Citation
Ai S, Li P, Fang H and Xia Y (2026) Adaptive core-enhanced latent factor model for highly accurate QoS prediction. Front. Big Data 9:1775728. doi: 10.3389/fdata.2026.1775728
Received
26 December 2025
Revised
11 January 2026
Accepted
13 January 2026
Published
02 February 2026
Volume
9 - 2026
Edited by
Qingguo Lü, Chongqing University, China
Reviewed by
Dianlong You, Yanshan University, China
Tiantian He, Agency for Science, Technology and Research (A*STAR), Singapore
Copyright
© 2026 Ai, Li, Fang and Xia.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Yonghui Xia, xyhh123456@email.swu.edu.cn