
ORIGINAL RESEARCH article

Front. Mater., 28 January 2026

Sec. Computational Materials Science

Volume 12 - 2025 | https://doi.org/10.3389/fmats.2025.1732297

Intelligent pavement moduli back-calculation using an SEM–transformer framework

Guozhong Wang1,2* and Yanqing Zhao3
  • 1School of Infrastructure Engineering, Dalian University of Technology, Dalian, China
  • 2Shanxi Provincial Transportation Construction Engineering Quality Inspection Center (Co., Ltd.), Taiyuan, China
  • 3Department of Transportation and Logistics, Dalian University of Technology, Dalian, China

This study proposes an intelligent back-calculation framework to estimate multilayer pavement elastic moduli from FWD deflection data under realistic measurement uncertainty. A spectral element method (SEM) model is used to simulate transient FWD responses and generate large-scale datasets. A Transformer regression model is trained to map peak deflection basins to layer moduli, considering four noise scenarios (no error, random, systematic, and combined). Baseline models (BPNN, SVR, and XGBoost) are also evaluated for comparison. The proposed SEM–Transformer framework achieves strong accuracy and robustness, with average R² > 0.94 and MAPE < 8% across all noise cases, and shows superior performance for the base course under noisy conditions. The results demonstrate the feasibility of a reliable and efficient data-driven framework to support pavement structural evaluation and future digital-twin-based pavement management.

1 Introduction

The Falling Weight Deflectometer (FWD) test has become one of the most widely used nondestructive evaluation techniques for assessing pavement structural performance (Elbagalati et al., 2018; Nam et al., 2016; Plati et al., 2016). By applying an impulse load to the pavement surface and recording the resulting deflection data, the FWD test provides valuable information about the mechanical response of pavement layers. However, the measured surface deflections do not directly yield the material properties of each layer; therefore, an inverse analysis, commonly referred to as back-calculation, is required to estimate key parameters such as elastic moduli. Most current studies focus on the surface layer (Shamiyeh et al., 2022; Plati et al., 2024) and lack an overall performance evaluation of the pavement structure, including the base layers (Yang et al., 2025). Accurate parameter back-calculation is essential for evaluating the structural integrity, residual life, and load-bearing capacity of pavements, serving as a foundation for performance prediction and maintenance decision-making. With the increasing demand for data-driven and intelligent infrastructure management, the integration of intelligent algorithms into the back-calculation process has emerged as a promising approach to enhance the efficiency, robustness, and automation of pavement performance evaluation and smart maintenance systems.

Over the past several decades, numerous back-calculation methodologies have been developed to interpret FWD deflection data and estimate pavement layer moduli. Classical approaches, such as the layered elastic theory (LET) and finite element-based iterative algorithms, have formed the foundation of conventional inverse analysis. Early methods, such as the ILLI-BACK (Ioannides et al., 1989), BISDEF (Bush, 1985), CHEVDEF (Bush and Alexander, 1985) and MODCOMP (Irwin, 1994; Irwin and Szebenyi, 1983) or MODULUS (Scullion et al., 1990) programs, relied heavily on deterministic optimization techniques such as experience-based regression formulas, Newton-Raphson iteration, gradient descent, or least-squares fitting. These methods typically minimize the discrepancy between measured and calculated deflections by repeatedly adjusting material parameters within predefined bounds. Although these traditional approaches have contributed significantly to the advancement of pavement evaluation, they suffer from several inherent limitations. The inverse problem is often ill-posed and highly nonlinear, making the solution sensitive to measurement noise and initial guesses (Jiang et al., 2022; Ullidtz, 1998). Moreover, conventional optimization algorithms tend to converge to local minima, require significant computational effort, and exhibit poor adaptability when dealing with complex pavement structures or large-scale datasets (Coletti et al., 2024; Torquato E Silva et al., 2025). The phenomenon of modulus layering, which undermines the credibility of the assessment results, occurs from time to time (Wang et al., 2024). These shortcomings highlight the need for more robust, efficient, and intelligent back-calculation strategies capable of capturing the nonlinear mapping between deflection responses and pavement material properties.

In recent years, the rapid development of artificial intelligence (AI) and machine learning (ML) techniques has provided new opportunities for solving the complex and nonlinear inverse problems in pavement engineering. Data-driven models, such as artificial neural networks (ANNs) (Khazanovich and Roesler, 1997; Sharma and Das, 2008; Tarefder et al., 2015), BPNN (Meier et al., 1997; Wang and Zhao, 2022), support vector machines (SVMs) (Wang et al., 2023; Zhang et al., 2021), and deep learning architectures (Chen et al., 2025), have been successfully applied to capture the intricate relationships between FWD deflection data and pavement parameters. These intelligent approaches overcome many limitations of traditional iterative methods by learning from large datasets and establishing direct mappings between input and output variables without the need for repeated forward simulations. Among various deep learning frameworks, Transformer-based models have recently attracted growing attention due to their outstanding ability to process sequential data and model long-range dependencies through self-attention mechanisms (Vaswani et al., 2017). Unlike conventional neural networks, Transformers can effectively learn complex spatial-mechanical correlations in multi-layer pavement systems, enabling more accurate and robust modulus back-calculation under uncertain or noisy measurement conditions. Consequently, the integration of Transformer architectures into FWD-based parameter back-calculation represents a promising direction toward automated, data-driven, and intelligent pavement evaluation and maintenance.

Beyond pavement engineering, physics-informed and data-driven inverse analysis based on indirect structural responses has been extensively investigated in broader civil and structural engineering domains. In the context of acoustic emission (AE)–based damage identification, deep residual learning has been successfully applied to AE source localization in steel–concrete composite slabs, demonstrating strong capability in learning inverse mappings under complex wave propagation conditions (Zhou et al., 2024b). AE-based data-driven approaches have also been employed for damage pattern recognition in corroded reinforced concrete beams strengthened with CFRP anchorage systems (Pan et al., 2023), as well as for localized corrosion-induced damage monitoring of large-scale RC piles in marine environments (Zheng et al., 2020), highlighting the effectiveness of deep learning in extracting damage-sensitive features from high-dimensional AE signals. In parallel, hybrid physics–data-driven frameworks that integrate numerical modeling with deep learning have gained increasing attention. Representative examples include a hybrid FEM and 1D-CNN methodology for structural damage detection in typical high-pile wharves (Zhou et al., 2022). Moreover, vibration-based damage localization frameworks combining ambient vibration measurements with multi–1D CNN ensemble models have been proposed and validated on large-scale reinforced concrete pedestrian bridges (Zhou et al., 2025b), demonstrating the scalability of data-driven inverse identification methods to complex, real-world structures. At a more fundamental level, lattice modeling approaches have been developed to simulate complete AE waveforms and fracture-induced AE wave propagation in concrete, providing physically interpretable forward models for inverse analysis (Zhou et al., 2024a; Zhou et al., 2025a).

Although these studies focus on different sensing modalities (AE or vibration) and structural systems, they share a common methodological paradigm with the present work: leveraging physics-based models to generate informative data and employing deep learning architectures to learn inverse mappings from indirect measurements to internal structural states. The proposed SEM–Transformer framework follows this paradigm in the context of pavement engineering by integrating high-fidelity numerical simulations with attention-based learning for FWD-based modulus back-calculation.

With the advancement of sensing technologies and the increasing availability of large-scale pavement monitoring data, data-driven pavement management and intelligent maintenance systems have become an emerging trend in modern infrastructure engineering (Golmohammadi et al., 2025; Li et al., 2025; Lu et al., 2025). By integrating FWD test results with other sensing and inspection data, it is now possible to continuously evaluate pavement health conditions, predict performance degradation, and optimize maintenance scheduling through automated analytical frameworks. In this context, intelligent back-calculation serves as a crucial component of smart pavement management, enabling real-time structural assessment and decision support. Leveraging powerful deep learning models such as Transformers, the back-calculation of pavement mechanical parameters can be achieved with high efficiency and accuracy, supporting predictive maintenance and life-cycle performance optimization. Therefore, this study aims to develop a Transformer-based intelligent back-calculation framework for modulus back-calculation of pavements, providing a foundation for data-driven performance evaluation and intelligent pavement operation and maintenance.

The remainder of this paper is organized as follows. Section 2 introduces the overall methodology, including the spectral element method (SEM) for forward simulation, the Transformer-based intelligent back-calculation framework, and the evaluation metrics employed to assess model performance. Section 3 describes the procedures of data collection, extraction, and preprocessing, emphasizing the introduction of random and systematic measurement errors to simulate realistic field conditions. Section 4 presents the results and discussion, where the Transformer-based back-calculation model is comprehensively evaluated under four noise scenarios (no measurement error, random error, systematic error, and combined random–systematic error) and benchmarked against representative machine learning models (BPNN, SVR, and XGBoost), followed by a comparative discussion of robustness and generalization, an assessment of the physical plausibility of the predicted moduli, and considerations regarding potential overfitting. Finally, Section 5 summarizes the main findings of this study and outlines potential directions for future research in intelligent pavement performance evaluation and maintenance.

It should be emphasized that the present study focuses on a numerical feasibility investigation, in which both training and testing datasets are generated using a validated spectral element method (SEM). Although synthetic noise is introduced to approximate typical measurement uncertainty in Falling Weight Deflectometer (FWD) tests, no field FWD dataset is directly used for model validation at this stage. Consequently, the primary objective of this work is to evaluate the learning capability, robustness, and stability of the proposed SEM–Transformer framework under controlled yet realistic conditions, rather than to claim immediate applicability to in-service pavements.

2 Methodology

The overall workflow of the proposed intelligent back-calculation system integrates three key components: 1) numerical simulation of pavement responses using the SEM, 2) machine learning-based modulus prediction using the Transformer architecture, and 3) performance evaluation through multiple statistical metrics. The methodology is illustrated in Figure 1.

Figure 1
Flowchart illustrating a three-phase process. Phase 1: Data collection and preprocessing involve numerical simulation, feature selection, error handling, and data preprocessing, splitting data into training and testing sets. Phase 2: Model construction and training use a transformer model with input embedding, multi-head attention, and output embedding. Phase 3: Model assessment and result analysis assess performance using MAE, MSE, RMSE, R-squared, and MAPE metrics, with a graph comparing predicted versus actual values.

Figure 1. Flowchart of the intelligent back-calculation methodology for pavement structure parameter prediction.

2.1 SEM

The SEM is employed to simulate the pavement surface deflection response under FWD loading. Compared with conventional finite element or finite difference schemes, the SEM achieves high accuracy by interpolating the field variables with high-order spectral shape functions within each element and by describing the distributed mass inertia exactly. In this study, a one-dimensional axisymmetric SEM formulation is adopted following Zhao et al. (2015) and Cao et al. (2020). The layered pavement structure is modeled as a stack of homogeneous, isotropic layers characterized by thickness, elastic modulus, Poisson’s ratio, and density, resting on a semi-infinite subgrade.

The governing equations of motion for the axisymmetric elastic medium are given in Equation 1:

$(\lambda+\mu)\,\nabla(\nabla\cdot\mathbf{u})+\mu\,\nabla^{2}\mathbf{u}=\rho\,\ddot{\mathbf{u}}$ (1)

where $\mathbf{u}$ represents the displacement vector composed of the radial component $u$ and the vertical component $w$, $\ddot{\mathbf{u}}$ is the acceleration vector, $\lambda$ and $\mu$ are the Lamé constants, with $\mu$ being the shear modulus, $\nabla$ denotes the gradient operator, $\nabla\cdot\mathbf{u}$ the divergence of $\mathbf{u}$, $\nabla^{2}\mathbf{u}$ the Laplacian of $\mathbf{u}$, and $\rho$ the material density.

In the vertical direction, each pavement layer is discretized by a 2-node axisymmetric spectral layer element, while the semi-infinite subgrade is represented by a 1-node throw-off element that conducts energy out of the system. Each node carries two degrees of freedom (radial and vertical displacements). Within a spectral element, the displacement field is interpolated by high-order spectral shape functions constructed from Lagrange polynomials passing through Gauss–Lobatto–Legendre points, so that one spectral element per physical layer is sufficient and no further mesh refinement is required through the thickness.

In the radial direction, the domain is discretized by a graded mesh that is refined beneath and near the FWD loading area and gradually coarsened toward an outer truncation radius. This radius is chosen sufficiently large such that the computed surface vibration decays to negligible levels, which avoids spurious reflections from the lateral boundary. Axisymmetry is enforced at the centerline r=0, and the pavement surface is traction-free outside the circular loading area where the FWD pressure is applied. At the bottom of the truncated domain, vertical displacement is fixed while radial displacement continuity is maintained through the throw-off spectral element to mimic the semi-infinite half-space.

The spatial discretization leads to a semi-discrete system of second-order ordinary differential equations in time. This system is advanced using an explicit central-difference time integration scheme, with the time step selected according to the standard SEM stability criterion based on the smallest element size and the maximum wave speed. Since all layers are modeled as linear elastic materials and no additional material or Rayleigh damping is introduced, the computed response corresponds to the undamped elastic wave propagation problem. The resulting SEM formulation has been validated in previous studies (Cao et al., 2020; Zhao et al., 2015), confirming its accuracy and stability for simulating pavement surface deflection histories. The peak values of the computed surface deflection basins at the sensor locations are then extracted and used as input features for the Transformer-based learning model described in Section 2.2.
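The explicit central-difference update described above can be sketched for a generic semi-discrete system. The following NumPy example is a minimal single-degree-of-freedom illustration under our own assumptions (function names and the test problem are not the authors' SEM code), showing the undamped update and the need for a time step below the stability limit:

```python
import numpy as np

def central_difference(M_diag, K, f, u0, dt, n_steps):
    """Explicit central-difference integration of M*u'' + K*u = f(t) for an
    undamped system with a diagonal (lumped) mass matrix. Stability requires
    dt below 2/omega_max, the standard criterion mentioned in the text."""
    u_prev = u0.copy()            # u(-dt) ~ u(0) for zero initial velocity
    u = u0.copy()
    history = [u.copy()]
    for k in range(n_steps):
        a = (f(k * dt) - K @ u) / M_diag       # accelerations at t_k
        u_next = 2.0 * u - u_prev + dt**2 * a  # central-difference update
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)

# Sanity check: free vibration of an undamped oscillator with omega = 1 rad/s;
# the computed response should remain bounded and oscillate between +-1.
resp = central_difference(np.array([1.0]), np.array([[1.0]]),
                          lambda t: np.zeros(1), np.array([1.0]),
                          dt=0.01, n_steps=2000)
```

Because the scheme is explicit, each step costs only one matrix-vector product, which is what makes large SEM parameter sweeps affordable for dataset generation.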

2.2 Intelligent back-calculation methodology

It should be clarified that the term physics-informed in this study refers to the use of SEM-based forward simulations to generate physically consistent training and testing datasets, rather than to the explicit enforcement of physical laws or inequality constraints within the neural network architecture itself. The Transformer model is trained as a data-driven regression mapping from deflection basins to layer elastic moduli and does not impose hard constraints such as modulus ordering or monotonicity during learning. In this study, an intelligent back-calculation framework based on the Transformer architecture is established to predict the elastic modulus of pavement layers from measured deflection data. The Transformer model, originally proposed by Vaswani et al. (2017), has demonstrated exceptional performance in capturing long-range dependencies through its self-attention mechanism, making it well-suited for modeling complex nonlinear relationships in pavement structural systems.

2.2.1 Overall structure of the transformer

As shown in Figure 2, the proposed model is designed as an Encoder-only Transformer architecture specifically optimized for regression-based back-calculation tasks. The input vector, composed of multiple deflection peaks extracted from FWD data, is first transformed into a high-dimensional feature representation through an input embedding layer, allowing the model to capture latent spatial and mechanical patterns. Within the Transformer encoder, each layer consists of two fundamental components: the Multi-Head Self-Attention (MHSA) mechanism and the feed-forward network (FFN). The MHSA module enables the model to learn global correlations among deflection points by dynamically computing the weighted relevance between all positions in the input sequence, effectively capturing inter-peak dependencies that reflect subsurface mechanical interactions. The subsequent FFN applies nonlinear transformations to further refine and abstract the learned features, thereby enhancing the model's expressive capability. Each sublayer is enclosed within a residual connection and layer normalization (Add and Norm), which collectively stabilize the training process, prevent gradient degradation, and accelerate convergence. Finally, the encoder output is passed through a regression head, which maps the learned feature representations to the predicted modulus values of each pavement layer, enabling accurate and interpretable estimation of structural parameters for intelligent pavement evaluation.

Figure 2
Diagram of a Transformer model architecture with Encoder and Decoder blocks. The Encoder includes Input Embedding, Positional Encoding, Multi-Head Attention, Add & Norm, and Feed Forward layers. The Decoder contains Masked Multi-Head Attention, Add & Norm, Feed Forward, Linear, ReLU, and Softmax layers. Arrows indicate data flow, and components are color-coded.

Figure 2. Overall architecture of the Transformer-based intelligent back-calculation model.

An encoder-only Transformer architecture is adopted in this study because the back-calculation task involves a fixed-length regression mapping from FWD deflection basins to elastic moduli, rather than a sequence-to-sequence or generative problem. The encoder-only design is therefore sufficient and computationally efficient. Compared with simpler architectures such as one-dimensional convolutional neural networks or attention-augmented multilayer perceptrons, the Transformer encoder enables direct modeling of global, nonlocal interactions among all deflection sensors through self-attention, without imposing predefined receptive fields or handcrafted feature aggregation rules.

2.2.2 Multi-head attention mechanism

The core of the Transformer lies in its multi-head attention mechanism, as shown in Figure 3. For each attention head, the input sequence is linearly projected into three matrices: the Query (Q), Key (K), and Value (V). The scaled dot-product attention is computed as Equation 2:

$\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$ (2)

where $d_k$ denotes the dimensionality of the key vectors. Multiple attention heads operate in parallel to capture diverse feature interactions, and their outputs are concatenated and linearly transformed. This mechanism allows the model to learn complex dependencies between deflection measurements and corresponding modulus responses at multiple representation levels (Dosovitskiy et al., 2020).
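Equation 2 can be implemented in a few lines. The NumPy sketch below reproduces scaled dot-product attention for a toy sequence of nine "sensor tokens"; the shapes and random inputs are illustrative only:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Equation 2: Attention(Q, K, V) = Softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)    # pairwise similarities
    weights = softmax(scores, axis=-1)                # each row sums to one
    return weights @ V, weights

# Toy example: nine tokens with head dimension d_k = 32
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((9, 32)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
```

The attention weight matrix `w` is what lets every deflection position attend to every other position, the "global correlation" property discussed above.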

Figure 3
Diagram of multi-head attention architecture in transformers. It includes Scaled Dot-Product Attention with components: MatMul, Scale, optional Mask, and SoftMax, followed by MatMul. Multiple heads process linear transformations of input vectors V, K, and Q, and results are concatenated. Output passes through a Linear layer.

Figure 3. Structure of the multi-head attention mechanism.

2.2.3 Application to pavement modulus back-calculation

The developed Transformer model is employed to perform back-calculation of pavement layer elastic moduli from FWD deflection data. It learns a direct mapping between surface deflection basins (either measured in the field or generated through numerical simulations) and the elastic moduli of individual pavement layers. Through supervised training on paired datasets of deflection responses and known material parameters, the model captures the complex nonlinear relationships between surface mechanical behavior and the internal structural characteristics of the pavement system. Unlike conventional iterative back-calculation algorithms that rely heavily on initial guesses and are prone to convergence to local minima, the Transformer exploits a data-driven learning mechanism and attention-based architecture to achieve high generalization performance across diverse pavement configurations, while also enabling efficient parallel computation and substantially reducing computational time. In addition, by incorporating global contextual dependencies among sensor readings, the Transformer exhibits strong robustness to measurement noise and maintains prediction stability under uncertain or imperfect data conditions. These characteristics make it a powerful and intelligent approach for accurately estimating layer moduli in multilayer pavement systems, thereby providing a solid foundation for automated and reliable pavement evaluation and maintenance decision-making.

The nonlinear mapping from the FWD deflection basin to the layer elastic moduli is realized by an encoder-only Transformer architecture. The model input is the vector of nine peak surface deflections $\{D_i\}_{i=1}^{9}$, measured at radial distances $r_i = 0, 20, 30, 50, 80, 110, 140, 170, 200$ cm from the load center. To better exploit both the magnitude and the spatial layout of these measurements, the raw deflections are first processed by a dedicated embedding module, denoted PeakEmbed. In this module, each scalar peak deflection is projected from $\mathbb{R}$ into a $d_{\text{model}} = 128$-dimensional feature space by a fully connected layer. A sinusoidal positional encoding, similar to that used in the original Transformer formulation, is then added to retain the ordered sensor-index information along the radial direction. Furthermore, the physical sensor spacing is explicitly encoded by passing the normalized sensor distance (in meters) through a small 1-128-128 multilayer perceptron (MLP) with ReLU activation. The output of this distance MLP is added elementwise to the peak-value embedding, so that the final token representation accounts for both the measured deflection and its radial location. This procedure yields an input sequence of length $M = 9$ with feature dimension $d_{\text{model}} = 128$.

On top of the PeakEmbed module, we employ an encoder-only Transformer with $N_{\text{enc}} = 2$ identical encoder layers. Each encoder layer is implemented using the standard TransformerEncoderLayer in PyTorch with batch_first = True. The multi-head self-attention (MHSA) block uses $N_{\text{head}} = 4$ attention heads, giving key and value dimensions $d_k = d_v = d_{\text{model}}/N_{\text{head}} = 32$ for each head. The position-wise feed-forward network (FFN) in each encoder layer consists of two fully connected layers with an intermediate dimension $d_{\text{ff}} = 256$ and a ReLU nonlinearity. Residual connections, layer normalization, and dropout are applied around both the MHSA and FFN sublayers following the default PyTorch implementation, with a dropout rate of 0.1 in each encoder layer. To obtain a compact representation of the entire deflection basin, a learnable "[CLS]" token of size $1 \times 1 \times 128$ is prepended to the embedded sequence. The CLS token serves as a learnable global representation that aggregates information from all sensor tokens through self-attention. Although the input deflection vector has a fixed length, CLS-based aggregation provides a principled alternative to fixed pooling operations (e.g., mean or max pooling) and allows the model to adaptively learn the relative contribution of each deflection measurement to the inverse mapping. The concatenated sequence (CLS token plus nine sensor tokens) is passed through the Transformer encoder, and only the output at the CLS position is retained as a global feature vector. This global vector is then mapped to the target elastic moduli through a regression head comprising a two-layer MLP: a fully connected layer from 128 to 128 units with ReLU activation, followed by a linear layer from 128 to 3 units. The three outputs correspond to the standardized (Z-score) elastic moduli $E_1$, $E_2$, $E_3$ of the surface layer, base layer, and subgrade, respectively.
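A minimal PyTorch sketch of this architecture might look as follows. The module names (PeakEmbed, DeflectionTransformer) follow the paper's terminology, but the code is our reconstruction under stated assumptions, not the authors' implementation; the sensor offsets passed to the model are likewise assumed values in meters:

```python
import math

import torch
import torch.nn as nn

D_MODEL, N_HEAD, N_ENC, D_FF, N_SENSORS = 128, 4, 2, 256, 9

class PeakEmbed(nn.Module):
    """Embed each scalar peak deflection, then add a sinusoidal positional
    encoding and a distance encoding from a 1-128-128 MLP."""
    def __init__(self, distances_m):
        super().__init__()
        self.value_proj = nn.Linear(1, D_MODEL)              # scalar -> 128-d token
        self.dist_mlp = nn.Sequential(nn.Linear(1, D_MODEL), nn.ReLU(),
                                      nn.Linear(D_MODEL, D_MODEL))
        self.register_buffer("dist",
                             torch.tensor(distances_m).view(1, N_SENSORS, 1))
        pos = torch.arange(N_SENSORS, dtype=torch.float32).unsqueeze(1)
        div = torch.exp(torch.arange(0, D_MODEL, 2, dtype=torch.float32)
                        * (-math.log(10000.0) / D_MODEL))
        pe = torch.zeros(N_SENSORS, D_MODEL)                 # sinusoidal encoding
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))

    def forward(self, x):                                    # x: (batch, 9)
        tokens = self.value_proj(x.unsqueeze(-1)) + self.pe  # value + position
        return tokens + self.dist_mlp(self.dist)             # + radial distance

class DeflectionTransformer(nn.Module):
    """Encoder-only Transformer with a learnable [CLS] token and a 128-128-3
    regression head predicting standardized moduli (E1, E2, E3)."""
    def __init__(self, distances_m):
        super().__init__()
        self.embed = PeakEmbed(distances_m)
        self.cls = nn.Parameter(torch.zeros(1, 1, D_MODEL))
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, dim_feedforward=D_FF,
                                           dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=N_ENC)
        self.head = nn.Sequential(nn.Linear(D_MODEL, D_MODEL), nn.ReLU(),
                                  nn.Linear(D_MODEL, 3))

    def forward(self, x):
        tokens = self.embed(x)
        cls = self.cls.expand(x.shape[0], -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))  # (batch, 10, 128)
        return self.head(out[:, 0])                          # regress from CLS

# Assumed sensor offsets in meters (0-2.0 m); illustrative only.
model = DeflectionTransformer([0.0, 0.2, 0.3, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0])
model.eval()
pred = model(torch.randn(4, 9))
```

The CLS-token readout (`out[:, 0]`) is what replaces mean or max pooling, letting the model learn how much each sensor token should contribute to the global basin representation.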

Prior to training, both the input peak deflections and the output moduli are standardized by Z-score normalization using statistics (mean and standard deviation) computed solely from the training subset. The Transformer is trained to minimize the Smooth L1 loss ($\beta = 0.5$) between the predicted and true standardized moduli, which provides a compromise between the robustness of the L1 loss and the sensitivity of the L2 loss to small errors. The optimizer is AdamW with an initial learning rate of $1.0 \times 10^{-3}$ and a weight decay of $1.0 \times 10^{-4}$. Training is performed for 90 epochs with a mini-batch size of 256, on a GPU when available and otherwise on a CPU. During training, we monitor the average training loss in the standardized space as well as the mean absolute error (MAE) on the held-out test set to verify convergence and stability.
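This training configuration can be sketched as follows. To keep the snippet self-contained, a small stand-in MLP and a synthetic mini-batch replace the full SEM–Transformer and dataset (both are our assumptions), and only a few optimization steps are run:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the SEM-Transformer: nine standardized peak
# deflections in, three standardized moduli out.
model = nn.Sequential(nn.Linear(9, 128), nn.ReLU(), nn.Linear(128, 3))

criterion = nn.SmoothL1Loss(beta=0.5)           # Smooth L1 loss, beta = 0.5
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=1.0e-3, weight_decay=1.0e-4)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

X = torch.randn(256, 9, device=device)          # one mini-batch (size 256)
y = torch.randn(256, 3, device=device)          # standardized target moduli

losses = []
for step in range(50):                          # the paper trains for 90 epochs
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

In the full workflow, the loop would iterate over mini-batches drawn from the SEM-generated dataset, with test-set MAE tracked in parallel as described above.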

The number of encoder layers and attention heads was selected based on empirical trade-offs between model capacity, training stability, and overfitting risk. Given the relatively small number of input sensors (nine deflection measurements) and the synthetic nature of the dataset, deeper or wider Transformer configurations were found to offer limited performance gains while increasing computational cost and susceptibility to overfitting. Accordingly, a compact configuration with two encoder layers and four attention heads was adopted as a balanced and reproducible design for the present feasibility study.

All key architectural and training hyperparameters of the Transformer model are summarized in Table 1 for ease of reference and reproducibility.

Table 1

Table 1. Transformer architecture and training hyperparameters used for the back-calculation of multilayer pavement elastic moduli from FWD deflection data.

2.3 Evaluation metrics

The model’s predictive performance is evaluated using five standard statistical metrics: MAE, Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and the Coefficient of Determination (R2). These metrics assess both the magnitude and distribution of prediction errors, providing a comprehensive evaluation of model accuracy.

2.3.1 MAE

The MAE calculates the average magnitude of the absolute differences between predicted values and observed values. It is a linear score, meaning all individual differences are weighted equally in the average. MAE (Equation 3) avoids the issue of error cancellation and thus accurately reflects the actual size of the prediction errors (Wudil et al., 2024).

$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i-y_i\right|$ (3)

2.3.2 MSE

The MSE is a statistical metric used to evaluate the accuracy of a model. It is calculated by taking the average of the squared differences between the actual and predicted values (Goodfellow et al., 2016). MSE (Equation 4) is sensitive to outliers—since large deviations between predictions and true values become even larger after squaring—but this property also allows it to effectively reflect the overall distribution of prediction errors.

$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^{2}$ (4)

2.3.3 RMSE

The RMSE (Equation 5) represents the sample standard deviation of the differences—known as residuals—between predicted and observed values (Bypour et al., 2024). It indicates the degree of dispersion of the sample errors. In practical measurements, the number of observations n is always limited, and the true value can only be approximated by the most reliable (best-estimated) value.

$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^{2}}$ (5)

2.3.4 MAPE

The MAPE (Equation 6) is a statistical metric used to measure the degree of error between predicted and actual values (Chen et al., 2024). It is calculated by taking the absolute difference between the predicted and actual values as a percentage of the actual value, and then averaging these percentages to reflect the overall accuracy of the predictions.

$\mathrm{MAPE}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{\hat{y}_i-y_i}{y_i}\right|\times 100\%$ (6)

2.3.5 R2

The R2 (Equation 7) is a statistical measure based on the decomposition of the total sum of squares, used to evaluate how well a regression model fits the observed data. It represents the proportion of variance in the dependent variable that is explained by the regression model (Draper and Smith, 1998). Therefore, the higher the R2 value, the better the model fits the data.

$R^{2}=1-\frac{\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^{2}}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^{2}}$ (7)

where $\hat{y}_i$ represents the predicted value, $y_i$ the actual value, $\bar{y}$ the average of the actual values, and $n$ the number of samples. Smaller values of MAE, MSE, RMSE, and MAPE indicate better predictive performance, and the closer $R^{2}$ is to 1, the better the model performs.
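The five metrics defined in Equations 3-7 translate directly into code. The NumPy sketch below is a straightforward implementation with a small worked example; the numeric inputs are illustrative only:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE, MAPE (in %), and R^2 per Equations 3-7."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100.0        # assumes y_true != 0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "R2": r2}

# Worked example with moduli-like values (MPa): errors of +10, -10, and 0
m = regression_metrics([100.0, 200.0, 300.0], [110.0, 190.0, 300.0])
```

For this example the metrics evaluate to MAE = 20/3, MAPE = 5%, and R² = 0.99, illustrating how MAPE normalizes by the true value while MAE does not.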

3 Data collection, extraction and preprocessing

This section presents the complete workflow of data preparation for the intelligent back-calculation model, as illustrated in Figure 4. The entire process integrates four major stages: numerical simulation, feature extraction, noise processing, and data preprocessing.

Figure 4
Diagram showing an applied load of 0.7 MPa on a pavement with measurement points labeled D1 to D9. A graph compares deflection peaks from numerical simulations and SEM in millimeters. The workflow includes data collection, error handling, feature selection, and database management. Various error scenarios are listed: none, random error, systematic error, and a combination of both.

Figure 4. Flowchart of data collection, extraction, and preprocessing.

Firstly, a three-layer pavement system comprising surface course, base course, and subgrade is modeled, and FWD loading is applied to reproduce field testing conditions. The dynamic responses of the pavement structure are computed using the SEM, which offers high precision and computational efficiency for transient wave propagation in layered media.

Secondly, the simulated deflection time histories at multiple measurement points are analyzed to obtain the peak deflection values, which serve as representative features reflecting the stiffness characteristics of the pavement layers. These extracted features are paired with their corresponding layer moduli to form the raw dataset.

Thirdly, in order to account for possible measurement uncertainty and improve model generalization, noise processing is introduced. Synthetic noise consistent with the statistical properties of field measurements is added to part of the dataset, simulating realistic variability in FWD test data.

Finally, the dataset undergoes preprocessing steps, including data normalization (via z-score standardization), sample shuffling, and train-test partitioning. These operations ensure that all features are dimensionally comparable and that the Transformer model can achieve stable convergence during training.

Overall, this systematic data preparation framework establishes a solid foundation for the subsequent intelligent back-calculation analysis, ensuring both the physical realism of the inputs and the statistical robustness of the learning process.

3.1 Data collection and extraction

To train and evaluate the Transformer-based intelligent back-calculation model, a large-scale synthetic dataset was established through numerical simulations using the validated SEM model. The SEM approach provides high computational efficiency and accuracy for solving dynamic response problems of layered pavement systems, making it particularly suitable for simulating FWD tests.

The modeled pavement structure consists of three primary layers: a surface course, a base course, and a subgrade. Each layer is characterized by its elastic modulus, Poisson’s ratio, and thickness, as summarized in Table 2. In the SEM simulations, pavement layers are modeled as linear elastic materials, and light viscous damping is introduced at the dynamic response level, following standard practice in SEM-based dynamic analysis of pavements (Al-Khoury et al., 2001; 2002). The mechanical parameters were randomly combined within reasonable engineering ranges to ensure adequate representation of various pavement conditions, resulting in a total of 20592 combinations of pavement structures.


Table 2. Structure information of asphalt pavements.
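One simple way to enumerate candidate structures is a full-factorial grid over the layer moduli. The sketch below is purely illustrative: the modulus ranges and grid densities are assumptions (the actual ranges are given in Table 2), and the grid does not reproduce the exact 20,592 structures used in the study.

```python
import itertools
import numpy as np

# Illustrative modulus grids (placeholders, NOT the ranges of Table 2)
E1 = np.linspace(2000.0, 30000.0, 12)  # surface course modulus, MPa
E2 = np.linspace(4000.0, 20000.0, 12)  # base course modulus, MPa
E3 = np.linspace(40.0, 100.0, 11)      # subgrade modulus, MPa

# Each tuple (E1, E2, E3) defines one pavement structure for SEM simulation
combinations = list(itertools.product(E1, E2, E3))
print(len(combinations))  # 12 * 12 * 11 = 1584 structures in this sketch
```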

During the simulation, the FWD test applies an impulsive load to the pavement surface in the form of a half-sine pulse with a peak pressure of 0.7 MPa and a duration of 25 ms. The loading plate radius is set to 15 cm, following standard FWD testing procedures. The pavement response was monitored at nine measurement points located at 0, 20, 30, 50, 80, 110, 140, 170, and 200 cm from the load center, corresponding to the typical sensor arrangement used in field testing.
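The half-sine FWD load pulse described above (0.7 MPa peak, 25 ms duration) can be generated in a few lines; the time discretization (251 steps) is an assumption for illustration.

```python
import numpy as np

def fwd_half_sine_pulse(peak_pressure=0.7e6, duration=0.025, n_steps=251):
    """Half-sine FWD load pulse: p(t) = p_max * sin(pi * t / T), 0 <= t <= T.

    peak_pressure in Pa (0.7 MPa), duration T in seconds (25 ms).
    """
    t = np.linspace(0.0, duration, n_steps)
    p = peak_pressure * np.sin(np.pi * t / duration)
    return t, p
```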

For each simulation case, the SEM model outputs the deflection time history at all nine sensors. The peak deflection values were extracted from these time histories using an automated peak detection algorithm, representing the maximum surface displacement under the dynamic load. These peak values form the input features for the back-calculation model, while the corresponding elastic moduli of the three pavement layers serve as the output targets. Consequently, a comprehensive dataset with nine input variables (deflection peaks) and three output variables (layer moduli) was constructed, containing 20592 samples in total. This dataset was subsequently normalized and divided into training and testing subsets for the Transformer model development and performance evaluation.
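The automated peak-extraction step can be sketched as follows, assuming each simulation returns a (sensors × time steps) deflection array; the function name is illustrative.

```python
import numpy as np

def extract_peak_deflections(time_histories):
    """Extract the peak (maximum absolute) deflection at each sensor.

    time_histories: array of shape (n_sensors, n_time_steps) holding the
    simulated deflection time history of each of the nine FWD sensors.
    Returns an array of length n_sensors with the peak magnitudes.
    """
    u = np.asarray(time_histories, dtype=float)
    return np.max(np.abs(u), axis=1)
```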

3.2 Data preprocessing

3.2.1 Noise processing

From each simulated time-domain response, the peak deflection values are extracted as input features representing the pavement structural stiffness. However, idealized numerical simulations do not fully reflect the uncertainties that commonly occur in field FWD measurements, such as sensor inaccuracies, temperature effects, and load-plate contact variations. To account for these potential measurement imperfections and to enhance the model's robustness, different error treatment strategies were implemented during data preprocessing, as summarized in Table 3.


Table 3. Specific information of measurement errors.

Four distinct data processing scenarios were designed to assess the model’s sensitivity to measurement noise:

3.2.1.1 Case ①: No error treatment

The original simulated deflection data are used directly without any modification. This serves as the baseline condition, representing an ideal, noise-free environment where the inverse model is trained purely on clean data.

3.2.1.2 Case ②: Random error only

In this scenario, only random errors are introduced to each deflection value to simulate stochastic disturbances arising from equipment fluctuations or environmental noise. The random error term, denoted as ε_i^r, follows a Gaussian distribution ε_i^r ∼ N(0, σ²) with zero mean and a standard deviation of σ = 2 μm, consistent with typical FWD sensor resolution limits (Stubstad et al., 2000).

3.2.1.3 Case ③: Systematic error only

To represent bias-type deviations caused by sensor miscalibration, temperature drift, or uneven load application, a systematic error term ε_i^s is applied uniformly across all sensors in each test. The error magnitude is randomly selected within the range from −4% to +4%, implying either underestimation or overestimation of the deflection amplitude by the entire measurement system.

3.2.1.4 Case ④: Combined random and systematic errors

In the most realistic scenario, both error components are simultaneously introduced. The final deflection at measurement point i is expressed as

D_i* = D_i (1 + ε_i^s) + ε_i^r

where D_i is the true simulated deflection, ε_i^s the systematic error, and ε_i^r the random error. This condition closely emulates the uncertainty characteristics encountered in actual FWD testing, where both instrument bias and stochastic fluctuations coexist.

Through this four-level noise injection strategy, the constructed datasets enable comprehensive evaluation of the Transformer model’s robustness, generalization capability, and resistance to measurement uncertainty, ensuring its applicability to real-world pavement deflection data.
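The four-level noise injection strategy can be sketched as below, under the error assumptions of Table 3 (zero-mean Gaussian noise with σ = 2 μm per sensor, and a single ±4% bias per test). The function and its defaults are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def add_measurement_errors(D, case, rng=None, sigma_um=2.0, sys_bound=0.04):
    """Corrupt a clean deflection basin D (length-9 array, micrometres):

      "none"     -> D unchanged
      "random"   -> D + eps_r,        eps_r ~ N(0, sigma_um^2) per sensor
      "system"   -> D * (1 + eps_s),  eps_s ~ U(-4%, +4%), one value per test
      "combined" -> D * (1 + eps_s) + eps_r
    """
    rng = np.random.default_rng() if rng is None else rng
    D = np.asarray(D, dtype=float)
    eps_r = rng.normal(0.0, sigma_um, size=D.shape)  # random component
    eps_s = rng.uniform(-sys_bound, sys_bound)       # one bias for all sensors
    if case == "none":
        return D.copy()
    if case == "random":
        return D + eps_r
    if case == "system":
        return D * (1.0 + eps_s)
    if case == "combined":
        return D * (1.0 + eps_s) + eps_r
    raise ValueError(f"unknown case: {case}")
```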

It should be emphasized that the present dataset is fully generated from numerical simulations, and the introduced noise scenarios only approximate, rather than fully reproduce, the complexity of real FWD measurements. In practice, measurement errors may exhibit spatial correlation among sensors, time-dependent drift, temperature-induced bias, and coupling effects between sensors and pavement surface conditions. These factors are not explicitly modeled in the current study. Therefore, the adopted noise model should be regarded as a first-order representation designed to test model robustness, rather than a comprehensive description of field measurement uncertainty.

3.2.2 Z-score normalization

Before model training, both the input features (FWD peak deflections) and output variables (elastic moduli of the three pavement layers) were standardized using Z-score normalization, which transforms the data to zero mean and unit variance by subtracting the mean and dividing by the standard deviation. This eliminates dimensional differences, prevents feature dominance caused by scale differences, and improves numerical stability during training. Let the training samples be denoted as x_ij and y_ij, with means and standard deviations μ_j, σ_j for the inputs and μ_j^Y, σ_j^Y for the outputs. The standardization formulas are as follows:

x′_ij = (x_ij − μ_j) / σ_j  (8)
y′_ij = (y_ij − μ_j^Y) / σ_j^Y  (9)

Both model training and prediction are performed in the standardized space. The means μ_j, μ_j^Y and standard deviations σ_j, σ_j^Y used in Equations 8–10 are computed exclusively from the training subset. The same statistics are then applied to normalize the validation and test subsets, and to inverse-transform the predicted outputs back to physical units (MPa). This protocol ensures that no information from the validation or test data "leaks" into the training process through normalization, and that the reported performance truly reflects the model's generalization capability. After prediction, the results are transformed back to physical units (MPa) through the inverse transformation:

ŷ_ij = ŷ′_ij · σ_j^Y + μ_j^Y  (10)

where x_ij represents the original value of the jth input feature for the ith sample (for example, the peak deflection measured by the jth sensor); y_ij denotes the original target value of the jth output variable for the ith sample (i.e., the elastic modulus of the corresponding layer, in MPa); μ_j and σ_j are the mean and standard deviation of the jth input feature in the training set; and μ_j^Y and σ_j^Y are the mean and standard deviation of the jth output variable (modulus) in the training set. x′_ij is the dimensionless value obtained by applying Z-score normalization to x_ij; y′_ij is the normalized value of y_ij; ŷ′_ij is the model's predicted output in the standardized space; and ŷ_ij is the final predicted value after inverse standardization (in MPa). The prime symbol denotes a standardized variable, while the "^" symbol denotes a predicted value. To avoid numerical instability, a lower-bound correction is applied to very small standard deviations: σ_j = max(σ_j, 10⁻⁸).

For each noise scenario, the synthetic SEM-based dataset of noiseless input–output pairs is first generated and then randomly partitioned into two mutually exclusive subsets, with 70% of the samples used for training and 30% reserved for testing, using a fixed random seed to ensure reproducibility (Section 3.1). The clean deflection basins are computed from the SEM model and, after this train–test split, are corrupted by the noise models prescribed in Section 3.2.1: random measurement noise is introduced by adding zero-mean Gaussian noise to each sensor deflection, systematic noise is represented by a single bias applied to all sensors of a test, and a combined-noise case is constructed by superposing the random and systematic components.

Prior to training, both the input peak deflections and the output elastic moduli are standardized via Z-score normalization. All standardization operations are performed in a strictly training-only manner to avoid data leakage: the means and standard deviations of the input and output variables are computed from the training subset only and then used to normalize both the training and test data, as well as to inverse-transform the model predictions back to physical units (MPa). The normalized training subset is finally fed to the Transformer model described in Section 2.2, and model training and loss computation are carried out entirely in this standardized space.
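A minimal sketch of this leakage-free protocol is given below. The 70/30 split and the 10⁻⁸ floor on small standard deviations follow the text, while the seed value and function names are illustrative assumptions.

```python
import numpy as np

def train_test_split_and_standardize(X, Y, train_frac=0.7, seed=42):
    """Shuffle, split 70/30 with a fixed seed, then z-score features and
    targets using statistics computed from the TRAINING subset only."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr = int(train_frac * len(X))
    tr, te = idx[:n_tr], idx[n_tr:]

    # Training-only statistics, with the 1e-8 lower bound on the std
    mu_x, sd_x = X[tr].mean(axis=0), np.maximum(X[tr].std(axis=0), 1e-8)
    mu_y, sd_y = Y[tr].mean(axis=0), np.maximum(Y[tr].std(axis=0), 1e-8)

    def z(A, mu, sd):
        return (A - mu) / sd  # Equations 8-9

    return (z(X[tr], mu_x, sd_x), z(X[te], mu_x, sd_x),
            z(Y[tr], mu_y, sd_y), z(Y[te], mu_y, sd_y),
            (mu_x, sd_x, mu_y, sd_y))

def inverse_transform(Y_pred_std, mu_y, sd_y):
    """Map standardized predictions back to physical units (MPa), Eq. (10)."""
    return Y_pred_std * sd_y + mu_y
```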

4 Results and discussion

This section presents a comprehensive evaluation of the Transformer-based intelligent back-calculation model under four distinct noise conditions: (1) no measurement error, (2) random error, (3) systematic error, and (4) combined random and systematic error. Each condition corresponds to a realistic field scenario, reflecting the influence of measurement imperfections in FWD testing. Model performance was quantitatively assessed using the MAE, MSE, RMSE, MAPE, and R2. These metrics were calculated for each pavement layer (surface course, base course, and subgrade) as well as averaged over all layers to provide a holistic understanding of model behavior.

4.1 Model performance without measurement error

The benchmark case, without any added noise, represents the ideal data condition for model evaluation. As shown in Figure 5 and Table 4, the Transformer model demonstrates excellent agreement between predicted and true elastic moduli for all pavement layers. The predicted points in Figure 5 closely follow the 1:1 reference line, indicating strong consistency across the entire modulus range.


Figure 5. Comparison between predicted and true moduli for all pavement layers with no measurement error: (a) surface course modulus E1; (b) base course modulus E2; (c) subgrade modulus E3.


Table 4. Model performance evaluation on the test dataset with no measurement error.

Quantitatively, the average MAE reaches 710.95 MPa, and the MAPE remains as low as 5.93%, signifying a high prediction accuracy. The average R2 of 0.96 further confirms that the model captures over 96% of the variance in the true modulus values.

Among individual layers, the subgrade modulus exhibits a very high statistical correlation with the reference values (R2 close to 1.00) and relatively small absolute errors (MAE = 0.79 MPa), reflecting its dominant influence on the overall deflection basin under the considered parameter ranges and sensor configuration. Conversely, the surface course exhibits slightly larger deviations due to its higher stiffness and greater sensitivity to small perturbations in deflection measurements.

These results highlight the Transformer model’s powerful feature extraction ability and its capacity to establish a robust nonlinear mapping between deflection patterns and pavement layer moduli under ideal conditions.

4.2 Model performance under random error

To emulate random fluctuations in field measurements, Gaussian noise with zero mean and specified variance was introduced into the input data. The corresponding results are illustrated in Figure 6 and Table 5. Remarkably, even in the presence of random noise, the model maintains a high level of predictive accuracy. The average MAE (674.37 MPa) and RMSE (979.83 MPa) are slightly lower than those in the noise-free case, and R2 remains above 0.95, suggesting that the model benefits from minor data perturbations, which can enhance generalization by reducing overfitting.


Figure 6. Comparison between predicted and true moduli for all pavement layers with random error: (a) surface course modulus E1; (b) base course modulus E2; (c) subgrade modulus E3.


Table 5. Model performance evaluation on the test dataset with random error.

The R2 values remain consistently high (≥0.95 for all layers), indicating that the random disturbances do not significantly affect the model’s regression capability. This robustness can be attributed to the self-attention mechanism in the Transformer architecture, which effectively identifies key spatial dependencies among deflection features and suppresses the influence of random noise.

In particular, the base course achieves an R2 of 0.95 with MAPE below 8.1%, demonstrating the model’s adaptability to intermediate stiffness layers. The scatter distribution in Figure 6 remains tightly clustered around the reference line, further confirming the model’s insensitivity to random fluctuations.

4.3 Model performance under systematic error

Systematic errors, such as sensor calibration bias or consistent drift in FWD equipment, were next introduced to evaluate the model's resilience to directional deviations. The outcomes are summarized in Figure 7 and Table 6. Compared with the previous cases, the performance metrics show a moderate decline. The average MAE increases to 856.30 MPa, RMSE to 1205.76 MPa, and MAPE to 8.19%, while the average R2 decreases slightly to 0.94.


Figure 7. Comparison between predicted and true moduli for all pavement layers with systematic error: (a) surface course modulus E1; (b) base course modulus E2; (c) subgrade modulus E3.


Table 6. Model performance evaluation on the test dataset with systematic error.

Visual inspection of Figure 7 reveals that the predicted moduli tend to deviate systematically from the 1:1 line, producing a slight offset pattern. This shift reflects the influence of persistent bias in the input data, which cannot be fully corrected by the model’s internal learning process. The Transformer architecture, while capable of capturing complex nonlinear relationships, inherently inherits a portion of the systematic bias embedded in the training data distribution.

Nevertheless, even under such challenging conditions, the model’s prediction accuracy remains acceptable for engineering applications. The R2 values for all layers remain above 0.90, demonstrating that the model retains substantial predictive capability. These findings suggest that moderate systematic measurement errors do not critically impair the Transformer’s inference reliability, making it feasible for use with field FWD data where small calibration biases are common.

4.4 Model performance under combined random and systematic errors

The most realistic testing condition involves the coexistence of both random and systematic errors. Figure 8 and Table 7 show that under this comprehensive noise environment, the Transformer model continues to perform robustly. The average MAE increases modestly to 732.56 MPa, while the average R2 remains high at 0.95. The MAPE of 7.48% indicates that overall prediction deviations remain within an acceptable engineering range.


Figure 8. Comparison between predicted and true moduli for all pavement layers with random and systematic error: (a) surface course modulus E1; (b) base course modulus E2; (c) subgrade modulus E3.


Table 7. Model performance evaluation on the test dataset with random and systematic error.

The subgrade layer once again demonstrates the highest stability, with an R2 of 0.96, reflecting its lower sensitivity to noise due to smaller deflection amplitude variability. The surface and base layers experience minor performance degradation; however, the overall trend remains consistent, confirming that the Transformer effectively generalizes the underlying input–output relationship even when measurement uncertainty increases.

The results collectively demonstrate that the Transformer-based back-calculation model is not only accurate under ideal conditions but also robust and reliable under realistic noise perturbations.

4.5 Comparison with common machine learning models on the combined random and systematic noise dataset

For a fair and consistent comparison, all baseline models (BPNN, SVR, and XGBoost) were trained and evaluated under the same experimental conditions as the proposed Transformer model. Specifically, all models used identical input features (nine FWD deflection peaks), output targets (layer elastic moduli), train–test split (70%/30%), and data preprocessing procedures, including Z-score normalization. The comparative evaluation was conducted on the same synthetic dataset with combined random and systematic errors, and model performance was assessed on an identical test set using the same evaluation metrics (MAE, MSE, RMSE, MAPE, and R2) for the surface course, base course, and subgrade. The hyperparameters of all baseline models were selected using standard tuning strategies within commonly accepted ranges. For the BPNN, the number of hidden neurons and the learning rate were adjusted empirically based on validation performance. For SVR, key hyperparameters including the kernel type, penalty parameter, and kernel width were optimized using grid search. For XGBoost, the tree depth, learning rate, and number of estimators were tuned through empirical validation. Default parameter settings were avoided when they resulted in clear underfitting or overfitting. Consequently, the adopted configurations represent reasonable and competitive baselines rather than minimally tuned models. The corresponding test results are summarized in Table 8.


Table 8. Test set performance of different models on the random and systematic noisy dataset.

As shown in Table 8, the Transformer model achieves the lowest overall error levels and the most consistent performance across all three layers. In terms of MAPE, the average values across the three layers are approximately 7.48% for the Transformer, compared with 8.44% for BPNN, 12.83% for SVR, and 18.62% for XGBoost. The corresponding average R2 values are about 0.95 for both the Transformer and BPNN, but decrease to roughly 0.89 and 0.77 for SVR and XGBoost, respectively. These results indicate that although BPNN can reach a comparable average R2, it still yields larger errors than the Transformer, whereas SVR and XGBoost suffer from a clear degradation in predictive accuracy under the random and systematic noisy condition.

The advantage of the Transformer is particularly evident for the base course, which is generally the most difficult layer to identify due to its intermediate position and strong interaction with both the surface course and the subgrade. For this layer, the Transformer attains MAE = 882.85 MPa, RMSE = 1172.39 MPa, MAPE = 8.83%, and R2 = 0.94, outperforming BPNN (MAE = 1073.63 MPa, MAPE = 10.63%, R2 = 0.91) and substantially surpassing SVR and XGBoost (e.g., XGBoost yields MAPE = 28.04% and R2 = 0.56). For the surface course and subgrade, the Transformer also provides competitive MAE/RMSE and high R2 values, remaining at least as accurate as, and in several cases more accurate than, the baseline models.

These quantitative comparisons help clarify why a Transformer is preferred over simpler models in the proposed SEM + Transformer framework. The multi-head self-attention mechanism enables the Transformer to explicitly capture global dependencies among all FWD deflection measurements, allowing the network to focus on physically informative deflection patterns and to down-weight noisy or less relevant components. In contrast, BPNN relies on fixed fully connected mappings, SVR depends on pre-defined kernel functions, and XGBoost aggregates a series of decision trees, all of which have more limited capacity to represent the highly nonlinear and ill-posed mapping from surface deflections to multilayer elastic moduli under noisy conditions. Consequently, the Transformer not only achieves lower errors and higher R2 on the random and systematic noisy dataset, but also exhibits stronger robustness and generalization, especially for the critical base course.

4.6 Physical plausibility, identifiability, and limitations of unconstrained learning

The results presented above indicate that the proposed SEM–Transformer framework achieves high predictive accuracy and strong robustness across all considered noise scenarios. Beyond numerical accuracy, however, two fundamental issues deserve careful discussion: the physical plausibility of the predicted moduli and the identifiability of the inverse mapping from FWD deflections to multilayer elastic properties. It should be emphasized that the adopted Transformer configuration is not claimed to be universally optimal. Rather, it represents a compact and effective design choice tailored to the specific characteristics of the FWD-based back-calculation problem considered in this study.

From the perspective of physical plausibility, the predicted moduli in this study remain consistent with expected pavement mechanics within the predefined parameter space. Across all experiments and noise levels, no non-physical outcomes, such as negative elastic moduli or implausible layer stiffness inversions, were observed in the reported test cases. In particular, the stiffness ordering between the surface course, base course, and subgrade is generally preserved. This behavior can be attributed primarily to the use of SEM-generated training data, which inherently satisfy mechanical consistency and realistic stiffness hierarchies.

Nevertheless, it is important to emphasize that the Transformer model itself is unconstrained. No explicit monotonicity, ordering, or inequality constraints (e.g., E_surface ≥ E_base ≥ E_subgrade) are enforced during training or inference. As a result, the observed physical consistency arises implicitly from the data distribution rather than from hard constraints embedded in the learning model. In principle, when applied outside the training distribution or under substantially different field conditions, the model may produce physically inconsistent modulus combinations, such as a surface-layer modulus lower than that of the subgrade. This limitation is common to most purely data-driven back-calculation approaches and should be carefully considered in practical applications.

The issue of identifiability and uniqueness is intrinsic to FWD-based modulus back-calculation and is independent of the specific learning algorithm employed. The inverse mapping from surface deflection basins to multilayer elastic moduli is inherently ill-posed and non-unique: different combinations of layer properties may yield very similar surface deflection responses, particularly under limited sensor spacing and in the presence of measurement noise. Consequently, even under ideal noise-free conditions, a mathematically unique inverse solution does not generally exist.

In this context, the role of the proposed Transformer model is not to recover a unique physical solution, but rather to learn a statistically optimal inverse mapping conditioned on the assumed parameter ranges, pavement configurations, sensor layout, and noise characteristics represented in the training data. The predicted moduli should therefore be interpreted as the most probable estimates within this constrained statistical space, rather than as exact physical truths. This interpretation is consistent with both classical optimization-based back-calculation methods and recent data-driven approaches reported in the literature.

Finally, it should be recognized that a non-negligible domain shift exists between SEM-generated responses and real-world FWD measurements. Real pavements exhibit temperature-dependent and viscoelastic material behavior, layer heterogeneity, construction-induced variability, and non-ideal load–pavement contact conditions, whereas the present SEM model assumes linear elasticity, homogeneity, and axisymmetry. Field FWD data are also affected by sensor coupling effects and spatially correlated measurement errors that are difficult to reproduce numerically. While the noise models adopted in this study provide a first-order approximation of measurement uncertainty, they do not fully capture these complexities. Addressing both physical constraint enforcement and the simulation-to-field domain gap will be essential steps toward reliable deployment of the proposed framework in real-world pavement evaluation and digital twin–based management systems.

From the perspective of inverse problem theory, pavement modulus back-calculation based on FWD deflections is a fundamentally ill-posed problem. The surface deflection basin represents an aggregated structural response, and different combinations of layer moduli may produce very similar deflection profiles, particularly when sensor spacing is limited and measurement noise is present. The proposed Transformer model does not eliminate this ill-posedness, but rather provides a data-driven regularization by learning the most statistically probable inverse mapping under the assumed parameter ranges and noise conditions.

It should also be noted that different pavement layers exhibit markedly different sensitivities in FWD measurements. The subgrade modulus predominantly controls the overall curvature and far-field deflections of the basin, while the surface and base layers mainly affect near-load deflections. As a result, the inverse mapping is inherently more sensitive to subgrade stiffness variations than to variations in upper-layer moduli. This sensitivity imbalance explains the near-perfect R2 values observed for the subgrade in the present study. Such high R2 values reflect dominant sensitivity rather than guaranteed identifiability or uniqueness of the subgrade modulus.

The current study does not explicitly assess the model’s ability to distinguish between different modulus combinations that generate nearly indistinguishable deflection basins. A rigorous identifiability or sensitivity analysis—such as controlled perturbation studies or equivalence-class analysis—would be required to quantify this capability and is left for future work. The goal of this study is not to exhaustively benchmark all possible architectures, but to demonstrate feasibility and robustness of a Transformer-based inverse framework. The comparative analysis is intended to provide contextual performance references under consistent experimental conditions, rather than a statistically exhaustive or uncertainty-aware benchmark across all model families.

4.7 Overall summary

Across all four scenarios, the Transformer-based intelligent back-calculation framework demonstrates high accuracy, stability, and adaptability. The model achieves an average R2 exceeding 0.94 under all noise conditions, confirming its robustness against both random and systematic measurement errors. The average MAPE values remain below 8%, well within acceptable limits for pavement engineering applications.

These findings verify that the Transformer model effectively learns the intrinsic relationships between FWD deflection responses and layer elastic moduli, even in the presence of complex noise patterns. Consequently, this method provides a reliable and data-driven solution for practical modulus back-calculation tasks, offering improved accuracy and interpretability compared with traditional approaches.

5 Conclusion

This study developed an intelligent back-calculation framework integrating the SEM and a Transformer-based deep learning model to estimate multilayer pavement elastic moduli from FWD deflection data. Based on the numerical simulations, data preprocessing, and performance evaluations under four noise conditions, the main findings are summarized as follows:

5.1 High prediction accuracy and robustness

The Transformer-based model achieved excellent predictive performance, with average R2 exceeding 0.94 and MAPE below 8% across all scenarios. Even under combined random and systematic noise, the model maintained stable accuracy, demonstrating strong generalization and robustness to measurement uncertainty.

5.2 Superior feature learning and physical consistency

By leveraging multi-head self-attention, the Transformer effectively captured global dependencies among deflection sensors, enabling precise mapping between surface deflection patterns and underlying layer moduli. The predicted trends were physically consistent with pavement structural behavior—subgrade moduli showed the highest stability due to smoother deformation responses, while the surface layer exhibited slightly higher variability owing to its stiffness contrast.
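As a minimal illustration of the attention mechanism described above, the following NumPy sketch implements single-head scaled dot-product self-attention over sensor tokens. The sizes and random weights are assumptions for demonstration only; the paper's trained model uses multi-head attention with learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (n_sensors, d_model) -- one embedded deflection basin."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise sensor affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # context-mixed features

n_sensors, d_model, d_key = 9, 16, 8   # assumed sizes, not from the paper
X = rng.normal(size=(n_sensors, d_model))            # embedded peak deflections
Wq, Wk, Wv = (rng.normal(size=(d_model, d_key)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)   # (9, 8): each sensor token attends to all others
```

Because every output row is a weighted mixture of all sensor tokens, the mechanism captures the global dependencies among geophone positions that local convolutional filters would miss.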

5.3 Efficiency and applicability

Once trained, the proposed model provided rapid, millisecond-level predictions, offering a computationally efficient and fully data-driven solution for modulus inversion. Its end-to-end design minimizes manual parameter tuning and avoids convergence issues common in traditional iterative back-calculation, supporting integration into real-time pavement condition evaluation and intelligent maintenance systems.

Overall, the developed SEM–Transformer framework demonstrates strong potential for intelligent, accurate, and efficient pavement structural evaluation, and provides a promising basis for data-driven digital twin systems in pavement management.

However, the present study also has clear limitations. All training and testing data are synthetically generated using an SEM model, and no field FWD dataset is used for direct validation. The current findings should therefore be interpreted as a numerical benchmark demonstrating feasibility and robustness, rather than as evidence of immediate field applicability. It should also be emphasized that the reported prediction accuracy does not imply mathematical uniqueness of the inverse solution: the proposed framework provides statistically optimal estimates conditioned on the assumed data distribution, rather than resolving the intrinsic non-uniqueness of pavement modulus back-calculation.

Future work will focus on validating the proposed framework with large-scale field FWD datasets, incorporating temperature-dependent and viscoelastic material behavior, and developing physics-guided or domain-adaptive learning strategies to mitigate the simulation-to-field gap. Physical constraint enforcement will also be addressed explicitly, for example by incorporating monotonicity or inequality constraints into the learning process through output reparameterization, physics-guided loss functions, or hybrid inversion frameworks; such extensions are expected to improve physical interpretability, reduce the risk of physically inconsistent predictions, and enhance robustness on real-world FWD data. Finally, uncertainty quantification and statistical testing, such as repeated sampling, confidence interval estimation, or Bayesian approaches, will be incorporated to strengthen the rigor of comparative performance evaluation. These efforts are essential before the proposed method can be reliably deployed in real-world pavement evaluation and digital twin-based infrastructure management systems.
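The repeated-sampling confidence intervals mentioned above could, for example, take the form of a percentile bootstrap over per-sample errors. The sketch below uses synthetic absolute percentage errors, not data from this study:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(errors, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean error."""
    errors = np.asarray(errors, float)
    idx = rng.integers(0, errors.size, size=(n_boot, errors.size))
    means = errors[idx].mean(axis=1)        # resampled mean errors
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Hypothetical per-sample absolute percentage errors (%), mean about 6%.
ape = rng.gamma(shape=2.0, scale=3.0, size=500)
lo, hi = bootstrap_ci(ape)
print(f"95% CI for mean APE: [{lo:.2f}, {hi:.2f}]")
```

Reporting such intervals alongside point estimates of MAPE would allow statistically grounded comparisons between the Transformer and the baseline models.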

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

GW: Investigation, Methodology, Software, Writing – original draft, Data curation. YZ: Conceptualization, Funding acquisition, Supervision, Writing – review and editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the National Natural Science Foundation of China (NSFC, Grant 51678114), the Urumqi Transportation Research Project (JSKJ201806), and the Shanxi Province Transportation Research Project (19-JKKJ-4). The funders were not involved in the study design; the collection, analysis, or interpretation of data; the writing of this article; or the decision to submit it for publication.

Acknowledgements

The authors gratefully acknowledge the financial support of the funding agencies listed above.

Conflict of interest

Author GW was employed by Shanxi Provincial Transportation Construction Engineering Quality Inspection Center (Co., Ltd.).

The remaining author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

Abbreviations: u, displacement vector (SI unit: m); ü, acceleration vector; λ, μ, Lamé constants of the material; ∇, gradient differential operator; ∇·u, divergence of u; ∇²u, Laplacian of u; ρ, material density (SI unit: kg/m³); dk, dimensionality of the key vectors in the attention mechanism; Di, true simulated deflection value at the i-th measurement point (SI unit: μm); εis, systematic error component for the i-th measurement; εir, random error component for the i-th measurement; yi, actual (observed) value of the target variable (SI unit: MPa); ŷi, predicted value obtained from the model (SI unit: MPa); ȳ, mean value of the actual target variable (SI unit: MPa); n, number of samples; xij, original value of the j-th input feature for the i-th sample, e.g., peak deflection (SI unit: μm); yij, original target value of the j-th output variable (elastic modulus) for the i-th sample (SI unit: MPa); μj, mean of the j-th input feature in the training set; σj, standard deviation of the j-th input feature in the training set; μjY, mean of the j-th output variable (modulus) in the training set (SI unit: MPa); σjY, standard deviation of the j-th output variable in the training set (SI unit: MPa); x′ij, standardized (dimensionless) value of xij after Z-score normalization; y′ij, standardized (dimensionless) value of yij; ŷ′ij, predicted output in the standardized space; ŷij, predicted output after inverse standardization, restored to physical units (SI unit: MPa).
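The Z-score standardization and inverse standardization defined by these symbols can be sketched as follows; the training moduli below are hypothetical illustration values:

```python
import numpy as np

def fit_zscore(train):
    """Column-wise mean and std, estimated on the training set only."""
    return train.mean(axis=0), train.std(axis=0)

def transform(x, mu, sigma):
    """Z-score standardization: dimensionless values."""
    return (x - mu) / sigma

def inverse_transform(z, mu, sigma):
    """Restore predictions in standardized space to physical units (MPa)."""
    return z * sigma + mu

# Hypothetical training moduli (MPa): three samples, two output variables.
Y_train = np.array([[1200.0, 300.0],
                    [1400.0, 360.0],
                    [1000.0, 240.0]])
mu_Y, sigma_Y = fit_zscore(Y_train)
Z = transform(Y_train, mu_Y, sigma_Y)          # standardized targets
Y_back = inverse_transform(Z, mu_Y, sigma_Y)   # round trip back to MPa
```

Fitting μ and σ on the training set alone, then reusing them for the test set and for inverse-transforming predictions, prevents information leakage from test data into the preprocessing step.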

References

Al-Khoury, R., Kasbergen, C., Scarpas, A., and Blaauwendraad, J. (2001). Spectral element technique for efficient parameter identification of layered media: part II: inverse calculation. Int. J. Solids Struct. 38 (48), 8753–8772. doi:10.1016/S0020-7683(01)00109-3

Al-Khoury, R., Scarpas, A., Kasbergen, C., and Blaauwendraad, J. (2002). Spectral element technique for efficient parameter identification of layered media. Part III: viscoelastic aspects. Int. J. Solids Struct. 39 (8), 2189–2201. doi:10.1016/S0020-7683(02)00079-3

Bush, A. J. (1985). Computer program BISDEF. Vicksburg, MS: US Army Corps of Engineers Waterways Experiment Station.

Bush, A., and Alexander, D. (1985). Pavement evaluation using deflection basin measurements and layered theory. Transp. Res. Rec. 1022, 16–29.

Bypour, M., Mahmoudian, A., Yekrangnia, M., and Kioumarsi, M. (2024). Explainable tuned machine learning models for assessing the impact of corrosion on bond strength in concrete. Clean. Eng. Technol. 23, 100834. doi:10.1016/j.clet.2024.100834

Cao, D., Zhou, C., Zhao, Y., Fu, G., and Liu, W. (2020). Effectiveness of static and dynamic backcalculation approaches for asphalt pavement. Can. J. Civ. Eng. 47 (7), 846–855. doi:10.1139/cjce-2019-0052

Chen, Q., Wang, H., Ji, H., Ma, X., and Cai, Y. (2024). Data-driven atmospheric corrosion prediction model for alloys based on a two-stage machine learning approach. Process Saf. Environ. Prot. 188, 1093–1105. doi:10.1016/j.psep.2024.06.028

Chen, S., Cao, J., Wan, Y., Huang, W., and Abdel-Aty, M. (2025). A novel CPO-CNN-LSTM based deep learning approach for multi-time scale deflection basin area prediction in asphalt pavement. Constr. Build. Mater. 458, 139540. doi:10.1016/j.conbuildmat.2024.139540

Coletti, K., Romeo, R. C., and Davis, R. B. (2024). Bayesian backcalculation of pavement properties using parallel transitional markov chain monte carlo. Comput.-Aided Civ. Infrastruct. Eng. 39 (13), 1911–1927. doi:10.1111/mice.13123

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., et al. (2020). An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

Draper, N. R., and Smith, H. (1998). Applied regression analysis. 3rd Edn. New York, NY: Wiley.

Elbagalati, O., Elseifi, M., Gaspard, K., and Zhang, Z. (2018). Development of the pavement structural health index based on falling weight deflectometer testing. Int. J. Pavement Eng. 19 (1), 1–8. doi:10.1080/10298436.2016.1149838

Golmohammadi, A., Hernando, D., Van den Bergh, W., and Hasheminejad, N. (2025). Advanced data-driven FBG sensor-based pavement monitoring system using multi-sensor data fusion and an unsupervised learning approach. Measurement 242, 115821. doi:10.1016/j.measurement.2024.115821

Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. The MIT Press.

Ioannides, A. M., Barenberg, E. J., and Lary, J. A. (1989). “Interpretation of falling weight deflectometer results using principles of dimensional analysis,” in Proceedings of the 4th international conference on concrete pavement design and rehabilitation. West Lafayette, IN.

Irwin, L. H. (1994). Instructional guide for back-calculation and the use of MODCOMP3 version 3.6. Ithaca, NY: Cornell University Local Roads Program, CLRP Publications, 4–10.

Irwin, L. H., and Szebenyi, E. (1983). User's guide to modcomp2. Ithaca, NY: Cornell University Local Roads Program, 83–88.

Jiang, X., Gabrielson, J., Huang, B., Bai, Y., Polaczyk, P., Zhang, M., et al. (2022). Evaluation of inverted pavement by structural condition indicators from falling weight deflectometer. Constr. Build. Mater. 319, 125991. doi:10.1016/j.conbuildmat.2021.125991

Khazanovich, L., and Roesler, J. (1997). DIPLOBACK: neural-network-based backcalculation program for composite pavements. Transp. Res. Rec. 1570 (1), 143–150. doi:10.3141/1570-17

Li, J., Zhang, S., and Wang, X. (2025). Physics-informed neural network with fuzzy partial differential equation for pavement performance prediction. Autom. Constr. 171, 105983. doi:10.1016/j.autcon.2025.105983

Lu, L., D'Avigneau, A. M., Pan, Y., Sun, Z., Luo, P., and Brilakis, I. (2025). Modeling heterogeneous spatiotemporal pavement data for condition prediction and preventive maintenance in digital twin-enabled highway management. Autom. Constr. 174, 106134. doi:10.1016/j.autcon.2025.106134

Meier, R., Alexander, D., and Freeman, R. (1997). Using artificial neural networks as a forward approach to backcalculation. Transp. Res. Rec. 1570, 126–133. doi:10.3141/1570-15

Nam, B. H., An, J., Kim, M., Murphy, M. R., and Zhang, Z. (2016). Improvements to the structural condition index (SCI) for pavement structural evaluation at network level. Int. J. Pavement Eng. 17 (8), 680–697. doi:10.1080/10298436.2015.1014369

Pan, T., Zheng, Y., Zhou, Y., Luo, W., Xu, X., Hou, C., et al. (2023). Damage pattern recognition for corroded beams strengthened by CFRP anchorage system based on acoustic emission techniques. Constr. Build. Mater. 406, 133474. doi:10.1016/j.conbuildmat.2023.133474

Plati, C., Georgiou, P., and Papavasiliou, V. (2016). Simulating pavement structural condition using artificial neural networks. Struct. Infrastruct. Eng. 12 (9), 1127–1136. doi:10.1080/15732479.2015.1086384

Plati, C., Gkyrtis, K., and Loizos, A. (2024). A practice-based approach to diagnose pavement roughness problems. Int. J. Civ. Eng. 22 (3), 453–465. doi:10.1007/s40999-023-00900-x

Scullion, T., Uzan, J., and Paredes, M. (1990). MODULUS: a microcomputer-based backcalculation system. Transp. Res. Rec. 1260, 180–191.

Sharma, S., and Das, A. (2008). Backcalculation of pavement layer moduli from falling weight deflectometer data using an artificial neural network. Can. J. Civ. Eng. 35 (1), 57–66. doi:10.1139/l07-083

Shamiyeh, M., Gunduz, M., and Shamiyeh, M. E. (2022). Assessment of pavement performance management indicators through analytic network process. IEEE Trans. Eng. Manage. 69 (6), 2684–2692. doi:10.1109/TEM.2019.2952153

Stubstad, R., Irwin, L., Lukanen, E., and Clevenson, M. (2000). It's 10 o'clock: do you know where your sensors are? Transp. Res. Rec. 1716, 10–19. doi:10.3141/1716-02

Tarefder, R. A., Ahsan, S., and Ahmed, M. U. (2015). Neural network–based thickness determination model to improve backcalculation of layer moduli without coring. Int. J. Geomech. 15 (3), 4014058. doi:10.1061/(asce)gm.1943-5622.0000407

Torquato E Silva, S. D. A., Oliveira, J. L. F. D., Furtado, L. B. G., Babadopulos, L. F. A. L., Parente Junior, E., and Batista Dos Santos, J. (2025). Effect of the input of structural parameters’ uncertainties and analysts’ arbitrary decisions on the results of backcalculated pavement materials’ resilient moduli. Can. J. Civ. Eng. 52 (9), 1743–1751. doi:10.1139/cjce-2024-0256

Ullidtz, P. (1998). Modelling flexible pavement response and performance. Lyngby: Polyteknisk Forlag.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., et al. (2017). “Attention is all you need,” in Proceedings of the 31st international conference on neural information processing systems. Long Beach, California, USA.

Wang, Y., and Zhao, Y. (2022). Predicting bedrock depth under asphalt pavement through a data-driven method based on particle swarm optimization-back propagation neural network. Constr. Build. Mater. 354, 129165. doi:10.1016/j.conbuildmat.2022.129165

Wang, Y., Zhao, Y., Sun, Q., and Fu, G. (2023). Influence of bedrock on viscoelastic responses and parametric back-calculation results for asphalt pavements and prediction of bedrock depth under FWD tests. Constr. Build. Mater. 377, 131158. doi:10.1016/j.conbuildmat.2023.131158

Wang, Y., Zhao, Y., Wu, F., and Sun, Q. (2024). Intelligent back-calculation approach to obtain viscoelastic properties of asphalt pavements on bedrock using falling weight deflectometer tests. Transp. Res. Rec. 2679 (4), 431–447. doi:10.1177/03611981241292582

Wudil, Y. S., Shalabi, A. F., Al-Osta, M. A., Gondal, M. A., and Al-Nahari, E. (2024). Effective corrosion detection in reinforced concrete via laser-induced breakdown spectroscopy and machine learning. Mater. Today Commun. 41, 111005. doi:10.1016/j.mtcomm.2024.111005

Yang, L., Chen, Z., Cheng, H., Yang, R., Sun, L., and Cui, C. (2025). Integrating FWD test and laboratory observation for assessing the damage state of semi-rigid base in asphalt pavement. Constr. Build. Mater. 496, 143769. doi:10.1016/j.conbuildmat.2025.143769

Zhang, W., Khan, A., Huyan, J., Zhong, J., Peng, T., and Cheng, H. (2021). Predicting marshall parameters of flexible pavement using support vector machine and genetic programming. Constr. Build. Mater. 306, 124924. doi:10.1016/j.conbuildmat.2021.124924

Zhao, Y., Cao, D., and Chen, P. (2015). Dynamic backcalculation of asphalt pavement layer properties using spectral element method. Road. Mater. Pavement Des. 16 (4), 870–888. doi:10.1080/14680629.2015.1056214

Zheng, Y., Zhou, Y., Zhou, Y., Pan, T., Sun, L., and Liu, D. (2020). Localized corrosion induced damage monitoring of large-scale RC piles using acoustic emission technique in the marine environment. Constr. Build. Mater. 243, 118270. doi:10.1016/j.conbuildmat.2020.118270

Zhou, Y., Zheng, Y., Liu, Y., Pan, T., and Zhou, Y. (2022). A hybrid methodology for structural damage detection uniting FEM and 1d-CNNs: demonstration on typical high-pile wharf. Mech. Syst. Signal Proc. 168, 108738. doi:10.1016/j.ymssp.2021.108738

Zhou, Y., Aydin, B. B., Zhang, F., Hendriks, M. A. N., and Yang, Y. (2024a). A lattice modelling framework for fracture-induced acoustic emission wave propagation in concrete. Eng. Fract. Mech. 312, 110589. doi:10.1016/j.engfracmech.2024.110589

Zhou, Y., Liang, M., and Yue, X. (2024b). Deep residual learning for acoustic emission source localization in a steel-concrete composite slab. Constr. Build. Mater. 411, 134220. doi:10.1016/j.conbuildmat.2023.134220

Zhou, Y., Aydin, B. B., Zhang, F., Hendriks, M. A. N., and Yang, Y. (2025a). Lattice modelling of complete acoustic emission waveforms in the concrete fracture process. Eng. Fract. Mech. 320, 111040. doi:10.1016/j.engfracmech.2025.111040

Zhou, Y., Liu, Y., Lian, Y., Pan, T., Zheng, Y., and Zhou, Y. (2025b). Ambient vibration measurement-aided multi-1d CNNs ensemble for damage localization framework: demonstration on a large-scale RC pedestrian bridge. Mech. Syst. Signal Proc. 224, 111937. doi:10.1016/j.ymssp.2024.111937

Keywords: data-driven modeling, FWD, intelligent back-calculation, intelligent maintenance, SEM, transformer

Citation: Wang G and Zhao Y (2026) Intelligent pavement moduli back-calculation using an SEM–transformer framework. Front. Mater. 12:1732297. doi: 10.3389/fmats.2025.1732297

Received: 28 October 2025; Accepted: 22 December 2025;
Published: 28 January 2026.

Edited by:

Alireza Tabarraei, University of North Carolina at Charlotte, United States

Reviewed by:

Zeping Yang, Griffith University, Australia
Yubao Zhou, Delft University of Technology, Netherlands

Copyright © 2026 Wang and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Guozhong Wang, wangguozhong_41@126.com