OPINION article
Front. Phys.
Sec. Interdisciplinary Physics
Volume 13 - 2025 | doi: 10.3389/fphy.2025.1645620
This article is part of the Research Topic "Golden Fractal Jubilee: 50 Years of Bridging Art and Science".
Operationalizing Fractal Linguistics: Toward a Unified Framework for Cross-Disciplinary Fractal Analysis
Provisionally accepted
1 Khalifa University, Abu Dhabi, United Arab Emirates
2 Medizinische Universität Graz, Graz, Austria
Fractal analysis has become an essential tool in multiple scientific disciplines, including physics, biology, neuroscience, medicine, biomedical engineering, materials science, economics, and environmental sciences, to name a few. The concept was first proposed by Mandelbrot (1983), based on the self-similarity of structures across different scales that are not definable by Euclidean geometry (1-3). However, despite its widespread application, the precise definition and interpretation of fractal-related concepts remain inconsistent across disciplines (2, 4-7). Terms such as "fractal dimension", "self-similarity", "self-affinity", "roughness", and "scaling laws", as well as the roles of the embedding dimension and the topological dimension in spatial contexts, are frequently applied in contexts where their mathematical rigor is uncertain (8-10). This has led to a semantic drift in scientific discourse, creating barriers to effective interdisciplinary communication (11, 12). Addressing this, a differentiation between fractal analysis and fractal synthesis has been suggested, where fractal analysis focuses on single-dimension metrics to measure the complexity of an image or time series, whereas fractal synthesis combines local and global dimensions, entropy, and spatial-temporal dynamics to establish a more comprehensive understanding of complex systems. To take this further, this manuscript addresses the technical foundations and methodologies required to reliably interpret fractal descriptions by addressing assumptions of stationarity, selection of scale range, and use of surrogate data testing. Before the advent of fractal analysis, Richardson (1961) explored the relationship between scale and length in coastlines, demonstrating that coastline length increases as the measurement scale decreases, following a power-law relationship (13).
The exponent in this relationship quantifies the rate at which length changes with scale, forming the foundation of fractal dimension analysis (3, 14). This early work laid the foundation for modern fractal analysis, which employs techniques such as box-counting, dilation, mass-radius, and the caliper method (5, 15-17). These methods all serve as analytical tools that estimate object size or mass as a function of the measurement scale and collectively form the basis of fractal analysis. The key exponent in these relationships, termed the fractal dimension (D_E), characterizes the complexity of an object's scaling properties. However, there is ongoing debate regarding what constitutes a "dimension" in mathematical and physical sciences. While the Hausdorff dimension is mathematically rigorous, other fractal dimensions, such as the Minkowski and Kolmogorov dimensions, are widely used in applied fields despite not meeting strict mathematical definitions (18, 19). Many traditional fractal analysis methods, such as box-counting, assume a well-defined structure within an image or dataset, which does not always translate well to nonlinear and self-affine time-series data. Multifractal detrended fluctuation analysis (MF-DFA) has been introduced to account for these variations but remains sensitive to preprocessing techniques and data resolution, leading to the proposal of diverse multiscaling or multifractal applications (20-30). A fundamental aspect of fractal analysis when applying DFA or box-counting is the selection of an appropriate scaling range. A shortcoming of DFA is that it assumes stationarity within each detrending window (see below for the mathematical treatment). However, this is not always found in physiological time series, which can lead to incorrect inferences of fractality. Testing goodness-of-fit and ensuring the correct polynomial detrending order to avoid under- or overfitting provides robust results (31).
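To make the box-counting procedure described above concrete, the following is a minimal NumPy sketch, not taken from the original article: it counts occupied boxes at a series of box sizes and fits the slope of log N(s) versus log(1/s). The function name and the test images are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(image, sizes=None):
    """Estimate the box-counting (Minkowski) dimension of a 2-D binary array.

    Counts the number of s x s boxes containing at least one nonzero
    pixel for a range of box sizes s, then fits the slope of
    log N(s) versus log(1/s).
    """
    image = np.asarray(image, dtype=bool)
    n = min(image.shape)
    if sizes is None:
        # powers of two up to a quarter of the image side
        sizes = [2 ** k for k in range(int(np.log2(n)) - 1)]
    counts = []
    for s in sizes:
        # trim so the image tiles exactly into s x s boxes
        h, w = (image.shape[0] // s) * s, (image.shape[1] // s) * s
        boxes = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square should give a dimension close to 2,
# a straight line close to 1.
square = np.ones((256, 256))
line = np.zeros((256, 256))
line[128, :] = 1
```

In practice the scaling range (the list of box sizes) must be chosen with the same care discussed above for DFA; the fit is only meaningful over the linear region of the log-log plot.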
Fractal analysis has been employed across a wide spectrum of scientific and technological endeavors as well as in the arts. Possibly the best-known applications of fractal analysis in the arts are the identification of fractal-like patterns in the work of Jackson Pollock and the Mandelbrot set itself (32-35). Fractal analysis has also been applied to distinguish different schools of Orthodox iconography and to scribal and ink identification (36, 37). Connected to the arts is the built environment (38, 39). From very large scales down to the molecular scale, fractal approaches are used throughout physics: solar and space physics employs them to model solar flare distributions, solar wind turbulence, and magnetospheric dynamics. In thermodynamics, fractal analysis deals with the molecular or atomic level (40-42). Building on thermodynamics, combustion theory can be understood, where fractals describe flame front irregularities and turbulent eddies (43-46). In materials science, fractal models describe grain boundary growth and porous structure distributions (47). Liquid crystal textures exhibit fractal patterns during phase transitions (48). Statistical physics incorporates fractals to describe anomalous diffusion and scaling in critical phenomena (49). In geophysics and environmental sciences, fractals appear in models of porous media, glaciology (crevasse networks and ice-sheet roughness), and sedimentary layering (50, 51). Earthquake prediction research uses fractal statistics to characterize fault systems and seismicity patterns (52). Ocean dynamics, river basin distribution patterns, and tsunami wave modeling leverage fractal structures to capture nonlinear wave propagation and coastline complexity (53, 54). Climate science is another related area of research utilizing fractal analysis principles (55-57). In finance, fractal analysis is applied to market time series to understand volatility clustering and multifractal structures in asset returns.
This rather limited overview of the wide-ranging applications highlights the need for a standardized methodological framework across disciplines. From the previous paragraphs and the citations, it becomes apparent that, for instance, the term "roughness" is commonly used to describe surface complexity, yet roughness, as discussed later, is not a fractal property. The current paper concentrates on physiological processes and morphology related to fractal analysis and argues for a consistent use of definitions. Beyond spatial applications, fractal analysis has been increasingly applied to physiological time-series data, which can be analyzed by several different methods, briefly highlighted in this section (36, 58). Similar to geometric fractals, time-series fractals exhibit scale invariance in their temporal fluctuations, which can be analyzed using a variety of techniques such as detrended fluctuation analysis (DFA), Hurst exponent estimation, wavelet transforms, the Higuchi algorithm, diverse entropy-based methods, and symbolic dynamics (30, 59-75). These methods have been linked to understanding neurological function, cardiovascular health, and autonomic regulation, addressing diverse pathologies including stroke, cardiovascular disease (CVD), mental health disorders, and diabetes, amongst many others. The Hurst exponent (H) is a classical measure of the persistence or anti-persistence of a time series, classically estimated via rescaled range (R/S) analysis (76). It quantifies whether a system exhibits memory effects, where H < 0.5 indicates anti-persistent behavior, H = 0.5 suggests randomness, and H > 0.5 denotes long-range dependence. An often-overlooked aspect of the Hurst exponent is that traditional Hurst exponent estimation assumes stationarity. This has been addressed by the Hurst-Kolmogorov (HK) method for estimating the Hurst exponent.
The HK method accurately assesses long-range correlations when the measurement time series is short, shows minimal dispersion about the central tendency, and yields a point estimate that does not depend on the length of the measurement time series or its underlying Hurst exponent (77). Another difficulty in determining the Hurst coefficient is that the computation of H depends on the signal type (31, 78). The signal type can be fractional Gaussian noise (fGn) or fractional Brownian motion (fBm), where mathematically fGn is the first derivative of fBm, and fBm is the temporal integral of fGn. fGn is stationary with zero mean and short- or long-range correlations, depending on the Hurst exponent; fGn has slopes of the power spectral density (PSD) smaller than one (β < 1), where β is the spectral exponent (slope of the power spectral density). In contrast, fBm is non-stationary and self-similar, with long-range dependence and increasing variance over time, and a slope greater than one (β > 1) (79). The power spectral density is then defined as:

S(f) ∝ f^(-β)

where f is frequency and β is the scaling exponent, which indicates the complexity of the signal (31, 80). The spectral exponent is defined through the PSD and indicates how energy or power is distributed across frequencies in a signal, characterizing the scale-invariance and fractal properties of the temporal dynamics. β indicates signal complexity and is linked to the long-term correlations of the signal. As such, β = 0 defines white noise, a completely uncorrelated time series. Pink or 1/f noise has correlated fluctuations and is defined by 0 < β < 1. A Hurst exponent greater than 0.5 indicates persistent Brownian motion, whilst a Hurst exponent less than 0.5 indicates anti-persistent Brownian motion. When β = 2, the signal is akin to Brownian motion and exhibits non-stationary time evolution.
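The spectral exponent β can be estimated directly by fitting the slope of the log-periodogram against log-frequency. The following is a minimal sketch, not from the original article; the function name, the restriction of the fit to low frequencies, and the test signals are illustrative assumptions.

```python
import numpy as np

def spectral_exponent(x, fs=1.0):
    """Estimate beta in S(f) ∝ f^(-beta) by linear regression of the
    log-periodogram on log-frequency."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # drop the zero-frequency bin; restrict to low frequencies to
    # avoid discretization effects near Nyquist
    f, p = freqs[1:], psd[1:]
    mask = f <= fs / 8
    slope, _ = np.polyfit(np.log(f[mask]), np.log(p[mask]), 1)
    return -slope

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)   # expect beta ≈ 0
brown = np.cumsum(white)               # expect beta ≈ 2
```

Averaging over segments (Welch-style) or log-binning the periodogram would reduce the variance of the fit; the single-periodogram version above is kept deliberately short.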
Values of β > 2 are associated with Brownian motion and stronger persistence, with less local variability (79). From a fractal perspective, β reflects the self-affinity of the signal: signals with higher β exhibit smoother, more correlated behavior, whereas lower β implies higher-frequency fluctuations and greater complexity. Fractional Brownian motion (fBm) generalizes classical Brownian motion by introducing memory via the Hurst exponent H ∈ (0, 1), which controls the degree of long-range dependence and smoothness. Mandelbrot and van Ness (MvN) originally defined this process as:

B_H(t) = (1 / Γ(H + 1/2)) [ ∫_{-∞}^{0} {(t - s)^(H - 1/2) - (-s)^(H - 1/2)} dB(s) + ∫_{0}^{t} (t - s)^(H - 1/2) dB(s) ]

The B_H(t) process is self-similar with stationary increments, where B_H(t) is the fractional Brownian motion with Hurst exponent H and Γ(·) is the gamma function (31). This results in different values for the autocorrelation or the fractal dimension derived from H. Here the autocorrelation decays as a power law for these processes, depending on H:

ACF(τ) ∼ τ^(2H - 2)

And the fractal dimension (D_F) of the signal is related to H by:

D_F = 2 - H

where smoother signals (with higher H) exhibit lower complexity in their geometric representation. For slopes close to one, the type of signal investigated is uncertain, and, in addition, both fGn and fBm may be present in any recorded signal. In recent literature, fractal terminology has evolved to distinguish between various generalizations of fBm that are relevant in the context of non-Brownian, non-Gaussian dynamics observed in empirical signals (82). Normal fractional Brownian motion (nfBm), or Gaussian fBm, is now commonly used to refer explicitly to the classical fBm as originally defined by MvN (83, 84). nfBm can be further defined as a Gaussian process whose variance is a non-linear function of time (i.e., sub- or super-linear).
By contrast, the term fBm is increasingly applied more broadly to describe processes that exhibit self-similarity and long-range dependence but may not be Gaussian or may not have strictly stationary increments. This includes fractional Lévy motion (fLm), which generalizes fBm to the non-Gaussian, heavy-tailed domain by replacing the Wiener process with an α-stable Lévy process (85-87). Other terms, such as normalized fractional Lévy noise and fractional Lévy noise (fLn), may refer to fractional Brownian-type processes with non-Gaussian features or noise contamination as described above (88, 89). Similar to the Hurst exponent, detrended fluctuation analysis (DFA) is a widely used method for detecting long-range correlations in non-stationary time series. DFA was introduced by Peng et al. to address the limitations, for non-stationary signals, of the classical rescaled range (R/S) analysis developed by Hurst (76, 90). For monofractal signals, the scaling exponent α is equivalent to the Hurst exponent (91). DFA requires integrating the time series x(i) to transform it into a random-walk profile and reveal the underlying trends and fluctuations:

y(k) = Σ_{i=1}^{k} (x(i) - x̄)

where k is the index of the cumulative summation step in the integrated signal y(k). The integrated time series is then divided into non-overlapping segments of length s, detrending each segment using polynomial fitting and computing the root-mean-square (RMS) fluctuation:

F(s) = sqrt( (1/N) Σ_{k=1}^{N} [y(k) - y_fit(k)]² )

This procedure is repeated for various scales or window sizes s, and α is determined from:

F(s) ∝ s^α

DFA thus quantifies the scaling behavior of a signal by dividing it into segments, detrending each segment, and calculating the root-mean-square fluctuations across multiple time scales. The fractal exponent (α) derived from DFA helps classify signals as uncorrelated (α ≈ 0.5), long-range correlated (0.5 < α < 1), or Brownian motion-like (α ≈ 1.5).
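The DFA steps just described (profile integration, segmentation, polynomial detrending, RMS fluctuation, log-log fit) can be sketched as follows. This is a minimal first-order DFA illustration under stated assumptions (function name, default scale grid), not the implementation used by the cited authors.

```python
import numpy as np

def dfa(x, scales=None, order=1):
    """Detrended fluctuation analysis: returns the scaling exponent alpha.

    Integrates the mean-removed series, splits the profile into
    non-overlapping windows of length s, removes a polynomial trend of
    the given order in each window, and fits log F(s) versus log s.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())          # random-walk profile y(k)
    n = len(y)
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(8), np.log10(n // 4), 20).astype(int))
    fluct = []
    for s in scales:
        segments = n // s
        sq_resid = []
        for i in range(segments):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            sq_resid.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(sq_resid)))   # F(s)
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(1)
noise = rng.standard_normal(8192)   # uncorrelated: expect alpha ≈ 0.5
walk = np.cumsum(noise)             # Brownian-like: expect alpha ≈ 1.5
```

As emphasized above, the choice of scale range and detrending order should be validated (goodness-of-fit, surrogate testing) rather than accepted by default.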
However, this method does not distinguish between fractional Lévy and fractional Brownian motion (92). The former is an important characteristic of physiological time series (93). fBm can be written symbolically as a convolution with Brownian motion,

B_H(t) = ∫ K_H(t, s) dB(s)

where B(s) is the Wiener process introduced above and K_H denotes the MvN power-law kernel. This symbolic representation of the MvN fBm conveys the power-law scaling characteristic of fBm rather than being a constructive definition; it emphasizes statistical properties such as scale invariance:

B_H(t) ~ t^H

indicating that the variance of fBm scales as t^(2H), but it does not capture the integral structure and stochastic rigor of the MvN representation. Conversely, fLm uses a Lévy noise term dL(s) instead:

L_H(t) = ∫ K_H(t, s) dL(s)

fBm and fLm are self-similar stochastic processes with stationary increments but differ in their statistical properties. fBm is a Gaussian process characterized by continuous, non-differentiable paths and finite variance. It exhibits long-range dependence when the Hurst exponent H > 0.5, and its increments follow a normal distribution, leading to light-tailed behavior. In contrast, fLm generalizes fBm by replacing Gaussian noise with Lévy-stable noise, allowing for heavy-tailed, power-law-distributed increments. This substitution introduces discontinuities in the time series and results in infinite variance when the Lévy stability index α < 2. While fBm and fLm are both self-similar, the interpretation of the Hurst exponent in fLm differs depending on the influence of α. The choice between fBm and fLm should therefore depend on whether the data exhibit heavy tails, discontinuities, or rare extreme fluctuations. The distinction between fBm and fLm is important for biosignals and for interpreting, for instance, heart rate complexity and the choice of HRV analysis features, where assumptions of stationarity and Gaussianity do not hold.
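A quick practical diagnostic for the fBm-versus-fLm choice is to inspect the increment distribution for heavy tails, e.g., via excess kurtosis. The sketch below is illustrative only: it uses Student-t increments as a convenient heavy-tailed stand-in, not a true α-stable Lévy process, and the function name is an assumption.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: ≈ 0 for Gaussian increments,
    large and positive for heavy-tailed increments."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

rng = np.random.default_rng(2)
gauss_inc = rng.standard_normal(20000)   # light-tailed, fBm-like increments
heavy_inc = rng.standard_t(3, 20000)     # heavy-tailed stand-in for Lévy-type increments
```

For genuinely α-stable increments the theoretical kurtosis is infinite, so a more careful analysis would fit the stability index α directly (e.g., from the tail of the empirical distribution) rather than rely on a single moment.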
Time-series features associated with HRV, which for traditional time- and frequency-domain metrics are assumed to be linear, are usually computed over 5-minute windows, which provides an approximately stationary signal and allows frequency-domain and fractal spectral dimension analysis (94-98). However, the difference in practice is that fBm may capture persistent correlations under linear assumptions, while fLm provides a more appropriate framework when heart rate dynamics exhibit abrupt changes, heavy tails, or nonlinear multifractal scaling features (99). In addition, fLm maintains self-similarity and supports multifractality and the modeling of heterogeneous scaling through its nonlinear dependence on α (tail thickness) and the Hurst exponent (memory) (99). Wavelet-based methods are better suited for nonlinear, non-stationary signals and can quantify multifractality from the wavelet coefficients (100). Wavelet-based fractal analysis, particularly the wavelet transform modulus maxima (WTMM) method, extends traditional fractal measures by capturing both time and frequency information. This multifractal analysis detects singularities in a signal by applying the continuous wavelet transform (CWT) and estimates the local scaling properties at different resolutions using the scaling of the modulus maxima of the transform across scales. As WTMM provides high temporal resolution, it is ideal for transient and non-stationary signals. The method is also more robust to noise than DFA, but it is computationally intensive and requires careful selection of the wavelet function (101). The continuous wavelet transform of a signal x(t) for a wavelet ψ(t) is given by:

W(a, b) = (1 / √a) ∫ x(t) ψ*((t - b) / a) dt

where a is the scale parameter, b is the translation parameter, and ψ* denotes the complex conjugate of the wavelet. The Higuchi algorithm quantifies the complexity of a signal by assessing how the curve length changes as a function of the observation scale.
The Higuchi method begins by constructing k new sub-series from the original signal x(1), x(2), …, x(N) using a delay parameter k, where each sub-series is defined as:

X_m^k : x(m), x(m + k), x(m + 2k), …, x(m + ⌊(N - m)/k⌋ k)

for m = 1, 2, …, k. For each sub-series, the length L_m(k) is computed by summing the absolute differences between points separated by k samples, normalized by a scaling factor to account for the number of terms:

L_m(k) = (1/k) [ Σ_{i=1}^{⌊(N - m)/k⌋} |x(m + ik) - x(m + (i - 1)k)| ] · (N - 1) / (⌊(N - m)/k⌋ k)

The average length L(k) is then calculated across all m. The slope of the linear regression of log L(k) versus log(1/k) provides the Higuchi fractal dimension (D) as:

L(k) ∝ k^(-D)

Although there is often a pronounced linear range in the double-logarithmic plot, the fractal range must still be selected. More recent work on time series has suggested a second type of fractal feature, defined as crucial events, which are also a function of a power law and derived from converting the time series into a diffusion process (104). A significant development in fractal physiology has been the recognition of crucial events, which disrupt self-similar patterns in time-series data (105-108). Unlike traditional fractal models, which assume continuous self-similarity, crucial events introduce non-ergodic, intermittent bursts of activity that influence system dynamics (109). These events play a key role in heart rate variability and neural oscillations, affecting the interpretation of fractal measures in clinical diagnostics (20). Closely related to crucial events is the concept of 1/f noise, a scaling phenomenon observed in numerous biological and cognitive systems (110). The research on crucial events has now extended the concept of fractality from Type I 1/f noise, which originates from stochastic, memoryless fluctuations characteristic of fractional Brownian motion, to Type II 1/f noise, arising from self-organized criticality (SOC) and extended to self-organized temporal criticality (SOTC), where fluctuations reflect an adaptive, regulatory temporal process (111).
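Returning to the Higuchi algorithm described earlier, the sub-series construction, length normalization, and log-log fit can be sketched as below. This is a minimal illustration under stated assumptions (function name, default kmax, 0-based offsets), not the authors' implementation.

```python
import numpy as np

def higuchi_fd(x, kmax=16):
    """Higuchi fractal dimension of a 1-D signal.

    For each delay k and offset m, sums absolute differences between
    points k samples apart, applies Higuchi's normalization, averages
    over offsets, and fits the slope of log L(k) versus log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lengths = []
    for k in ks:
        lm = []
        for m in range(k):                     # offsets (0-based here)
            idx = np.arange(m, n, k)           # sub-series indices
            diff_sum = np.abs(np.diff(x[idx])).sum()
            # normalization for the number of terms, as in Higuchi's scheme
            norm = (n - 1) / ((len(idx) - 1) * k)
            lm.append(diff_sum * norm / k)
        lengths.append(np.mean(lm))            # L(k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(3)
white = rng.standard_normal(4096)   # expect D ≈ 2 (maximally irregular)
walk = np.cumsum(white)             # expect D ≈ 1.5 (Brownian profile)
```

As the text notes, a linear range of k must still be selected in practice; fitting across all k up to a large kmax can bias D when the log-log plot bends.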
The failure to differentiate between Type I and Type II 1/f noise can therefore confound the interpretation of fractal measures. While fractal analysis has proven useful in spatial and morphological studies, its application to time-series data introduces additional complexities. Time-series fractal analysis is commonly used to study physiological signals (EEG, ECG, HRV), financial markets, and ecological trends, but poses challenges, including non-stationarity, as many biological and economic time-series datasets do not exhibit a constant mean or variance over time, making conventional fractal analysis methods unreliable (112-117). Scaling relationships in time series often show upper and lower bounds, beyond which self-similarity breaks down (77, 118, 119). Distinguishing between true self-similarity and random processes (e.g., Type I vs. Type II 1/f noise) remains an active area of research (20, 58, 120, 121). Many traditional fractal analysis methods, such as box-counting, assume a well-defined structure within an image or dataset, which does not always translate well to nonlinear and self-affine time-series data. Multifractal detrended fluctuation analysis (MF-DFA) has been introduced to account for these variations but remains sensitive to preprocessing techniques and data resolution, leading to the proposal of diverse multiscaling applications such as multiscale Rényi entropy, multiscale diffusion entropy, and DFA, which is itself multiscale in the way it is implemented (20-27, 29, 30, 122). Fractal geometry is widely used to describe structures in two-dimensional (2D) and three-dimensional (3D) images, which present unique methodological challenges. In 2D fractal analysis, box-counting and perimeter-based techniques are commonly employed to assess the self-similarity of biological forms, plant venation patterns, and coastline structures (13, 17, 123-125).
In contrast, 3D fractal analysis extends these principles to volumetric datasets, requiring voxel-based segmentation and multi-scale algorithms to quantify complexity in medical imaging, bone morphology, and neural connectivity studies (126-130). These images are characterized by a lack of infinite length and are not strictly space-filling, though they may optimize coverage through fractal-like growth patterns (131-133). This distinction is essential to avoid misclassification or misinterpretation of natural structures as true fractals when they are more accurately described as approximations of fractal behavior, scale-invariant rather than self-similar. Recent advances now include features such as lacunarity as an additional measure describing the space-filling characteristics of objects (17, 134-136). Multifractality refers to multiple scaling exponents within a time series, suggesting that different dynamic processes operate at different scales. This phenomenon is widely observed in physiological signals, yet accurately quantifying multifractality remains a major challenge (20, 130). One of the key issues is that preprocessing techniques, such as detrending and filtering, may artificially introduce correlations, leading to overestimation of multifractal properties (20, 137, 138). Three additional shortcomings are the artificial reduction of multiscale entropy (MSE) due to the coarse-graining procedure, the fixed cut-off for histogram binning, and the introduction of spurious MSE oscillations due to a suboptimal procedure for the elimination of the fast temporal scales (21, 137, 139). Artifacts due to signal distribution, autocorrelation, or non-stationarity need to be addressed to identify true multifractality, by including null models or shuffled or phase-randomized surrogate data that show whether the multifractal spectrum is retained (140).
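Lacunarity, mentioned above as a complement to the fractal dimension, can be computed with a gliding-box algorithm: slide a box over the image, record the mass in each position, and take Λ = ⟨M²⟩ / ⟨M⟩². The sketch below is a minimal 2-D illustration (the function name and test patterns are assumptions), using an integral image for constant-time box sums.

```python
import numpy as np

def lacunarity(image, box_size):
    """Gliding-box lacunarity of a 2-D binary array.

    Slides a box of the given size over the image, records the mass
    (number of occupied sites) at each position, and returns
    Lambda = <M^2> / <M>^2. A translation-invariant (homogeneous)
    pattern gives the minimum value of 1.
    """
    img = np.asarray(image, dtype=float)
    # integral image with a zero border for O(1) box sums
    ii = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    r = box_size
    masses = (ii[r:, r:] - ii[:-r, r:] - ii[r:, :-r] + ii[:-r, :-r]).ravel()
    return np.mean(masses ** 2) / np.mean(masses) ** 2

# A homogeneous image has lacunarity 1; a sparse random pattern,
# with large gaps relative to its mass, has higher lacunarity.
uniform = np.ones((64, 64))
rng = np.random.default_rng(4)
sparse = (rng.random((64, 64)) < 0.05).astype(float)
```

In full analyses Λ is evaluated over a range of box sizes, giving a lacunarity curve that describes how "gappiness" changes with scale.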
True multifractality arises when a stochastic process intrinsically exhibits a spectrum of singularities due to multiplicative cascades or multiscale interactions. It is marked by a nonlinear dependence of the generalized Hurst exponent h(q), or the scaling exponent τ(q), on the moment order q, leading to a broad multifractal spectrum f(α). This behavior is robust to surrogate testing, persisting under methods like the Iterative Amplitude Adjusted Fourier Transform (IAAFT) or phase-randomized surrogates that preserve linear structure but disrupt higher-order dependencies. Apparent multifractality, by contrast, mimics this behavior due to artifacts such as finite-size effects, heavy tails, or non-stationarity. While such signals may exhibit curvature in h(q) under MF-DFA or structure-function methods, this effect often vanishes under surrogate testing, indicating the absence of genuine multiscale complexity (141-143). However, distinguishing true from apparent multifractality does not address another fundamental issue of multifractal analysis, which is how multifractality is interpreted. A striking example of the dichotomy in definitions across disciplines comes from economics, finance, and econometrics, where multifractality is often operationalized through a time-varying Hurst exponent, or the Hölder exponent h(t), estimated using sliding-window techniques or local detrending procedures. This approach characterizes non-stationarity or regime-switching in market dynamics by tracking local scaling exponents (144, 145). In contrast, physics-based disciplines, including physiology, geophysics, biophysics, and sociophysics, emphasize that multifractality reflects the presence of a spectrum of singularities within a signal or process.
This spectrum is then quantified through the multifractal spectrum f(α), implying that the signal possesses a hierarchy of scaling behaviors rather than a single dominant exponent (146-150). This distinction reflects deep epistemological differences: the econometric approach treats the Hurst exponent as a local, potentially non-stationary parameter, detached from a strict theoretical model of scale invariance (151, 152). By contrast, in physics, multifractality arises from inherent self-organization and the aggregation of fluctuations across scales, often associated with self-organized criticality (SOC) or self-organized temporal criticality (SOTC) (153). These frameworks interpret multifractal behavior as a signature of far-from-equilibrium dynamics with embedded memory and complexity (20, 154, 155). Thus, complementary interpretations may provide utility in specific applied domains. However, in this work, a multifractal process is described as one that gives rise to a hierarchy of scaling exponents, characterized by a nonlinear relationship between the moment order q and the scaling exponent τ(q), with the multifractal spectrum derived via the Legendre transformation. This interpretation aligns with empirical findings across physiological, linguistic, and sociocultural time series, where complex coordination dynamics and long-range dependencies generate nontrivial scaling spectra. In addition to the distinction between local Hölder exponent tracking (common in econometrics) and multifractal spectrum analysis (prevalent in physics), another widely used definition of multifractality involves the generalized Hurst exponent H(q), which characterizes the scaling behavior of the q-th order moments of the process. Specifically, for a stochastic process X(t), the structure function

S_q(τ) = E[ |X(t + τ) - X(t)|^q ]

typically scales as τ^(qH(q)). For monofractal processes, H(q) is constant across all q, indicating uniform scaling.
In contrast, a multifractal process exhibits a nonlinear dependence of H(q) on q, reflecting the presence of multiple scaling regimes and heterogeneous singularities. Multifractal detrended fluctuation analysis (MF-DFA), which estimates H(q) empirically from detrended local fluctuations, is based on this formalism. The nonlinear shape of the H(q) curve, or of its associated mass exponent function τ(q) = qH(q) - 1, is often used to quantify the degree of multifractality. The corresponding multifractal spectrum f(α) can then be obtained by applying the Legendre transformation, where α = dτ(q)/dq and f(α) = qα - τ(q). Thus, the generalized Hurst exponent approach bridges the statistical physics and signal processing perspectives, offering a versatile framework for detecting and characterizing scale-free structures in both stationary and non-stationary signals (143, 156, 157). Importantly, this interpretation also connects to non-Gaussianity and intermittency: a broader spread or curvature in the H(q) function implies the presence of strong fluctuations (bursts or laminar phases) that contribute disproportionately to high-order moments. This makes the generalized Hurst exponent particularly relevant in the analysis of physiological, linguistic, and financial time series, where empirical signals often reveal multifractal characteristics not captured by a single global exponent (100, 103). When applying fractal and multifractal methods to spatial and time-series analysis, validation of the scaling metrics is one of the most important components. Determining the appropriate scale-invariance range of an object or time series reflects the intrinsic properties of the system. Noise, artifacts, and inappropriate preprocessing influence the scale range that is fundamental to characterizing fractality. Linearity of the log-log plot over a minimum of one to two decades provides a meaningful result.
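The structure-function definition of the generalized Hurst exponent can be estimated directly from data: compute S_q(τ) over a range of lags, fit log S_q against log τ, and divide the slope by q. The sketch below is illustrative (function name, default q and lag grids are assumptions); a monofractal random walk should yield h(q) ≈ 0.5 for all q.

```python
import numpy as np

def generalized_hurst(x, qs=(1, 2, 3, 4), lags=None):
    """Generalized Hurst exponents h(q) from structure functions.

    S_q(tau) = <|X(t+tau) - X(t)|^q> is assumed to scale as tau^(q h(q));
    h(q) is the slope of log S_q versus log tau divided by q. For a
    monofractal process h(q) is approximately constant in q; curvature
    of h(q) indicates multifractality (or artifacts mimicking it).
    """
    x = np.asarray(x, dtype=float)
    if lags is None:
        lags = np.unique(
            np.logspace(0, np.log10(len(x) // 10), 12).astype(int))
    hs = []
    for q in qs:
        sq = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
        slope, _ = np.polyfit(np.log(lags), np.log(sq), 1)
        hs.append(slope / q)
    return dict(zip(qs, hs))

rng = np.random.default_rng(5)
walk = np.cumsum(rng.standard_normal(20000))   # monofractal, h(q) ≈ 0.5
```

As discussed above, any apparent q-dependence of h(q) in real data should be checked against surrogate series before being interpreted as genuine multifractality, since heavy tails and finite sample size alone can curve h(q).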
This is achieved by ensuring scale boundaries are correctly chosen to avoid short-range correlations (e.g., noise) at small scales or finite-size effects at long scales. Automated range selection based on R² optimization or slope-stability criteria has been recommended (158-163). Surrogate analysis can be applied to validate the scaling results, including shuffled time series and phase-randomized Fourier surrogates that preserve linear autocorrelations but remove nonlinear structure. For DFA, comparing the α exponent or the multifractal width Δα between the original and surrogate time series provides a means of assessing whether the observed complexity exceeds what may be expected by chance. Sensitivity analysis, such as varying the detrending polynomial order in DFA, the embedding dimension in correlation dimension estimation, and the q-range in multifractal analysis, as well as stationarity testing, further supports the validity of results (24, 164-166). Practical implementations and strategies have been proposed to improve the reliability and reproducibility of fractal and multifractal analysis. For physiological time-series analysis, several preprocessing protocols have been developed to ensure stationarity before applying methods that are sensitive to non-stationarity, such as DFA (67, 94, 167). The consistency of complexity metrics has been investigated to understand the influence of the underlying mathematical algorithms (168-170). In geophysics, embedding time windows and applying the correlation dimension together with entropy measures for assessing seismic precursors has improved predictability based on fractal analysis (171). Recent analysis protocols in neuroscience and stock market analysis have emphasized the use of multifractal detrended fluctuation analysis and sliding-window applications to identify changes associated with task performance (172-175).
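The phase-randomized Fourier surrogate mentioned above can be generated in a few lines: keep the amplitude spectrum (and hence the linear autocorrelation), randomize the phases, and invert. This is a minimal sketch of the basic Fourier surrogate, not the IAAFT refinement; the function name is an assumption.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Fourier surrogate: preserves the amplitude spectrum (and thus the
    linear autocorrelation) while randomizing phases, which destroys
    nonlinear structure."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0    # keep the DC component real
    phases[-1] = 0.0   # keep the Nyquist component real (even-length input)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(6)
x = np.cumsum(rng.standard_normal(4096))   # example original series
s = phase_randomized_surrogate(x, rng)
```

Comparing α or Δα between the original series and an ensemble of such surrogates, as described above, indicates whether the observed complexity exceeds what linear correlations alone would produce.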
Local stationarity testing, time-varying complexity profiling, and hybrid decomposition-scaling approaches provide more context-sensitive, translatable, and interpretable fractal analysis results across diverse applications. Scientific communication ensures that concepts, methodologies, and results are accurately described across disciplines. The interdisciplinary nature of fractal analysis presents a particular challenge in this regard, as the concept of fractals has been adapted across diverse research fields, including mathematics, physics, biology, cognitive science, and artificial intelligence (118, 176). However, despite the apparent universality of fractal theory, its application in non-mathematical disciplines often results in inconsistent terminology and methodological ambiguity. The precise meanings of terms such as fractal dimension (D_E), self-similarity, and scale invariance are often diluted when applied outside their mathematical origins, leading to conceptual drift and misinterpretation (14, 118, 177). One of the core difficulties in applying fractal analysis across disciplines is the variability in definitions. For instance, while mathematical fractals exhibit strict self-similarity across scales (e.g., the Mandelbrot set, Koch curve, or Peano curve), natural fractals such as branching trees, neuronal structures, or vascular systems are statistically self-similar, or defined as scale-invariant, and do not conform to the infinite recursion found in mathematical, ideal, theoretical fractals (178-180). From a computer-representation perspective, even theoretical fractals with infinite self-similar scaling will have limitations.
Hence, in biomedical applications, fractal-based models employed in image processing, signal analysis, and physiological modeling lack standardization, which has led to conflicting interpretations of what constitutes a fractal object (181,182).

From the above discussion, it becomes clear that inconsistent use of fractal terminology, and variability in how analytical methods are applied to determine the fractality of spatial and temporal features, have contributed to a semantic drift across disciplines, in which mathematical rigor is diluted in the applied sciences. The lack of precise definitions in fractal-based biomedical research has produced numerous innovations for determining the fractal dimension of scale-invariant images or nonlinear characteristics of time series, but also confusion in interpreting the results. Fractal linguistics, therefore, seeks to establish a standardized lexicon that preserves mathematical accuracy while facilitating interdisciplinary communication.

Fractal analysis is applied across multiple scientific disciplines, yet its conceptual complexity and specialized language create barriers to effective interdisciplinary communication. As scientific communities form around fractal research, a process of professional socialization occurs, leading to the development of distinct linguistic norms. This specialization, while beneficial within specific domains, contributes to communication gaps between different research communities (179,183). To better understand the diversity of fractal analysis expertise, researchers can be classified into three broad categories:

1. Researchers with expertise in other domains, such as biology, medicine, or the social sciences, who have a strong background in mathematics or physics but lack in-depth training in fractal theory. These researchers often rely on published methodologies but may struggle with the nuances of fractal interpretation.

2. Experts in fractal and scaling theory, who possess a deep understanding of its mathematical foundations (59,113,147,149,184-189). This group establishes a unique identity through its specialized discourse, reinforcing its authority in the field.

3. Interdisciplinary bridge-builders, who aim to develop and disseminate tools for fractal analysis in an accessible manner (18,22,118,127,132,190-197). Ideally, this category consists of members of the second group working to improve knowledge transfer to the first.

Despite the presence of such interdisciplinary bridge-builders, a significant gap remains between fractal theory specialists and applied researchers. The core issue lies in the clarity and accessibility of scientific communication. Many fractal specialists are embedded in their discipline-specific discourse and find it challenging to translate their knowledge into a more universally understandable format. Consequently, applied researchers acquiring literacy in fractal methods often misinterpret, or inadvertently propagate inaccuracies in, terminology and methodology. One of the primary risks in this process is that terminology is adopted without a complete understanding of its mathematical implications.

For meaningful interdisciplinary progress, experts in fractal analysis and researchers with expertise in other domains must prioritize clearer dissemination of fractal principles, ensuring that foundational knowledge is communicated effectively. This includes not only publishing in specialized journals but also contributing to open-access resources and interdisciplinary review articles that cater to broader scientific audiences. The section below provides an example.
The challenges in fractal research applied to image analysis are exemplified by the case of diffusion-limited aggregates (DLAs), a phenomenon resulting from stochastic growth processes with potential applications in biology, chemistry, and materials science. While physicists have extensively studied DLAs, debate continues over their precise fractal classification (102,198-202). Key unresolved questions are whether DLAs are strictly self-similar and whether they are multifractal (203,204). Most publications addressing DLAs are authored by physicists and published in physics journals, reinforcing an insular discourse that limits broader interdisciplinary application. The biological sciences, despite their potential to benefit from fractal modeling, have not embraced fractal analysis to the same extent as physics and engineering. This is largely due to the lack of clear instructional resources that articulate the theoretical underpinnings and applied methodologies in an accessible manner. A crucial missing element in fractal research is the documentation of how researchers acquire fluency in fractal theory and its applications. Specialists in the field must articulate how meaning is constructed and communicated in fractal analysis. Standardized educational resources, interdisciplinary training workshops, and accessible computational tools can help bridge the gap between theory and application (20,205,206). To foster more effective knowledge transfer, future efforts should focus on:

1. Developing interdisciplinary educational frameworks that introduce fractal concepts to nonspecialists in an intuitive manner (207,208).

2. Encouraging cross-domain collaboration to refine methodologies and establish consensus on terminology and interpretation (18,19,209).

3. Creating open-access repositories that provide validated computational tools and datasets to ensure methodological reproducibility (36,210).

By improving the clarity, accessibility, and applicability of fractal discourse, the research community can enhance the interdisciplinary impact of fractal analysis across physics, engineering, biology, and the cognitive sciences. The next section discusses some of the issues with terminology.

This section does not aim to provide an exhaustive glossary of fractal terminology but rather to address common misconceptions and clarify foundational concepts. What is a fractal, and how is fractal analysis conducted? Understanding fractals begins with identifying their defining properties. A frequently misused term in describing fractals is "roughness". While commonly used to express irregularity, roughness in fractal linguistics corresponds more accurately to surface complexity and space-filling capacity rather than simple deviation from a mean value (11,211,212). In contrast, the fractal dimension (DF) quantifies how an object's size (length, area, or volume) varies with measurement scale (14,127,176,184,212). Unlike Euclidean shapes, fractals exhibit irregular surfaces at all scales of magnification and lack a characteristic length (3,12,118,213). To better characterize fractal properties, three attributes are particularly important:

1. Characteristic length - non-fractal objects can be described using simple geometric shapes (e.g., the Earth as a sphere). The presence of a characteristic length implies a smooth surface.

2. Self-similarity / scale invariance - fractal objects maintain a repeating pattern at different magnifications. The Koch curve exemplifies this principle for a theoretical fractal, lacking a characteristic scale at every iteration. The Koch curve and other mathematically defined objects, when rendered digitally, are self-similar only down to the limit of the screen or image resolution.

3.
Complexity - the fractal dimension, DF, quantifies the extent to which an object fills space. Higher values of DF indicate greater space-filling capacity, i.e., more complexity.

Self-similarity in fractals implies that increasing magnification does not introduce new structural changes but instead reveals finer details of the same underlying pattern. The degree of scale-invariance in a biological structure can be quantified through fractal analysis, where the gradient of a log-log plot of count number versus scale is proportional to its fractal dimension (159,161,162,214,215). The mathematical formulation of fractal measurement follows the power-law relationship N(ε) ∝ ε^(-DF), where DF is the fractal dimension and N(ε) is the number of measurement units at scale ε. For the Koch curve, DF = log 4 / log 3 ≈ 1.26, indicating a structure more complex than a simple line. A structure with DF = 1.2 is less space-filling than one with DF = 1.4 (131,216). A practical challenge in fractal analysis is differentiating between statistical self-similarity (scale-invariance) and true fractality (217,218). Many biological structures (e.g., lung bronchi, blood vessels, neurons) exhibit statistically self-similar branching, which may suggest fractal properties, but they are not fractal in the strict mathematical sense (58,180,219-224). Instead, they are statistically self-similar within a finite range of scales (213). Furthermore, signals as well as 2D digital images and 3D volumes always have limited resolution, so true fractality can never be determined. A digital image in medicine, for example, rarely has more than 1000x1000 pixels. That is one million pixels, but the effective physical range of scales is only three decades (10x10, 100x100, and 1000x1000), or four if the rather questionable 1x1 scale is included. Even for very inconveniently large images of 10,000x10,000 pixels, the physical range of scales is increased by only one decade.
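The power-law relation N(ε) ∝ ε^(-DF) can be illustrated with a minimal box-counting sketch (plain NumPy; the helper name and the synthetic test shapes are our own illustrative choices, not a standardized implementation):

```python
import numpy as np

def box_count_dimension(img):
    """Box-counting estimate of D_F for a binary image with power-of-two side."""
    n = img.shape[0]
    sizes = 2 ** np.arange(1, int(np.log2(n)) - 1)   # box sides 2 .. n/4
    counts = []
    for s in sizes:
        # partition the image into s-by-s boxes and count occupied ones
        boxes = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log N(eps) versus log(1/eps) estimates D_F
    d, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return d

side = 512
yy, xx = np.mgrid[0:side, 0:side]
line = (yy == xx)                          # diagonal line: expect D_F near 1
plane = np.ones((side, side), dtype=bool)  # filled square: expect D_F near 2
d_line = box_count_dimension(line)
d_plane = box_count_dimension(plane)
print(f"line D_F ~ {d_line:.2f}, plane D_F ~ {d_plane:.2f}")
```

On these idealized shapes the log-log fit recovers the expected dimensions; note that a 512-pixel image offers fewer than three decades of physically meaningful scales, consistent with the resolution limits discussed above.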
Whereas a true, mathematical fractal is self-similar over an infinite range of scales, digital signals and images can only be investigated over a very limited range of three or four decades of physically relevant scales. Each object has an intrinsic topological dimension (DT) and is embedded in a space of dimension DE, the embedding dimension. From this it follows that the fractal dimension DF of the object must always lie between these two limits: DT ≤ DF ≤ DE. The topological dimension is defined by the number of directions to neighboring points. For example, a point on a line has only one preceding and one subsequent neighbor, so a line has DT = 1. A thin but folded or wrinkled line drawn on a sheet of paper has DT = 1, and a thin woolen thread folded in 3D space also has DT = 1, because the neighboring relations do not change. In this example, the sheet of paper is two-dimensional and defines the embedding dimension, DE = 2, while the woolen thread in 3D space has an embedding dimension of DE = 3. Although a line is topologically one-dimensional, it can therefore have fractal dimensions from 1 up to 2 or from 1 up to 3, depending on the embedding dimension (see Fig. 1). In a 2D digital image, the embedding dimension is DE = 2 for binary images and DE = 3 for grey-value images. Objects in digital images can be points with DT = 0, lines with DT = 1, or areas with DT = 2. If cells in a medical image are imaged so small that they are represented by points, the fractal dimension can take values from zero up to two. 3D digital volumes increase the possible embedding dimension by one, so fractal dimensions can lie between 1 and 3. A crucial distinction in fractal analysis is between true fractals and prefractals. Prefractals are approximations of fractals observed at finite iterations rather than infinitely repeating structures (210,225,226).
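The bound DT ≤ DF ≤ DE can be checked numerically for time series, whose graphs have DT = 1 embedded in the plane (DE = 2). Below is a hedged, compact sketch of Higuchi's algorithm (our own implementation following the usual normalization; parameter choices such as kmax are illustrative); rougher signals yield dimensions closer to 2:

```python
import numpy as np

def higuchi_fd(x, kmax=16):
    """Higuchi fractal dimension of a 1-D signal; for a signal graph in the
    plane (DT = 1, DE = 2) the estimate should satisfy 1 <= D <= 2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):                       # k offset, subsampled curves
            idx = np.arange(m, n, k)
            num = len(idx) - 1                   # number of increments
            if num < 1:
                continue
            # normalized curve length at coarse-graining k
            lengths.append(np.abs(np.diff(x[idx])).sum() * (n - 1) / (num * k * k))
        lk.append(np.mean(lengths))
    d, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return d

rng = np.random.default_rng(2)
d_white = higuchi_fd(rng.standard_normal(2000))              # white noise: D near 2
d_walk = higuchi_fd(np.cumsum(rng.standard_normal(2000)))    # random walk: D near 1.5
print(f"white noise D ~ {d_white:.2f}, random walk D ~ {d_walk:.2f}")
```

Estimates falling outside the [DT, DE] interval signal methodological problems (e.g., too few data points or an ill-chosen kmax) rather than genuine fractality.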
The Koch snowflake can serve as a useful test image for calibration but, when represented as an image, is not strictly fractal owing to the resolution limits of the computer (213). When analyzing biological structures, Euclidean reference objects (e.g., spheres, cubes) provide a baseline for comparison, but fractal-like forms require specialized methodologies. The challenge in fractal measurement lies in ensuring that the observed scaling relationship remains consistent across magnification levels. Statistical artifacts from image resolution, filtering, or incomplete iterations can lead to the misclassification of biological structures as fractal (134,159,160,227). The broad use of the term self-similarity in biological contexts has contributed to confusion. In mathematical fractals, strict self-similarity means that identical patterns occur across all scales. Biological systems, however, exhibit statistical self-similarity, meaning that their scaling behavior follows probabilistic rather than deterministic patterns. This distinction is critical, as many biological forms, including vascular networks and dendritic structures, have been incorrectly classified as fractal when they merely exhibit scaling tendencies within a specific range (159). A biological form should only be classified as fractal if its measured size follows a consistent power-law relationship across all scales, without a characteristic length limit. This is rarely the case in real-world biological systems (159,228). How this may be addressed follows in the next section.

A critical consideration in 2D and 3D fractal analysis is the distinction between a surface, boundary, or perimeter in 2D and 3D structures, and the determination and/or fixation of the topological dimension DT (229,230).
A 2D boundary may represent the outermost silhouette of an object, as seen in fractal analyses of particle aggregates, whereas a 3D surface may include internal complexity, requiring different analysis techniques such as volume fractals and porous fractals (5,126,162,231-233). For instance, silhouette-based fractal analysis of biological aggregates has been employed to quantify sludge morphology, but results can differ depending on whether the boundary or the mass is analyzed. Estimates from silhouette boundaries tend to yield lower fractal dimensions than those from sectioned boundaries, highlighting the importance of defining measurement criteria (127,160,234). Finally, any estimated DF smaller than DT is not reliable, and one must be aware of the actual embedding dimension DE, because any DF exceeding DE is equally unreliable.

To advance fractal linguistics from a conceptual critique to a practical, cross-disciplinary tool, a comprehensive taxonomy and validation framework designed to identify, classify, and resolve inconsistencies in fractal analysis is required. The proposed framework aims to standardize terminology and methodologies without impeding new developments, and to enhance interpretability across diverse fields of research. By providing a structured approach to fractal analysis, the framework promotes clearer communication, improves reproducibility, and enhances the translational potential of fractal measures in applications ranging from biomedical diagnostics to environmental modeling and seismic forecasting. The model framework comprises semantic drift identification, a methodological variability audit, and interpretation mapping. A prototypical application of the taxonomy is illustrated in Table 1.

Semantic drift occurs when fractal-related terms diverge from their original mathematical definitions as they are adopted across disciplines.
This component of the taxonomy systematically traces and quantifies such divergences to restore terminological precision: it identifies the definition of a term as established in mathematical fractal theory, documents how the term is repurposed in applied disciplines, and develops qualitative or quantitative measures to assess the extent of drift. Practical implementation of semantic drift identification can be achieved through interdisciplinary workshops where domain experts and mathematicians collaboratively review terminology usage. Automated tools, such as natural language processing (NLP) algorithms, can scan literature databases to flag inconsistent definitions, enabling researchers to align their terminology with its mathematical foundations. By clarifying terms like fractal dimension or self-similarity, this component addresses the communication gaps between mathematicians, who prioritize rigor, and applied scientists, who often adopt terms heuristically. See Table 2 for examples.

Fractal analysis employs a variety of methods, such as box-counting, detrended fluctuation analysis (DFA), wavelet transforms, entropy measures, point processing, and Higuchi's algorithm, each with multiple implementation variants. These variants can lead to divergent outcomes, reducing comparability across studies. The methodological variability audit addresses this challenge by cataloging the specific fractal analysis method used, documenting variations in implementation, and conducting robustness checks to evaluate how methodological choices affect results (235,236). Practical implementation of the audit is already supported by open-access datasets and repositories such as GitHub, which host computational tools and protocols for fractal analysis. Python or MATLAB libraries (e.g.
https://www.mathworks.com/matlabcentral/fileexchange/71770-fractal-analysis-package, https://nolitia.com/, https://tsfel.readthedocs.io/en/latest/) can also provide reference implementations of DFA or box-counting and include built-in sensitivity-analysis modules to quantify the impact of parameter variations (237,238). Researchers would be encouraged to report method-specific details in publications, adhering to a checklist derived from the audit framework (124,159,160). This component ensures methodological transparency and reproducibility, which are critical for fields like biomedical engineering, where fractal analysis of heart rate variability (HRV) may use DFA with varying detrending orders. By standardizing reporting, the framework enables meta-analyses and cross-study comparisons, facilitating the development of universal benchmarks for fractal measures in physiological systems (239).

The last component of the framework addresses the interpretation of fractal analysis results, which often relies on implicit assumptions that may not hold across contexts, leading to overgeneralization or misinterpretation (240). This component maps reported conclusions to their underlying assumptions and realigns them with the proposed framework to ensure validity. To achieve this, it is necessary to identify the reported findings of a study, explicitly articulate the assumptions underpinning their interpretation, and adjust interpretations based on validated assumptions. Practical implementation of interpretation mapping can be integrated into peer-review processes, where reviewers use standardized checklists to evaluate the alignment between claims and assumptions. Computational tools can assist by embedding diagnostic tests into fractal analysis pipelines, providing researchers with feedback on the validity of their interpretations. This component has the potential to enhance the reliability of fractal-based conclusions in applied domains.
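As a sketch of what such a reporting checklist could look like in machine-readable form (every field name and value below is an illustrative assumption of ours, not an established standard), a study could ship a small structured methods record alongside its results:

```python
import json

# Hypothetical methods-reporting record for a DFA-based HRV study.
# All field names and values are illustrative, not a published standard.
report = {
    "method": "DFA",
    "implementation": {"language": "Python", "source": "in-house", "version": "0.1"},
    "parameters": {
        "detrending_order": 2,
        "scale_range_samples": [16, 1024],
        "overlapping_windows": False,
    },
    "preprocessing": ["artifact removal", "stationarity check"],
    "validation": {
        "surrogate_test": "phase-randomized Fourier surrogates",
        "log_log_fit_r2": "reported per recording",
        "sensitivity": "alpha recomputed for detrending orders 1-3",
    },
}
print(json.dumps(report, indent=2))
```

Publishing such a record with each study would let reviewers and meta-analysts reproduce parameter choices without reverse-engineering them from prose.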
This structured approach preserves mathematical rigor while enabling innovative solutions for complex natural and physiological systems. To assess the usefulness of this framework, we conducted a preliminary comparative analysis across two disciplines: biomedical signal processing, specifically heart rate variability (HRV), and geophysics, specifically earthquake models. In HRV research, the term "fractal dimension" is often used interchangeably with measures of signal complexity, including the Higuchi and Katz dimensions or DFA exponents, often without verifying scale-invariance or embedding dimension assumptions (31,62,113). In contrast, geophysical models apply fractal statistics to seismic fault distributions and aftershock sequences, frequently assuming long-range correlations and multifractality, yet rarely assess stationarity or artifact influence (51,171,241).

By applying the proposed taxonomy, we reclassified methods and terminology in five key studies from each domain (Table 2). This revealed that in over 60% of the HRV studies, reported scaling exponents reflected preprocessing artifacts or misapplication of DFA without proper stationarity checks; for example, DFA α₁ was calculated over very short segments of the time series (<15 seconds), and improvement requires mathematically validating power-law linearity for such short segments. Similarly, in geophysics, inconsistent use of the term "multifractal" emerged from insufficient analysis of scale windows and spurious correlations introduced through detrending. A review of pivotal contributions to the field of fractal analysis, including the authors cited in Table 2, indicates the expanding interdisciplinary research applying fractal-based analyses to natural and physiological systems (see also Supplementary Table 2).
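The short-segment α₁ problem noted above can be screened with a simple log-log linearity diagnostic. The sketch below (our own minimal illustration on synthetic fluctuation values, not data from the reviewed studies) fits log F(n) against log n and reports the R² of the fit, so that a low R² flags an exponent that should not be reported as a scaling exponent:

```python
import numpy as np

def loglog_fit_diagnostics(scales, fluctuations):
    """Fit log F(n) = alpha*log n + c and report the slope plus R^2,
    a minimal check that a claimed power law is actually linear in log-log."""
    lx, ly = np.log(scales), np.log(fluctuations)
    alpha, c = np.polyfit(lx, ly, 1)
    resid = ly - (alpha * lx + c)
    r2 = 1.0 - resid.var() / ly.var()
    return alpha, r2

rng = np.random.default_rng(1)
scales = np.array([4, 8, 16, 32, 64, 128])
clean = 5e-3 * scales ** 1.0                             # exact power law, alpha = 1
noisy = clean * np.exp(rng.normal(0, 0.4, len(scales)))  # strong log-scale scatter
a1, r2_1 = loglog_fit_diagnostics(scales, clean)
a2, r2_2 = loglog_fit_diagnostics(scales, noisy)
print(f"clean: alpha={a1:.2f}, R^2={r2_1:.3f}")
print(f"noisy: alpha={a2:.2f}, R^2={r2_2:.3f}")
```

In practice, a minimum R² threshold over the fitted scale range, reported alongside α, makes the quality of the power-law claim transparent.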
These studies have laid critical groundwork for modeling complexity, yet there is significant potential to improve fractal analysis methodology, reporting, and the understanding of physical and biological systems by operationalizing a standardized framework that enhances clarity and precision. By adopting consistent reporting of scaling ranges, rigorous statistical validation of fractal dimensions, and clear differentiation between multifractality, long-range dependence, and stochastic variability, researchers can improve methodological robustness. Practical steps, such as integrating surrogate testing, resolution sensitivity analysis, and explicit log-log scaling diagnostics, will strengthen the evidence for the fractal nature of the observed dynamics. Furthermore, refining the semantic precision of terms like "fractal" and "scale-free" by defining them through entropy-based metrics will lead to enhanced cross-disciplinary interpretability. These advances open translational opportunities that drive innovation in fields such as seismic forecasting, biomedical diagnostics, and networked system modeling, while promoting a unified and accessible fractal linguistics framework.

The variability in definitions across diverse fields includes terms like fractal dimension and self-similarity, which are used inconsistently and often without reference to scale-invariance, the correct term for describing natural objects. Methodological uncertainty is perhaps the most challenging issue, owing to the plethora of computational methods available on the internet and of in-house code that generates fractal parameters without clarifying their theoretical foundations.
Finally, overgeneralization, the assumption that all self-similar structures exhibit fractal properties, has resulted in the overuse of fractal models in biology and medicine. Future research needs to prioritize clearer definitions, interdisciplinary collaboration, and standardized methodologies in fractal-based studies to address these issues (183,205,246). The current paper addresses some of these issues; fully realizing the potential of fractal analysis in modeling complex systems requires greater emphasis on fractal analysis descriptions and definitions, especially in the biological and clinical sciences. By establishing a linguistics model that integrates correct descriptions of spatial, temporal, and multifractal approaches, the accuracy and reliability of fractal models can be improved (8,20,228). The emerging field of fractal linguistics provides a framework for refining terminology, standardizing methodologies, and improving interdisciplinary discourse (11).

Fractal analysis is a versatile tool for quantifying the complexity of spatial structures and time series data. However, inconsistent terminology, methodological variability, and interpretation ambiguities reduce the effectiveness of the results and lead to semantic drift and communication barriers across multidisciplinary projects. This paper provides a critical commentary on the evolving usage of fractal and multifractal terminology in the analysis of complex systems and examines how foundational concepts such as fractional Brownian motion (fBm), self-affinity, and multifractality have been adapted, reinterpreted, and occasionally misapplied across disciplines. There is a growing divergence in how terms such as roughness, complexity, and scaling are understood in physics, finance, and biology.
The term roughness is often oversimplified: roughness is a necessary but not sufficient condition for fractality, yet it is frequently equated directly with the Hurst exponent, ignoring the mathematical underpinnings of scaling phenomena. A signal may be rough (e.g., non-differentiable or noisy) without being self-affine, multifractal, or long-range correlated. Clarifying this distinction is critical to avoid misleading interpretations of results. Multifractality is another ambiguous term when used in diverse contexts: in econometrics it may refer to time-varying Hölder exponents, while in statistical physics it implies a nonlinear f(α) spectrum arising from multiplicative cascades. There is also a need to distinguish between true multifractality, characterized by nonlinear moment scaling and a broad multifractal spectrum f(α), and apparent multifractality, which may result from nonstationarity, fat-tailed distributions, or finite-size effects. This distinction is important for correctly describing the underlying dynamical mechanisms. Throughout the manuscript, emphasis is placed on correctly applying contemporary descriptors that extend the original Mandelbrot and van Ness model and incorporate non-Gaussian, nonstationary, and generalized scaling behaviors. Correct use of these terms, now established in the mathematical and physical sciences, is necessary to prevent terminological drift.

Ultimately, this work contributes to the metatheoretical architecture of fractal analysis. It seeks to foster terminological coherence and epistemological clarity in interdisciplinary research where fractals, scaling, and complexity are described. By tracing the genealogy of key concepts and explicitly differentiating between methods, phenomena, and metaphors, we hope to motivate more rigorous and reflexive applications of fractal models in empirical science.
Future work should focus on expanding this framework through transdisciplinary reflection, encouraging researchers to use fractal language with both mathematical precision and conceptual transparency. This formalization is critical for improving accuracy and enhancing cross-disciplinary communication.
Keywords: Fractals, Chaos, Linguistics, time series analysis, image processing
Received: 12 Jun 2025; Accepted: 23 Jul 2025.
Copyright: © 2025 Jelinek and Ahammer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Helmut Ahammer, Medizinische Universität Graz, Graz, Austria
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.