- Jinan Institute of Supercomputing Technology, Jinan, Shandong, China
Some non-standard physical problems are challenging to solve because of the fundamental impossibility of experimentally confirming theories, resulting from the lack of adequate methods and equipment with the required parameters, such as energies and frequencies on the order of the Planck scale. In recent years, generative artificial intelligence (AI) has significantly enhanced the efficiency of solving many standard problems, particularly in fields such as diagnostics, business analytics, pattern recognition, programming, and prediction. There are also attempts to leverage AI to address complex physical issues through indirect approaches, such as simulating a training dataset. This article aims to formulate the basic requirements that an AI system must meet to support the solution of non-standard physical problems that are also complex in their interdisciplinarity, data scarcity, time constraints, energy limitations, and hypothetical goals. This was conducted as a thought experiment by analysing two hypothetical phenomena: the emergence of cosmic strings (CS) and photons during the Planck and Grand Unification epochs. The author’s convergent method, based on thermodynamics and inverse problem-solving in topological space, has ensured the research’s stability and purposefulness. As a result, it is argued that, to significantly improve AI support for solving some non-standard scientific problems, a full-analogue photonic AI is required, which can be realised on a holographic basis with a Fourier-transform approach. The conceptual architecture of the required AI system is presented.
1 Introduction
Modern artificial intelligence (AI), despite its impressive successes in various fields, including politics, economics, the social sphere, and technology, has fundamental limitations in addressing non-standard physical problems. For example, there are non-standard problems of experimentally proving the existence of cosmic strings (CSs) or the presence/absence of a photon structure. CSs cannot yet be demonstrated experimentally, and, according to the Standard Model of quantum physics, a photon does not have a structure. Since these phenomena have not yet been experimentally confirmed, advanced AI could be a valuable tool to support such research.
The goal of this article is to identify special requirements for advanced AI by addressing support for solving non-standard physical problems, which are both complex in their interdisciplinary nature and characterised by high solution uncertainty.
To achieve the article’s purpose, a reverse approach was applied in this study, explicitly framed as a thought experiment rather than a set of implied facts: it is assumed that CSs exist and that a single photon has a structure that can carry information about its source. It is also assumed that they could initially form in the state of the Planck epoch of the Universe’s birth in the Big Bang model, when space and time had not yet acquired meaning.
Currently, such assumptions as the existence of CSs and the presence of photon structure cannot be experimentally proven or disproved, because the processes of their original creation have already passed and because they require unique equipment that we do not have on Earth, for example, to generate extremely high energies of the order of the Planck energy. However, if certain hypothetical physical phenomena cannot be directly verified, it is necessary to explore indirect approaches, such as AI.
Modern theories of the Universe’s creation describe a series of triggered phase transitions and phenomena. Specific unified theories predict the formation of stable topological defects in the early Universe, such as CS [1], which were hypothetically created in the Grand Unification epoch. It is believed that a CS is not a material object in space-time but a deformation of space-time itself, which occurred due to a violation of its original symmetry [2], with a size comparable to that of space objects. CSs can extend across the entire Universe or form closed loops on megaparsec scales. A metric with a conical singularity on the axis may describe the CS. They can create a gravitational lens effect on light passing near them. Hypothetically, a CS does not have an internal structure. In this way, CSs are special entities that directly connect astronomy with the physics of ultra-high-energy elementary particles, thereby bridging the microscopic and macroscopic worlds.
In turn, photon formation is viewed as a purposeful process, the result of which can be experimentally studied only by examining the photon’s state after the recombination epoch, when photons began to spread freely in space. The photon has no mass; its speed in a vacuum is fixed. It is a boson and a portion of energy. It has polarisation, spin, a propagation path, and an electromagnetic-wave frequency. It carries a fragment of information about its source.
Due to the laws of physics, local invariance (gauge symmetry) is a fundamental principle that defines the existence and properties of the photon. As is known, the ideal photon can be described by its frequency (ν), which is related to its energy by the equation E = hν, where h = 6.6262 × 10^−34 J·s is Planck’s constant. So, at the Planck energy level, the photon’s frequency would be enormous: Planck energy ≈ 1.956 × 10^9 J, ν_max ≈ 2.95 × 10^42 Hz, values far beyond what we can generate or measure on Earth.
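As a simple plausibility check of these orders of magnitude, the following sketch recomputes the maximum photon frequency from the quoted Planck energy (the numerical values are those cited above; this is only an illustrative calculation).

```python
# Order-of-magnitude check: maximum photon frequency at the Planck energy.
h = 6.6262e-34        # Planck's constant, J*s (value quoted above)
E_planck = 1.956e9    # Planck energy, J (value quoted above)

nu_max = E_planck / h  # from E = h*nu
print(f"nu_max ~ {nu_max:.2e} Hz")  # ~2.95e+42 Hz, far beyond measurable frequencies
```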
The electromagnetic spectrum of different photons is continuous and bounded by the Planck frequency. A physically realisable single photon is not ideal and does not have a single frequency; it has a continuous (non-discrete) frequency spectrum. Any photon can be described in terms of its electromagnetic field envelope. A photon is part of a wave packet or pulse that has a finite duration in time and, according to Fourier analysis, must therefore be composed of many frequencies.
Due to the quantum uncertainty principle, there is a time-energy uncertainty for the photon. This uncertainty translates into a frequency uncertainty via ΔE = hΔν, where ΔE is the uncertainty in energy and Δν is the uncertainty in frequency. So, if a photon is created over a very short duration Δt, its frequency has a very large uncertainty Δν; conversely, a well-defined frequency requires a very large Δt. For example, suppose an atom in an excited state decays during its finite lifetime and emits a single photon. In that case, the photon has a creation time Δt (a few nanoseconds), which results in a finite spectral linewidth.
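To make the lifetime-broadening example concrete, the sketch below estimates the linewidth for an assumed nanosecond-scale emission time using the common estimate Δν ≈ 1/(2πΔt); the 10 ns lifetime is a hypothetical value chosen only for illustration.

```python
import math

# Lifetime broadening: a photon emitted over a finite time dt cannot have a
# perfectly sharp frequency. Common estimate: dnu ~ 1 / (2*pi*dt).
dt = 10e-9                      # assumed excited-state lifetime, 10 ns
dnu = 1.0 / (2 * math.pi * dt)  # resulting spectral linewidth, Hz
print(f"linewidth ~ {dnu:.1e} Hz")  # ~1.6e+07 Hz, i.e. tens of MHz
```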
So, in our thought experiment, the possibility of a single photon’s structure cannot be analysed solely by detecting its discrete frequency spectrum, due to its continuous character and the uncertainty principle.
In addition to exhibiting physical characteristics, the photon is also sometimes linked to various aspects of consciousness. Roger Penrose’s book [3] bridges the gap between physics, biology, and the philosophy of mind. He argues that consciousness is non-computable and cannot be represented by a conventional computational model; human mathematical understanding is non-algorithmic, and humans can see truths that a computer cannot; it is the capacity for genuine insight that transcends formal proof systems. The book also proposes that consciousness arises from quantum computations and that wave-function collapse is not a random event but a spontaneous, objective process; the biological structure coordinates the collapse in a non-computable way.
The focus of the thought experiment is on creating advanced AI to analyse the reasons for the sustainable and purposeful formation of photon structure and CS during the Planck and Grand Unification epochs, when there was no space-time, and all four quantum interactions were combined into a single fundamental force.
The material nature of the hypothetical structure of the photon and CS was not considered in this article. Different authors have already discussed this subject. For example, they can be built from unknown forms of matter, such as the preon model [4] or quantum fields. They may be a manifestation of hidden dimensions or unknown topological defects in the nascent space-time. It can be assumed that they are virtual, or the electromagnetic field of a photon may contain some patterns resembling the image of a soliton.
The thought experiment involves various non-formalisable and topological characteristics of an analogue cognitive nature, where the goals are vague and the resources for research are limited. In these conditions, the stability and purposefulness of the research process were ensured by its special structuring and by finding a unique balance between order and chaos, both in the research itself and in the functioning of the advanced AI system created. For this, the article exploits the author’s convergent method, the core of which is based on fundamental thermodynamics and inverse problem-solving in topological space [5, 6, 7].
The advanced AI created should be used to bridge the gap between the values required for research that are currently inaccessible to standard methods and the available capabilities.
2 AI capabilities
2.1 Modern AI for the analysis of photon
Modern digital generative AI has limitations in its development for solving non-standard problems of an analogue nature, and it has limited capabilities for explanation [6, 8]. Such AI requires a significant reduction in energy and time consumption, as well as an account of the non-formalisable cognitive semantics of AI models [6]. Moreover, the development of traditional AI is hindered by the need to pre-convert the analogue natural signal into digital form, which distorts and truncates its spectrum, as follows from the Kotelnikov-Nyquist-Shannon sampling theorem.
Many publications address the estimation of error from sampling an analogue signal, for example, [9], [10], and [11]. The sampling theorem guarantees reconstruction of a bandlimited signal from its samples, but it rests on idealised assumptions. When a continuous signal is replaced by a discrete one, several sources of error and loss of accuracy are introduced.
The first source of error is aliasing (spectrum overlap). The theorem requires that the signal be bandlimited to frequencies below ν_max and that the sampling rate ν_s exceed 2 × ν_max (the Nyquist rate). In the real world, we encounter aliasing that produces noise: any frequency above ν_s/2 cannot be correctly represented; it folds back into the lower band. The resulting error can be estimated by analysing the power spectral density (PSD) of the original signal relative to that of the sampled signal. To mitigate this error, an anti-aliasing filter is placed before the sampler; it intentionally discards high-frequency information so that it does not fold back and appear as spurious components at lower frequencies.
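The folding effect can be illustrated numerically: in the minimal sketch below, a tone above ν_s/2 produces exactly the same samples as a lower-frequency tone (the frequencies are hypothetical and chosen only for the demonstration).

```python
import numpy as np

fs = 1000.0      # sampling rate, Hz
f_true = 900.0   # tone above the Nyquist frequency fs/2 = 500 Hz
n = np.arange(64)
t = n / fs

x_true = np.cos(2 * np.pi * f_true * t)
x_alias = np.cos(2 * np.pi * (fs - f_true) * t)  # folded alias at 100 Hz

print(np.allclose(x_true, x_alias))  # True: the two tones are indistinguishable after sampling
```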
The second source of error is the quantisation of the analogue signal, when an analogue-to-digital converter (ADC) converts continuous amplitude to a finite number of bits, resulting in quantisation noise that can be modelled as uniform white noise. The signal-to-quantisation-noise ratio (SQNR) compares the powers of the desired signal and quantisation noise, helping measure the loss of accuracy due to quantisation. This ratio is proportional to the number of bits used for coding an analogue signal.
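For an ideal N-bit converter driven by a full-scale sine wave, this proportionality is usually summarised as SQNR ≈ 6.02·N + 1.76 dB; a minimal sketch:

```python
# Ideal signal-to-quantisation-noise ratio of an N-bit ADC for a full-scale sine.
def sqnr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (8, 12, 16):
    print(bits, "bits ->", round(sqnr_db(bits), 1), "dB")
# 8 bits -> 49.9 dB, 12 bits -> 74.0 dB, 16 bits -> 98.1 dB
```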
The third source is aperture error, since ADCs require a finite time to acquire a sample value. The error is proportional to the rate of change of the signal and the aperture time; for very high-frequency analogue signals, this error can be significant.
The fourth source is clock jitter: the clock that triggers sampling is not perfect, so sampling instants vary randomly around their ideal times. The resulting jitter error is proportional to the signal’s slope and the amount of timing jitter, which is especially challenging for high-frequency waves.
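Both the aperture and jitter errors scale with the signal’s slope, which for a sine of frequency f and amplitude A is at most 2πfA. The sketch below evaluates both for hypothetical values (a 1 GHz full-scale tone, 1 ps aperture time, 0.1 ps rms jitter), purely as an illustration.

```python
import math

A = 1.0             # amplitude, normalised to full scale
f = 1e9             # assumed signal frequency, 1 GHz
t_aperture = 1e-12  # assumed aperture time, 1 ps
t_jitter = 1e-13    # assumed rms clock jitter, 0.1 ps

max_slope = 2 * math.pi * f * A  # maximum rate of change of the sine
print("aperture error ~", max_slope * t_aperture)  # ~6.3e-3 of full scale
print("jitter error   ~", max_slope * t_jitter)    # ~6.3e-4 of full scale
```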
The fifth source is the error from reconstructing the original signal. This is a predictable distortion. The loss of accuracy can be calculated, and the error can be corrected using a reconstruction filter. It can be mitigated by introducing a slight amount of high-frequency attenuation and using known images of the original signal.
Moreover, a photon is a quantum excitation of the electromagnetic field. It is the smallest possible quantum of that field; the photon itself is the fundamental sample. Sampling a single-photon wave destroys it, because sampling requires interacting with it. Detection of a single photon is essentially a binary event: the photon either arrived or did not. The problem is even more severe for high-frequency photons.
These factors make it challenging to study a single photon using the sampling approach.
So, the total error can be analysed by considering these sources. Note that, in the single-photon case, the error cannot be mitigated at all. For example, the best femtosecond and attosecond spectrometers aim to detect frequencies up to 10^15–10^17 Hz [12]. Gamma-ray interferometry (above 10^19 Hz) is not technically feasible, and X-ray interferometry (10^16–10^19 Hz) is still at the conceptual stage. However, the highest frequency of a photon wave can be close to the Planck frequency, which is currently unattainable, and gamma-ray frequencies from cosmic events can reach 10^30 Hz. Thus, the digitisation approach and the sampling theorem are challenging for supporting the solution of the considered non-standard issues.
The paper [13] noted that research on astronomical and interstellar missions suffers from brittleness and a lack of common-sense understanding of the world through logical symbol manipulation. This paper claims that AI problems grow exponentially with the dimensionality of the environment’s state space. Traditional AI is weak at solving non-standard tasks because it comprises a suite of approaches that provide narrow, specialist capabilities; we lack an understanding of the general mechanisms of intelligence. The topic of human thinking is referred to as non-formalisable cognitive semantics [6].
Currently, research on non-standard issues cannot rely on a real training dataset; a digitally generated artificial sample can replace it only partially. To create a training sample for CS research, theoretically derived patterns of CS behaviour can be used [1, 2].
An analogue natural signal can be processed by optical AI without digitalisation by replacing the many-level neural network with a single-layer holographic plate [7, 14]. This approach eliminates the need to represent the analysed image with values of tens and hundreds of billions of parameters, thereby reducing the time and energy required to train AI systems. The holographic model of human memory can be applied in this context [15, 16]. However, numerous optical AI projects involve a previously digitised signal [17–19]. The paper [20] on an analogue optical computer for AI uses a separate digital subsystem for training AI. They make AI models with up to 4,096 weights at 9-bit precision and claim that training is a low-energy operation, consuming only 10% of the total energy.
For further development of AI to support the solution of non-standard problems, new requirements need to be formulated that address the limitations of modern digital AI, including high energy and time consumption and truncation of the signal spectrum due to digitisation. This can be illustrated by the example of formulating requirements for AI to support the solution of currently impossible scientific tasks.
2.2 Possible CS recognition with AI
The paper [21] presents two AI approaches to place tight upper bounds on the level of the CS contribution to the observed cosmic microwave background (CMB) anisotropies using Planck18 mission data and to forecast detectability for the tension Gμ, where the minimum detectable tension is Gμ_min = 1.9 × 10^−7. It was noted that accurate simulations of CS-induced anisotropies are computationally expensive, which motivates the use of alternative approaches to producing simulations. Paper [22] also recommends exploring topological measures, which may imply non-quantitative or unknown factors characterising the CS.
The paper [23] demonstrates that many proposals for detecting CSs from CMB or 21 cm temperature maps do not aim to locate the strings. A convolutional neural network was trained on simulations of CMB temperature anisotropy maps. It suggests a Bayesian interpretation of CS detection using machine learning (ML), which can detect and locate CSs on a noiseless CMB temperature map down to a string tension of only Gμ = 5 × 10^−9. However, to confirm these indirect results, they need to be verified on other maps and AI tools, and with a CS tension sensitivity better than 10^−9.
The Laser Interferometer Space Antenna (LISA, European Space Agency) and Taiji (China) projects aim to detect the stochastic gravitational wave background (SGWB) from CSs [24, 25]. These projects may enable the SGWB to be registered for string tensions down to 10^−17. However, current gravitational-wave detectors (LIGO/Virgo) can detect the SGWB from strings with a tension as low as 10^−8. The sensitivity of the observations already made of the cosmic microwave background is such that everything that can be identified in the global map has already been detected, including the dipole and quadrupole components. Searches for smaller structures have so far yielded inconclusive localisations, which researchers have unsuccessfully attempted to confirm with additional observations of gravitational lenses in the optical range.
Optical telescopes with pixel diagnostic arrays, charge-coupled devices (CCDs), and radio telescopes are used to diagnose CSs. A single CCD element is sensitive over the entire spectral range, so a light filter is used over the photodiodes of colour CCD arrays, which allows only one of the three colours to pass through, making the pixel scheme even more granular. Black-and-white CCD sensors, in turn, have no such filters. The signal’s highest frequencies are truncated, as per the Kotelnikov-Nyquist-Shannon sampling theorem, to prevent aliasing and noise effects. However, the sampling of a signal can lead to the loss of valuable information (see Section 2.1). This remark is critical if the task is, for example, to identify violations of Lorentz invariance (LIV) or decoherence, because quantum fluctuations may influence particle propagation [26, 27]. The CS should produce unique signals that must be isolated from noise, which include:
• Radio and optical band waves, reflecting the features of the CS.
• Periodic oscillations of closed CS loops and kinks, giving less powerful but more frequent bursts.
• Gravitational waves created by points on a CS moving at the speed of light and producing short (<1 ms), powerful pulses.
The signal patterns from the CS can be represented by:
• The form of a pulse with a sharp spike and a slow decline.
• A frequency spectrum with a power-law frequency distribution.
CS detection frequency ranges: 10–5,000 Hz for strings with a tension of Gμ ∼ 10^−11; 0.1 mHz–0.1 Hz for light strings with Gμ ∼ 10^−15–10^−17; pulsar timings of ∼1,000 Hz for heavyweight strings with Gμ ≫ 10^−8 in the low-frequency range [28, 29]. At the same time, the quantum characteristics of the CS, including quantum fluctuations, require diagnostics at significantly higher frequencies, which are now practically impossible but could be achieved using indirect methods with ML:
• Gravitational lensing with a pattern in the form of doubling the image of a source (galaxy, quasar, pulsar) with the same redshift and angular separation of ∼1–100 arcseconds at optical or infrared frequencies.
• Synchrotron radiation occurs when CS interacts with a plasma with a pattern in the form of radio or X-ray filaments in space with a linear structure and with abnormally high polarisation at radio or X-ray frequencies.
• Microlensing with a pattern in the form of a short-term (<1 min) increase in the brightness of a star due to the movement of a string with a symmetrical signal in the optical range.
• LIV with delay of high-frequency components of gravitational waves or changes in their velocity depending on the direction relative to the axis of the CS.
• Data on quantum decoherence (QD) of gravitational waves with their possible depolarisation and abnormally fast attenuation of high-frequency modes.
Let us focus in more detail on the last two items in the list, which differ in their non-standard approach. CS can generate signatures associated with LIV and QD. CS can spontaneously break Lorentz symmetry due to their topology or their interaction with the quantum gravitational vacuum, as evidenced by the dispersion of gravitational-wave (GW) velocities, characterised by an abnormally long delay of the high-frequency components of GW relative to the low-frequency ones during the registration of the burst. Anisotropy (different velocity/amplitude of GW depending on the direction relative to the axis of the CS) of GW propagation is possible. Frequency ranges for the search: LISA Telescope—0.1 mHz–0.1 Hz; Einstein Telescope—1–10^4 Hz [30].
In the case of QD, the patterns of the valuable signal can be abnormally fast attenuation of the interference pattern for high-frequency modes; depolarisation of gravitational waves with violation of standard relations between polarisations; QD can introduce specific spatiotemporal correlations into detector noise, with the appearance of unexplained correlations between remote detectors located on different telescopes. However, the existing limitations of LIV/QD diagnostics do not yet allow such anomalies to be detected, which imposes strict limits on the studied CS parameters, namely, no better than Gμ ∼ 10^−12.
In diagnosing the signal received with a telescope, a key role is played by separating it from the noise. There are many original approaches to such separation, for example, as described in [31]. In contrast to classical methods, which detect anomalies in completely observed curves, the approach proposed in that paper identifies anomalies sequentially as each point on the curve is received. It helps to monitor a new functional observation as it emerges.
Thus, direct detection of CS is not yet possible; however, integrating astrophysical observations, laboratory experiments, and advanced photonic AI can significantly narrow the parameters required to prove its existence. For this, advanced AI must help ensure the reliability and accuracy of diagnosing the rapid attenuation of high-frequency modes, full spectra of short-term patterns, depolarisation of gravitational waves, violations of Lorentz invariance and decoherence of signals, as well as separating the signal from the noise.
The motivation to prove the existence of CS stems primarily from fundamental considerations. The search process itself is already aimed at verifying existing ideas about the early Universe and its structure, developing theories of space-time, studying violations of Lorentz invariance and quantum decoherence, and deepening understanding of the origin of the quantum Standard Model. From a practical point of view, the enormous energy potential of CS and their influence on the navigation of space objects are of primary interest.
2.3 Motivation for detecting photon structure with AI
If, taking a perspective opposite to the Standard Model, we hypothetically assume that the photon has a structure, then a single photon can carry information about its source. In the standard view, information about the source is instead encoded in a group of photons with varying characteristics. However, despite the issue’s hypothetical nature, this reverse approach helps identify the requirements for the desired AI to solve non-standard problems.
Incoming signals from space are captured by a telescope, distorted by an optical system, transformed into a digital representation, and then analysed to obtain information about the source, such as CSs, stars, pulsars, or quasars. Using the idea of a structural photon could give rise to new modelling principles, for example, similar to the already known holographic representation of the 3D Universe as its 2D surface, according to which the structure of the Universe, space-time and matter arise as a projection of information from the boundary of the Universe [32].
Although there are many questions about the concept of a holographic Universe—for example, it is unclear how holography explains the properties of galaxies and the existence of something beyond the Universe—the holographic idea can be applied to create an advanced AI. It enables replacing a traditional multi-level neural network with a single holographic plate, thereby eliminating the need for multi-step neural network training and the requirement to transform a natural continuous (analogue) signal into a discrete (digital) form, which is accompanied by excessively high time and energy costs for ML [7].
Attempts to look at the photon in a non-standard way are known to exist, for example:
• A photon is a bound state of particle-antiparticle pairs.
• A photon is a stable wave structure in a nonlinear medium, such as a soliton.
• A photon is an excitation of a vacuum with topological features.
• A photon is like an oscillation of an open string attached to branes.
To develop the idea of photon structurality, it seems fruitful to determine the initial states of the Universe during the Planck epoch, the time of a photon’s birth. Quantum physics partially describes this epoch; however, it can be assumed that the quanta themselves emerged based on the discretisation of the primary analogue state. In this case, the description of the beginning state of the Universe can be reduced to a thermodynamic description of the discretisation of an analogue medium characterised by entropy (chaos), energy and order (structure). It should also be noted that, with the growth of entropy at the birth of the Universe, a certain purposefulness is observed in changing its local states, characterised by the spontaneous formation of particles, and then cosmic objects. To formalise this situation, methods from thermodynamics [33] and the solution of incorrect problems in topological spaces [34] were employed.
According to various approaches to the origin of the Universe, the initial entropy could be minimal, or even zero, or very large, but not infinite; similarly, energy and temperature could be enormous, but not infinite [35–39]. However, we note that the infinite entropy in the initial epoch of the Universe’s birth contradicts the observed constant entropy growth. Therefore, we will focus on the assumption of zero initial (hidden singularity) entropy when there was no statistical mixture yet.
In this context, in the thinking experiment, let us assume the presence of a photon structure that formed at the beginning of the Universe and completed its formation during the era of hydrogen recombination. In the state of origin, the photon could resemble a dynamic soliton model, which is described by electrodynamic methods, or it could be represented as an ordered structure in the form of a crystal with smooth edges and surfaces enclosing cells (membranes) of a tiny, literally infinitely small, volume. The first requires exposure to external energy, which was abundant during the singularity, and the second requires “crystallisation” from a continuous (analogue) state, a process later confirmed by the appearance of quantum particles. Considering the initial parameters that characterised the birth of the Universe, the thermodynamic instability of the medium may have caused spontaneous fluctuations, leading to the structuring of local states with greater thermodynamic stability relative to the environment.
Currently, when it is impossible to experimentally confirm the photon’s structure using traditional methods due to insufficient energy, a decrease in diagnostic quality may also occur, as usual signal digitisation involves rejecting high-frequency components. Note that the original data, not truncated in spectrum, are stored for subsequent processing, for example, at the FAST radio telescope [40]. It is worth noting that imperfections in telescope designs introduce greater distortions than sampling (digitisation) does; however, even minor deviations in diagnosis introduce additional risks of undetected errors. Moreover, direct diagnostics of an individual photon’s characteristics using radio telescopes are impossible due to the low photon energy: the energy of a photon at 3 GHz is approximately 2 × 10^−24 J. Optical telescopes are more promising for this type of research.
In the state of the hidden singularity of the Universe’s birth, a photon can begin to originate from the primary analogue state of the environment, acquiring the ability to exhibit discrete behaviour and representation. These initial phenomena are now impossible to verify on a free photon that has already formed. At the same time, appropriate AI methods must be outlined to identify the structure of a photon. In the context of studying the structure of the photon, we can discuss the goals and motivations behind such work in relation to the formulation of requirements for advanced AI. So, the idea of decoding the internal structure parameters of photons will allow us:
• To improve the photonic versions of AI to reduce the time and energy costs of ML by several orders of magnitude.
• To recognise space objects, including CS, without very costly telescopes, due to photons carrying information about their origin and way.
• To solve almost instantly problems of hydrodynamics and stochastic problems, and to implement evolutionary calculations (genetic algorithms).
If the photon has an internal structure, it could also explain:
• Additional degrees of freedom for elementary particles.
• Unusual behaviour, up to the point of decay, of a photon in strong magnetic fields.
• Non-standard anomalies in quantum electrodynamics.
• Fluctuations of vacuum and elementary particles, as well as neurons in the human brain.
Moreover, photons span the entire electromagnetic spectrum, from radio waves to gamma rays, and their energy is determined by this spectrum. Therefore, the discovery of additional photon properties increases their value in medical, industrial and scientific research.
Thus, in the two non-standard examples considered in this article, namely, the existence/absence of CS and a photon’s structure, using advanced AI, the following features of this research should be taken into account:
• They are generated from an analogue state of the hidden singularity, followed by the local emergence of stable states.
• The changes of the local stable states are purposeful, for example, for a photon, this is its free state in the epoch of hydrogen recombination.
• The AI system should be analogue, eliminating the need to digitise the natural signal by cutting off high-frequency components.
These features of the study require identifying and observing the conditions for organising a chaotic environment that will ensure the sustainable and purposeful formation of space objects with a particular structure. The author’s convergent method has helped to understand such formation [6]. The initial state of the Universe can be seen as a kind of philosophical Nothing [41] and is associated with a hidden singularity, when the concept of space-time does not yet have meaning. Let us index this initial state with the symbol t = 0, which is not yet time. The countdown starts at the Planck time t = t_p = 10^−43 s.
3 Convergent methods
The research on creating an adequate AI paradigm that can help solve the considered non-standard problems has specific characteristics as follows:
• The vague goals and limited resources for research imply the possible absence of solutions, their multivariate nature, and the instability of the solving process, which reflect the incorrect (ill-posed) and inverse character of the problem statement.
• The researchers must take part in and control the problem-solving, introducing additional conceptual and non-quantitative information into the solving process that can be relevant for inverse-problem solving.
• The spontaneous nature of the phenomena, which requires taking fundamental thermodynamic laws into account to make the research process more stable.
These characteristics require ensuring that the research process of solving the non-standard problems must be stable and purposeful in the following aspects:
• The thought experiment, taking into account a lack of resources (the Planck scales) and vague goals to justify the existence or absence of the phenomena.
• A consistent change in the state of non-standard phenomena under consideration.
• Creating an adequate paradigm of AI to support solving the non-standard problems.
The primary focus of this article is the third aspect. The author’s convergent method has contributed to ensuring the stability and purposefulness of this thought experiment, considering these three aspects. It is based on methods of inverse problem-solving in topological spaces and fundamental thermodynamics. This method has been used repeatedly to develop and deploy information retrieval systems, to provide technological support for collective intelligence processes [42], and to represent the cognitive semantics of AI systems [6]. For example, it was helpful in describing the evolution of proteins on Earth [7]. The following two main necessary conditions ensure the stability and purposefulness of the research:
• Thermodynamic condition provides stability to the researched system development by balancing chaos and order inside and outside the system.
• Topological condition ensures the purposefulness of the system development by applying a unique internal structure.
To determine the first (thermodynamic) condition, it is assumed that the chaotic stability of a dynamical system (body), such as photons or CS, can be characterised by the Lyapunov function.
The original appearance of a new body can be represented as a set of spontaneous events with an unclear purpose. The purpose is to stabilise the body’s state and bring order out of chaos. If nothing constrains this process, it will most likely diverge and be lost. However, the process is purposeful; the body tends towards its own creation.
Let us take the Lagrangian and Hamilton’s principle (minimising the action integral) to describe the state of this dynamical system. The Lagrangian is expressed in terms of the system’s kinetic (K) and potential (U) energies. The Hamilton equations help to interpret the nonlocal context of phenomena such as CS, which can also be studied using methods from smooth manifolds, thereby allowing nonlocal events to be modelled via Lyapunov-based stability theory.
Then, for the classical system, there exists a Lagrangian L = K − U. The stability of the system’s dynamics can be analysed using a Lyapunov function V equal to the system’s total internal energy, E = K + U [33, 43].
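As a minimal numerical illustration of using E = K + U as a Lyapunov function, the sketch below integrates a simple oscillator (unit mass and stiffness, a hypothetical toy system): without dissipation the function is conserved, while with dissipation dE/dt < 0 and the state is asymptotically stable.

```python
import numpy as np

def total_energy_history(c, steps=20000, dt=1e-3):
    """Integrate dx/dt = v, dv/dt = -x - c*v and track E = K + U."""
    x, v = 1.0, 0.0
    energy = []
    for _ in range(steps):
        v += (-x - c * v) * dt
        x += v * dt
        energy.append(0.5 * v**2 + 0.5 * x**2)  # Lyapunov candidate E = K + U
    return np.array(energy)

print(total_energy_history(c=0.0)[-1])  # ~0.5: energy conserved, no asymptotic decay
print(total_energy_history(c=0.5)[-1])  # ~0  : dE/dt < 0, asymptotically stable state
```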
Now, let us incorporate all external impacts into the system and make it isolated from the external environment. Due to the law of conservation of energy for the isolated system, E = V = const. In a chaotic system, the production of entropy, due to the fundamental thermodynamic relation, must satisfy the formula:

dS/dt = (1/T) (dE/dt + P dB/dt),    (1)

where dS denotes the change of the system’s entropy (J/K), T > 0 is a normalising factor (K, temperature), P is the pressure (Pa), E is the energy (J), B is the volume of the system (m^3), and dS/dt is the parameter that characterises the dynamics of entropy generation; it is not a rate, because t is not time in the Planck epoch. The differential dE is the change in the system’s total energy. The entropy change dS depends on the amount of energy added. According to this relation, in an isolated closed system, dE/dt = 0 because E = const.
For a closed system, one that cannot exchange energy or matter with the environment, dS/dt must be greater than 0, because a closed system increases its entropy and generates new motion, thereby spontaneously creating local order O(t) in the system, which can make the system more stable.
For stability of the system, the condition dS/dt < 0 would have to be satisfied. For relation (1), the Lyapunov stability conditions hold only if dE/dt < 0, which is impossible for a closed system. In a closed system, the conditions for stability are therefore violated.
Thus, to ensure the stability of any system with an internal source of chaos, it is required to open the system, allowing energy and entropy exchange between the internal and external environments. Such an opening can be ensured by placing windows along the boundary of the system (the body’s shell), which creates conditions for restoring and self-organising order O(t) within the body.
For such a chaotic system with order O(t), as was shown in [6, 7, 33, 43, 44], stable development can be ensured by creating a window in the boundary of the body that allows the system’s entropy to be exchanged with the outside environment; the corresponding condition can be written as:
where O(t) is the level of the order inside; O′(t) is the speed of change of O(t); S_int describes the level of the entropy inside the body; S′_int is the speed of change of the entropy inside the body; S_exch is the level of entropy exchanged between the body and its environment through the window; S′_exch is the speed of change of S_exch; and the symbol t identifies a change of states and does not necessarily indicate time. In unstable states, the left-hand side of inequality (2) is nonnegative. The instability of chaos can lead to the spontaneous formation of local order as O(t) increases and S_int(t) decreases.
To determine the second (topological) condition, forming systems (bodies) can be represented by an inverse problem-solving method on a topological space [6, 34]. This necessary condition ensures that purposeful body development can converge on an unclear goal—the body and its internal structure in every state must be altered in a direction that appears to move toward a more stable state than the previous one.
To determine the conditions for the sustainable and purposeful development of the bodies, the incorrect (ill-posed) problem x = A^−1 y_σ is solved, where x denotes a specific state of the body in space X, y_σ is an inaccurate goal of its development in space Y, and A^−1 is the inverse operator for achieving the development goal. Ill-posed problems may have no solution or many solutions, and even a slight change in the initial data can cause the solution to become unstable. Let us assume that a solution exists and that it is unique. This assumption is quite natural for the observed purposeful formation and expansion of the Universe.
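A minimal numerical illustration of this instability, with a hypothetical ill-conditioned operator A standing in for the development operator, is given below: a tiny perturbation of the goal y_σ changes the recovered state x drastically, while Tikhonov regularisation (one standard way of introducing additional information) restores stability.

```python
import numpy as np

# Ill-posed inverse problem x = A^{-1} y_sigma with a nearly singular operator A.
A = np.array([[1.0, 1.0],
              [1.0, 1.000001]])
y = np.array([2.0, 2.000001])        # exact goal, consistent with x = (1, 1)
y_sigma = y + np.array([0.0, 1e-5])  # slightly inaccurate goal

print(np.linalg.solve(A, y))        # [1, 1]
print(np.linalg.solve(A, y_sigma))  # roughly [-9, 11]: the solution is unstable

# Tikhonov regularisation stabilises the solution at the cost of a small bias.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y_sigma)
print(x_reg)                        # close to [1, 1] again
```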
Considering that there can be no metric in the space during the Planck period, the solution of such problems on a topological space is considered. For a stable and purposeful solution, it will be necessary to ensure a one-to-one continuous mapping between the two spaces X and Y, as well as compliance with the following necessary conditions for structuring information describing the state of the body and its environment:
• Determine the boundary of the body (x), the goals of its development (y_σ), and the development itself (A).
• Provide conditions for Hausdorff separability of elements in the spaces X and Y.
• Divide the space X into a finite number of parts (compactness).
• Ensure that all points in the space X are mapped to the space Y using the operator A (a mapping with a closed graph).
Applying thermodynamic and topological conditions supports the stability and purposefulness of the formation of hypothetical CSs and the photon structure. The following characteristics can represent this formation:
• A spontaneous formation of a stable local state with boundaries (bodies with shells).
• Windows are formed in the shells (windows for exchanging entropy).
• Partitions are formed inside the bodies (Hausdorff separability).
• The body’s medium is covered by a finite number of partitions (compact space).
• The body’s changes have hidden goals.
These characteristics, illustrated in Figure 1, can help take a fresh look at advanced AI devices and optical systems for researching non-standard problems.
Figure 1. The necessary conditions for stability and purposefulness of spontaneously forming bodies.
These two conditions can ensure the stability and purposefulness of the development of any open system that can exchange entropy with its environment. These conditions have been repeatedly used in practice to accelerate team strategic conversations and decision-making with traditional AI support [42, 45].
The suggested convergent method helps create an advanced AI system in the context of researching two non-standard phenomena that have been characterised as hypothetical features beyond the Standard Model. Indeed, in the study of different physical phenomena, the problem can also be set in reverse, that is, to prove the absence of a characteristic. Most likely, other non-standard research, for example on fundamental strings, quantum gravity, or teleportation, will have to refine this method or choose a different one for developing an adequate AI system that can ensure the acceleration and purposefulness of the research.
4 Hypothetical photon’s structure
To obtain the requirements for advanced AI to support solving non-standard problems, such as analysing the existence of a hypothetical photon structure, the two convergent conditions above (Section 3) can be helpful.
From the very beginning (the Planck epoch), photons, or rather their prototypes, were indistinguishable. However, in the classical standard approach, the chaotic state of any system can be described by Formula 1 only after the Planck epoch. This formula is incorrect for the Planck epoch, particularly because the energy E(t) includes a gravitational component, the pressure is too large and incomprehensible (about 10^113 Pa), and the concept of space-time cannot be applied. However, the Universe’s initial state can be hypothetically characterised by a huge yet finite amount of energy that remains unchanging. According to available observations, as mentioned above, the entropy of the entire Universe is constantly increasing while locally decreasing (as stars and galaxies form). Then, let us conditionally extend Formula 1 to states at t > 0, assuming that discreteness first appeared in the Universe and that S(t) can be defined. That is, S/E ≈ 0 at t = 0, and the global entropy increases in subsequent states.
At the beginning of the Universe, nothing prevents entropy from spontaneously decreasing in local states, creating more stable behaviour or establishing local order O(t) in the form of bodies with windows for entropy exchange with the environment. When such bodies form, entropy from them is released into the environment. Then, a local order in Formula 2 can be expressed:
where S_p ≈ 10^28 J/K corresponds to the entropy at the Planck epoch; then, the order can be assumed to be O(0) ≈ S_p at t = 0. It is the state with the highest possible order, due to the combination of all interaction forces at the Planck epoch, and with zero local and global entropy, S_int(t) ≈ 0. Therefore, at t = 0, the Universe’s state can be considered continuous (analogue), without discretisation.
At the completion of the Planck epoch, the global order O(t_p) ≈ 0. At t = t_p, the interaction forces are not yet separated, the photon and other particles may begin to be created, and space-time is no longer purely analogue.
To ensure the topological condition, it is necessary to partition the body and map its state to the goals as a topologically closed graph. Then, Formula 3 can be refined for the local order O(t) in the form of a separated body:
where δ(t) in Formula 4 is an indicator of the presence of the body and its shell at state t (taking the value 0 for absence and 1 for existence); N(t) is the number of the body’s partitions in state t; and w(t) is the weight (importance) of a partition (let us take w = 1).
From (2) and (4) follows a restriction defining the stability of local states that can spontaneously transform into bodies with greater stability and with lower internal entropy relative to the entropy of the global environment (omitting the symbol t without loss of generality):
To reflect the purposefulness of the formation of states, let us denote by X the set of photon states x(δ, w, N, S_int, S_exch, S′_int, S′_exch), by Y the set of hidden goals (for example, the free photon in the recombination epoch), and by y_σ ⊂ Y the unclear (hidden, inaccurate) goal of photon structure formation. Then, convergence of the inverse problem-solving can be achieved by imposing the necessary structural conditions on the sets X and Y to ensure the stability and purposefulness of body creation (Section 3). Taking these conditions into account, the inequality (Formula 5) takes the form:
Taking into account inequality (Equation 6), consider two transitional states in the hidden singularity and the Planck epoch:
1. t = 0, S_int = 0, N = 0, S_exch = 0; the left side of inequality (Equation 6) acquires a zero or even a positive value, which characterises the instability of local states in the Universe; this instability randomly initiates local drops in entropy, with the spontaneous and purposeful formation of bodies with shells and windows in the shells that have better stability.
2. 0 < t ≪ t_p: to increase the stability of the local states of a spontaneously formed body, the left side of inequality (6) for this body becomes less than zero, for which the values of the variables can be as follows: S_int ≪ S_p, 0 < S_exch ≪ S_p, S_int ≪ S_exch, N = const ≫ 1, S′_int < 0, S′_exch < 0, |S′_int| < |S′_exch|; inequality (6) then takes the form:
This inequality indicates that a spontaneously formed body with greater stability compared to the external environment possesses a shell with a window for exchanging entropy with the external environment. In this body, the order purposefully increases through the formation of partitions with windows, and the internal entropy grows more slowly than the external entropy.
Formula 7 dictates that the photon does not have an internal structure. This inference does not contradict the Standard Model of quantum physics, which states that a photon is an elementary particle and the quantum of the electromagnetic field. However, in the context of this article’s goal of creating advanced AI, a photon was conditionally represented in a stable thermodynamic equilibrium state at the Universe’s very origin, when it was created discretely and purposefully.
Formula 7 also demonstrates that the advanced AI system should be fully analogue and embodied in a signal diagnostic system that must also be analogue (not digital or pixel-based). However, digitisation of the signal after its processing by an analogue AI system may be necessary for its transmission, storage, and further processing.
So, the chaotic instability could have served as the origin of the photon, for example, through spontaneous fluctuations that led to the emergence of relatively stable local states in the Universe from which photons could be formed. Applying the two proposed conditions of the convergent method to the analogue, and subsequently chaotic, environment makes it possible to estimate the stability and purposefulness of forming local stable states of photons and CSs at the origin. The following characteristics can represent it:
• Spontaneous formation of more stable states in the form of bodies with a shell, in which the internal entropy is less than the external one.
• Spontaneous formation of windows in the shells of bodies to balance internal entropy with external entropy.
• Partitions are spontaneously formed inside the bodies.
• A finite number of partitions with windows is created in the body’s medium.
• The body’s changes are purposeful in the direction of creating stable photons and CSs.
• Then, the chaotic processes may begin to prevail, and, for example, a photon, by the epoch of recombination, acquires the currently known state.
It was during the Planck and Grand Unification epochs that the photon and CSs may hypothetically have begun to form. Unlike a photon, which, according to the Standard Model, has no mass and no structure, one-dimensional CSs have a hypothetical diameter of about 10^−20 cm [1, 2]. If this is the case, there may be similar features in diagnosing hypothetical CSs’ existence and photon structure that require advanced AI support.
5 Advanced AI to support CS and photon diagnostics
What kind of AI should be used to improve the effectiveness of signal diagnostics by light beams and optical telescopes? Currently, the key principle in AI development is to ensure its seamless embodied integration into the natural physical processes. In our case, the signal received by the telescope is analogue, the measurement in the interferometer is analogue, the laser beam is analogue, and the change in the interference pattern is analogue.
Modern diagnostic and processing systems, including those that utilise AI, digitise the signal received by a telescope or an interferometer using pixel-like detectors or by sampling a continuous signal. This digitalisation violates the principle of embodied AI; the signal is also truncated in high frequencies to combat noise and meet the sampling theorem’s conditions. Typically, modern optical AI systems receive input signals from pixel arrays of detectors or in digital form, have cascaded optical neurons, and involve multi-step training of optical connections and synapses. They can use Mach-Zehnder interferometers, micro-ring resonators, or vision-language models, which have a many-level structure and require electronics for data compression, nonlinear activation, and digital post-processing [46, 47].
Modern optical AI systems typically focus on creating chips with multi-layered optical neural networks [18, 48]. An all-analogue photoelectronic chip for image recognition implements a digital data processing algorithm [49, 50]. The weight-stationary architectures compute a matrix–vector product [51]. A bottleneck of most optical AI approaches is their reliance on digital data and the difficulty in achieving the necessary precision [52]. ML in optical AI systems presents challenges in training large-scale networks [53]. A hybrid holographic architecture was experimentally developed for detecting image matches [15, 54], and a deep-learning-based computer-generated holography was demonstrated [16]. However, all of them have used algorithms similar to those used in digital systems.
To detect and isolate the signal from noise, established full-analogue optical methods based on the properties of light, including laser coherence, spatial filtering, and spectral selection, can be employed. Laser coherence is used in the expectation that the objective signal has a stable signature while the noise is incoherent. The weak optical signal is mixed with a reference laser beam; owing to the coherence of the signal and reference beams, interference occurs, producing an objective signal at the difference frequency. Incoherent noise does not interfere with the reference beam and produces only a constant noise component at higher frequencies. Furthermore, the optical system can efficiently select the required difference frequency while discarding the noise. Interferometers can be used for diagnostics, recording a coherent signal from an interference pattern against a background of suppressed incoherent noise.
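The principle can be imitated numerically (the sketch below is a digital simulation of the analogue process, with hypothetical tone frequencies and noise level): a weak coherent tone mixed with a reference beam produces a narrow beat component at the difference frequency that stands out above the broadband incoherent noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 1e6, 0.05                    # simulation rate and duration (illustrative)
t = np.arange(0, T, 1 / fs)

f_sig, f_ref = 200_000.0, 190_000.0  # weak signal tone and local reference tone
signal = 1e-3 * np.cos(2 * np.pi * f_sig * t)
noise = rng.normal(0.0, 1e-2, t.size)        # incoherent noise, 10x the signal amplitude
reference = np.cos(2 * np.pi * f_ref * t)

mixed = (signal + noise) * reference         # mixing creates a beat at 10 kHz
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
low = freqs < 50_000                         # narrow band around the beat frequency
print(freqs[low][np.argmax(spectrum[low])])  # 10000.0 Hz: the beat dominates the noise
```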
The use of spatial filtering is based on the fact that the signal and noise come from different directions or that the signal has a stable spatial structure. The use of lenses and pinholes then allows only the focused signal to be selected, while the diaphragm blocks the scattered noise. As an optical filter, a translucent hologram or a holographic mirror can be used, which transmits or reflects light corresponding to a pre-known wave structure of the signal. Noise that does not correspond to this structure is blocked.
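A digital imitation of such spatial (Fourier-plane) filtering is sketched below: a smooth "signal" spot corrupted by scattered noise is cleaned by a low-pass pinhole mask applied in the Fourier plane (the image, noise level, and pinhole radius are arbitrary illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

image = np.exp(-(x**2 + y**2) / (2 * 20.0**2))      # smooth "signal" spot
noisy = image + 0.5 * rng.normal(size=image.shape)  # scattered (high-frequency) noise

# Fourier-plane pinhole: keep only low spatial frequencies (radius of 10 bins).
F = np.fft.fftshift(np.fft.fft2(noisy))
pinhole = (x**2 + y**2) <= 10**2
filtered = np.fft.ifft2(np.fft.ifftshift(F * pinhole)).real

print(np.mean((noisy - image) ** 2))     # ~0.25  : mean-square error before filtering
print(np.mean((filtered - image) ** 2))  # ~0.001 : most of the noise power is removed
```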
Spectral filtering, which uses interference filters, monochromators, and diffraction gratings, is based on the principle that the signal and noise have different wavelengths or emerge at different angles. The filter can be a slit or a detector that receives only light within a narrow spectral range corresponding to the signal. Such filters can pass a very narrow wavelength band (for example, 0.5 nm) within the optical range. If the noise is broadband (like sunlight), the filter will effectively cut out most of the noise. If the signal is pulsed, its temporal isolation against a background of constant noise can be achieved with electro-optical or acousto-optical modulators, which switch the light beam on and off in nanoseconds in response to an electrical signal.
This non-digital approach enables the establishment of new principles for experimental research on non-standard problems, rather than relying on particle accelerators and digital detectors (pixels and sampling, which require truncating high frequencies). For instance, in the case of using an optical telescope, research directions based on full-analogue photonic AI [7] can be informed by analysing the double slit effects and the spectral lines of single photons with the same characteristics observed by a telescope from space objects, including candidates for CSs.
For research on the photon structure, Planck-scale fluctuations must be recognised. This can be achieved through an indirect approach that helps detect LIV and decoherence with full-analogue photonic AI, which must be embodied in the analogue optical processes (before pixel-like signal detection). The experiment can be realised in the following way:
• An optical telescope receives an analogue signal from one of two stars;
• The optical filter makes a stream of single photons with similar values of all known parameters (energy, polarisation, spin, etc.).
• The optical double-slit system forms an interference spectrum on the screen, which is detected in an analogue way and processed by full-analogue photonic AI.
Such a process is repeated for the second star. If the two interference pictures differ, it can be inferred that Lorentz invariance is violated or that decoherence has accumulated along the photon’s path. These results may indicate that photons have a structure, as the difference between the spectra of two similar photons reveals the presence of different source objects. The challenge in this experiment is creating an optical filter and detector that can select photons with similar parameters [6].
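A minimal numerical sketch of the comparison step is given below; as an assumption made only for illustration, accumulated decoherence is modelled as reduced fringe visibility in the far-field double-slit pattern, and the two patterns are compared by their contrast.

```python
import numpy as np

# Far-field double-slit intensity: I(theta) ~ 1 + V*cos(2*pi*d*sin(theta)/lambda),
# where the visibility V drops if decoherence accumulates along the photon's path.
theta = np.linspace(-0.01, 0.01, 2001)  # observation angles, rad
lam, d = 500e-9, 1e-4                   # assumed wavelength and slit separation
phase = 2 * np.pi * d * np.sin(theta) / lam

def pattern(visibility):
    return 1.0 + visibility * np.cos(phase)

def contrast(intensity):
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

star_a = pattern(0.95)  # nearly coherent photons from the first source
star_b = pattern(0.60)  # photons with assumed accumulated decoherence
print(round(contrast(star_a), 2), round(contrast(star_b), 2))  # 0.95 vs 0.6
```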
To detect LIV and decoherence with full-analogue photonic AI support, the speed and spectral dispersion of different photons with varying energies can be measured using analogue and optical means. The following approaches may be instrumental for receiving and transmitting analogue signals to the input of the full-analogue photonic AI system [30, 55–59]:
• Optical telescopes with adaptive correction systems that detect gravitational lensing caused by CS (image doubling, abnormal curvature); in this case, decoherence can manifest itself in the loss of contrast of interference patterns, which such telescopes as the James Webb Space Telescope (JWST), LISA, Euclid, and Large Synoptic Survey Telescope (LSST) can provide.
• Laser interferometers for measuring the dependence of the speed of light on direction or polarisation. For example, Fabry-Perot interferometers with two perpendicular arms make it possible to detect LIV effects at the level of 4 × 10−18. They enable the detection of anisotropies in the propagation of electromagnetic waves caused by CSs.
• Multispectral cameras on spacecraft that can detect anomalies in the spectra of objects, for example, gamma and neutron spectrometers installed, for instance, on the Psyche instrument (which comprises a pair of identical cameras equipped with filters and telescopic lenses) to analyse the composition of materials, which may indirectly indicate the influence of CSs.
• Image processing systems for analysing streaks, aberrations and anomalies, and for detecting extended objects in optical images, such as traces of CSs that may be associated with LIV, for example, the pipeline of the StreakDet project.
Thus, to support research on non-standard phenomena, such as CS or photon’s structure existence/absence, using AI, the ideal AI must meet the following requirements:
• Be integrated into the telescope’s analogue channel for detecting the analogue signal from space, in addition to the pixel channel.
• Perceive and process an infinite signal spectrum without filtering out the high frequencies together with the noise, as is usually done in accordance with the sampling theorem (the sketch after this list illustrates the resulting information loss).
• Extract useful information from noise in an analogue way, for example, using lasers and holographic storage devices.
• Optimise the subsequent digitalisation of the signal using a broadband analogue-to-digital converter.
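The second requirement in the list above can be illustrated with a short numerical sketch: when a signal is sampled below the rate demanded by the sampling theorem, its high-frequency content is not merely lost but folded into the low-frequency band. The tone frequency and sampling rate below are arbitrary illustrative values.

```python
import numpy as np

fs = 100e3                               # sampling rate, Hz
t = np.arange(0, 1e-2, 1 / fs)           # 10 ms of samples
signal = np.sin(2 * np.pi * 90e3 * t)    # true 90 kHz tone (above Nyquist)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(f"apparent frequency: {freqs[np.argmax(spectrum)] / 1e3:.1f} kHz")  # ~10 kHz
```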
If these requirements are taken as the goal of inverse-problem solving for an AI system that supports solving non-standard problems, the architecture of the AI system can be as follows.
6 Full-analogue photonic AI
The required AI system must be embodied in a physical process of experimental equipment, replacing time- and energy-consuming digital procedures with more effective analogue-optical ones. The key to this approach may lie in the system’s unique ability to rapidly optically record training images at a specific point in the holographic matrix, as described in the aforementioned full-analogue photonic AI [7]. The architecture of this system, as illustrated in Figure 2, consists of three subsystems, each playing a pivotal role in the overall functionality and performance.
Figure 2. Full-analogue photonic AI system architecture [7].
Subsystem 1 creates an optical training set of images as a translucent matrix with each cell containing a single training image. Matrices can be constructed on a holographic basis, and each matrix corresponds to a specific thematic class of objects, such as stars, hypothetical CSs, and photon images or spectra, which can be synthesised by simulation.
Subsystem 2 is responsible for the one-step training function. It uses one of the recorded translucent matrices of training images, reads all the recorded images simultaneously using an expanded and split laser beam, and performs an optical Fourier convolution of all these images. Subsequently, all beams are directed to a specific point in the holographic memory. This process ensures that each training set of images for a particular thematic class has its own point in the holographic memory.
Subsystem 3 is used for recognition. The signal from the analogue source (detector, camera, antenna, or telescope) is sent to the Fourier modulator. After passing through the beam expander, all modulated parallel beams are directed to the cells of the holographic memory. The detector matrix measures the luminance of the resonating cells in the memory matrix, each of which corresponds to one of the image classes; this completes the recognition.
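The recognition step of Subsystem 3 can be emulated digitally as a Fourier-domain matched-filter correlation. The sketch below is only a numerical stand-in for the optical process: the "holographic" filters are sums of conjugate Fourier transforms of training images, the correlation peak plays the role of the resonating memory cell's luminance on the detector matrix, and the random 64 × 64 arrays are placeholders for real image classes.

```python
import numpy as np

def make_filter(training_images):
    # Emulate Subsystem 2: sum the conjugate Fourier transforms of a class's
    # training images into a single stored "holographic" filter.
    return np.sum([np.conj(np.fft.fft2(img)) for img in training_images], axis=0)

def correlate(probe, stored_filter):
    # Emulate Subsystem 3: correlation of a probe with a stored filter;
    # the peak height stands in for the resonating cell's luminance.
    corr = np.fft.ifft2(np.fft.fft2(probe) * stored_filter)
    return np.abs(corr).max()

rng = np.random.default_rng(0)
class_a = [rng.random((64, 64)) for _ in range(5)]   # stand-ins for image classes
class_b = [rng.random((64, 64)) for _ in range(5)]
filters = {"A": make_filter(class_a), "B": make_filter(class_b)}

probe = class_a[2] + 0.1 * rng.random((64, 64))       # noisy member of class A
scores = {name: correlate(probe, f) for name, f in filters.items()}
print(max(scores, key=scores.get))                    # expected: "A"
```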
A digital computer can be used to control the deflectors that set the beam angles and to capture information from the detector array. An analogue-to-digital interface must be used for transmitting the results of analogue signal processing to a computer or supercomputer.
The modelling of the Fourier transform (FT) (and the writing of many images at one cell of the holographic memory) is based on the convolution theorem, which states that the FT of a convolution of two functions is equal to the product of their FTs; for example, for functions f(x) and s(x), the following are true [60]:

ℱ[f(x) ⊗ s(x)] = ℱ[f(x)] · ℱ[s(x)],   (8)

ℱ[f(x) · s(x)] = ℱ[f(x)] ⊗ ℱ[s(x)],   (9)

where ℱ[φ(x)] is the Fourier transform of the function φ(x) and ⊗ denotes convolution.
The FT and convolution in Formulas 8, 9 are linear operations; it follows that a finite sequence of these procedures, together with addition and multiplication, can be performed by optics. A double convolution of multiple images can produce a single Fourier image while allowing for changes in scale and rotation.
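Formula 8 can be checked numerically in a few lines; the sketch below uses ordinary digital FFTs only to verify the identity that the optical system exploits in analogue form (the arrays are random test signals, not experimental data).

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random(256)
s = rng.random(256)

conv = np.convolve(f, s)                    # linear convolution, length 511
n = conv.size
lhs = np.fft.fft(conv)                      # FT of the convolution
rhs = np.fft.fft(f, n) * np.fft.fft(s, n)   # product of (zero-padded) FTs
print(np.allclose(lhs, rhs))                # True: Formula 8 holds
```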
A more detailed description of the full-analogue photonic AI system can be found in [61] and [14]. The main obstacles to its development are as follows:
• Signal distortions, noise, and the physical reliability of accurate optical transformations.
• The synthesis of new photonic materials for holographic memory.
• The storage density and retrieval accuracy.
As for signal distortions, practical solutions to eliminate them without digital preprocessing are being sought. For example, in the wave theory of aberration, distortions in a telescope are described by decomposition into a series of Zernike polynomials [62]. This makes it possible to discard very weak distortions in the higher harmonics and to compensate for the lower harmonics by optical means (active and adaptive optics), without digitalisation. As shown in Section 5, for an AI system embodied in a real physical environment, the extraction of a signal from noise can be entirely analogue-optical, avoiding the traditional digitalisation that distorts the signal spectrum and cuts off high frequencies. By combining various diagnostic methods of isolating the signal from noise (spectral and spatial filtering, the use of coherence, and time gating), it is possible, by purely analogue optical means, to isolate an extremely weak signal against a background of powerful noise and then feed it to the fully photonic AI for subsequent intelligent processing, without intermediate digitalisation. Thus, the built-in AI system for signal processing can be entirely analogue, without digitisation.
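As a minimal numerical illustration of the Zernike decomposition mentioned above, the sketch below fits a simulated wavefront (dominant defocus plus weak high-order ripple) to a few low-order Zernike polynomials by least squares. In the analogue system this separation would be carried out optically by active and adaptive optics; the pupil sampling, the polynomial set, and the coefficient values here are illustrative assumptions only.

```python
import numpy as np

def zernike_basis(rho, theta):
    # A few low-order Zernike polynomials on the unit disc:
    # piston, tip, tilt, defocus, oblique and vertical astigmatism.
    return np.stack([
        np.ones_like(rho),
        2 * rho * np.cos(theta),
        2 * rho * np.sin(theta),
        np.sqrt(3) * (2 * rho**2 - 1),
        np.sqrt(6) * rho**2 * np.sin(2 * theta),
        np.sqrt(6) * rho**2 * np.cos(2 * theta),
    ], axis=-1)

# Sample points on the pupil
n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0

# Simulated wavefront: mostly defocus plus weak high-order ripple ("noise")
rng = np.random.default_rng(2)
wavefront = 0.8 * np.sqrt(3) * (2 * rho**2 - 1) + 0.02 * rng.standard_normal(rho.shape)

# Least-squares Zernike decomposition over the pupil
A = zernike_basis(rho[mask], theta[mask])
coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
print(np.round(coeffs, 3))   # the defocus coefficient (~0.8) dominates; such
                             # low-order terms are the ones an active/adaptive
                             # optical system would compensate
```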
As for synthesising photonic materials, new materials for rewritable holographic memory are needed; these can be obtained through a multistep, purposeful selection and integration of relevant components, for example, lithium niobate with iron or other dopants (Fe:LiNbO₃). The article by Yao et al. [63] describes a sub-ångström ultra-high-transmittance spectroscopic technique based on lithium niobate. The convergent method, with its inverse-problem-solving approach using a genetic algorithm, can help make the process of synthesising photonic materials more purposeful [7, 61]; a toy illustration of such a search is sketched below. Several separate but interconnected directions exist for synthesising the required photonic materials: glass; silicon-based semiconductor photonics; 2D materials such as graphene; phase-change materials that switch between amorphous and crystalline states; photonic crystals with periodic nanostructures; plasmonic nanostructures using metal-dielectric interfaces; metamaterials with properties not found in nature; quantum dots (nanoscale semiconductor structures); and organic materials, including proteins.
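The following is a toy illustration of the genetic-algorithm-driven, inverse-problem-style search referred to above: it evolves a small population of candidate dopant mixes towards a hypothetical optimum. The fitness function, the three-component parameterisation, and all numerical values are invented for illustration; in a real workflow the objective would come from measured or simulated material properties (e.g., diffraction efficiency and rewritability).

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # Hypothetical figure of merit for a candidate dopant mix (three fractions);
    # the target vector is an invented stand-in for an optimal composition.
    target = np.array([0.05, 0.20, 0.10])
    return -np.sum((x - target) ** 2)

def evolve(pop_size=40, n_gen=60, n_var=3, mut=0.02):
    pop = rng.random((pop_size, n_var)) * 0.3                # initial fractions
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # selection
        children = parents + mut * rng.standard_normal(parents.shape)  # mutation
        pop = np.vstack([parents, np.clip(children, 0.0, 0.3)])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(np.round(evolve(), 3))   # converges towards the (hypothetical) target mix
```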
Regarding the realistic storage density and retrieval accuracy of holographic memory, different natural-analogue images of the flow can be recorded at a single point in the holographic memory matrix and then retrieved and restored for the specific situation under study. The minimal image element size in digital and analogue holography is limited by the wavelength of light, typically around 500–600 nm for visible light. For example, modern digital holographic technologies require approximately 600 pixels (20 × 30) for face recognition. To prevent interference/crosstalk, images must be separated by a minimum buffer of ∼5–10 microns, so each face with its buffer requires ∼600 square microns, and the theoretical maximum is ∼1,400 faces per mm². However, with error correction, the number of faces in practice will be about 166. For the non-digital case, each face recording must be at least 50–100 microns in diameter, the buffer zone between faces needs to be ∼20–30 microns, and the total area per face, including the buffer, is approximately 100 × 100 microns; therefore, the theoretical maximum is ∼100 faces per mm². To provide the required amount of memory, a cascading increase in the number of holographic plates is needed. A further challenge of holographic memory is read/write erasure, which degrades the fidelity of stored pages and requires data refresh [64]. The diffraction efficiency, which affects retrieval accuracy, depends on the light energy used for hologram writing rather than on the exposure duration [65].
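The packing estimates above can be reproduced with simple arithmetic, as in the sketch below. The assumed image dimensions and buffer widths are illustrative values chosen from the ranges quoted in the text, so the digital-case result lands in the same order of magnitude as the quoted ∼1,400 faces per mm² rather than reproducing it exactly.

```python
def faces_per_mm2(image_w_um, image_h_um, buffer_um):
    # Upper-bound tiling estimate: each image plus its buffer occupies one cell
    cell_area_um2 = (image_w_um + buffer_um) * (image_h_um + buffer_um)
    return 1e6 / cell_area_um2            # 1 mm^2 = 1e6 square microns

# Digital holography: 20 x 30 pixels at ~0.6 um per pixel, ~10 um buffer
print(round(faces_per_mm2(12, 18, 10)))   # ~1,600, same order as the ~1,400 quoted

# Non-digital recording: ~70 x 70 um image with ~30 um buffer
print(round(faces_per_mm2(70, 70, 30)))   # 100, matching the text
```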
The analogue optical approach cannot bypass fundamental Planck limits, and it cannot yet realistically achieve the required sensitivity or ensure noise rejection, owing to the lack of suitable photonic materials and a proven method. However, the analogue approach can increase the range of detectable high frequencies by 7–9 orders of magnitude [7, 61]. This can be achieved by rejecting signal sampling, embedding the AI system into a sensor design (e.g., a telescope), and using analogue holographic filtering of the high-frequency portion of the signal.
The project of creating the full-analogue photonic AI is at a conceptual level of development: the methodology is documented, the principle is established with defined requirements, and the design elements and algorithms have been proposed [6, 7, 14]. Individual optical components, including the optical setup, lasers, micro-laser matrices, lenses, detectors, and beam splitters, are commercially available and have been demonstrated.
7 Discussion and conclusion
The implementation of modern digital generative AI systems has encountered limitations, including their autoregressive character, high time and energy consumption, the need to use multilevel neural networks with multistep training, and the inability of AI models to represent non-formalisable cognitive semantics [6]. These limitations restrict the ability of such AI systems to support research on non-standard problems, which can be described as inverse problem-solving with vague goals in a topologically and chaotically represented environment.
Some of these limitations can be overcome by developing the suggested full-analogue photonic AI based on holography and the Fourier transform [7, 61]. The article proposes the architecture of such an AI, designed to meet the requirements of solving two non-standard problems. To formulate the criteria for such an AI, the experimental impossibility of proving the existence of CSs and the inaccessibility of observing the photon formation process in the first epochs of the Universe’s birth were chosen, in the form of a thought experiment, as examples of non-standard problems.
The full-analogue photonic AI system must expand the range of high-frequency processing and reduce the time and energy costs of ML by several orders of magnitude. It can increase the reliability and accuracy of diagnosing high-frequency signals, the depolarisation of gravitational waves, violations of signal invariance, and decoherence, as well as improve the separation of the signal from the noise.
However, there are obstacles to the development of full-analogue photonic AI, including signal distortions and inaccuracies in optical transformations, as well as the need to synthesise new photonic materials. Directions for overcoming these obstacles have been suggested.
The theoretical research and formulation of requirements for advanced AI to address non-standard problems were based on the author’s convergent method [5, 6], which provides the necessary conditions for purposeful and sustainable research. This method utilises inverse problem-solving in topological spaces and fundamental thermodynamics approaches.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
AR: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Acknowledgements
I thank Academician V.A. Kotelnikov, who was my teacher, for deepening my understanding of the sampling theorem. I thank Academician S.L. Sobolev and Professor A.V. Chechkin for their advice to utilise methods for solving inverse problems on topological spaces to tackle complex issues. I thank Professor S.V. Ulyanov for his guidance on applying thermodynamic methods to evaluate the stability of intelligent systems. I thank Professor M.V. Sazhin for his clarifications on cosmic strings.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Correction note
This article has been corrected with minor changes. These changes do not impact the scientific content of the article.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Sazhina OS, Scognamiglio D, Sazhin MV, Capaccioli M. Optical analysis of a CMB cosmic string candidate. Monthly Notices R Astronomical Soc (2019) 485(2):1876–85. doi:10.1093/mnras/stz527
2. Sazhin MV, Khovanskaya OS, Capaccioli M, Longo G, Paolillo M, Covone G, et al. Gravitational lensing by cosmic strings: what we learn from the CSL-1 case. Monthly Notices R Astronomical Soc (2007) 376(4):1731–9. doi:10.1111/j.1365-2966.2007.11543.x
3. Penrose R. Shadows of the mind: a search for the missing science of consciousness. Oxford; New York: Oxford University Press (1994).
4. Dugne J-J, Fredriksson S, Hansson J. Preon trinity—A schematic model of leptons, quarks and heavy vector bosons. Europhysics Lett (2002) 60(2):188–94. doi:10.1209/epl/i2002-00337-8
5. Raikov AN. Convergent cognitype for speeding-up the strategic conversation. IFAC Proc Volumes (2008) 41(2):8103–8. doi:10.3182/20080706-5-KR-1001.01368
6. Raikov A. Cognitive semantics of artificial intelligence: a new perspective. Singapore: Springer (2021). doi:10.1007/978-981-33-6750-0
7. Raikov A, Guo M. Full-analogue photonic AI for embracing the uncertainty of the environment. In: I Perko, R Espejo, and A Reyes, editors. Shaping collaborative ecosystems for tomorrow, 12. Leeds, United Kingdom: Emerald Publishing Limited (2025). p. 77–89. doi:10.1108/978-1-83662-494-320251006
8. Raikov A, Pirani M. Human-machine duality: what's next in cognitive aspects of artificial intelligence? IEEE Access (2022) 10:56296–315. doi:10.1109/access.2022.3177657
9. Baumgart CW, Dunham ME, Moses JD. Curve fitting and error modeling for the digitization process near the Nyquist rate. IEEE Trans Instrumentation Meas (1991) 40(3):553–7. doi:10.1109/19.87018
10. Troyanovskyi VM, Koldaev VD, Zapevalina AA, Serduk OA, Vasilchuk KS. Why the using of Nyquist-Shannon-Kotelnikov sampling theorem in real-time systems is not correct? In: IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). St. Petersburg and Moscow, Russia: IEEE (2017). p. 1048–51. doi:10.1109/EIConRus.2017.7910736
11. Zhmud VA, Nosek J, Dimitrov LV, Boyarchikov EY. Investigation of the dependence of the ADC error on the conversion frequency: recommendations for choosing a multiplier for the nyquist frequency. In: DB Solovev, GL Kyriakopoulos, and T Venelin, editors. SMART automatics and energy. Smart innovation, systems and technologies, 272. Singapore: Springer (2022). 183–92. doi:10.1007/978-981-16-8759-4_20
12. Jordan I, Huppert M, Brown MA, van Bokhoven JA, Wörner HJ. Photoelectron spectrometer for attosecond spectroscopy of liquids and gases. Rev Sci Instrum (2015) 86:123905. doi:10.1063/1.4938175
13. Ellery A. The state of hybrid artificial intelligence for interstellar missions. Prog Aerospace Sci (2025) 156:101100. doi:10.1016/j.paerosci.2025.101100
14. Raikov A, Guo M, Pan J, Chen W (2025). Device for image recognition based on full-analog photonic neural network and method thereof. United States Patent. US 12,222,680 B1.
15. Gamboa J, Shen X, Hamidfar T, Shahriar SM. Ultrafast image retrieval from a holographic memory disc for high-speed operation of a shift, scale, and rotation invariant target recognition system (2022). Available online at: https://arxiv.org/ftp/arxiv/papers/2211/2211.03881.pdf.
16. Shi L, Li B, Kim C, Kellnhofer P, Matusik W. Towards real-time photorealistic 3D holography with deep neural networks. Nature (2021) 591:234–9. doi:10.1038/s41586-020-03152-0
17. Khonina SN, Kazanskiy NL, Skidanov RV, Butt MA. Exploring types of photonic neural networks for imaging and computing—A review. Nanomaterials (2024) 14:697. doi:10.3390/nano14080697
18. Chen Y, Nazhamaiti M, Xu H, Meng Y, Zhou T, Li G, et al. All-analog photoelectronic chip for high-speed vision tasks. Nature (2023) 623:48–57. doi:10.1038/s41586-023-06558-8
19. Chen S, Li Y, Wang Y, Chen H, Ozcan A. Optical generative models. Nature (2025) 644:903–11. doi:10.1038/s41586-025-09446-5
20. Kalinin KP, Gladrow J, Chu J, Clegg JH, Cletheroe D, Kelly DJ, et al. Analog optical computer for AI inference and combinatorial optimization. Nature (2025) 645:354–61. doi:10.1038/s41586-025-09430-z
21. Torki M, Hajizadeh H, Farhang M, Vafaei Sadr A, Movahed SMS. Planck limits on cosmic string tension using machine learning. Monthly Notices R Astronomical Soc (2022) 509(2):2169–79. doi:10.1093/mnras/stab3030
22. Bubenik P. Statistical topological data analysis using persistence landscapes. J Machine Learn Res (2015) 16:77–102. doi:10.48550/arXiv.1207.6437
23. Ciuca R, Hernández OF. A bayesian framework for cosmic string searches in CMB maps. J Cosmology Astroparticle Phys (2017) 2017:028. doi:10.1088/1475-7516/2017/08/028
24. Danzmann K. LISA mission overview. Adv Space Res (2000) 25(6):1129–36. doi:10.1016/S0273-1177(99)00973-4
25. Gong Y, Luo J, Wang B. Concepts and status of Chinese space gravitational wave detection projects. Nat Astron (2021) 5:881–9. doi:10.1038/s41550-021-01480-3
26. Ellis J, Mavromatos N, Nanopoulos D. Derivation of a vacuum refractive index in a stringy space-time foam model. Phys Lett B (2008) 665(5):412–7. doi:10.1016/j.physletb.2008.06.029
27. Xi Y, Shu F-W. Constraints on lorentz invariance violation from GRB 221009A using the DisCan method. Chin Phys C (2025) 49(12):125101. doi:10.1088/1674-1137/adfa01
28. Stegmann J, Vermeulen SM. Detecting the heterodyning of gravitational waves. Class Quantum Grav (2024) 41:175012. doi:10.1088/1361-6382/ad682c
29. Detweiler S. Pulsar timing measurements and the search for gravitational waves. Astrophysical J (1979) 234(Part 1):1100. doi:10.1086/157593
30. Armano M, Audley H, Baird J, Binetruy P, Born M, Bortoluzzi D, et al. Temperature stability in the sub-milliHertz band with LISA pathfinder. Monthly Notices R Astronomical Soc (2019) 486(3):3368–79. doi:10.1093/mnras/stz1017
31. Austin E, Eckley IA, Bardwell L. Detection of emergent anomalous structure in functional data. Technometrics (2024) 66(4):614–24. doi:10.1080/00401706.2024.2342315
32. Bekenstein JD. Universal upper bound on the entropy-to-energy ratio for bounded systems. Phys Rev D (1981) 23:287–98. doi:10.1103/PhysRevD.23.287
33. Ulyanov SV, Raikov AN. Chaotic factor in intelligent information decision support systems. In: Proceedings of the 3rd International Conference on Application of Fuzzy Systems and Soft Computing. Wiesbaden, Germany (1998). p. 240–5.
34. Ivanov VK. Incorrect problems in topological spaces. Sib Math J (1969) 10:785–91. doi:10.1007/BF00971654
35. Zel'dovich YB, Novikov ID. Relativistic astrophysics, 2: the structure and evolution of the universe revised. Enlarged Edition. Chicago: University of Chicago Press (1983).
36. Jalalzadeh R, Jalalzadeh S, Heydarzade Y. Cosmological singularities in brane gravity. Nucl Phys B (2025) 1017:116945. doi:10.1016/j.nuclphysb.2025.116945
37. Saha S, Güdekli E, Chattopadhyay S, Yildiz GDA. Thermodynamics of the most generalized form of holographic dark energy and some particular cases with corrected entropies. Nucl Phys B (2025) 1014:116867. doi:10.1016/j.nuclphysb.2025.116867
38. Sheykhi A, Liravi L, Jusufi K. Thermodynamical properties of nonsingular universe. Phys Dark Universe (2025) 48:101931. doi:10.1016/j.dark.2025.101931
39. Di Bari P. On the origin of matter in the universe. Prog Part Nucl Phys (2022) 122:103913. doi:10.1016/j.ppnp.2021.103913
40. Qian L, Yao R, Sun J, Xu J, Pan Z, Jiang P. FAST: its scientific achievements and prospects. The Innovation (2020) 1(3):100053. doi:10.1016/j.xinn.2020.100053
41. Hegel GWF. Georg wilhelm friedrich hegel: the science of logic. United States, NY: Cambridge University Press (2010).
42. Gubanov D, Korgin N, Novikov D, Raikov A. E-Expertise: modern collective intelligence. Springer Series: Stud Comput Intelligence (2014) 558:XVIII. doi:10.1007/978-3-319-06770-4
43. Zakharov VN, Ulyanov SV. Fuzzy models of intelligent industrial regulators and control systems. II. Evol Principles Construction. Tech Cybernetics (1993) 4:189–205. (in Russian).
44. Litvintseva LV, Ulyanov SV. Intelligent control system. I. Quantum computing and self-organization algorithm. J Computer Syst Sci Intern (2009) 48(6):946–84. doi:10.1134/S1064230709060112
45. Raikov A. Accelerating decision-making in transport emergency with artificial intelligence. Adv Sci Technology Eng Syst J (2020) 5(6):520–30. doi:10.25046/aj050662
46. Deng Z, Dang Z, Zhang Z. Flex multimode neural network for complete optical computation. iScience (2025) 28(5):112376. doi:10.1016/j.isci.2025.112376
47. Xiao J, Sun Z, An H, Zhao H, Qiu M, Li X. Optical image processing and applications empowered by vision-language models. iOptics (2025) 1(1):100003. doi:10.1016/j.iopt.2025.100003
48. Lin X, Lin X, Rivenson Y, Yardimci NT, Veli M, Luo Y, et al. All-optical machine learning using diffractive deep neural networks. Science (2018) 361(6406):1004–8. doi:10.1126/science.aat8084
49. Chen Z, Sludds A, Davis R, Christen I, Bernstein L, Ateshian L, et al. Deep learning with coherent VCSEL neural networks. Nat Photon (2023) 17:723–30. doi:10.1038/s41566-023-01233-w
50. Bandyopadhyay S, Sludds A, Krastanov S, Hamerly R, Harris N, Bunandar D, et al. (2022). Single-chip photonic deep neural network with accelerated training. doi:10.48550/arXiv.2208.01623
51. Gyger S, Zichi J, Schweickert L, Elshaari AW, Steinhauer S, Covre da Silva SF, et al. Reconfigurable photonics with on-chip single-photon detectors. Nat Commun (2021) 12:1408. doi:10.1038/s41467-021-21624-3
52. Ahmed SR, Baghdadi R, Bernadskiy M, Bowman N, Braid R, Carr J, et al. Universal photonic artificial intelligence acceleration. Nature (2025) 640:368–74. doi:10.1038/s41586-025-08854-x
53. Xu D, Ma Y, Jin G, Cao L. Intelligent photonics: a disruptive technology to shape the present and redefine the future. Engineering (2025) 46:186–213. doi:10.1016/j.eng.2024.08.016
54. Huang Z, Cao L. Quantitative phase imaging based on holography: trends and new perspectives. Light Sci Appl (2024) 13:145. doi:10.1038/s41377-024-01453-x
55. Cang J, Gao Y, Liu Y, Sun S. Implications of pulsar timing array results for high frequency gravitational waves. Phys Lett B (2025) 864:139429. doi:10.1016/j.physletb.2025.139429
56. Jiang S, Li Y, Yan H, Chen M, Qiu F, Lin X, et al. Modulation parameters characterisation of electro-optic phase modulators for space-based gravitational wave detection. Measurement (2025) 256(A):117971. doi:10.1016/j.measurement.2025.117971
57. Chintalapati B, Precht A, Hanra S, Laufer R, Liwicki M, Eickhoff J. Opportunities and challenges of on-board AI-based image recognition for small satellite Earth observation missions. Adv Space Res (2025) 75(9):6734–51. doi:10.1016/j.asr.2024.03.053
58. Dibb SD, Asphaug E, Bell JF, Binzel RP, Bottke WF, Cambioni S, et al. A post-launch summary of the science of NASA's psyche mission. AGU Adv (2024) 5:e2023AV001077. doi:10.1029/2023AV001077
59. Pöntinen M, Granvik M, Nucita AA, Conversi L, Altieri B, Auricchio N, et al. Euclid: identification of asteroid streaks in simulated images using StreakDet software. Astron and Astrophysics (2020) 644:A35–12. doi:10.1051/0004-6361/202037765
60. Zhang D. Fundamentals of image data mining: analysis, features, classification and retrieval. Cham, Switzerland: Springer (2019). doi:10.1007/978-3-030-17989-2
61. Raikov A. Photonic artificial intelligence. Singapore: Springer (2024). doi:10.1007/978-981-97-1291-5
62. Lakshminarayanan V, Fleck A. Zernike polynomials: a guide. J Mod Opt (2011) 58(18):1678. doi:10.1080/09500340.2011.633763
63. Yao Z, Liu S, Wang Y, Yuan X, Fang L. Integrated lithium niobate photonics for sub-ångström snapshot spectroscopy. Nature (2025) 646:567–75. doi:10.1038/s41586-025-09591-x
64. Chu J, Cheriere N, Brennan G, Yang M, O’Shea G, Gladrow J, et al. Can holographic optical storage displace hard disk drives? Commun Eng (2024) 3:79. doi:10.1038/s44172-024-00225-0
Keywords: analogue artificial intelligence, convergent method, cosmic strings, inverse problem-solving, structureless photon, thermodynamics
Citation: Raikov A (2026) Architecture of full-analogue photonic AI for non-standard problem solving. Front. Phys. 13:1704910. doi: 10.3389/fphy.2025.1704910
Received: 14 September 2025; Accepted: 22 December 2025;
Published: 22 January 2026; Corrected: 26 January 2026.
Edited by:
Sauro Succi, Italian Institute of Technology (IIT), Italy
Reviewed by:
Pedro Caridade, University of Coimbra, Portugal
Oem Trivedi, Ahmedabad University, India
Copyright © 2026 Raikov. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Aleksandr Raikov, aleksandr@jnist.cn