ORIGINAL RESEARCH article

Front. Phys., 18 March 2026

Sec. Interdisciplinary Physics

Volume 14 - 2026 | https://doi.org/10.3389/fphy.2026.1760244

Loophole-free Bell inequality violation experiments verifying the realism and locality principles

  • Swiss Federal Institute of Technology Lausanne, Lausanne, Switzerland

Abstract

The aim of this study is to produce a simulation of EPR experiments violating the CHSH-Bell inequalities using physically interpretable objects (properties defined before measurement) and interactions (no superluminal communication), without influence from any theory. It turns out that the proposed model systematically violates the CHSH-Bell inequalities, reaching all possible values of S. Our simulation reproduces experimental entanglements with greater efficiency. This approach has at least two consequences. First, it demonstrates that violations of the CHSH-Bell inequalities are possible while still verifying the principles of realism and locality, but in a different sense from that of Bell’s theorem. Therefore, Bell’s theorem itself is not called into question, nor are the results of the EPR experiments. What is challenged is the deduction that leads to the current interpretation (often described as strange because it abandons one of the principles of realism and locality as defined by Bell’s theorem). Second, it challenges real-world EPR experiments to exceed the efficiency rates of our simulation. Our idealization demonstrates that, between the minimum efficiency rates required to confirm the violation of Bell’s inequalities and the efficiency rates reached by our idealization, the interpretation of experiments remains possible within a framework of “classical” physical principles (properties defined before measurement and no superluminal communication). Confirming the strangeness of quantum mechanics would therefore require obtaining efficiency rates higher than those of our idealization.

1 Introduction

The EPR experiments on entanglement [1–12] demonstrated that Quantum Mechanics (QM) violates the CHSH-Bell inequalities [13]. They were among the most surprising and spectacular discoveries of recent years. The interpretation of QM has been problematic since its inception [14], and the results of the EPR experiments confirmed the strangeness of the physics underlying this theory, or at least its nonconformity with the physical principles known to date, particularly with regard to the principles of realism and locality [15].

The aim of this article and our simulation of EPR experiments is not to reject the repeatedly demonstrated experimental violation of the CHSH-Bell inequalities. It is also known and proven that a local realistic model cannot violate the CHSH inequality. This seems to contradict our simulation, but it does not; we will provide some explanations, which will certainly need further investigation. The objective is to find a physically interpretable model of EPR experiments. QM provides a mathematical interpretation that is difficult to reconcile with current physical knowledge, giving rise to numerous, often troubling or even debatable, physical interpretations (multiple or parallel worlds, instantaneous “information” transmission, and so on) that generally appear unsatisfactory. To achieve this, our constraints will be simply to generate a source with random characteristics and then obtain binary results (“+1” or “−1”, the only two results accessible to observers) by establishing measurement and detection rules in the form of logical tests to determine whether the source satisfies these rules. No equations that would allow specific statistics to be obtained are used; in particular, no QM equations are used in our simulation. Nevertheless, the results will reproduce the statistics of QM and, more generally, all simulations will violate the CHSH-Bell inequalities with values of S that cover the entire possible range. We can therefore also obtain statistics that go beyond QM, demonstrating the generality of our model. This modeling can be adapted to the interpretation of other QM experiments (the Mach-Zehnder interferometer, for example). However, this model needs improvement in order to be applicable to all QM experiments. The QM results will appear as a special case of a large family of models violating Bell’s inequalities.
Moreover, our simulation of EPR experiments will violate Bell’s inequalities with detector efficiency rates systematically higher than those of current EPR experiments. While many studies provide minimum detector efficiency rates above which EPR experiments can conclude that Bell’s inequalities are violated, thus confirming the validity of QM, our study will provide maximum detector efficiency rates. If real EPR experiments were unable to exceed the rates that our simulations obtain, it could mean that our modeling is a possible first approach to realistically and locally interpreting QM.

2 Basic objects of our EPR experiments

We use the basic design of an “ideal” two-channel EPR experiment, the CHSH (Clauser, Horne, Shimony and Holt) experiment, which we will model (Figure 1).

FIGURE 1

The source (Figure 1) will produce a disk sector with a principal axis (the axis of symmetry of this sector) and a size parameter that we will call “Eva” (Figure 2A), which defines half the angular size of the sector. The source object thus extends between −Eva and +Eva around the principal axis. The source object will be called a photon because it can be interpreted as a polarized photon. The source will emit two identical photons, one towards Alice and the other towards Bob (Figure 1). We also provide a second simulation with anti-correlated photon pairs, where the photon on Bob’s channel is rotated by 90°. But our text is based on the case of pairs of identical photons.

FIGURE 2

The measuring apparatus of Alice or Bob (Figure 1) will sort the emitted photons according to whether they pass through one of the four quadrants that divide a disk characterized by a principal axis (Figure 2B). Relative to this principal axis, these four quadrants will measure four states: two opposite quadrants correspond to the result +1 and the two others to the result −1, the two quadrants of each pair being distinguished by their phase. The measuring apparatus will be called a polarizer because our experiment can be interpreted quite naturally in terms of polarization experiments.

The detector on each channel (Figure 1) will detect the photon according to its polarization whatever its orientation, that is to say either +1 or −1; i.e., it will indicate in which pair of opposite quadrants of the disk (both vertical or both horizontal) the sector of the source is located in the measuring apparatus.

The coincidence counter (Figure 1) will be the analysis that consists of counting simultaneous detections.

3 The rules

For the source, the principal axis angle will be chosen randomly: all principal axis directions are possible. The size of the disk sector will be a fixed angle, the same value for all photons of the experiment. We therefore have a source that emits photons polarized along a principal axis in all possible directions, with a fixed size that characterizes the photon. In our simulation files, the value of “Eva” is a parameter that can be modified by the experimenter, allowing us to test countless definitions of “photons”. It is through this parameter that all the values of S for Bell’s inequality violations can be obtained.
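As a minimal sketch, the source rules above can be written as follows. This is Python, whereas the paper’s actual simulation is a spreadsheet, so all names here (`emit_pair`, `eva`) are illustrative; drawing the axis on a 0°–180° range is also a modeling assumption (polarization is axial, so angles are equivalent mod 180°):

```python
import random

def emit_pair(eva, rng, anticorrelated=False):
    """Emit a pair of photons, each represented as (principal axis, half-size).

    The principal axis is drawn uniformly at random; the half-size `eva`
    (the paper's size parameter Eva) is fixed for the whole experiment.
    If `anticorrelated` is True, the second photon is rotated by 90 degrees,
    as in the paper's second simulation file.
    """
    theta = rng.uniform(0.0, 180.0)  # random polarization axis (axial, mod 180)
    partner = (theta + 90.0) % 180.0 if anticorrelated else theta
    return (theta, eva), (partner, eva)

rng = random.Random(42)
photon_alice, photon_bob = emit_pair(10.0, rng)  # identical pair by default
```

By default the two photons are identical; the anti-correlated variant only changes the second photon’s axis.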

For the measuring apparatus (the polarizers of Alice and Bob), the orientations of the disk sectors (the photons) are analyzed to determine which quadrant they are located in. The fact that, in our experiment, the photon is a sector (and not a simple vector) implies that there are two possible measurement situations: either the sector is entirely contained within one of the polarizer’s quadrants, or the disk sector straddles two quadrants. We will call the sectors straddling two quadrants “ambiguous photons”. We therefore need to define a behavior for these two situations. For this experiment, the basic behavior will be as follows:

  • When the photon is entirely contained within one quadrant, it passes through the polarizer unchanged and can be measured.

  • When the photon straddles two quadrants, it is absorbed and cannot be measured.

In this configuration, at the output of the polarizers, photons that are ambiguous with respect to the current measurement are eliminated (absorbed). The other photons are not modified, pass through the polarizer, and will therefore be detected (giving +1 or −1).

For the detectors of Alice and Bob, the value +1 will be given when the photon passes through either of the two corresponding opposite quadrants, thus indicating the state +1 regardless of the phase; likewise, the value −1 will be given when the photon passes through either of the two other quadrants, indicating the state −1 regardless of the phase. Absorbed photons (ambiguous photons) won’t be recorded because they no longer exist, just as in a real-world experiment that would not detect them. Nevertheless, they could be recorded, allowing us to perform statistical analyses impossible in reality, because in our computer experiment we know all the emitted pairs and the fate of the photons through the measuring apparatus and detectors. But to be as close as possible to a real-world EPR experiment, we do not record them. We will thus count the number of photons detected in the detectors on each channel relative to the number of photons emitted, which gives us what we call the detector efficiency. We develop this concept in Section 6.
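The measurement and detection rules above can be sketched in one function. This is an illustrative Python reading of the text, not the paper’s spreadsheet formula: placing the quadrant boundaries at ±45° from the polarizer axis is our assumption (the surviving text only says the disk is divided into four quadrants around a principal axis), and the function is valid for half-sizes below 45°:

```python
def measure(photon, polarizer_angle):
    """Measure a sector photon with a four-quadrant polarizer.

    Returns +1 or -1 when the whole sector lies inside one quadrant,
    and None when the sector straddles a quadrant boundary (the
    "ambiguous" photon is absorbed and never reaches the detector).
    Opposite quadrants give the same detector value, so everything can
    be computed mod 180. Assumes the sector half-size is below 45.
    """
    theta, eva = photon
    # Positions of the sector's lower and upper limits, shifted so that
    # quadrant boundaries fall at multiples of 90 in the variable u.
    lo = (theta - eva - polarizer_angle + 45.0) % 180.0
    hi = (theta + eva - polarizer_angle + 45.0) % 180.0
    if int(lo // 90.0) != int(hi // 90.0):
        return None              # limits in different quadrants: absorbed
    return +1 if lo < 90.0 else -1
```

With a zero half-size the sector degenerates to a vector and every photon is measured, recovering the classical two-outcome case.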

4 Principle of calculating CHSH-Bell inequalities

This model is implemented in the simulation files (cf. Supplementary Data). The main results of the ongoing experiment are found in the “Summary” tab. The four other tabs simulate the ongoing EPR experiment according to the four polarizer configurations (Equations 1–4) expected to obtain the greatest value of S in the QM experiments.

The analysis of the results consists of counting the number of emitted photon pairs detected in the states (+1, +1), (+1, −1), (−1, +1), and (−1, −1). This count is performed for all four polarizer configurations (Equations 1–4). This allows us to obtain the probabilities of obtaining these states.

In our simulation, these counts (Equations 5–8) are located in dedicated cells of the four “Exp” tabs of the simulation file. As a reminder, QM gives P(+1, +1) = P(−1, −1) = ½ cos²(a − b) and P(+1, −1) = P(−1, +1) = ½ sin²(a − b), where a and b are the two polarizer orientations.

In our simulation, these theoretical counts (Equations 9, 10) are located in dedicated cells of the four “Exp” tabs of the simulation file in order to compare our experimental results with the theory.

Next, for each of the four polarizer configurations, we obtain the value of the correlation term E(a, b) = P(+1, +1) + P(−1, −1) − P(+1, −1) − P(−1, +1) (Equation 11).

In our simulation, the experimental value of this term (Equation 11) and its theoretical value are located in dedicated cells of the four “Exp” tabs of the simulation file. The simulation values are also reported in the “Summary” tab.

Finally, we obtain the value of the Bell quantity S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) (Equation 12), for which any local realistic theory in the sense of Bell satisfies |S| ≤ 2.

In our simulation, this value (Equation 12) is found in the “Summary” tab of the simulation file. QM predicts S = 2√2 ≈ 2.83, which violates Bell’s inequalities.
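The correlation term (Equation 11) and the CHSH quantity (Equation 12) can be sketched directly from the four coincidence counts. The function names are illustrative, and the sign placement assumes b′ is the setting whose correlation is subtracted:

```python
def correlation(n_pp, n_pm, n_mp, n_mm):
    """E(a, b) from the four coincidence counts (Equation 11):
    n_pp = count of (+1, +1), n_pm = (+1, -1), n_mp = (-1, +1), n_mm = (-1, -1)."""
    total = n_pp + n_pm + n_mp + n_mm
    return (n_pp + n_mm - n_pm - n_mp) / total

def chsh_s(e_ab, e_ab2, e_a2b, e_a2b2):
    """S from the four configuration correlations (Equation 12).
    Any local realistic theory in the sense of Bell gives |S| <= 2."""
    return e_ab - e_ab2 + e_a2b + e_a2b2
```

For example, the classical vector model at the standard CHSH settings gives correlations 0.5, −0.5, 0.5, 0.5, hence S = 2 exactly, the classical bound.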

The detector’s efficiency is calculated by dividing the number of detections (that is, the photons not absorbed by the measuring device, i.e., the number of photons known and measurable by Alice or Bob) by the total number of emissions (the number of trials in the four “Exp” tabs of the simulation file). It should be noted that this is a theoretical efficiency, since Alice and Bob do not know the exact number of emitted photons: if a pair of emitted photons is absorbed by both of their measuring devices, this pair is, a priori, nonexistent for Alice and Bob; for them, no pair has been emitted. If this calculation were based on the number of detections identifiable by Alice and Bob, the required experimental efficiency would be even higher. The numbers of detections for Alice and for Bob are indicated in the four “Exp” tabs, and the detector efficiencies for Alice and Bob in the “Summary” tab of the simulation file. We also added a “Conditions Test” tab that allows us to verify that the formula indicating the photon’s state is error-free; we indicated the angles at which the state measurement changes. This formula is used in the columns dedicated to each polarizer in the four “Exp” tabs of the simulation file.

To summarize our experiment as if it were a real-world experiment: a pair of identical photons is sent to Alice’s polarizer and to Bob’s polarizer. Alice’s list of results is then compared to Bob’s list of results, giving the coincidence results. No modification or selection is made; there is no Eve (eavesdropper).

More precisely, in each of the four “Exp” tabs, the definition of the photon (our sector) for Alice is randomly assigned (principal axis and sector size), and this definition is then translated into the explicit angles of the sector’s extremities (lower and upper limits). Alice’s polarizer definition (the angle measured by the polarizer) follows, together with the results of the lower and upper limit measurements (i.e., the quadrant where each limit passes). These results are then translated into a result for the sector (+1 or −1 if it is entirely contained within a single quadrant, no data when it is absorbed because it spans two quadrants). Finally come the counts of each separate result (+1 and −1, the only results Alice is aware of; those absorbed do not exist in Alice’s eyes) and the calculated probabilities of obtaining +1 and −1 for Alice. Once again, in a real-world experiment, the absorbed photons are unknown to Alice. These probabilities must be close to 0.5 for the experiment to be relevant, because none of Alice’s known results are missing and they are equally distributed randomly.

For Bob, the definition of the photons (principal axis and sector size) is assigned the same values as Alice’s (i.e., the case of emission of pairs of identical photons) in the first file of the repository; a second file with anti-correlated photons is also proposed in the repository, giving the same results. As for Alice, the explicit angles of the sector’s extremities (lower and upper limits) are computed, Bob’s polarizer is defined, and the results of the lower and upper limit measurements (i.e., the quadrant where each limit passes) are translated into a result for the sector (+1 or −1 if it is entirely contained within a single quadrant, no data when it is absorbed because it spans two quadrants). Then come the counts of each separate result (+1 and −1, the only photons that exit Bob’s measuring apparatus) and the calculated probabilities of obtaining +1 and −1 for Bob. In a real-world experiment, the absorbed photons are unknown to Bob; they do not appear in his results list. These probabilities must be close to 0.5 for the experiment to be relevant, because none of Bob’s known results are missing and they are equally distributed randomly. The final columns contain the counts of coincident pairs and their probabilities in the experiment (together with the expected theoretical results).

5 Results of the simulations

All simulations are controlled from the “Summary” tab (no changes are necessary in the other tabs) of the simulation file (cf. Supplementary Data). The user-adjustable parameter is the photon size, i.e., the disk sector size Eva. We also added the possibility of modifying the angles of the polarizers, but by default, the values correspond to the four angles that maximize the value of S (Equations 1–4). These five cells, which can be modified by the experimenter, have a red background.

As the file stands, 60,000 random emissions of photon pairs are provided (this number can be increased by the reader by copying and pasting lines). These photon pairs are distributed across the four EPR polarizer configurations that maximize the violation of Bell’s inequalities (i.e., according to the four combinations of Alice’s and Bob’s orientations), resulting in 15,000 photon pairs being tested per configuration.
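The full protocol (identical pairs, four configurations, coincidences only on mutual detections, efficiency as detections over emissions) can be sketched end to end. This Python driver is our reading of the spreadsheet, not the spreadsheet itself; the polarizer settings 0°, 45°, 22.5°, 67.5° are the standard CHSH-maximizing choice and an assumption here, as are the ±45° quadrant boundaries:

```python
import random

def measure(photon, angle):
    """+1/-1 if the sector fits in one quadrant, None if absorbed."""
    theta, eva = photon
    lo = (theta - eva - angle + 45.0) % 180.0
    hi = (theta + eva - angle + 45.0) % 180.0
    if int(lo // 90.0) != int(hi // 90.0):
        return None
    return +1 if lo < 90.0 else -1

def run_experiment(eva, pairs_per_config=15000, seed=7):
    """Return (S, Alice's detector efficiency, Bob's detector efficiency)."""
    rng = random.Random(seed)
    # Assumed CHSH-maximizing settings: (a, b), (a, b'), (a', b), (a', b').
    configs = [(0.0, 22.5), (0.0, 67.5), (45.0, 22.5), (45.0, 67.5)]
    correlations, det_a, det_b, emitted = [], 0, 0, 0
    for a, b in configs:
        counts = {(+1, +1): 0, (+1, -1): 0, (-1, +1): 0, (-1, -1): 0}
        for _ in range(pairs_per_config):
            photon = (rng.uniform(0.0, 180.0), eva)   # identical pair
            ra, rb = measure(photon, a), measure(photon, b)
            emitted += 1
            det_a += ra is not None
            det_b += rb is not None
            if ra is not None and rb is not None:
                counts[(ra, rb)] += 1                 # coincidence
        coinc = sum(counts.values())
        correlations.append((counts[(1, 1)] + counts[(-1, -1)]
                             - counts[(1, -1)] - counts[(-1, 1)]) / coinc)
    s_value = correlations[0] - correlations[1] + correlations[2] + correlations[3]
    return s_value, det_a / emitted, det_b / emitted
```

With `eva = 0` this recovers the classical vector model (S ≈ 2 with 100% detection); increasing `eva` raises S while lowering the detection rate, which is the trade-off plotted in Figure 7.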

The curve in Figure 3 shows the values that S reaches as a function of the size parameter Eva of the emitted photons. The simulation file as provided offers a source that emits a pair of identical photons (of the same polarization), but the file can be modified so that the second photon is rotated by 90°, resulting in a source of a pair of perpendicularly polarized photons, as presented in the second simulation file (cf. Supplementary Data). The results remain unchanged.

FIGURE 3

This curve shows us that our model systematically violates Bell’s inequalities and is capable of reaching all possible values of S depending on the value of the sector size. According to this curve, a suitable value of the size parameter of the emitted photons yields, in Figure 4, entanglement results similar to those obtained in QM.

FIGURE 4

This result is not only similar to the expected statistics of QM but, more importantly, this experiment conforms to what is expected for a loophole-free EPR experiment, as the detection rate we achieved on the detectors exceeds the minimum value required [16].

It can also be noted that, by setting an appropriate size value, in Figure 5 we obtain the entanglement levels found in one of the best experimental loophole-free proofs to date [12], whose detection rate is lower than the one our simulation achieves. Here are our results in Figure 5; again, we obtain a very good detection rate.

FIGURE 5

They even obtain a larger value of S, which we also reproduce with a suitable sector size and a high detection rate (Figure 6).

FIGURE 6

6 About loopholes

The first thing to note is that our computer experiments do not have a loophole, at least not in the experimental sense. The reasons for the lack of detections in real experiments are numerous. Generally, it is attributed to material defects (difficulty in controlling emissions, polarizer imperfections, detector imperfections). In our experiment, most of these defects are absent.

The emissions of the photon pairs are perfectly known, both the characteristics of the emitted photons and the time of their emission.

Similarly, the detections are perfectly controlled; there is no background noise, no photon is detected if it was not emitted by the source, and no photon can pass through the detectors without being measured if it is detected (i.e., not absorbed). This complete knowledge of the emissions, and of the rules applied systematically and without error to the measuring apparatus and detectors, ensures that in our computer experiments, the strong hypothesis that “the sample of detected pairs is representative of the pairs emitted by the source” is naturally verified.

Regarding the measuring apparatus, there are instances of absorbed photons, but for “mathematical” reasons: because some photons straddle two quadrants of the disk (two potential states), their detection is indeterminate. In our computer experiments, these photons are effectively absorbed by the measuring apparatus. There are no imperfections in our measuring apparatus. On the contrary, all photons, without exception, are detected if they are entirely within one quadrant of the disk, and all photons, without exception, are not detected if they straddle two quadrants of the disk. However, it is important to understand that some information known in a simulation is not known in a real-world experiment. Thus, we have two scenarios for a pair of emitted photons. Either both measuring devices absorb their photon (bilateral absorption), and in this case the observers do not know a priori that a pair of photons has been emitted; it is as if these events did not exist in a real-world experiment. Or only one photon of the pair is absorbed, and in this case each observer individually cannot know that a pair of photons has been emitted; it is only by combining the results from Alice and Bob that we can deduce unilateral absorptions (we develop this important aspect further in the Discussion section). Therefore, in practical terms (for real-world EPR experiments), these unilateral absorptions can be misinterpreted as an imperfection in the measuring apparatus, in the detectors, or even in the emissions. Our study shows that these apparent defects can also be genuine, non-defective behaviors, inherent to the physics involved in these experiments, and therefore unavoidable even in an ideal experiment. This observation leads us to ask what the maximum detection level is that we can hope to achieve with our model. Figure 7 shows the values of S obtained as a function of the detection rates given by our simulations.
The lower curve represents the minimum detection rate required to obtain loophole-free violations of Bell’s inequalities, i.e., the already known inequality η ≥ 4/(S + 2) relating the efficiency η to the violation S [17].
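The mutual/unilateral/bilateral distinction described above can be tallied explicitly in a sketch. This is illustrative Python (the paper’s simulation is a spreadsheet); the ±45° quadrant boundaries and the function names are our assumptions:

```python
import random

def measure(photon, angle):
    """+1/-1 if the sector fits in one quadrant, None if absorbed."""
    theta, eva = photon
    lo = (theta - eva - angle + 45.0) % 180.0
    hi = (theta + eva - angle + 45.0) % 180.0
    if int(lo // 90.0) != int(hi // 90.0):
        return None
    return +1 if lo < 90.0 else -1

def absorption_census(eva, a=0.0, b=22.5, n=20000, seed=3):
    """Classify each emitted pair as mutual detection, unilateral
    absorption (only one side lost; deducible only by pooling Alice's
    and Bob's records) or bilateral absorption (the pair is invisible
    to both observers, as if it had never been emitted)."""
    rng = random.Random(seed)
    mutual = unilateral = bilateral = 0
    for _ in range(n):
        photon = (rng.uniform(0.0, 180.0), eva)
        lost = (measure(photon, a) is None) + (measure(photon, b) is None)
        if lost == 0:
            mutual += 1
        elif lost == 1:
            unilateral += 1
        else:
            bilateral += 1
    return mutual, unilateral, bilateral
```

For small sector sizes, bilateral absorptions only occur when the absorption bands of the two settings overlap; at zero size there are no absorptions at all, recovering the vector case.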

FIGURE 7

While previous studies about loopholes provide a lower limit for the detection rate (lower curve in Figure 7), our simulation, on the contrary, provides an upper limit for the detection rates that can be obtained for any violation of a given value of S (upper curve in Figure 7) within our model, i.e., while still complying with the “classical” physical principles despite values of S > 2. If our model is representative of EPR experiments, this means that the detection rates of EPR experiments should lie between these two curves. If detection rates above the upper curve are reached, this would mean that our model is not representative of the physics underlying these experiments. Our study thus provides a new objective for EPR experiments: not only reaching the minimum rate to constitute a genuine violation of Bell’s inequality, but also exceeding the maximum rate to invalidate this potential interpretation of QM. It would also be interesting to derive the upper limits revealed by our simulations in a more theoretical way (not just from computer experiments).

Finally, regarding loopholes, the locality (or communication) loophole obviously does not exist in our simulations. In conclusion, our simulations perform loophole-free EPR experiments.

7 Discussion

An interesting point to note is that the values of S for Bell’s inequality violation are always greater than, and more importantly different from, the classical case S = 2, except in the limiting case of zero size (Eva = 0), which corresponds to a classical model of vector polarization (no extension of the polarization region around the principal axis). Our model thus allows us to continuously move beyond the “linear” idealization (vector modeling) thanks to these extended objects (defined over an interval between −Eva and +Eva). By extension, the emissions of our sources can represent not only the polarization of a photon (the viewpoint adopted in this article), but also a spin, a momentum vector, or any other vectorial concept.

Our model also allows us to simulate Mach-Zehnder experiments, where photons pass through semi-reflective mirrors. In this case, the passage through such a mirror can be modeled in our framework by a duplication of the incident photon into an identical photon and a second photon undergoing a rotation. This allows us to reproduce the statistics of these experiments because, on one of the output channels, a phase difference will position the two photons of this channel in the same state but with opposite phases, resulting in destructive interference.

This modeling allows us to consider a preliminary interpretation of QM more compliant with the known principles of physics (realism and locality), but it seems to us that the results provided by our simulations are already a sufficiently important first point without immediately entering into this discussion, especially because our model requires some improvement to cover other characteristics of QM. Nevertheless, we can still indicate that in our entanglement experiments, the source emits a pair of identical photons, i.e., of the same polarization, corresponding to the superposition state (|HH⟩ + |VV⟩)/√2. In our case this superposition does not occur simultaneously, but over time; that is, the source emits either one polarization or the other, but in equal proportions over time. All this to say that it is perfectly possible to rotate the polarization of the second photon by 90° to simulate entanglement on pairs of photons with opposite polarizations, corresponding to the state (|HV⟩ + |VH⟩)/√2. This gives the same results, as can be seen in the second simulation file.

We can also discuss the third result, “absorption”, compared to “+1” and “−1”, which means that our idealization falls outside the scope of Bell’s theorem. This third result (absorption) appears to be different in nature from the other two results (+1 and −1). The latter two results concern detected photons. The measurement process then consists of two successive steps: detection, with the result “detected” or “not detected” (first step), followed by the assignment of the value “+1” or “−1” (second step). For the third result, there is only the first step, “not detected”, and no result is given for the second step. It is as if we stopped halfway through the measurement process. The photon exists outside the measurement process because it is absorbed, but it does not exist in the eyes of the observer because no measurement indicates its existence. From an experimental point of view (for Alice and Bob), the only knowable measurements are +1 and −1, which wrongly suggests that the condition of realism is satisfied, but this is not the case.

It is important to clarify what our result calls into question. It is known and proven that a local realistic model cannot violate the CHSH inequality. This seems to contradict our simulation, but it does not. It is the physical interpretation or definition of locality and realism that needs to be called into question. In our experiment, the principles of realism and locality are “preserved” in a physical sense, but realism in the sense of Bell’s inequalities is no longer preserved, because the idealization of physical entities as extended objects (non-linear objects) leads to the keystone of these results: such an object subject to measurement can be indeterminate, a third way that does not exist in a linear idealization. But this third way cannot be known to the experimenter, and it is this ability to detect non-detections that is challenged by our idealization. This extended object is not localized in the same way as in the classical approach, and its measurement can mask its reality (by absorption). These characteristics resonate well with the concepts of quantum mechanics (quanta, measurement changing the system).

In concrete terms, from a mathematical point of view, these extended objects generate a correlation whose dependence between observers is no longer classical. The measurement of individual probabilities remains independent of the observers, since Alice’s results do not depend on Bob’s results, and vice versa. But let us now look at the pooling of Alice’s and Bob’s results. If we measure vectors (non-extended objects, as with Eva = 0 in our simulation), when Alice makes a measurement along a given direction, she always obtains a result of +1 or −1, and Bob, regardless of the direction chosen, also always obtains a measurement of +1 or −1. In an ideal experiment, this model would therefore always produce a mutual result. However, as soon as an extended object is measured, when Alice takes a measurement along a given direction, she is not guaranteed to obtain a result, and neither is Bob. Therefore, even if Alice or Bob obtains a result, it is not certain that a mutual result will occur. Thus, a kind of selection occurs when the results are pooled; this is why our simulation does not contradict the principle of violation of the CHSH inequality. But what is most important to understand in our simulation is, first, that this selection is independent of the observers; second, that Alice or Bob (or any other observer, Eve) has no control over it (this is not a post-selection, which would not be a valid operation in Bell tests), and it is even impossible for the experimenter to avoid or reduce these non-detections (they are not loopholes); and third, that this selection is intrinsic to the system being studied and is not random but depends on the extended object being studied (in our case, the sector size). The sorting is not done by Eve but by absorption, and this is where the paradigm shift occurs. Therefore, non-detections are not necessarily loopholes but, on the contrary, structural characteristics of the system being studied.
Even with a real-world experiment where photon emissions are known with certainty and which has efficient instruments, there would inevitably be non-detections (which would therefore not be loopholes).

It can be noted in passing that, experimentally, absorption in the measuring device is information that disappears from the statistics (in our simulation, it means “straddling two quadrants”); it is a result that is no longer measurable. But our study seems to indicate that the count of unilateral detections could be relevant information extractable from experiments (within the uncertainty of the loopholes). Its value is certainly linked to the value of S and would allow us to deduce information about the characteristics of the system being studied, in the same way that a value of S and a percentage of non-detections (unilateral detections) are determined by our parameter Eva.

Our simulation illustrates the tipping point in the interpretation of the principles of locality and realism. In the vectorial case, the absence of a result is a purely experimental defect (an ideal experiment would not produce any), and the violation of the CHSH inequality can then only be explained by the violation of one of the principles of locality or realism. In our “sectorial” case, the absence of a result is intrinsic to the system and, on the contrary, cannot be avoided (an ideal experiment would always produce some), and the violation of the CHSH inequality precisely indicates that the system is not vectorial but locally extended. The principles of locality and realism thus shift from a “classical point of view” (infinitely fast transmission, regardless of distance, with properties defined independently of observation) to a “quantum interpretable point of view” (no transmission at all, but properties dependent on observation, because the extended object may or may not be observable depending on the choice of measurement).

Finally, we demonstrate that there is a physically acceptable interpretation of EPR experiments that falls outside the framework of Bell’s theorem’s hypotheses (which is why our simulation does not contradict Bell’s theorem), but which, from an experimental point of view, seems to yield only two possible outcomes, since the third outcome is the disappearance of the emitted object and hence the absence of measurement. The problem is that EPR experiments require a measurement to determine the existence of an object. In these experiments, there is no equivalence between measurement and the existence of the object. A measurement confirms that an object exists, but the measuring device is liable to absorb the object; thus, a measurement implies a measurable object, but a measurable object does not necessarily imply a measurement. Our model is an interpretation of the principles of realism and locality that does not satisfy Bell’s hypotheses and that goes beyond the common interpretation. In particular, with our idealization, if the photon’s polarization is idealized by a vector (Eva = 0), we return to the common interpretation of realism (only two outcomes), the hypothesis of Bell’s inequalities is then satisfied, and Bell’s inequality is not violated. The extended object alters the meaning of these principles. This can also be interpreted as a hidden variable. But the key point is that our simulation demonstrates that current real-world EPR experiments could very well be interpreted (regardless of Bell’s theorem) like our idealization. QM would then be compliant with the principles of locality and realism. But, just as with Bell’s theorem, the crucial point is to achieve sufficient detection efficiency to invalidate this interpretation.
Another noteworthy point can be added: if we assume a physical process that avoids absorption and whereby photons straddling two sectors are randomly redeployed in one of the two overlapping sectors, the entanglement reverts to a classical correlation, as in the case Eva = 0, since we are once again within the scope of the hypotheses of Bell's theorem (with only two possible outcomes). The question posed by our work is ultimately whether the detection defects are indeed loopholes or legitimate responses of the measured system.
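
The redeployment scenario above can also be sketched numerically. As before, the quadrant rule and the half-width parameter `w` are our illustrative assumptions; the redeployment rule (a straddling sector is pushed at random into one of the two quadrants it overlaps, so every trial yields a result) is our reading of the text, not the paper's exact procedure.

```python
import random

def forced_outcome(lam, angle, w, rng):
    """Sector measurement in which a sector straddling a quadrant boundary
    is randomly redeployed into one of the two quadrants it overlaps, so
    every trial yields +1 or -1 (Bell's two-outcome hypothesis holds)."""
    u = (lam - angle) % 180.0
    if u > 90.0:
        u -= 180.0                       # fold the axis angle to (-90, 90]
    if abs(abs(u) - 45.0) < w:           # straddling: redeploy at random
        return rng.choice((1, -1))
    return 1 if abs(u) < 45.0 else -1

def chsh_redeployed(w, n=50_000, seed=2):
    """CHSH value when no event is ever discarded."""
    rng = random.Random(seed)
    E = []
    for a, b in [(0.0, 22.5), (0.0, 67.5), (45.0, 22.5), (45.0, 67.5)]:
        s = 0
        for _ in range(n):
            lam = rng.uniform(0.0, 180.0)        # shared polarization axis
            s += forced_outcome(lam, a, w, rng) * forced_outcome(lam, b, w, rng)
        E.append(s / n)
    return E[0] - E[1] + E[2] + E[3]

print(chsh_redeployed(20.0))   # stays at or below the classical bound of 2
```

With every trial forced to produce one of two outcomes, the model falls back inside the hypotheses of Bell's theorem, and the simulated S no longer exceeds 2.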

From a theoretical point of view, our study opens a potential avenue for interpreting QM in terms of extended objects, because several “strange” features can already take on a common meaning through this simple simulation (quanta, the fact that measuring changes the system, the reality and locality principles).

It can be noted that our experiment is scale-invariant. Indeed, our objects are defined on angular intervals, so they do not constrain the experiment to a particular scale. It can be carried out both at the traditional scale of QM and at our macroscopic scale. For example, it suffices to draw a sector on both faces of a disk (either in the same position or at ). The procedure is then to rotate this disk randomly. Alice and Bob, positioned on opposite sides of the rotating disk, each possess a measurement disk (divided into four quadrants with a principal axis). They independently choose an orientation for this measurement disk (according to the EPR configurations). When the rotating disk stops, they measure the position of the disk sector with their measurement disks. By iterating this procedure, Alice and Bob reconstruct our simulation and will therefore obtain statistics that violate Bell’s inequalities.

If this interpretation proves correct, we can also understand how the transition from the quantum world to the macroscopic world occurs. There is, in fact, a fundamental difference from EPR experiments that makes this macroscopic experiment not truly quantum: the object being studied is detectable independently of the measurement one seeks to perform. We gain access to the information that the object has been emitted and, therefore, that it has disappeared. For this macroscopic experiment to become truly quantum and yield violations of Bell’s inequalities, an additional condition is required: Alice and Bob must be blind to the object being measured. In other words, with our interpretation, an experiment becomes quantum as soon as the object to be measured is detectable only through that measurement.
That is to say, when “detectable” (in the sense that we can say whether the object is detected or not) and “measurable” (in the sense that we can say whether the object’s property is measured or not) are no longer equivalent. In the quantum framework, if there is no measurement, we cannot assert whether or not there was an object to measure. This non-invertibility could explain the need for non-linear modeling (using extended, non-vector objects).

8 Conclusion

The modeling proposed in this article demonstrates that experiments can be defined that systematically violate Bell’s inequalities within a context fully interpretable using the known physical principles of realism and locality. Contrary to what one might think, our study does not call Bell’s inequality theorem into question; rather, it is the interpretation, or definition, of locality and realism that needs to be questioned. QM could have a physical interpretation with principles of realism and locality consistent with current physics. For example, the Bell violation in our case seems to reject linearity in object modeling rather than realism, which could be one avenue for interpreting QM, though this is not guaranteed. The interpretation of the results of EPR experiments with regard to Bell’s inequalities may not be so obvious.

It is also shown that all possible theoretical values of these inequalities can be obtained by adjusting the sector size of the emitted photons. More strikingly, our modeling is incapable of generating a classical correlation except in the limiting case of zero size, which then corresponds to a classical vector model (no extension of the region around the principal vector axis).
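
The dependence of the CHSH value on sector size can be sketched with the same assumed toy model (half-width `w`, quadrant sign rule, coincidence post-selection); these modeling choices are ours, used here only to illustrate how a single size parameter can sweep the whole range of values.

```python
import random

def chsh_for_width(w, n=40_000, seed=3):
    """CHSH value on post-selected coincidences for a sector half-width
    w (degrees); w = 0 recovers the classical vector model with S = 2."""
    rng = random.Random(seed)
    E = []
    for a, b in [(0.0, 22.5), (0.0, 67.5), (45.0, 22.5), (45.0, 67.5)]:
        s = m = 0
        for _ in range(n):
            lam = rng.uniform(0.0, 180.0)        # shared polarization axis
            out = []
            for ang in (a, b):
                u = (lam - ang) % 180.0
                u = u - 180.0 if u > 90.0 else u
                out.append(None if abs(abs(u) - 45.0) < w
                           else (1 if abs(u) < 45.0 else -1))
            if None not in out:                  # keep coincidences only
                s += out[0] * out[1]
                m += 1
        E.append(s / m)
    return E[0] - E[1] + E[2] + E[3]

# S rises from 2 (zero size) to 4 as the half-width grows
for w in (0.0, 2.5, 5.0, 7.5, 10.0, 11.25):
    print(f"w = {w:5.2f} deg  ->  S = {chsh_for_width(w):.3f}")
```

In this sketch S follows 360/(180 - 8w) for w up to 11.25 degrees, i.e., it grows continuously from the classical value 2 to the algebraic maximum 4, with no intermediate classical regime, in line with the observation above.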

Our study also shows that loopholes could take on a different meaning with our approach. The loopholes in real-world experiments might not be experimental defects but rather an intrinsic physical reality of the systems studied. They participate in defining the intrinsic nature of the object that we wish to measure: a property that is incapable of yielding a measurement result, which is why EPR experiments seem to verify the condition of Bell’s theorem hypothesis (an actual result, +1 or −1). But this measurement deficiency is not like a random draw; it corresponds to a precise property (in this case, a set of orientations that correspond to an overlap) that performs a selective sorting with a precise physical meaning. It is the measured object that performs this sorting. It is the system that, by its very nature, imposes this result, which objectively informs us about the system. This internal selection by the system is relevant, unbiased information and cannot be ignored. With our interpretation, the EPR experiments, by obtaining S > 2, would indicate that the reality principle is thwarted not by a strange behavior of physical objects but by the existence of a third property, a kind of third pseudo-result that influences the measurement but whose value lies precisely in not yielding a result. Bell’s theorem is not disproven; it is the loopholes of the EPR experiments that hold the key to physical information, to an inaccessible physical reality that would have essential significance. Given our idealization, EPR experiments cannot avoid this sorting performed by the measured system; therefore, it is impossible for these experiments to satisfy the hypothesis of having only two results (even if, explicitly, only the two results “+1” and “−1” are accessible to our knowledge). It then becomes crucial to take the unilateral results into account.

Among all these simulations, we were able to define experiments whose results are equivalent to the best experimental proof to date of quantum entanglement [12], with a value of and a detection rate of at best . Even better, we also managed to obtain the QM value of 2√2 with a detection rate of , higher than what is needed to guarantee a loophole-free EPR experiment. In other words, to reject this realistic and local interpretation of our simulations, real EPR experiments must achieve detection rates higher than those obtained to date. Conversely, if these limits are confirmed, this would be an extremely strong indication for opening the way to an interpretation of QM. To begin a potential interpretation of QM, our modeling needs to be improved, but some aspects of QM can already take on a more common meaning through this solution.

Here is how we can summarize our study. If we restrict ourselves to a vector (linear) model, then an experiment consistently yielding S > 2 implies strange, non-classical physical behavior (contradicting the realism or locality assumptions of the Bell inequalities), and the experiment requires a first minimum efficiency. If we now allow an extended-object (non-linear) model, our EPR numerical experiment demonstrates that an experiment consistently yielding S > 2 at that first minimum efficiency no longer necessarily implies strange, non-classical physical behavior (limit speed, reality of the object beyond all measurement, etc.). To again establish strange, non-classical physical behavior, the experiment requires a second minimum efficiency, greater than that of the Bell inequalities. In short, our simulation shows, on the one hand, that we can have physical behavior consistent with known physics even if S > 2, and on the other hand, that to confirm the strange physical behavior of QM, EPR experiments require efficiencies greater than that required by the Bell inequalities.

Statements

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

SL: Conceptualization, Validation, Investigation, Methodology, Software, Writing – original draft.

Funding

The author(s) declared that financial support was received for this work and/or its publication. Open access funding was provided by the Swiss Federal Institute of Technology in Lausanne (EPFL).

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fphy.2026.1760244/full#supplementary-material

SUPPLEMENTARY FILE S1

Simulation of an EPR experiment with pairs of correlated photons.

SUPPLEMENTARY FILE S2

Simulation of an EPR experiment with pairs of anti-correlated photons.

References

1. Freedman SJ, Clauser JF. Experimental test of local hidden-variable theories. Phys Rev Lett (1972) 28:938–41. doi: 10.1103/PhysRevLett.28.938

2. Aspect A, Dalibard J, Roger G. Experimental test of Bell’s inequalities using time-varying analyzers. Phys Rev Lett (1982) 49:1804–7. doi: 10.1103/PhysRevLett.49.1804

3. Weihs G, Jennewein T, Simon C, Weinfurter H, Zeilinger A. Violation of Bell’s inequality under strict Einstein locality conditions. Phys Rev Lett (1998) 81:5039–43. doi: 10.1103/PhysRevLett.81.5039

4. Rowe MA, Kielpinski D, Meyer V, Sackett CA, Itano WM, Monroe C, et al. Experimental violation of a Bell’s inequality with efficient detection. Nature (2001) 409:791–4. doi: 10.1038/35057215

5. Matsukevich DN, Maunz P, Moehring DL, Olmschenk S, Monroe C. Bell inequality violation with two remote atomic qubits. Phys Rev Lett (2008) 100:150404. doi: 10.1103/PhysRevLett.100.150404

6. Ansmann M, Wang H, Bialczak RC, Hofheinz M, Lucero E, Neeley M, et al. Violation of Bell’s inequality in Josephson phase qubits. Nature (2009) 461:504–6. doi: 10.1038/nature08363

7. Scheidl T, Ursin R, Kofler J, Ramelow S, Ma X-S, Herbst T, et al. Violation of local realism with freedom of choice. Proc Natl Acad Sci (2010) 107:19708–13. doi: 10.1073/pnas.1002780107

8. Hofmann J, Krug M, Ortegel N, Gérard L, Weber M, Rosenfeld W, et al. Heralded entanglement between widely separated atoms. Science (2012) 337:72–5. doi: 10.1126/science.1221856

9. Giustina M, Mech A, Ramelow S, Wittmann B, Kofler J, Beyer J, et al. Bell violation using entangled photons without the fair-sampling assumption. Nature (2013) 497:227–30. doi: 10.1038/nature12012

10. Christensen BG, McCusker KT, Altepeter JB, Calkins B, Gerrits T, Lita AE, et al. Detection-loophole-free test of quantum nonlocality, and applications. Phys Rev Lett (2013) 111:130406. doi: 10.1103/PhysRevLett.111.130406

11. Brunner N, Cavalcanti D, Pironio S, Scarani V, Wehner S. Bell nonlocality. Rev Mod Phys (2014) 86:419–78. doi: 10.1103/RevModPhys.86.419

12. Hensen B, Bernien H, Dréau AE, Reiserer A, Kalb N, Blok MS, et al. Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. Nature (2015) 526:682–6. doi: 10.1038/nature15759

13. Clauser JF, Horne MA, Shimony A, Holt RA. Proposed experiment to test local hidden-variable theories. Phys Rev Lett (1969) 23:880–4. doi: 10.1103/PhysRevLett.23.880

14. Einstein A, Podolsky B, Rosen N. Can quantum-mechanical description of physical reality be considered complete? Phys Rev (1935) 47:777–80. doi: 10.1103/PhysRev.47.777

15. Bell JS. On the Einstein Podolsky Rosen paradox. Phys Physique Fizika (1964) 1:195–200. doi: 10.1103/PhysicsPhysiqueFizika.1.195

16. Eberhard PH. Background level and counter efficiencies required for a loophole-free Einstein-Podolsky-Rosen experiment. Phys Rev A (1993) 47:R747–50. doi: 10.1103/PhysRevA.47.R747

17. Larsson J-A. Loopholes in Bell inequality tests of local realism. J Phys A: Math Theor (2014) 47:424003. doi: 10.1088/1751-8113/47/42/424003

Keywords

CHSH-bell inequalities violation, EPR experiments, interpretation of quantum mechanics, principle of locality, principle of realism

Citation

Le Corre S (2026) Loophole-free Bell inequality violation experiments verifying the realism and locality principles. Front. Phys. 14:1760244. doi: 10.3389/fphy.2026.1760244

Received

03 December 2025

Revised

15 February 2026

Accepted

24 February 2026

Published

18 March 2026

Volume

14 - 2026

Edited by

Lev Shchur, National Research University Higher School of Economics, Russia

Reviewed by

Arkady Satanin, HSE University, Russia

N. D. Hari Dass, Institute of Mathematical Sciences, India

Copyright

*Correspondence: Stephane Le Corre,
