
ORIGINAL RESEARCH article

Front. Neurosci., 15 January 2026

Sec. Neuromorphic Engineering

Volume 19 - 2025 | https://doi.org/10.3389/fnins.2025.1658490

Overcoming quadratic hardware scaling for a fully connected digital oscillatory neural network

  • NanoComputing Research Lab, Integrated Circuits Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands

Computing with coupled oscillators, or oscillatory neural networks (ONNs), has recently attracted considerable interest due to its potential for massive parallelism and energy-efficient computing. To date, however, ONNs have primarily been explored analytically or through analog circuit implementations. This paper shifts the focus to digital implementations of ONNs, examining various design architectures. We first report on an existing digital ONN design based on a recurrent architecture. The major challenge in scaling such recurrent architectures is the quadratic increase in coupling hardware with the network size. To overcome this challenge, we introduce a novel hybrid architecture that balances serialization and parallelism in the coupling elements and exhibits near-linear hardware scaling, with an order of about 1.2 in the network size. Furthermore, we evaluate the benefits and costs of these digital ONN architectures in terms of time to solution and resource usage on field programmable gate array (FPGA) emulation. The proposed hybrid architecture allows for a 10.5× increase in the number of oscillators on a Zynq-7020 FPGA board, using 5 bits to represent the coupling weights and 4 bits to represent the oscillator phase. The near-linear scaling is a major step toward implementing large-scale ONN architectures. To the best of our knowledge, this work presents the largest fully connected digital ONN architecture implemented thus far, with a total of 506 fully connected oscillators.

1 Introduction

With growing computational demands and the rise of AI, the power consumption of computing systems is increasing significantly, and this growth is becoming unsustainable, particularly with the widespread deployment of smart systems. One of the main reasons for this trend is the von Neumann computing paradigm, whose inherent separation of memory and processing units leads to the memory and power wall bottlenecks. To overcome these limitations, significant efforts in recent years have focused on neuromorphic computing architectures, such as the Oscillatory Neural Network (ONN) (Todri-Sanial et al., 2024; Csaba and Porod, 2020), illustrated in Figure 1, which originates from Hopfield neural networks (HNNs) (Hopfield, 1982; Ramsauer et al., 2021). ONNs are energy-minimizing networks that inherently perform associative memory tasks, such as pattern recognition (Hoppensteadt and Izhikevich, 2000; Abernot et al., 2021, 2023; Delacour et al., 2021; Guo et al., 2025). ONNs have also been adapted for image classification and other AI tasks, with promising performance (Abernot and Todri-Sanial, 2023; Sabo and Todri-Sanial, 2024; Cai et al., 2025; Miyato et al., 2025). Additionally, the energy minimization of coupled oscillator networks can be linked to the Ising model. Applying this model to ONNs has garnered significant interest in recent years for designing oscillatory Ising machines, which can be used to solve various complex optimization problems, such as max-cut, graph coloring, and more (Vaidya et al., 2022; Delacour et al., 2023; Wang and Roychowdhury, 2019; Wang et al., 2021; Bashar and Shukla, 2023; Bashar et al., 2024; Sundara Raman et al., 2024; Cılasun et al., 2024; Nikhar et al., 2024; Bashar et al., 2021; Li et al., 2025; Cılasun et al., 2025; Bardella et al., 2024).
Since the number of oscillators required to solve a certain problem is determined by the dataset or the exact problem to be embedded, larger problems motivate the need for larger ONNs. For example, in associative memory tasks, each oscillator is typically used to represent one pixel of a pattern. Sparse network topologies such as nearest-neighbor or King's graphs typically allow a larger number of nodes, but they do not support the embedding of all problems and can require extra problem-embedding steps that are not needed for fully connected architectures.


Figure 1. Illustration of oscillatory neural network for pattern retrieval. Pattern retrieval process where each pixel in the pattern is represented with an oscillator. The coupling weights in the network are determined using a learning rule. A corrupted pattern is injected into a network as initial condition. The oscillator phases naturally evolve in parallel to the memorized pattern.

Typically, ONNs are designed in the analog domain to utilize the complex dynamics of coupled oscillators. However, recently, the first digital ONN implementation was reported (Abernot et al., 2021; Abernot and Aida, 2023; Abernot and Todri-Sanial, 2023), featuring a recurrent ONN architecture that represents the dynamics of digital square-wave oscillators. This recurrent architecture has been implemented in Field Programmable Gate Arrays (FPGA) and demonstrated tasks such as pattern retrieval and edge detection on ONNs. While ONNs have been successfully demonstrated on FPGA for their potential in edge computing, a significant limitation is the quadratic increase in the number of coupling elements as the network size grows. In this work, for the first time, we propose a novel ONN computing architecture that is not recurrent but a hybrid architecture that enables near-linear network scalability instead of quadratic scalability.

The rest of this paper is structured as follows. Sections 2.1-2.3 provide background on the computing principles of ONNs, state-of-the-art ONN implementations and architectures, and the existing recurrent ONN architecture. Section 2.4 details the proposed hybrid ONN architecture and its implementation. Sections 2.5-2.7 explain the methodology behind the architecture comparison, followed by the comparison between the recurrent and hybrid ONN architectures in Section 3. The discussion is presented in Section 4. Finally, Section 5 concludes the paper.

2 Materials and methods

2.1 Oscillatory neural networks

Oscillatory Neural Networks (ONNs) are fully connected networks of coupled oscillators based on Hopfield neural networks (Hopfield, 1982; Ramsauer et al., 2021) and the Ising model (Ising, 1925). The network minimizes the Hamiltonian given by

H = -\sum_{i,j} J_{ij}\,\sigma_i \sigma_j - \mu \sum_i h_i \sigma_i,    (1)

where J_{ij} is the coupling value between spins σ_i and σ_j, which are represented by the oscillators, μ is the magnetic moment, and h_i is the external magnetic field acting on spin σ_i. The system naturally evolves to a configuration of spins σ_i that minimizes H. The dynamics of an oscillatory neural network can also be described by those of a network of phase-locked-loop coupled oscillators, given by (Hoppensteadt and Izhikevich, 2000)

\dot{\theta}_i = \omega + V(\theta_i) \sum_{j=1}^{n} W_{ij}\, V\!\left(\theta_j - \frac{\pi}{2}\right),    (2)

where ω is the natural frequency of the oscillator, V(θ) is the waveform function, and W_{ij} is the coupling strength from oscillator j to oscillator i. In the case of a digital oscillator, V(θ) is a square-wave function. Similar to the Ising model, the oscillator phases naturally evolve to a state in which the energy of the network is minimized.
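To make the dynamics of Equation 2 concrete, the following sketch numerically integrates the phase equation for two coupled square-wave oscillators using forward Euler. The coupling strength, step size, and iteration count are illustrative assumptions, not parameters from this work; with positive coupling, the two phases lock in-phase.

```python
import numpy as np

def V(theta):
    # Square-wave waveform: +1 during the first half period, -1 during the second.
    return np.where(np.mod(theta, 2 * np.pi) < np.pi, 1.0, -1.0)

def simulate(W, theta0, omega=1.0, dt=0.01, steps=20000):
    # Forward-Euler integration of Equation 2 for n coupled oscillators.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        coupling = V(theta) * (W @ V(theta - np.pi / 2))
        theta = theta + dt * (omega + coupling)
    return np.mod(theta, 2 * np.pi)

# Two oscillators with an illustrative positive coupling of 0.2.
W = np.array([[0.0, 0.2],
              [0.2, 0.0]])
final = simulate(W, [0.3, 2.0])
```

With positive coupling the phase difference decays toward zero (in-phase locking), mirroring the energy-minimizing behavior described above.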

A common application for ONNs is pattern retrieval using associative memory, where each oscillator represents one pixel in the pattern. Here, a dataset of patterns is embedded in the coupling weights of the network using a learning rule, for example the Diederich-Opper I learning rule (Diederich and Opper, 1987). After training the network, a corrupted pattern can be set as the initial condition for the phases of the oscillators. The network then naturally evolves to the closest learned pattern, reaching an energy minimum. By measuring the final steady-state phases of the oscillators relative to each other, the retrieved pattern can be determined. Figure 1 shows this process. By leveraging ONNs as oscillatory Ising machines, many other applications and algorithms are possible, including but not limited to: max-cut or max-k-cut (Bashar et al., 2024), traveling salesperson and image segmentation (Sundara Raman et al., 2024), and 3SAT or MaxSAT, also known as boolean satisfiability (Cılasun et al., 2024; Nikhar et al., 2024; Bashar et al., 2021).
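The retrieval loop described above can be sketched in a few lines of NumPy. This is a simplified software analog, not the hardware architecture: it uses a Hebbian learning rule as a stand-in for Diederich-Opper I, and binary ±1 states (0° and 180° phases) with a sign-based update in place of continuous phase evolution.

```python
import numpy as np

def hebbian_weights(patterns):
    # Hebbian rule as a simple stand-in for the Diederich-Opper I rule.
    X = np.array(patterns, dtype=float)   # rows: patterns of +/-1 pixels
    W = X.T @ X / X.shape[1]
    np.fill_diagonal(W, 0.0)              # no self-coupling in this sketch
    return W

def retrieve(W, state, steps=10):
    # Discrete sign-update stand-in for the continuous phase evolution.
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                   # break ties deterministically
    return s

patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, 1, -1, -1, -1]]
W = hebbian_weights(patterns)
corrupted = [1, -1, 1, -1, 1, 1]          # first pattern with one flipped pixel
out = retrieve(W, corrupted)              # recovers the first pattern
```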

The approximate scaling in the number of network elements for a given number of oscillators is given in Table 1. In a fully connected architecture, each oscillator is connected to every other oscillator and to itself (self-coupling). Hence, for N oscillators each oscillator has N connections and there are N² connections in total. Each connection is represented in hardware by one coupling element and one weight memory value, so the numbers of coupling elements and memory cells are both N². The number of memory cells cannot be reduced for a given number of oscillators, because each weight value has to be stored in a memory cell. One can cut the number of memory cells in half by assuming symmetric coupling in the network, but the scaling remains of order N² (N²/2, to be precise). The architectures in this work allow for asymmetric coupling, so the number of memory cells is N². The hardware resource usage of the individual oscillators can also be optimized, but since the coupling elements scale quadratically, there is a higher return on investment in optimizing the coupling elements than the oscillators themselves. The coupling elements can be optimized in two ways. One is to optimize each individual coupling element to reduce the hardware resource usage per element; however, this does not reduce the total scaling below N². The other is to share hardware resources between multiple coupling elements, which potentially allows for a drastic reduction in the total number of hardware elements and for sub-quadratic scaling. In this work, the latter approach is explored.
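As a worked example of the counts in Table 1, the hypothetical helper below tallies the elements of a fully connected, asymmetrically coupled network with self-coupling:

```python
def network_element_counts(N):
    # Fully connected with self-coupling and asymmetric weights (Table 1):
    # every oscillator connects to all N oscillators, including itself.
    return {
        "oscillators": N,
        "connections": N * N,
        "coupling_elements": N * N,  # recurrent design: one per connection
        "memory_cells": N * N,       # one stored weight per connection
    }

counts = network_element_counts(506)  # largest network in this work
```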


Table 1. Order of number of network elements for N oscillators.

2.2 State-of-the-art oscillatory neural networks and oscillatory Ising machines in hardware

Table 2 gives an overview of the state of the art: several digital ONNs and oscillatory Ising machines, along with a few analog oscillator networks. As can be seen from Table 2, there is a large variety of architectures with different numbers of nodes, numbers of connections, and network topologies. A trade-off between the number of nodes and the network topology can be observed. Architectures using sparse networks (Nikhar et al., 2024; Liu et al., 2024; Moy et al., 2022; Wang and Roychowdhury, 2019; Wang et al., 2021; Neyaz et al., 2025) typically have a higher number of nodes than architectures with all-to-all networks (Abernot et al., 2021, 2023; Abernot and Todri-Sanial, 2023; Jackson et al., 2018; Luhulima et al., 2023; Vaidya et al., 2022; Cılasun et al., 2025), since sparse networks require fewer connections than a fully connected topology. Although the hybrid architecture introduced in this work, explained in later sections, is not competitive in the number of nodes, its number of connections is competitive and exceeded only by a digital stochastic differential equation (SDE) solver (Bashar et al., 2024), which operates on a different paradigm. Note that the network topology is relevant for ease of problem embedding. Sparse network topologies such as nearest-neighbor or King's graphs do not support the embedding of all problems and can require an additional computational step to map a problem. An example is the embedding of a graph max-cut onto a sparse network topology: if one or more nodes of the graph have a degree greater than the number of connections supported by the network topology, the graph cannot be embedded without a strategy like node merging, where multiple hardware nodes represent one graph node. Hence, we choose to optimize an existing all-to-all recurrent ONN architecture, which allows for easy problem embedding.


Table 2. Comparison of oscillator-based architectures.

2.3 Recurrent ONN architecture

The architecture optimizations discussed in this work build on the recurrent digital ONN designed and used in previous works (Abernot et al., 2021, 2023; Abernot and Todri-Sanial, 2023; Luhulima et al., 2023). To establish a baseline, this architecture is briefly reintroduced and explained.

Figure 2 shows the global architecture overview for the digital ONN. The network is divided into two main parts: On the left (in blue), the coupling elements and on the right (in green), the individual oscillators.


Figure 2. Global architecture overview for the digital ONN. The network is divided into two parts. One part for the coupling elements and one part for the oscillators. The coupling elements contain a N×N weight matrix for the coupling weights and N arithmetic circuits, where N is the number of oscillators in the architecture.

The coupling elements contain an N×N weight matrix, shown on the left side (in red), where N is the number of oscillators, as well as an arithmetic circuit for each oscillator, shown on the right side (in yellow) of the coupling elements block, that computes the weighted sum of the output signals of the other oscillators, as in Figure 2. The sign of the weighted sum generates a reference signal: a positive sum gives a high reference signal, a negative sum a low one. If the weighted sum is exactly zero, the reference signal matches the current amplitude of the respective oscillator. An edge detector and a counter measure the phase difference between the reference signal and the signal generated by the oscillator. This phase difference is added to the phase of the respective oscillator to align it with the reference signal.

Each oscillator is implemented using a circular-shift-register-based phase-controlled oscillator, as shown in Figure 3. By initializing the first half of the registers with the value 1 and the second half with the value 0, a square-wave oscillator is created. The clock that drives the circular shift register can be considered to represent the natural frequency of the oscillator. After 2^(n_phase bits) clock cycles the oscillator has completed one period, where n_phase bits is the number of bits used to represent the phase of the oscillator. Hence, the period of the oscillator is given by

T_{\text{oscillator}} = 2^{n_{\text{phase bits}}}\, T_{\text{clock}}.    (3)

The number of phase bits directly defines the number of circular shift register positions by

n_{\text{registers}} = 2^{n_{\text{phase bits}}}.    (4)

Additionally, the number of phase bits also defines the size of the phase step by

\text{size}_{\text{phase step}} = \frac{360^\circ}{n_{\text{registers}}} = \frac{360^\circ}{2^{n_{\text{phase bits}}}}.    (5)

For example, if n_phase bits = 4, there are 16 registers in the loop, the period of the oscillator is 16 times the clock period, and the step size is 360°/16 = 22.5°. For clarity, Table 3 shows an example of the evolution of the register states over time for an oscillator with n_phase bits = 2. Each row is an instant in time, measured in clock cycles, while each column is the value in one register. At each time step the values in the shift registers are shifted to the left. After 4 time steps the initial condition is reached again, and one oscillation period has passed. Each column is a version of the first column phase-shifted by one additional clock cycle. Hence, by selecting a specific register using a multiplexer, a phase shift can be created for the oscillator.
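The shift-register oscillator of Figure 3 and Table 3 can be mimicked in software. The sketch below is a behavioral model, not the Verilog: it builds the register ring for n_phase bits = 2, rotates it once per "clock cycle," and taps one register through the multiplexer.

```python
from collections import deque

def make_oscillator(n_phase_bits):
    n = 2 ** n_phase_bits
    # First half 1's, second half 0's -> square wave (Figure 3).
    return deque([1] * (n // 2) + [0] * (n // 2))

def step(regs):
    # Circular left shift: one clock cycle of the shift register.
    regs.rotate(-1)

def output(regs, phase):
    # Multiplexer: selecting register `phase` applies a phase shift.
    return regs[phase]

regs = make_oscillator(2)   # 4 registers: [1, 1, 0, 0]
trace = []
for _ in range(4):          # one full period = 2^n_phase_bits cycles
    trace.append(output(regs, 0))
    step(regs)
```

After four cycles the register ring returns to its initial state, and the sampled trace is one square-wave period.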


Figure 3. Phase-controlled oscillator architecture. A circular shift register is multiplexed to create a phase-controlled oscillator. The first half of the registers is initialized with 1's and the second half with 0's.


Table 3. Example of circular shift register state over time for n_phase bits = 2.

Figure 4a shows a simplified view of the parallel, or combinatorial, implementation of the arithmetic circuits used in the recurrent architecture. On the left (in green) the oscillators are shown. The middle section (in red) indicates the multiplication by the weights. However, no actual multiplication is computed, since the oscillator amplitude can take only two values, 1 and 0, which represent positive and negative amplitude, respectively. Depending on the amplitude of the respective oscillator, either the weight value or its negation is used in the summation. On the right (in yellow) the summation structure is shown. For N input values, N−1 adders are required. For a larger number of oscillators the summation structure grows to a large logic depth and, in turn, a long critical path: as the number of oscillators increases, so do the number of adders and the length of the logic chain, which means a larger delay from the input to the output of the structure. In principle, a smaller circuit running at a faster clock could exploit this delay budget to compute the same sum sequentially; this is the inspiration for the hybrid architecture discussed in the next section. Combined with the fact that this arithmetic circuit is replicated for each oscillator, the number of adders scales proportionally to N². Hence, the aim is to optimize the implementation of the arithmetic circuit for lower hardware resource usage, which is discussed in the next section.


Figure 4. (a) Parallel arithmetic circuit implementation for recurrent architecture. Oscillator amplitudes are multiplied by their respective coupling weight. These values are then accumulated to a total sum. The computation circuitry is implemented completely in combinatorial logic, leading to a deep logic structure with a long path delay. (b) Serial arithmetic circuit implementation for hybrid architecture. Oscillator amplitudes are multiplied by their respective coupling weight. These values are then accumulated to a total sum. The computation circuitry is implemented with one adder to compute the sum serially over multiple cycles.

2.4 Hybrid ONN architecture

The main idea to reduce hardware resource usage is to use a single adder as an accumulator and to serially compute the weighted sum required to update the phase of each oscillator. This contrasts with the fully parallel arithmetic circuitry of the recurrent architecture. However, each oscillator still computes in parallel; therefore, this architecture is a hybrid between the recurrent architecture and a fully serial computation, as in a standard von Neumann architecture. Since each oscillator reuses a single adder for all of its connections, instead of one adder per connection, the total number of adders is now proportional to N instead of N², where N is the number of oscillators. This is the key architectural difference that allows for linear resource scaling. However, each of these adders also grows with N. The minimum size of each adder, in bits, is given by

n_{\text{bits adder}} = \left\lceil \log_2\!\left(N \cdot n_{\text{bits weight}}\right) \right\rceil,    (6)

where n_bits weight is the number of weight bits, a design parameter that is constant after synthesis. Therefore, the scaling of the network in terms of compute resources is O(N log2(N)): there are N oscillators, each with an adder of size proportional to log2(N).
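For concreteness, Equation 6 can be evaluated at the network sizes used in this work. The ceiling is our addition to obtain a whole-bit count; Equation 6 itself gives the real-valued lower bound.

```python
import math

def adder_bits(n_oscillators, n_bits_weight=5):
    # Minimum accumulator width per Equation 6, rounded up to whole bits.
    return math.ceil(math.log2(n_oscillators * n_bits_weight))

bits_at_506 = adder_bits(506)  # hybrid design at its maximum size
bits_at_48 = adder_bits(48)    # recurrent design's maximum size
```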

There are a few considerations for maintaining equivalent functionality to the recurrent architecture. Firstly, the final value of the summation should be computed at least as fast as in the recurrent architecture; in other words, the weighted sum should be computed before the start of the next phase update. This can be achieved by creating a faster clock domain for the arithmetic logic. The phase is updated every clock cycle, so to serially compute the weighted sum for N oscillators the fast clock domain must be at least N times faster than the existing clock, since at each edge of the fast clock one of the coupling values is processed. Additionally, the computation must be synchronized to the phase update: the final value of the computation must be available before it is needed to update the phase. The slower clock is used to achieve this. The rising edge of the slower clock triggers the start of the serial computation for the next rising edge of the slower clock, at which the phase will be updated. That is, when the transition from a low value to a high value on the slow clock is detected, the serial computation starts. See Figure 5 for a timing diagram of this process. Figure 4b shows the serial implementation of the arithmetic circuit. Again, on the left (in green) the oscillators are pictured. The output signals of the oscillators are time-multiplexed to the arithmetic circuitry using a multiplexer, shown in the middle (in purple). Since the weights are now used one by one instead of all at the same time, it is possible to store them in addressable memory, shown in the right (in red) section above the multiplier block. During synthesis these memories can be inferred to use the dedicated Block RAM (BRAM) hardware available on the FPGA. A simple counter is used to select the required memory address and to control the multiplexer.
When the counter has counted up to the number of oscillators, the output of the adder is stored to hold the final sum value while the other components are reset for the next cycle. The computation of the sum is now implemented using a single adder that uses feedback from its output to continuously accumulate values. The second input of the adder is the output of the multiplication of the currently selected weight and oscillator output value. In short, the computation proceeds as follows: at the rising edge of the slower clock the accumulated sum value is reset to 0. One by one, the oscillator amplitudes are read and multiplied by their corresponding coupling weights. These values are then accumulated until the counter reaches the end of the computation. At the end of the computation the final value is stored until the next rising edge of the slower clock, where it is used to update the phase.
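The serial multiply-accumulate loop described above can be summarized in software. This is a behavioral sketch of one oscillator's arithmetic circuit, one term per fast-clock cycle; the variable names are illustrative, not taken from the Verilog.

```python
def serial_weighted_sum(amplitudes, weights):
    # amplitudes: 0/1 oscillator outputs; 1 selects +weight, 0 selects -weight,
    # so no real multiplication is needed (as in the parallel circuit).
    acc = 0  # accumulator, reset at the rising edge of the slow clock
    for counter in range(len(amplitudes)):  # counter drives mux and address
        w = weights[counter]                # weight read from BRAM
        acc += w if amplitudes[counter] == 1 else -w  # multiply-accumulate
    return acc  # held until the next slow-clock edge for the phase update

total = serial_weighted_sum([1, 0, 1], [2, 3, -1])  # 2 - 3 + (-1)
```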


Figure 5. Timing diagram for the computation of a weighted sum for the phase update of one oscillator. The timing for both architectures is shown.

Together, the adder and multiplier constitute a multiply and accumulate operation, which can be inferred to the dedicated digital signal processing (DSP)-slice hardware available on the FPGA. This frees up more lookup tables and flip-flops to be used for other logic.

2.5 Test platform

For the following comparisons, a PYNQ-Z2 board is used as a test platform because it is readily available and affordable. It features a Zynq-7020 System on Chip with a dual-core ARM Cortex-A9 processor and a programmable logic fabric with 85,000 programmable logic cells. Additionally, the same platform was used to develop the recurrent architecture (Abernot et al., 2021), which facilitates the comparison to the newly developed hybrid architecture. The results will generalize to higher-end FPGA implementations: other FPGA families will have different pre-scaling factors, since their logic-block designs can differ, but the overall order of the scaling holds. The same applies to potential ASIC implementations, since the logic to be implemented remains the same, and the logic determines how the hardware scales in terms of area; in an ASIC the logic is fixed rather than configurable, which affects only the pre-scaling factors, not the scaling order. The architectures are written in Verilog using Vivado, which was also used for hardware synthesis, hardware implementation, and timing analysis. The pattern retrieval benchmarks are automated using the provided PYNQ Python APIs, which give low-level access to control the hardware through an AXI interface. For either architecture, the API calls are exactly the same, since the AXI interfaces are identical, except for an extended address range that allows for the programming of the extra oscillators and synapses. Practically, this means that the phase vectors and weight matrices, which are represented as NumPy arrays (Harris et al., 2020), are larger. Additionally, a demonstration setup was made, featuring a Python user interface running on a laptop where a user can draw patterns to memorize and retrieve. The user interface communicates with the PYNQ-Z2 over a network connection to transmit the weight matrix and receive the result of the pattern retrieval.

2.6 Hardware scaling analysis methodology

To obtain trends in hardware scaling, both architectures were synthesized at different network sizes and the hardware resource usage in terms of lookup tables and flip-flops was recorded, since these are the two components determining the amount of logic hardware the architectures need. For this analysis, 5 weight bits and 4 phase bits were used. To obtain the trade-off between network size and oscillation frequency, both architectures were also implemented on the FPGA to obtain the maximum possible logic frequency, which was then converted to the actual oscillation frequency using the frequency division mechanism discussed earlier. Additionally, to capture the trade-off between area utilization and oscillation frequency, we introduce two aggregate measures. Firstly, the total area used is defined as the arithmetic mean of the percentages of each FPGA resource used: flip-flops, LUTs, DSPs, and BRAMs. This gives an indication of the percentage of total resources, or FPGA area, used. Secondly, we determine the oscillation frequency at each network size as a percentage of the maximum oscillation frequency achieved. These two measures allow us to find a balance point between area and frequency.

For the recurrent architecture, the data points for the frequency scaling stop at 48 oscillators: beyond this size, the design exceeds the maximum number of lookup tables available on the target FPGA, so place-and-route could not be completed and, by extension, timing data could not be extracted. A standard linear regression was fitted on the base-10 logarithm of the data points to obtain the slope and the R² value in logarithmic scale. The slope in the logarithmic scale equals the order of scaling. A line for the expected analytical scaling is also added, so that it can be compared with the synthesis results. The y-intercept of this line is offset so that it starts at the same point as the fitted data.
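The slope extraction described above can be reproduced with a short NumPy fit. The synthetic quadratic data below are illustrative, not measured results; the point is that the slope of a log-log linear fit recovers the scaling order.

```python
import numpy as np

def scaling_order(network_sizes, resource_counts):
    # Slope of a linear fit in log10-log10 space equals the scaling order.
    logx = np.log10(network_sizes)
    logy = np.log10(resource_counts)
    slope, intercept = np.polyfit(logx, logy, 1)
    return slope

# Synthetic data with exact quadratic scaling should yield a slope of ~2.
sizes = np.array([8, 16, 32, 64, 128])
luts = 7.0 * sizes ** 2   # illustrative pre-scaling factor of 7
order = scaling_order(sizes, luts)
```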

2.7 Pattern retrieval methodology

The primary goal is to show that the new hybrid architecture performs the same as the recurrent architecture on a real-world task, up to the maximum size implementable on the test platform. The secondary goal is to show that the new architecture also gives good results for tasks using larger network sizes. If similar performance between the two architectures is shown, and high performance can be shown for the larger network sizes, we can be confident that the new architecture does not have significantly different dynamics from the original architecture. The scope of this work is to address the scalability of the recurrent architecture designed for pattern retrieval tasks. This architecture is not designed to solve combinatorial optimization problems, which would require a different phase update mechanism. For these reasons, this paper focuses solely on the ONN architecture for pattern retrieval tasks and on solving the scalability problem; we leave further benchmarking and exploration of applications to future work. To compare the performance of the two architectures, an associative memory task is chosen, namely pattern retrieval from corrupted patterns. Performance is measured in retrieval accuracy, meaning the percentage of runs that correctly retrieve the input pattern, and the average time, in oscillation cycles, to reach the correct solution. Additionally, the performance of the hybrid architecture is also benchmarked for larger patterns.

There are five datasets, each with a different pattern size: 3 × 3, 5 × 4, 7 × 6, 10 × 10, and 22 × 22. Note that 7 × 6 is the largest pattern that could be implemented for the recurrent architecture on the test platform, while 22 × 22 is the largest pattern that could be implemented for the hybrid architecture; the latter two pattern sizes (10 × 10 and 22 × 22) exceed the maximum number of oscillators available in the recurrent architecture. Each dataset contains five patterns representing letters of the alphabet, except the 3 × 3 dataset, which contains two patterns. To obtain the required coupling weights, each dataset was trained using the Diederich-Opper I learning rule (Diederich and Opper, 1987). The resulting weight matrix was quantized to 5 bits signed and programmed into each architecture. For each dataset, each pattern was corrupted 1,000 different times at three corruption percentages: 10%, 25%, and 50%. To corrupt a pattern, the given percentage of pixels was randomly selected and their colors were flipped, either from white to black or from black to white. For example, corrupting a 10 × 10 pattern by 10% means flipping the color of 10 pixels. Each corrupted pattern was then set as the initial condition for the network after quantizing the phase values to 4 bits. The final phases were measured and compared to the original target pattern. A visual example of retrieving a 22 × 22 pattern is shown in Figure 6.
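The corruption procedure can be sketched as follows; the fixed seed and the all-zero test pattern are illustrative assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible example

def corrupt(pattern, fraction):
    # Flip the color of a randomly chosen fraction of pixels (0 <-> 1).
    flat = np.array(pattern).ravel().copy()
    n_flip = round(fraction * flat.size)
    idx = rng.choice(flat.size, size=n_flip, replace=False)
    flat[idx] = 1 - flat[idx]
    return flat.reshape(np.shape(pattern))

pattern = np.zeros((10, 10), dtype=int)  # illustrative 10 x 10 pattern
noisy = corrupt(pattern, 0.10)           # flips 10 of the 100 pixels
```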

Figure 6
Image showing a comparison of target, corrupted, and output images for letters N, C, and L. Each row displays a target letter on the left, a corrupted version in the middle with varying corruption levels (10%, 25%, 50%), and the output on the right, which resembles the target.

Figure 6. Example of pattern retrieval. The left column shows the target images, the center column shows the same images with a given percentage of pixels corrupted, and the right column shows the output patterns. The bottom row shows that the wrong pattern can be retrieved if the input contains too many corrupted pixels.

3 Results

3.1 Same numerical precision

The architectures are first compared at the same level of numerical precision. The bit widths were chosen to match Abernot et al. (2023), where they were found sufficient for pattern retrieval with the recurrent architecture derived from that work: 5 bits for the weight values, including a sign bit, and 4 bits to represent the phase. At this level of numerical precision, the recurrent implementation allowed for a maximum of 48 oscillators, while the hybrid implementation achieved 506 oscillators, an increase by a factor of 10.5. Table 4 shows the total resource usage of the two designs, using 5 weight bits and 4 phase bits, when synthesized for the maximum number of oscillators that fits the resources available on a Zynq-7020 FPGA. As can be seen, the limiting factor for the recurrent implementation is the number of lookup tables (LUTs), whereas the hybrid implementation is limited by the number of DSP slices and block RAMs. 100% utilization is not reached due to place-and-route overhead.

Table 4

Table 4. Resource usage of each design on a Zynq 7020 FPGA for the maximum feasible number of oscillators using 5 weight bits and 4 phase bits.

The maximum frequencies and maximum numbers of oscillators for each architecture are shown in Table 5, together with the power and energy per oscillation reported by the Vivado power analysis tool. Note that these values are only approximations based on reported logic activity, intended to give an estimate of the energy used by the ONN part of the FPGA; they will also be higher than for ASIC implementations due to the overhead inherent in FPGA-based designs. In this table, the logic frequency is the frequency at which the logic runs, and the oscillation frequency is the frequency of the oscillators after taking into account the frequency division and the length of the shift registers that make up the oscillators. The recurrent implementation achieved a lower maximum logic frequency than the hybrid implementation, but a higher oscillation frequency, since its phase update does not need to operate in a slower clock domain. The serialization of the computation in the hybrid architecture requires additional clock cycles to compute each new weighted sum, so the oscillators are slowed down to accommodate the weighted-sum update. This creates a trade-off between the number of oscillators and their oscillation frequency: a larger number of oscillators in the hybrid design requires more frequency division to slow down the phase update. The same trade-off appears in the energy per oscillation. Both designs use roughly the same total amount of logic (LUTs), but the hybrid architecture has a much lower effective oscillation frequency, so its energy per oscillation is much higher. The next section discusses how hardware resources and frequency scale with network size.
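The trade-off described above can be modeled with a simple sketch. The logic frequency, shift-register length, and the exact division factor below are assumptions for illustration, not the parameters of the actual FPGA design:

```python
# Illustrative model of the trade-off between network size and oscillation
# frequency in the hybrid architecture.

def oscillation_frequency(f_logic_hz: float, n_oscillators: int,
                          shift_reg_len: int = 16) -> float:
    """Effective oscillation frequency after frequency division.

    In the hybrid design the weighted sum feeding each phase update is
    computed serially, so one update takes on the order of n_oscillators
    logic cycles; the oscillator clock is divided down accordingly.
    """
    division_factor = n_oscillators * shift_reg_len  # assumed relationship
    return f_logic_hz / division_factor

# A larger network forces a lower oscillation frequency:
for n in (48, 128, 506):
    print(n, oscillation_frequency(100e6, n))
```

Under this assumed model, growing the network from 48 to 506 oscillators divides the oscillation frequency by roughly an order of magnitude, consistent with the qualitative trend reported in Table 5.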

Table 5

Table 5. Performance of each design on a Zynq 7020 FPGA for the maximum feasible number of oscillators using 5 weight bits and 4 phase bits.

3.2 Resource scalability

Figure 7 shows the results of the data analysis: the number of lookup tables and the number of flip-flops vs. the number of oscillators. Figure 8 shows the oscillation frequency vs. the number of oscillators. The colored area in each figure marks the region where the recurrent architecture could not be implemented, and the dotted lines around the data points show the 95% confidence intervals.

Figure 7
Two logarithmic graphs display the scalability analysis of data. Graph (a) shows the number of lookup tables versus the number of oscillators, and graph (b) shows the number of flip-flops versus the number of oscillators. In both graphs, data and fitting lines for the recurrent architecture (RA) and hybrid architecture (HA) are depicted, with different annotations representing unimplementable regions and analytical calculations. A shaded green area indicates regions of implementation viability, highlighting system constraints and performance.

Figure 7. (a) Lookup table (LUT) usage at different network sizes for both architectures at 5 weight bits and 4 phase bits. R2 = 0.9988 and slope = 2.0770 for the recurrent architecture (RA). R2 = 0.9946 and slope = 1.2231 for the hybrid architecture (HA). (b) Flip-flop (FF) usage at different network sizes for both architectures at 5 weight bits and 4 phase bits. R2 = 0.9060 and slope = 2.3859 for the recurrent architecture (RA). R2 = 0.9993 and slope = 1.1092 for the hybrid architecture (HA).

Figure 8
Logarithmic graph showing the oscillation frequency in hertz versus the number of oscillators. Data of the recurrent architecture (RA), marked with blue crosses, and Data of the hybrid architecture (HA), marked with orange pluses, are plotted. Fit lines, dashed for RA and dotted for HA, show frequency decreasing with more oscillators. The right section is shaded green, labeled “RA not implementable.”

Figure 8. Oscillation frequency at different network sizes for both architectures at 5 weight bits and 4 phase bits. R2 = 0.8867 and slope = −0.4614 for the recurrent architecture (RA). R2 = 0.9988 and slope = −1.3515 for the hybrid architecture (HA).

Figure 7a shows, with a high R2 value, that the number of lookup tables scales slightly above quadratically for the recurrent architecture and slightly above linearly for the hybrid architecture, with scaling orders of 2.08 and 1.22, respectively. The fit also closely matches the analytical Nlog(N) scaling. Figure 7b shows similar results for the number of flip-flops; although the R2 for the recurrent architecture is lower, it is still above 0.9. In this case the fitted slope is lower than the analytical model predicts, likely because the analysis is based on the scaling of the computational hardware, the LUTs, rather than the storage elements, the FFs; there is no one-to-one correspondence between the two. Again the recurrent architecture scales above quadratically, with order 2.39. Note that the data point for the recurrent architecture at 16 oscillators appears to be an outlier, so the true slope might be less steep. The hybrid architecture scales with order 1.11, again just above linear. Finally, Figure 8 shows that the oscillation frequency of the recurrent architecture scales with order −0.46, roughly the inverse square root, while that of the hybrid architecture scales with order −1.35, slightly faster than inversely linear. Due to the lower number of data points, the R2 value for the recurrent architecture is lower, but still relatively high at 0.89.

Figure 9 shows the percentage of the total area used alongside the percentage of the maximum oscillation frequency achieved. We look for a balancing point, which occurs at the intersection of the two curves: approximately Noscillators = 65 at 15% area utilization. Figure 9 can also be used to determine what network size is feasible at a given area utilization, or vice versa; some applications might demand a maximum area usage, while others require a certain frequency. For example, if an application requires an oscillation frequency of 160 kHz, around 50% of the maximum, this is achieved at approximately 27 oscillators, at which point the area usage is roughly 7%.
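The scaling orders quoted above are the slopes of straight-line fits in log-log space. The sketch below illustrates how such a slope is recovered, using synthetic LUT counts generated to follow the hybrid architecture's fitted order of 1.22 (the prefactor is arbitrary; these are not the measured data):

```python
import numpy as np

# Synthetic LUT counts following a power law with exponent 1.22.
n_osc = np.array([16, 32, 64, 128, 256, 506])
luts = 30.0 * n_osc ** 1.22

# A straight-line fit in log-log space, log(LUTs) = slope * log(N) + c,
# recovers the hardware scaling order as the slope.
slope, intercept = np.polyfit(np.log(n_osc), np.log(luts), 1)
print(round(slope, 2))  # 1.22
```

The same procedure applied to the measured resource counts yields the slopes and R2 values reported in Figures 7, 8.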

Figure 9
Graph illustrating the relationship between the number of oscillators and two metrics: area utilization (blue, left y-axis) and frequency as a percentage of maximum (red, right y-axis). Blue crosses and dashed lines represent data and fit for area. Red crosses and dashed lines indicate data and fit for frequency. Intersections occur at X = 64.9911 and Y = 15.1458. The x-axis displays the number of oscillators, ranging from 16 to 512.

Figure 9. Area utilization and percentage of maximum frequency achieved at different network sizes for the hybrid architecture. R2 = 0.9852 for the area utilization. R2 = 0.9988 for the frequency percentage. Maximum frequency (100%) ≈325 kHz.

3.3 Pattern retrieval results

The results of the benchmarks are shown in Table 6. The retrieval accuracy of the two architectures is very close or identical for each pattern size and noise level, indicating that, in terms of retrieval accuracy, the hybrid architecture performs the same as the recurrent architecture and that their oscillator dynamics are essentially the same. For the larger pattern sizes, the hybrid architecture achieves 100% or near-100% retrieval accuracy at 10% and 25% of pixels corrupted, from which we conclude that the oscillator dynamics do not break down at larger network sizes. One exception is the retrieval accuracy of the hybrid architecture for 3 × 3 patterns with 50% of pixels corrupted, which is much higher than that of the recurrent architecture. Our current hypothesis is that the additional synchronization required in the hybrid architecture slightly changes the system dynamics, which only becomes apparent for a small network at a high noise level. Note that there is some run-to-run variance in the results, probably because the signal that enables computation is not synchronized with the oscillators, so the timing of the enable signal relative to the oscillation edges has a minor effect on the solution.

Table 6

Table 6. Pattern retrieval accuracy and mean time to settle for both architectures using 5 weight bits and 4 phase bits.

Table 6 additionally shows the arithmetic mean of the runtime in oscillation cycles, excluding time-outs. As with the retrieval accuracy, the settling times of the two architectures agree within the margin of error, and the settling times for the two larger pattern sizes on the hybrid architecture do not grow disproportionately compared to the smaller sizes. Note that although the time to settle in oscillation cycles is the same, the time to settle in real time is longer for the hybrid architecture, due to its lower oscillation frequency.

To summarize, the hybrid and recurrent architectures perform nearly identically in terms of retrieval accuracy and settling time, and the hybrid architecture performs well on larger patterns. Moreover, the proposed architecture achieves these results with fewer hardware resources and enables larger implementations through its near-linear hardware scaling.

4 Discussion

This paper has presented a new hybrid architecture based on a previous recurrent digital ONN (Abernot et al., 2021, 2023; Abernot and Todri-Sanial, 2023; Luhulima et al., 2023). An analysis of hardware resource scaling showed near-linear scaling, in contrast to the previous quadratic trend, which allows much larger ONN sizes to be implemented. However, this near-linear scaling comes at the cost of oscillation frequency; if an application requires a high oscillation frequency, a user might instead choose a smaller number of oscillators running at a higher frequency.

For the specific test platform used in this paper, the ONN size could be increased by 10.5 × on a single FPGA. In future work, even larger network sizes could be achieved by clustering multiple FPGAs; however, synchronizing ONNs across multiple devices will pose a challenge. Larger networks can then be benchmarked on datasets that require more nodes to be embedded in the ONN, especially combinatorial optimization problems. We expect that additional architectural refinements will be necessary for combinatorial optimization, since the original recurrent architecture was designed for associative memory tasks. The new architecture was verified against the existing one by means of an associative memory task, where it achieved similar retrieval accuracies and similar times to settle, in oscillation cycles, across a dataset. Now that this level of scalability has been achieved, the next step is to explore real-world applications; future work can compare the architecture introduced here to other computing paradigms using further benchmarks and applications.

5 Conclusion

A hybrid architecture based on an existing recurrent digital ONN architecture has been presented, which allows a 10.5 × increase in the number of oscillators implemented on an FPGA at the same level of numerical precision. In this architecture, part of the phase update computation is serialized to save arithmetic logic. However, serializing the computation creates a trade-off between network size and oscillation frequency: the oscillation frequency decreases with the number of oscillators with an order of 1.35. An analysis of hardware resource scaling showed that the recurrent architecture's resource usage scales roughly quadratically, whereas the hybrid architecture's scales nearly linearly, on the order of about 1.2. Furthermore, a pattern retrieval benchmark against the recurrent architecture was performed to verify the performance of the new architecture. The hybrid architecture also shows a lower reported power consumption, but a higher reported energy per oscillation, due to its lower oscillation frequency. Similar retrieval accuracies and retrieval times were found for both architectures, indicating similar dynamics. Additionally, retrieval accuracies of 100% were achieved on larger pattern retrieval tasks, made possible by the more efficient hardware resource usage of the hybrid architecture. To the best of our knowledge, this work presents the largest fully connected digital ONN implemented thus far, in terms of number of oscillators, and an architecture that achieves nearly linear hardware scaling.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors upon reasonable request, subject to legal approval.

Author contributions

BFH: Methodology, Formal analysis, Writing – original draft, Data curation, Writing – review & editing, Conceptualization, Visualization, Investigation, Software, Validation, Resources. AT-S: Resources, Writing – review & editing, Project administration, Data curation, Funding acquisition, Visualization, Conceptualization, Supervision, Investigation, Formal analysis, Software, Validation, Methodology.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work was funded by the Dutch Research Council's AiNed Fellowship research programme, AI-on-ONN project under grant agreement no. NGF.1607.22.016.

Acknowledgments

Colleagues from NanoComputing Research Lab are acknowledged for their help and support in finalizing this manuscript.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author AT-S declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2025.1658490/full#supplementary-material

References

Abernot, M., and Todri-Sanial, A. (2023). Simulation and implementation of two-layer oscillatory neural networks for image edge detection: bidirectional and feedforward architectures. Neuromorphic Comput. Eng. 3:014006. doi: 10.1088/2634-4386/acb2ef


Abernot, M., Azemard, N., and Todri-Sanial, A. (2023). Oscillatory neural network learning for pattern recognition: an on-chip learning perspective and implementation. Front. Neurosci. 17:1196796. doi: 10.3389/fnins.2023.1196796


Abernot, M., Gil, T., Jiménez, M., Núñez, J., Avellido, M. J., Linares-Barranco, B., et al. (2021). Digital implementation of oscillatory neural network for image recognition applications. Front. Neurosci. 15:713054. doi: 10.3389/fnins.2021.713054


Abernot, M., and Todri-Sanial, A. (2023). Training energy-based single-layer Hopfield and oscillatory networks with unsupervised and supervised algorithms for image classification. Neural Comput. Applic. 35, 18505–18518. doi: 10.1007/s00521-023-08672-0


Bardella, G., Franchini, S., Pani, P., and Ferraina, S. (2024). Lattice physics approaches for neural networks. iScience 27:111390. doi: 10.1016/j.isci.2024.111390


Bashar, M. K., Li, Z., Narayanan, V., and Shukla, N. (2024). “An FPGA-based Max-K-cut accelerator exploiting oscillator synchronization model,” in 2024 25th International Symposium on Quality Electronic Design (ISQED) (San Francisco, CA: IEEE), 1–8. doi: 10.1109/ISQED60706.2024.10528742


Bashar, M. K., and Shukla, N. (2023). Designing Ising machines with higher order spin interactions and their application in solving combinatorial optimization. Sci. Rep. 13:9558. doi: 10.1038/s41598-023-36531-4


Bashar, M. K., Vaidya, J., Mallick, A., Kanthi, R. S. S., Alam, S., Amin, N., et al. (2021). An Oscillator-based MaxSAT solver. arXiv:2109.09897 [physics]. doi: 10.48550/arXiv.2109.09897


Cai, W., Li, Z., Wang, I., Wang, Y.-N., and Lee, T. H. (2025). OscNet v1.5: energy efficient hopfield network on CMOS oscillators for image classification. arXiv:2506.12610 [cs]. doi: 10.48550/arXiv.2506.12610


Cılasun, H., Moy, W., Zeng, Z., Islam, T., Lo, H., Vanasse, A., et al. (2025). A coupled-oscillator-based Ising chip for combinatorial optimization. Nat. Electr. 8, 537–546. doi: 10.1038/s41928-025-01393-3


Cılasun, H., Zeng, Z., S, R., Kumar, A., Lo, H., Cho, W., et al. (2024). 3SAT on an all-to-all-connected CMOS Ising solver chip. Sci. Rep. 14:10757. doi: 10.1038/s41598-024-60316-y


Csaba, G., and Porod, W. (2020). Coupled oscillators for computing: a review and perspective. Appl. Phys. Rev. 7:011302. doi: 10.1063/1.5120412


Delacour, C., Carapezzi, S., Abernot, M., Boschetto, G., Azemard, N., Salles, J., et al. (2021). “Oscillatory neural networks for edge AI computing,” in 2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI) (Tampa, FL: IEEE), 326–331. doi: 10.1109/ISVLSI51109.2021.00066


Delacour, C., Carapezzi, S., Boschetto, G., Abernot, M., Gil, T., Azemard, N., et al. (2023). A mixed-signal oscillatory neural network for scalable analog computations in phase domain. Neuromorphic Comput. Eng. 3:034004. doi: 10.1088/2634-4386/ace9f5


Diederich, S., and Opper, M. (1987). Learning of correlated patterns in spin-glass networks by local learning rules. Phys. Rev. Lett. 58, 949–952. doi: 10.1103/PhysRevLett.58.949


Guo, T., Ogranovich, A., Venkatakrishnan, A. R., Shapiro, M. R., Bullo, F., and Pasqualetti, F. (2025). Oscillatory associative memory with exponential capacity. arXiv:2504.03102. doi: 10.48550/arXiv.2504.03102


Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., et al. (2020). Array programming with NumPy. Nature 585, 357–362. doi: 10.1038/s41586-020-2649-2


Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U. S. A. 79, 2554–2558. doi: 10.1073/pnas.79.8.2554


Hoppensteadt, F., and Izhikevich, E. (2000). Pattern recognition via synchronization in phase-locked loop neural networks. IEEE Trans. Neural Netw. 11, 734–738. doi: 10.1109/72.846744


Ising, E. (1925). Beitrag zur theorie des ferromagnetismus. Zeitschrift für Physik 31, 253–258. doi: 10.1007/BF02980577


Jackson, T., Pagliarini, S., and Pileggi, L. (2018). “An oscillatory neural network with programmable resistive synapses in 28 Nm CMOS,” in 2018 IEEE International Conference on Rebooting Computing (ICRC) (McLean, VA: IEEE), 1–7. doi: 10.1109/ICRC.2018.8638600


Li, H., Peerla, R. S., Barrows, F., Caravelli, F., and Sahoo, B. D. (2025). Voltage-controlled oscillator and memristor-based analog computing for solving systems of linear equations. arXiv:2506.09392 [eess]. doi: 10.48550/arXiv.2506.09392


Liu, Y., Han, Z., Wu, Q., Yang, H., Cao, Y., Han, Y., et al. (2024). A 1024-spin scalable ising machine with capacitive coupling and progressive annealing method for combination optimization problems. IEEE Trans. Circuits Syst. II: Express Briefs 71, 5009–5013. doi: 10.1109/TCSII.2024.3432799


Luhulima, E., Abernot, M., Corradi, F., and Todri-Sanial, A. (2023). “Digital implementation of on-chip hebbian learning for oscillatory neural network,” in 2023 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED) (Vienna: IEEE), 1–6. doi: 10.1109/ISLPED58423.2023.10244501


Miyato, T., Löwe, S., Geiger, A., and Welling, M. (2025). Artificial Kuramoto oscillatory neurons. arXiv:2410.13821 [cs] version: 3. doi: 10.3902/jnns.32.127


Moy, W., Ahmed, I., Chiu, P.-W., Moy, J., Sapatnekar, S. S., and Kim, C. H. (2022). A 1,968-node coupled ring oscillator circuit for combinatorial optimization problem solving. Nat. Electr. 5, 310–317. doi: 10.1038/s41928-022-00749-3


Neyaz, S. Y., Ashok, A., Schiek, M., Grewing, C., Zambanini, A., Waasen, S., et al. (2025). Scalable 28nm IC implementation of coupled oscillator network featuring tunable topology and complexity. arXiv:2505.10248 [cs]. doi: 10.48550/arXiv.2505.10248


Nikhar, S., Kannan, S., Aadit, N. A., Chowdhury, S., and Camsari, K. Y. (2024). All-to-all reconfigurability with sparse and higher-order Ising machines. Nat. Commun. 15:8977. doi: 10.1038/s41467-024-53270-w


Ramsauer, H., Schäfl, B., Lehner, J., Seidl, P., Widrich, M., Adler, T., et al. (2021). Hopfield networks is all you need. arXiv:2008.02217 [cs, stat]. doi: 10.48550/arXiv.2008.02217


Sabo, F., and Todri-Sanial, A. (2024). “ClassONN: classification with oscillatory neural networks using the Kuramoto model,” in 2024 Design, Automation &Test in Europe Conference &Exhibition (DATE) (Valencia: IEEE), 1–2. doi: 10.23919/DATE58400.2024.10546829


Sundara Raman, S. R., John, L. K., and Kulkarni, J. P. (2024). “SACHI: a stationarity-aware, all-digital, near-memory, ising architecture,” in 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA) (Edinburgh: IEEE), 719–731. doi: 10.1109/HPCA57654.2024.00061


Todri-Sanial, A., Delacour, C., Abernot, M., and Sabo, F. (2024). Computing with oscillators from theoretical underpinnings to applications and demonstrators. NPJ Unconvent. Comput. 1, 1–16. doi: 10.1038/s44335-024-00015-z


Vaidya, J., Surya Kanthi, R. S., and Shukla, N. (2022). Creating electronic oscillator-based Ising machines without external injection locking. Sci. Rep. 12:981. doi: 10.1038/s41598-021-04057-2


Wang, T., and Roychowdhury, J. (2019). “OIM: oscillator-based ising machines for solving combinatorial optimisation problems,” in Unconventional Computation and Natural Computation, eds. I. McQuillan and S. Seki (Cham: Springer International Publishing), 232–256. doi: 10.1007/978-3-030-19311-9_19


Wang, T., Wu, L., Nobel, P., and Roychowdhury, J. (2021). Solving combinatorial optimisation problems using oscillator based Ising machines. Natural Comput. 20, 287–306. doi: 10.1007/s11047-021-09845-3


Keywords: brain-inspired computing, FPGA prototyping, oscillatory neural networks, pattern retrieval, physical computing

Citation: Haverkort BF and Todri-Sanial A (2026) Overcoming quadratic hardware scaling for a fully connected digital oscillatory neural network. Front. Neurosci. 19:1658490. doi: 10.3389/fnins.2025.1658490

Received: 02 July 2025; Revised: 02 December 2025;
Accepted: 03 December 2025; Published: 15 January 2026.

Edited by:

Jiyong Woo, Kyungpook National University, Republic of Korea

Reviewed by:

Pier Luigi Gentili, Università degli Studi di Perugia, Italy
Baibhab Chatterjee, University of Florida, United States

Copyright © 2026 Haverkort and Todri-Sanial. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Aida Todri-Sanial, a.todri.sanial@tue.nl
