ORIGINAL RESEARCH article

Front. Astron. Space Sci., 27 January 2026

Sec. Astronomical Instrumentation

Volume 13 - 2026 | https://doi.org/10.3389/fspas.2026.1721412

A software defined aerospace payload testing platform based on AXI bus

Guodong Yin1,2, Wenjie Zhao1,2*, Jianing Rao1,2*, Wanying Zhang1, Jing Kang1, Yan Xie1, Miao Ma1, Jiahao Du1, Xiuxiu Liu1, Yan Zhu1,2, Junshe An1,2 and Lianguo Wang1,2
  • 1National Space Science Center, the Chinese Academy of Sciences, Beijing, China
  • 2University of Chinese Academy of Sciences, Beijing, China

This paper presents a software-defined hardware-in-the-loop (HIL) testing platform for aerospace payloads based on a single-chip FPGA, addressing key challenges in ground testing of complex aerospace payloads, including stringent real-time requirements, difficulty in multi-protocol interface adaptation, limited runtime configurability, and insufficient fault injection depth. The platform features an innovative unified System-on-Chip (SoC) architecture centered on the AXI (Advanced eXtensible Interface) bus. By standardizing heterogeneous communication protocol interfaces—such as those for aerospace telemetry, tracking, command, data transmission, and remote sensing—into configurable AXI peripheral modules, the design achieves protocol adaptation layer decoupling and efficient resource scheduling. Experimental results demonstrate that the platform achieves nanosecond-level (≤15 ns) ultra-low latency for real-time data interaction, meeting the stringent timing requirements of aerospace payload testing. The software-defined architecture enables runtime configuration of interface parameters without system restart or FPGA reprogramming, significantly enhancing test scenario flexibility. Furthermore, a white-box design approach exposes internal signals and status interfaces of the device under test, enabling fine-grained fault injection (including bit flips, timing anomalies, and message drops) into critical modules such as protocol stacks and data processing units, thereby overcoming the coverage limitations of traditional black-box testing. The platform has been deployed in multiple aerospace payload ground testing campaigns, where its nanosecond-level response latency and runtime configurability have significantly improved testing efficiency.

1 Introduction

Ground testing of aerospace payloads is a critical procedure for ensuring mission reliability (Zhao et al., 2015). Currently, ground testing solutions for aerospace payloads face significant challenges. First, traditional dedicated test equipment, often customized for specific models, typically lacks versatility and reusability. This results in repetitive research and development efforts across different missions, significantly inflating project cycles and costs. Second, while modular instrumentation platforms represented by Commercial Off-The-Shelf (COTS) technologies, such as PXI, offer improved development flexibility, their distributed architectures rely on system buses for inter-module data exchange. This mechanism introduces microsecond-level, non-deterministic communication latency, failing to meet the stringent nanosecond-level deterministic timing requirements of high-precision real-time closed-loop simulations. Furthermore, the interface functionalities in such commercial solutions are typically encapsulated within high-level APIs. This abstraction restricts users from accessing the underlying protocol layers, thereby hindering deep fault injection and non-standard behavior simulation—capabilities that are critical for the rigorous reliability verification required in aerospace applications.

The demand for high-performance verification is particularly acute within the domain of scientific instrumentation. In contrast to conventional satellite avionics, space-based astronomical instruments impose significantly more stringent requirements on inter-payload coordination. For instance, complex missions exemplified by the Hard X-ray Modulation Telescope (Insight-HXMT) (Zhang et al., 2020), Einstein Probe (EP) (Yuan et al., 2025), and Advanced Space-based Solar Observatory (ASO-S) integrate heterogeneous scientific payloads for synchronized observations (Gan et al., 2023). Such missions mandate rigorous multi-protocol compatibility and precise multi-channel timing synchronization. Parallel challenges persist in ground-based astronomy: digital correlators in radio telescope arrays require nanosecond-level deterministic synchronization across array elements, while adaptive optics systems are intolerant of control loop latency. Addressing these rigorous demands necessitates transcending the limitations of traditional test equipment in favor of architectures that inherently combine deterministic timing, high throughput, and flexible reconfigurability.

Previous ground testing designs for aerospace equipment were predominantly hardware-defined. As illustrated in Figure 1a, these systems primarily relied on interconnected National Instruments (NI) chassis for control. This approach necessitates multiple chassis, resulting in high costs and system complexity (Zhang and Yang, 2009). Figure 1b depicts an improved multi-interface FPGA scheme, which implements various protocol interfaces via FPGA, offering lower costs and better scalability. However, such solutions often lack a unified on-chip interconnect architecture. Consequently, the integration of functional modules and software development must be customized for specific designs, limiting reusability. Moreover, the host interface in these schemes is frequently constrained by the bandwidth of buses like USB, making it difficult to meet the testing demands of modern high-throughput payloads (Zhao et al., 2025a).

Figure 1. Traditional ground test equipment systems. (a) Distributed testing architecture based on NI chassis; (b) Multi-interface FPGA-based testing scheme.

To address these issues, this paper proposes a software-defined Hardware-in-the-Loop (HIL) testing platform for aerospace payloads, based on a single-chip FPGA. This platform employs the high-performance AXI (Advanced eXtensible Interface) bus standard (as shown in Figure 2) to construct a unified System-on-Chip (SoC) architecture. In this architecture, the AXI bus serves as the high-speed internal interconnect backbone of the FPGA, facilitating low-latency communication between functional modules. Meanwhile, large-scale data interaction with the host computer is achieved through PCIe-based DMA channels. This design leverages the following advantages of the AXI bus:

Figure 2. The five-channel structure of the AXI protocol.

Data transmission efficiency: The AXI bus supports high-bandwidth burst transmission mechanisms, satisfying the real-time throughput requirements of massive aerospace payload data while ensuring rapid dissemination of control commands.

Modular design: The standardized interfaces defined by the AXI protocol allow for the uniform integration of heterogeneous modules (such as MIL-STD-1553B and LVDS protocol controllers, as well as data processing units), significantly enhancing system flexibility and maintainability.

Real-time performance: Benefiting from the characteristics of FPGA on-chip interconnection, the AXI bus supports nanosecond-level high-frequency data exchange, meeting the strict requirements for deterministic timing in hardware-in-the-loop testing.

Scalability: The standardized interconnect structure ensures excellent system scalability, allowing for the easy addition of new testing modules to adapt to evolving aerospace testing needs.
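To make the handshake underlying these properties concrete, the following toy C model illustrates the single rule that governs every channel in Figure 2: a beat transfers on any clock cycle in which VALID and READY are both asserted. It is an illustrative model only, not the platform's HDL, and the example signal traces are arbitrary.

```c
/* Toy cycle-level model of the AXI VALID/READY rule shown in Figure 2:
 * a beat transfers on any clock cycle in which both VALID and READY are
 * asserted. Illustrative only; the platform implements these channels
 * in FPGA logic. */
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    /* Arbitrary example traces for one channel over eight clock cycles. */
    bool valid[8] = { 0, 1, 1, 1, 0, 1, 1, 0 };
    bool ready[8] = { 1, 0, 1, 1, 1, 1, 0, 1 };

    int beats = 0;
    for (int cycle = 0; cycle < 8; cycle++) {
        if (valid[cycle] && ready[cycle]) {          /* handshake completes */
            beats++;
            printf("cycle %d: beat %d transferred\n", cycle, beats);
        }
    }
    printf("total beats: %d\n", beats);              /* 3 beats: cycles 2, 3, 5 */
    return 0;
}
```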

Based on the aforementioned design philosophy, the main contributions of this paper are summarized as follows:

1. Unified AXI bus SoC architecture: Unlike the heterogeneous and non-standard internal logic found in traditional test equipment, this design standardizes all interface simulation and control modules as AXI bus peripherals. By constructing a cohesive platform-based SoC within a single FPGA, hardware integration complexity and upper-layer software development difficulty are significantly reduced.

2. Software-defined dynamic reconfigurability: Based on a unified AXI address mapping, upper-layer software can dynamically adjust interface protocol parameters, data rates, and timing characteristics during runtime by reading and writing to the configuration registers of each module. This software-defined mode replaces the inefficient traditional approach of modifying HDL code and re-synthesizing the FPGA, enabling the test system to rapidly adapt to different test scenarios.

3. Single-chip integrated architecture for ultra-low latency simulation: This design integrates all simulation logic requiring high-speed real-time interaction into a single FPGA chip, with inter-module communication completed via high-speed on-chip routing. Compared to multi-module instrument architectures communicating via external physical buses (e.g., PXI), this scheme eliminates latencies introduced by bus arbitration and protocol overhead, achieving nanosecond-level (≤15 ns) deterministic response.

2 System top-level architecture design

To address the multiple challenges of high real-time performance, high flexibility, and high scalability in modern aerospace payload testing, this platform abandons traditional multi-board distributed solutions or functionally dispersed logic designs. Instead, it adopts an innovative System-on-Chip (SoC) architecture based on a single-chip FPGA with the AXI bus as its core (Bhoi, 2024). This architecture integrates all testing functions into a cohesive, unified digital system, enabling fine-grained control of testing behavior through a software-defined approach.

The top-level architecture of the platform is illustrated in Figure 3, consisting of three components: the Host Computer, the Hardware Platform, and the Payload Management Unit. The core of the hardware platform is a single-chip FPGA based on the Xilinx Kintex-7 XC7K325TFFG900, which connects to the Payload Management Unit through an External Circuit.

Figure 3. Top-level system architecture.

Within the FPGA, the Host Interface Subsystem serves as the sole AXI bus master, forming the communication bridge between the host computer and the internal AXI bus domain. This subsystem integrates three host communication interfaces: a PCIe endpoint, an Ethernet MAC (Guoteng et al., 2013), and a UART controller. A Protocol Conversion Unit translates heterogeneous external physical layer protocols into standard AXI bus transactions. The PCIe endpoint is implemented using the Xilinx XDMA IP core, providing high-bandwidth data transfer capability. The Configuration Control Module parses configuration commands issued by the host computer, while the Command Transmit Module and Packaging Module handle data framing and distribution (Montalvo et al., 2023).

The AXI interconnect employs a hybrid bus architecture: the AXI-Lite bus handles low-bandwidth register configuration and status reads, while the AXI-Stream bus provides point-to-point transmission for high-speed data between functional modules, with each data channel operating independently without mutual blocking. The interconnect module is implemented using the Xilinx AXI Interconnect IP core, employing a round-robin arbitration strategy for AXI-Lite transactions to ensure fair bus access among all modules.

All functional modules are mounted on the interconnect bus as AXI slave devices, including: aerospace bus interfaces (MIL-STD-1553B controller, CAN bus controller), high-speed data interfaces (LVDS controller, TLK2711 transceiver, fiber optic controller), general communication interfaces (UART controller), analog signal interfaces (ADC/DAC controller), discrete signal interfaces (OC/GPIO controller), as well as the Scientific Data Function Unit and Command Processing Unit. Each protocol adapter implements the protocol logic control layer within the FPGA (such as frame parsing, timing generation, and CRC verification), while the electrical conversion of physical layer signals is handled by dedicated transceiver chips in the external circuit. For example, the MIL-STD-1553B interface uses a standard-compliant bus transceiver, the LVDS interface uses differential driver/receiver chips, and the RS422 interface uses level conversion chips. This separation of protocol and physical layers ensures signal integrity while facilitating system maintenance and upgrades.

The host computer can access the system through any of the PCIe, Ethernet, or RS422 channels, performing memory-mapped read/write operations on any controller mapped to the AXI address space to achieve runtime configuration of interface parameters. This unified bus architecture not only simplifies hardware integration complexity but also provides convenience for modular system expansion.
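As a concrete illustration of this memory-mapped access path, the sketch below performs a runtime register write and read-back from the host over the PCIe channel. It assumes the AXI-Lite character device exposed by the Xilinx XDMA Linux driver (/dev/xdma0_user); the peripheral base address and register offsets are hypothetical placeholders rather than the platform's actual address map.

```c
/* Host-side runtime register access over the PCIe path (sketch).
 * Assumes the Xilinx XDMA driver's AXI-Lite character device
 * (/dev/xdma0_user); the peripheral base address and register
 * offsets below are hypothetical placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define LVDS_BASE    0x00040000u          /* hypothetical AXI-Lite base    */
#define LVDS_CLK_CFG (LVDS_BASE + 0x00)   /* hypothetical clock config reg */
#define LVDS_STATUS  (LVDS_BASE + 0x04)   /* hypothetical status register  */

static int axil_write32(int fd, uint32_t addr, uint32_t val)
{
    return pwrite(fd, &val, sizeof(val), addr) == (ssize_t)sizeof(val) ? 0 : -1;
}

static int axil_read32(int fd, uint32_t addr, uint32_t *val)
{
    return pread(fd, val, sizeof(*val), addr) == (ssize_t)sizeof(*val) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/xdma0_user", O_RDWR);
    if (fd < 0) { perror("open xdma user device"); return 1; }

    /* Reconfigure an interface parameter at runtime, then read back status. */
    uint32_t status = 0;
    if (axil_write32(fd, LVDS_CLK_CFG, 50) == 0 &&   /* e.g. 50 MHz output */
        axil_read32(fd, LVDS_STATUS, &status) == 0)
        printf("LVDS controller status: 0x%08X\n", status);

    close(fd);
    return 0;
}
```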

3 Multi-mode host interface subsystem design

Building upon the AXI-based software-defined aerospace payload testing platform introduced in the previous section, this design implements a multi-mode host interface subsystem to ensure broad compatibility with diverse host environments. This subsystem serves as the sole communication gateway between external hosts and the internal AXI SoC architecture, bridging multiple host interfaces with vastly different physical characteristics—high-performance PCIe, general-purpose Ethernet, and basic serial interfaces—into a unified internal AXI bus domain.

The internal architecture of this subsystem is illustrated in Figure 4, featuring three parallel full-function host paths, each terminating in an independent AXI master interface. The PCIe path is implemented using the Xilinx XDMA IP core, which directly translates host PCIe bus transactions into AXI bus transactions. This IP core provides two primary interfaces to the system: an AXI4 memory-mapped (MM) master interface dedicated to high-speed DMA data transfers between the host and DDR memory, and an AXI4-Lite master interface derived from PCIe BAR space mapping, which serves as the primary control bus for the entire FPGA system.

Figure 4. The subsystem’s internal architecture.

Unlike the PCIe path, the Ethernet and serial auxiliary paths require additional protocol conversion logic. As shown in Figure 4, each of these paths consists of a three-stage pipeline: a physical layer interface (Ethernet MAC or UART controller), a host protocol adapter, and a bus interface layer (AXI4 Master IP core). All master interfaces from the different paths ultimately connect to the slave ports of the AXI interconnect. This multi-master, multi-slave architecture allows access requests from different physical interfaces to be submitted concurrently to the system bus, where the AXI interconnect performs unified arbitration and routing, thereby achieving physical independence and logical parallelism across interfaces.

The host protocol adapter is the core module that enables the Ethernet and serial interface paths. Its responsibility is to parse unstructured data streams received from the physical layer—such as UDP payloads or serial byte streams—into structured internal bus commands, and to drive the AXI4 Master IP core to execute the corresponding read/write operations on the system bus. This module employs a finite state machine (FSM) design to ensure robust protocol parsing and deterministic timing. This section uses the serial (UART) path as an example to describe the FSM design and workflow in detail.

The command frame parsing FSM architecture is shown in Figure 5 and comprises the following six states:

Figure 5. The parsing state machine architecture of the instruction frame.

Idle (Idle state): The initial and reset state of the FSM. In this state, the FSM waits for and searches for a predefined frame synchronization sequence. Upon detecting a valid frame start signal, it transitions to the Start state.

Start (Synchronization state): In this state, the FSM completes reception and verification of the full synchronization code. If the synchronization code is correct, it transitions to the Header state to begin frame header parsing; otherwise, it returns to the Idle state to search for the next frame.

Header (Receive header state): The FSM sequentially receives and latches the command, address, and length fields of the instruction frame. If the header content is valid, the FSM transitions to either the Data or Checksum state depending on the command type; if the header is invalid, it returns to the Idle state.

Data (Receive data state): This state is entered only when executing a write command. The FSM receives the number of data bytes specified by the length field obtained in the Header state. Upon successful reception of all data bytes, it transitions to the Checksum state; if a timeout or error occurs, it returns to the Idle state.

Checksum (Checksum verification state): The FSM receives the checksum byte and compares it against a locally computed checksum based on the received data. If the checksums match, the frame is considered valid, and the FSM transitions to the Exe_AXI state; if verification fails, the entire frame is discarded and the FSM returns to the Idle state.

Exe_AXI (executing AXI transaction state): In this state, the FSM transfers the verified instruction information (address, data, and read/write type) to the AXI4 Master IP core and initiates the bus transaction. Upon completion, the FSM constructs a response frame and returns to the Idle state; if a bus error occurs, it logs the error status before returning to Idle.

Through this FSM-based structured design, the system achieves reliable and efficient protocol conversion with minimal resource overhead. This design offers excellent scalability—the Ethernet path protocol adapter employs the same FSM architecture, differing only in the physical layer interface and frame format definitions.
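The C sketch below mirrors the six-state parser described above for the UART path. The synchronization byte, header layout, and additive checksum are assumptions made for illustration; only the state structure and transitions follow the text.

```c
/* Sketch of the command-frame parsing FSM for the UART host path.
 * The sync byte, header layout (cmd + addr + length) and additive
 * checksum are illustrative assumptions; the state structure follows
 * the description above. */
#include <stdint.h>
#include <stdio.h>

typedef enum { ST_IDLE, ST_START, ST_HEADER, ST_DATA, ST_CHECKSUM, ST_EXE_AXI } state_t;

#define SYNC_BYTE 0xEBu   /* assumed two-byte synchronization code */
#define HDR_LEN   6       /* assumed: cmd(1) + addr(4) + len(1)    */
#define CMD_WRITE 0x01u   /* assumed write command code            */

typedef struct {
    state_t st;
    uint8_t hdr[HDR_LEN];
    uint8_t data[256];
    uint8_t idx, len, sum;
} parser_t;

/* Called once for every byte received from the UART controller. */
static void parser_step(parser_t *p, uint8_t byte)
{
    switch (p->st) {
    case ST_IDLE:                              /* hunt for frame start */
        if (byte == SYNC_BYTE) p->st = ST_START;
        break;
    case ST_START:                             /* verify full sync code */
        p->st = (byte == SYNC_BYTE) ? ST_HEADER : ST_IDLE;
        p->idx = 0; p->sum = 0;
        break;
    case ST_HEADER:                            /* latch cmd, addr, length */
        p->hdr[p->idx++] = byte; p->sum += byte;
        if (p->idx == HDR_LEN) {
            p->len = p->hdr[HDR_LEN - 1];
            p->idx = 0;
            /* write commands carry data; other commands go to checksum */
            p->st = (p->hdr[0] == CMD_WRITE && p->len) ? ST_DATA : ST_CHECKSUM;
        }
        break;
    case ST_DATA:                              /* collect payload bytes */
        p->data[p->idx++] = byte; p->sum += byte;
        if (p->idx == p->len) p->st = ST_CHECKSUM;
        break;
    case ST_CHECKSUM:                          /* validate the frame */
        p->st = (byte == p->sum) ? ST_EXE_AXI : ST_IDLE;
        break;
    default:
        break;
    }
}

/* Called every cycle; the EXE_AXI state consumes no input bytes. */
static void parser_poll(parser_t *p)
{
    if (p->st == ST_EXE_AXI) {
        /* Hand the verified address/data to the AXI4 master core here,
         * build the response frame, then return to Idle. */
        printf("frame accepted: executing AXI transaction\n");
        p->st = ST_IDLE;
    }
}

int main(void)
{
    /* Example write frame: sync(2), cmd, addr(4), len, data(2), checksum. */
    uint8_t frame[11] = { 0xEB, 0xEB, CMD_WRITE, 0x00, 0x00, 0x00, 0x10,
                          0x02, 0xAA, 0x55, 0x00 };
    uint8_t sum = 0;
    for (int i = 2; i < 10; i++) sum += frame[i];
    frame[10] = sum;                           /* patch in the checksum */

    parser_t p = { .st = ST_IDLE };
    for (int i = 0; i < 11; i++) { parser_step(&p, frame[i]); parser_poll(&p); }
    return 0;
}
```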

4 Design of software-defined controllers based on the AXI bus

One of the core design principles of this testing platform is to achieve “software-defined” functionality for all peripheral IP cores. The key lies in designing standardized AXI slave interfaces for each functional controller—whether for 1553B, CAN, or LVDS—exposing a set of readable and writable memory-mapped registers to the system bus. Through a unified address access mechanism, host software can dynamically configure and monitor the functional parameters, operating modes, and protocol behaviors of IP cores at runtime. This approach transforms otherwise fixed hardware logic into flexible functional modules that can be programmed via software.

To illustrate this design philosophy, this section presents a detailed analysis of the MIL-STD-1553B bus controller, the most technically complex and representative module in the system.

MIL-STD-1553B is a highly reliable, dual-redundant command/response serial bus widely deployed in aerospace applications (Zhang et al., 2024). Its data transmission architecture is depicted in Figure 6a. A standard 1553B network comprises one Bus Controller (BC) and multiple Remote Terminals (RTs), making simulation of this bus a critical requirement for aerospace payload testing.

Figure 6. 1553B bus architecture and design internal architecture. (a) The structure of 1553B Bus. (b) The internal architecture of the 1553B controller.

Figure 6b illustrates the internal architecture of the 1553B controller. Strictly adhering to the AXI protocol specification, this IP core employs separate control and data paths to achieve an optimal balance between high performance and flexibility.

Implemented on the AXI-Lite bus, the control path constitutes the foundation of the “software-defined” capability. External AXI masters initiate read/write transactions through the AXI-Lite interface to access configuration, status, and fault injection registers within the Register Block (D and P, 2023), with the complete register map presented in Table 1. By manipulating these registers, host software exercises full control over the controller’s operating mode (BC/RT/MT), RT address, response timeout threshold, and other runtime parameters. Additionally, the Fault Injection Register (FAULT_INJECT_REG) enables software-triggered injection of various fault modes, including parity errors, sync type errors, response delays, and message drops, providing fine-grained fault stimulus capability for white-box testing of the protocol stack.

Table 1. The specific control behaviors.
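To illustrate how such a register map is exercised in practice, the following host-side sketch configures the controller's operating mode, RT address, and response timeout, and then arms a combined fault through FAULT_INJECT_REG. The register offsets, bit assignments, and device node are hypothetical; only the register roles come from Table 1 and the text above.

```c
/* Host-side sketch: software-defined setup of the MIL-STD-1553B controller.
 * Offsets and bit assignments below are hypothetical; the register roles
 * (mode, RT address, response timeout, FAULT_INJECT_REG) follow Table 1. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define B1553_BASE          0x00020000u          /* hypothetical AXI-Lite base */
#define B1553_MODE_REG      (B1553_BASE + 0x00)  /* 0=BC, 1=RT, 2=MT (assumed) */
#define B1553_RT_ADDR_REG   (B1553_BASE + 0x04)
#define B1553_TIMEOUT_REG   (B1553_BASE + 0x08)  /* response timeout, in us    */
#define B1553_FAULT_INJ_REG (B1553_BASE + 0x0C)  /* FAULT_INJECT_REG           */

/* Hypothetical fault-select bits */
#define FAULT_PARITY_ERR    (1u << 0)
#define FAULT_SYNC_TYPE_ERR (1u << 1)
#define FAULT_RESP_DELAY    (1u << 2)
#define FAULT_MSG_DROP      (1u << 3)

static void reg_write(int fd, uint32_t addr, uint32_t val)
{
    (void)pwrite(fd, &val, sizeof(val), addr);
}

int main(void)
{
    int fd = open("/dev/xdma0_user", O_RDWR);    /* XDMA AXI-Lite device */
    if (fd < 0) return 1;

    reg_write(fd, B1553_MODE_REG,    1);         /* act as Remote Terminal */
    reg_write(fd, B1553_RT_ADDR_REG, 5);         /* RT address 5           */
    reg_write(fd, B1553_TIMEOUT_REG, 14);        /* 14 us response timeout */

    /* Arm a combined fault: parity error plus delayed response. */
    reg_write(fd, B1553_FAULT_INJ_REG, FAULT_PARITY_ERR | FAULT_RESP_DELAY);

    close(fd);
    return 0;
}
```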

The data path is implemented on the AXI-Stream bus and is dedicated to efficient transfer of high-volume message data. When pre-loading transmit data for BC mode or retrieving received data from RT mode, the host can conduct high-speed streaming data exchange with the IP core’s AXI-Stream interface through the system’s DMA engine (Wang et al., 2024). Transmitted and received data are stored in and retrieved from the Shared RAM via the RAM Arbiter.

Comprising the BC module and RT module, the controller’s core protocol engine retrieves instructions and data from Shared RAM based on register configuration, drives the Channel Mux to select the A/B redundant channel, and controls the encoder/decoder to perform Manchester encoding and decoding. Electrical conversion of physical layer signals is handled by an external MIL-STD-1553B transceiver, while an independent Timer Module provides precise timeout determination for the protocol engine (A G and V, 2022).

5 Design of high-speed data path based on AXI-Stream

Modern aerospace payloads, such as high-resolution imagers and multi-channel radars, exhibit high bandwidth and bursty data characteristics. To support real-time simulation and readback analysis (Eun et al., 2018), the platform must address three critical challenges: resolving bandwidth bottlenecks during concurrent multi-interface operation, decoupling real-time peripheral data from non-real-time host processing, and ensuring resource efficiency.

As illustrated in Figure 7, the proposed high-speed data path is anchored by an AXI Direct Memory Access (DMA) engine and high-bandwidth DDR memory. This architecture leverages the AXI-Stream protocol for peripheral interfacing, utilizing its inherent backpressure flow control (TVALID/TREADY handshaking) to seamlessly bridge modules operating at heterogeneous clock rates. The AXI DMA controller acts as the primary transfer engine, linking the streaming domain with the memory-mapped (AXI-MM) domain via two independent channels: S2MM (Stream-to-Memory-Mapped) for data acquisition and MM2S (Memory-Mapped-to-Stream) for data transmission. Functionally, the host orchestrates these transfers by configuring channel descriptors. For data readback, the source IP streams payload data to the DMA, which automatically packs it into burst transactions for storage in DDR; conversely, for transmission, the DMA fetches staged data from DDR and converts it to the AXI-Stream format for physical layer encoding (Sivaranjani et al., 2024).

Figure 7. AXI-Stream based high-speed data path architecture.

To manage bandwidth contention when multiple independent streams concurrently access the shared DDR resource, the system implements a Threshold-Triggered Round-Robin Arbitration strategy. At the physical ingress level, independent asynchronous FIFOs are instantiated for each channel to perform clock-domain-crossing (CDC) isolation between the peripheral and system bus clocks, while serving as buffers to absorb data during arbitration latency (Jiang et al., 2022). Building on this, the DMA transfer logic operates in an “on-demand trigger” mode: a bus request is asserted only when the local buffer level (Li) reaches a programmable watermark threshold (Tth) aligned with the maximum burst length. This mechanism ensures that all memory interactions are executed as efficient AXI burst transactions, significantly amortizing the command overhead associated with DDR row switching. Finally, at the interconnect convergence point, a central arbiter executes cyclic scheduling based on channel indices, ensuring equitable bandwidth allocation and preventing channel starvation under full load (Zhang et al., 2014; Zhao et al., 2025b). The detailed coordination logic is illustrated in Algorithm 1.

Algorithm 1. Threshold-triggered round-robin arbitration and DMA transfer coordination logic.
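Since the algorithm listing is reproduced only as a figure in the published version, the C sketch below restates the threshold-triggered round-robin policy in software form: a channel requests a DMA burst only once its FIFO fill level reaches the burst-aligned watermark, and the arbiter grants requesting channels cyclically starting after the last grant. Channel count, watermark, and fill levels are illustrative values.

```c
/* Sketch of the threshold-triggered round-robin arbitration from Section 5.
 * A channel asserts a DMA request only when its FIFO fill level (Li) reaches
 * the watermark (Tth), aligned to the maximum burst length; the arbiter then
 * grants requesting channels in cyclic order starting after the last grant.
 * Channel count and thresholds are illustrative values. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CH 4
#define T_TH   256                  /* watermark = max AXI burst length (beats) */

typedef struct {
    uint32_t level;                 /* current FIFO fill level Li (beats) */
} channel_t;

/* Returns the index of the granted channel, or -1 if no channel qualifies. */
static int arbitrate(const channel_t ch[NUM_CH], int last_grant)
{
    for (int k = 1; k <= NUM_CH; k++) {
        int i = (last_grant + k) % NUM_CH;      /* round-robin scan order */
        if (ch[i].level >= T_TH)                /* on-demand trigger      */
            return i;
    }
    return -1;
}

int main(void)
{
    channel_t ch[NUM_CH] = { {300}, {10}, {512}, {260} };
    int last = 0;

    /* Service qualifying channels as full bursts until none remain. */
    int grant;
    while ((grant = arbitrate(ch, last)) >= 0) {
        ch[grant].level -= T_TH;                /* one burst of T_TH beats moved */
        printf("granted channel %d, remaining level %u\n", grant, ch[grant].level);
        last = grant;
    }
    return 0;
}
```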

6 Experiment and discussion

6.1 Experimental setup

A comprehensive suite of quantitative experiments was conducted to validate the functionality, performance, and flexibility of the proposed software-defined testing platform. The system is implemented on a Xilinx Kintex-7 XC7K325TFFG900 FPGA, with the resource utilization details summarized in Table 2.

Table 2. FPGA resource utilization summary.

External signal measurements utilized a Keysight DSO-X 3034A digital storage oscilloscope (350 MHz bandwidth). Simultaneously, internal FPGA signals were captured and analyzed via the Xilinx Integrated Logic Analyzer (ILA) IP core operating at a 200 MHz sampling rate, providing a temporal resolution of 5 ns.

System orchestration, including test control, data logging, and result analysis, was managed by a unified host software application. The host interfaces with the FPGA platform via a high-speed PCIe Gen2 x8 link, ensuring real-time parameter configuration and efficient data exchange. The complete hardware setup of the experimental platform is illustrated in Figure 8.

Figure 8. Hardware interconnection diagram.

6.2 Multi-interface functional and precision verification

To verify the functional correctness and parameter accuracy of the platform’s key interfaces, tests were conducted on discrete control (OC), serial communication (RS422), and high-speed data (LVDS) interfaces across a wide range of operating conditions. Multiple measurements were performed for each condition to ensure repeatability, with results summarized in Table 3.

Table 3. Experimental results of key interface performance.

OC interface testing covered pulse widths from 10 μs to 500 ms, with relative errors below 0.5% across all conditions.

RS422 interface testing covered baud rates from 9.6 kbps to 2 Mbps, with relative errors below 0.1%. LVDS interface testing covered clock frequencies from 10 MHz to 100 MHz, with relative errors below 0.01%.

Collectively, these quantitative results demonstrate that the platform maintains high-precision output across a broad spectrum of parameter configurations. The consistently low relative errors observed across diverse protocols confirm the stability and accuracy of the underlying software-defined timing logic.

6.3 End-to-end latency statistical analysis

Latency analysis involved 15 separate trials for each of the 10 OC channels using the Xilinx ILA. With a sampling clock of 200 MHz, the setup achieved a temporal resolution of 5 ns. The total propagation delay was defined as the pin-to-pin interval between the trigger pulse arrival at the FPGA input and the response pulse generation at the output.

As illustrated in Figure 9, the measured latencies exhibited a discrete distribution corresponding to 10 ns (2 clock cycles), 15 ns (3 clock cycles), and 20 ns (4 clock cycles). The majority of measurements fell within the 10–15 ns range, with sporadic instances of 20 ns observed on channels CH2, CH5, and CH8. The average latency per channel ranged from 12.7 ns to 14.0 ns, resulting in a global average of 13.2 ns (with a quantization uncertainty of ±5 ns). These results confirm that the platform maintains a consistent nanosecond-level deterministic response, satisfying the stringent real-time constraints of aerospace payload testing.

Figure 9. Histogram of signal propagation delay measurements for multi-channel OC interfaces.

6.4 PCIe DMA throughput statistical analysis

To comprehensively evaluate the stability of the high-speed data path, prolonged measurements of the sustained DMA transfer rate from the FPGA-onboard DDR memory to the host memory were conducted. The tests utilized the Xilinx XDMA driver with a transfer block size of 16 MB, operating under a PCIe Gen2 ×8 configuration.

Figure 10 presents the statistical results from 300 data points collected over a 60-s continuous interval. The system exhibited highly stable performance, achieving a mean throughput of 3.62 GB/s with a standard deviation of approximately 0.01 GB/s (less than 0.3% of the mean). The throughput histogram indicates a tight distribution around the mean, with sporadic transient dips attributable to host operating system scheduling overhead and PCIe link management events.

Figure 10. Real-time measurement results of FPGA-to-Host DMA data transfer rate.

The measured sustained throughput of 3.62 GB/s corresponds to 90.5% of the theoretical bandwidth for PCIe Gen2 x8 (4.0 GB/s, accounting for 8b/10b encoding).

These results validate the platform’s capability to maintain reliable, high-efficiency data transfer under continuous load, fully satisfying the bandwidth requirements for multi-channel aerospace payload data acquisition.
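A minimal host-side sketch of this measurement methodology is shown below. It assumes the card-to-host stream device of the Xilinx XDMA driver (/dev/xdma0_c2h_0) and simply times repeated 16 MB reads; the theoretical-bandwidth arithmetic behind the 90.5% efficiency figure is spelled out in the comments.

```c
/* Sketch of the sustained DMA throughput measurement (Section 6.4).
 * Assumes the XDMA card-to-host device /dev/xdma0_c2h_0; the block size
 * matches the 16 MB used in the experiment.
 * Theoretical PCIe Gen2 x8 bandwidth: 8 lanes * 5 GT/s * 0.8 (8b/10b)
 * = 32 Gb/s = 4.0 GB/s, so 3.62 GB/s corresponds to ~90.5% efficiency. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE (16u * 1024 * 1024)   /* 16 MB per DMA transfer */
#define N_BLOCKS   64

int main(void)
{
    int fd = open("/dev/xdma0_c2h_0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(BLOCK_SIZE);
    if (!buf) return 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N_BLOCKS; i++) {
        if (pread(fd, buf, BLOCK_SIZE, 0) != (ssize_t)BLOCK_SIZE) {
            perror("pread");
            break;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbps = (double)N_BLOCKS * BLOCK_SIZE / secs / 1e9;
    printf("sustained throughput: %.2f GB/s (%.1f%% of 4.0 GB/s)\n",
           gbps, 100.0 * gbps / 4.0);

    free(buf);
    close(fd);
    return 0;
}
```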

6.5 Runtime reconfigurability verification

To validate the software-defined capability of the platform, dynamic configuration experiments were conducted on communication and discrete control interfaces without FPGA reprogramming.

For the RS422 interface, the baud rate was initially configured to 115,200 bps via the AXI-Lite configuration register. During continuous data transmission, a reconfiguration command was issued to update the baud rate to 921,600 bps. Figures 11a,b show the captured waveforms before and after the transition. The bit period changed from 8.68 μs to 1.09 μs, corresponding to the expected 8× frequency increase. No framing errors or data corruption were observed during the switching process.

Figure 11. Verification of runtime reconfigurability for RS422 and OC interfaces. (a) RS422 waveform at 115.2 kbps; (b) RS422 waveform at 921.6 kbps; (c) OC output pulse width of 80 ms; (d) OC output pulse width of 100 ms.

Similarly, the OC output controller was tested for timing parameter reconfiguration. The pulse width parameter was dynamically modified from 80 ms to 100 ms while the system remained operational. As shown in Figures 11c,d, the output signal immediately exhibited the new timing characteristics in the subsequent pulse cycle, with no glitches or transient anomalies.

These experiments confirm that the AXI-based SoC architecture enables flexible runtime adjustment of interface parameters through standardized register access, fulfilling the requirements for dynamic test scenario adaptation.
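The sketch below shows what such a runtime reconfiguration sequence can look like from the host side, using the RS422 baud-rate switch as the example. The divisor-style baud register and its offset are assumptions (a common UART implementation derives the baud rate as f_clk/divisor); only the 115,200 to 921,600 bps switch and the register-write mechanism come from the experiment.

```c
/* Sketch of the runtime RS422 reconfiguration in Section 6.5.
 * Assumes a divisor-style baud register (baud = f_clk / divisor) and a
 * hypothetical AXI-Lite offset; only the 115200 -> 921600 bps switch and
 * the register-write mechanism come from the text. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define SYS_CLK_HZ     100000000u            /* assumed 100 MHz AXI clock */
#define RS422_BAUD_DIV 0x00030010u           /* hypothetical register     */

static void set_baud(int fd, uint32_t baud)
{
    uint32_t div = SYS_CLK_HZ / baud;        /* 115200 -> 868, 921600 -> 108 */
    (void)pwrite(fd, &div, sizeof(div), RS422_BAUD_DIV);
}

int main(void)
{
    int fd = open("/dev/xdma0_user", O_RDWR); /* AXI-Lite access via XDMA */
    if (fd < 0) return 1;

    set_baud(fd, 115200);                     /* bit period ~8.68 us */
    /* ... traffic runs continuously; no restart or reprogramming ... */
    set_baud(fd, 921600);                     /* bit period ~1.09 us */

    close(fd);
    return 0;
}
```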

6.6 Fault injection capability verification

Beyond functional correctness, the precision and independence of the white-box fault injection mechanism were quantitatively characterized. This evaluation assesses the system’s ability to simulate deterministic anomalies without introducing unintended artifacts. The experimental results are summarized in Table 4.

Table 4. Quantitative characterization of fault injection performance.

6.6.1 Latency injection precision

The timing granularity of the fault injection module was evaluated by sweeping the configurable delay parameter across a dynamic range of 0 μs to 10 ms with a step size of 100 μs. As shown in Table 4, the system demonstrated a deterministic resolution of 5 ns, corresponding to exactly one clock cycle at the 200 MHz system frequency. Verification across the full sweep range confirmed a logic error of exactly 0 cycles, proving that the system maintains strict cycle-accurate control without accumulated jitter or drift during long-delay insertion.
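The cycle-accurate mapping between a requested delay and the 200 MHz system clock can be expressed directly, as in the short sketch below; the sweep range matches the experiment, while the conversion helper itself is illustrative.

```c
/* Converting a requested injection delay into 200 MHz clock cycles
 * (5 ns per cycle); the helper is illustrative, not the platform's API. */
#include <stdint.h>
#include <stdio.h>

#define SYS_CLK_MHZ 200u                        /* 1 cycle = 5 ns */

static uint32_t delay_us_to_cycles(uint32_t delay_us)
{
    return delay_us * SYS_CLK_MHZ;              /* e.g. 100 us -> 20,000 cycles */
}

int main(void)
{
    for (uint32_t d = 0; d <= 10000; d += 100)  /* sweep 0 us .. 10 ms, step 100 us */
        printf("delay %5u us -> %8u cycles\n", d, delay_us_to_cycles(d));
    return 0;
}
```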

6.6.2 Bit-level masking granularity

To evaluate the isolation capability of the error injection logic, a standard “Walking-1” pattern was applied to the transmission data, sequentially targeting individual bit indices (Bit 0 through Bit 7) for inversion. A stress test of 10,000 cycles was executed to verify stability. The system achieved 100% bit flip accuracy on the target bits. Throughout the test sequence, zero adjacent errors were recorded, validating that the masking logic achieves precise bit-level addressability with no crosstalk to non-target data fields.
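A minimal sketch of the walking-1 check is given below: each target bit of an example transmit byte is inverted in turn, and the observed difference must equal exactly the injected mask for the test to count zero adjacent errors. The byte value and check are illustrative.

```c
/* Sketch of the "Walking-1" bit-masking check from Section 6.6.2:
 * each target bit of the transmitted byte is inverted in turn, and the
 * receiver must observe exactly that single-bit difference. Illustrative. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t tx = 0x5A;                            /* example transmit byte */

    for (int bit = 0; bit < 8; bit++) {
        uint8_t mask     = (uint8_t)(1u << bit);  /* walking-1 pattern   */
        uint8_t injected = tx ^ mask;             /* flip the target bit */

        uint8_t diff = injected ^ tx;             /* observed error bits */
        printf("bit %d: injected 0x%02X, adjacent errors: %s\n",
               bit, injected, (diff == mask) ? "none" : "DETECTED");
    }
    return 0;
}
```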

6.6.3 Concurrent fault capabilities

The system’s resilience under complex failure conditions was tested by superimposing multiple fault types. A combined scenario triggering both a Parity Error and a 500 μs Transmission Delay was executed over a sample set of 10,000 frames. The receiver successfully identified both the error flag and the timing anomaly in 100% of the samples. Furthermore, the internal processing overhead introduced by this concurrent logic was measured at a fixed deterministic latency of 10 ns. This result confirms the functional independence of the modules, ensuring that timing manipulations do not interfere with data integrity logic.

6.7 Comparison with existing platforms

To validate the architectural advantages, the proposed platform was compared against two representative paradigms identified in Section 1: the Standard COTS PXIe Chassis (represented by models like the JYTEK PXIe-2306) and the Conventional USB-FPGA Scheme (referring to the architecture in Figure 1b, represented by typical USB adapters like AltaDT). As summarized in Table 5, commercial PXIe chassis typically offer a theoretical 4 GB/s shared bandwidth, yet the effective per-slot throughput is restricted by the PCIe x4 interface and backplane contention. The USB-FPGA scheme is further constrained by serial protocol overhead, typically capping effective data rates below 400 MB/s. In contrast, the proposed platform utilizes a dedicated PCIe Gen2 x8 interface to deliver a sustained, stable throughput of 3.62 GB/s. Regarding real-time performance, both the remote-controlled PXIe system (exceeding 10 μs latency due to OS scheduling) and USB-based solutions (exceeding 100 μs due to host polling) fail to meet the stringent timing requirements of aerospace HIL testing. The proposed AXI SoC architecture eliminates these software bottlenecks, processing protocols directly in hardware to achieve a deterministic response of 15 ns with deep logic-level fault injection capabilities.

Table 5. Performance comparison with existing platforms.

6.8 Architectural extensibility

The preceding comparison demonstrates that this platform offers distinct advantages over existing solutions in terms of latency, throughput, and fault injection depth. These architectural attributes not only fulfill general aerospace payload testing requirements, but also exhibit significant application potential in high-precision scientific domains such as astronomical instrumentation, where timing determinism and interface flexibility are critically demanded.

6.8.1 Ground verification for space astronomical payloads

Space astronomy missions impose stringent requirements on ground testing. For instance, the High-Energy X-ray Telescope on Insight-HXMT requires verification of microsecond-level photon arrival timing (Zhang et al., 2020); the Lobster Eye Imager on the Einstein Probe (EP) necessitates testing of high-speed readout and transient source trigger responses for wide-field imaging (Yuan et al., 2025); and the multi-payload coordination on the Advanced Space-based Solar Observatory (ASO-S) demands precise multi-channel synchronization (Gan et al., 2023). This platform, with its demonstrated 15 ns deterministic response latency (Section 6.3), 3.62 GB/s sustained throughput (Section 6.4), and deep fault injection capabilities (supporting bit flips, timing anomalies, and message drops), provides a comprehensive Hardware-in-the-Loop (HIL) verification environment for such high-sensitivity payloads.

6.8.2 Architectural reference for astronomical digital backends

The design philosophy of this platform offers a valuable reference for astronomical digital backends. In radio astronomy, pulsar timing observations require nanosecond-level time resolution, while interferometric arrays demand deterministic synchronization across baselines. In optical astronomy, the wavefront correction loops of adaptive optics systems are highly sensitive to control latency, and transient capture in high-speed photometers relies on rapid trigger mechanisms. The unified AXI bus architecture, software-defined runtime reconfigurability, and the “Threshold-Triggered Round-Robin Arbitration” strategy (Section 5) presented in this work offer a scalable, on-chip integrated solution for these scenarios.

6.8.3 Resource efficiency for space platforms

For space astronomical missions strictly constrained by SWaP (Size, Weight, and Power), hardware resource efficiency is critical. As shown in Table 2, the proposed platform utilizes less than 26% of the FPGA logic resources while implementing full functionality. This indicates that the core communication and control architecture can be directly migrated to spaceborne computing platforms (e.g., CubeSat controllers), leaving ample logic headroom for on-orbit scientific data preprocessing algorithms, such as event filtering, image compression, and light curve extraction.

7 Conclusion

This paper presents a software-defined hardware-in-the-loop (HIL) testing platform utilizing a unified AXI SoC architecture, designed to address the challenges of real-time determinism and interface adaptability in aerospace payload verification. By integrating functional modules within a single-chip FPGA, the proposed design reduces the non-deterministic latencies and system overheads typically associated with distributed multi-board architectures. Experimental results demonstrate that the platform achieves a deterministic loop response of ≤15 ns and a sustained DMA throughput of 3.62 GB/s, meeting the timing and bandwidth constraints required for high-precision payload simulation. Additionally, the software-defined framework facilitates runtime parameter reconfiguration without hardware re-synthesis and enables deep logic-level fault injection, providing fine-grained control that is often limited in traditional “black-box” instruments. Beyond aerospace payload testing, the architectural attributes demonstrated in this work also hold application value in astronomical instrumentation. The nanosecond-level deterministic timing supports precise synchronization required for pulsar timing and interferometric observations, the high-throughput data path accommodates the data rates of modern wide-field imagers and multi-channel spectrometers, and the resource-efficient single-chip architecture offers a viable pathway for developing compact digital backends in both ground-based telescope systems and resource-constrained space instruments.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

GY: Investigation, Visualization, Resources, Validation, Funding acquisition, Supervision, Formal Analysis, Conceptualization, Writing – review and editing, Writing – original draft, Data curation, Methodology, Project administration, Software. WeZ: Writing – review and editing, Resources, Supervision. JR: Supervision, Writing – review and editing. WaZ: Writing – review and editing. JK: Writing – review and editing. YX: Supervision, Writing – review and editing. MM: Software, Writing – review and editing. JD: Writing – review and editing, Software. XL: Validation, Writing – review and editing. YZ: Writing – review and editing. JA: Writing – review and editing. LW: Writing – review and editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the National Key Research and Development Program of China under Grant 2022YFF0503903.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

A G, T., and V, K. (2022). Testing of AMBA AXI protocol. Int. J. Res. Rev. 9 (11), 86–90. doi:10.52403/ijrr.20221114

Bhoi, B. K. (2024). Design of a bus communication architecture using AXI protocol based SoC systems. Int. J. Res. Appl. Sci. Eng. Technol. 12 (6), 355–373. doi:10.22214/ijraset.2024.63078

D, P., and P, D. S. (2023). High secure data transactions in AXI system bus using firewall protection. Int. J. Res. Appl. Sci. Eng. Technol. 11 (8), 1388–1397. doi:10.22214/ijraset.2023.55366

Eun, Y., Park, S.-Y., and Kim, G.-N. (2018). Development of a hardware-in-the-loop testbed to demonstrate multiple spacecraft operations in proximity. Acta Astronaut. 147, 48–58. doi:10.1016/j.actaastro.2018.03.030

Gan, W., Zhu, C., Deng, Y., Zhang, Z., Chen, B., Huang, Y., et al. (2023). The advanced space-based solar observatory (ASO-S). Sol. Phys. 298, 68. doi:10.1007/s11207-023-02166-x

Guoteng, P., Li, L., Guodong, O., Qingchao, F., and Han, B. (2013). “Design and verification of a MAC controller based on AXI bus,” in 2013 third international conference on intelligent system design and engineering applications, 558–562. doi:10.1109/isdea.2012.136

Jiang, Z., Yang, K., Fisher, N., Gray, I., Audsley, N. C., and Dong, Z. (2022). AXI-IC: towards a real-time AXI-interconnect for highly integrated SoCs. IEEE Trans. Comput. 72 (3), 786–799. doi:10.1109/TC.2022.3179227

Montalvo, A., Polo, Ó. R., Parra, P., Carrasco, A., da Silva, A., Martínez, A., et al. (2023). Model-driven engineering for low-code ground support equipment configuration and automatic test procedures definition. Acta Astronaut. 211, 574–591. doi:10.1016/j.actaastro.2023.06.027

Sivaranjani, P., Sasikala, S., Lavanya, A., and Keerthana, M. (2024). Design and verification of low latency AMBA AXI4 and ACE protocol for On-Chip peripheral communication. Wirel. Personal. Commun. 136 (3), 1811–1824. doi:10.1007/s11277-024-11362-2

Wang, W., He, C., and Shi, J. (2024). A secure SoC architecture design with dual DMA controllers. AIP Adv. 14 (2), 025125. doi:10.1063/5.0195148

Yuan, W., Dai, L., Feng, H., Jin, C., Jonker, P., Kuulkers, E., et al. (2025). Science objectives of the einstein probe mission. Sci. China Phys. Mech. Astronomy 68 (3), 239501. doi:10.1007/s11433-024-2600-3

Zhang, Y., and Yang, X. (2009). “Research of auto test system of integrative measurement controller based on PXI bus,” in 2009 9th international conference on electronic measurement and instruments. doi:10.1109/ICEMI.2009.5274829

Zhang, Y., Zhao, J. H., and Li, F. (2014). Design of automatic test system on flight control electronics set based on PXI bus. Adv. Mater. Res. 898, 855–858. doi:10.4028/www.scientific.net/AMR.898.855

Zhang, S., Li, T., Lu, F., Song, L., Xu, Y., Liu, C., et al. (2020). Overview to the hard X-ray modulation telescope (Insight-HXMT) satellite. Sci. China Phys. Mech. Astronomy 63 (4), 249502. doi:10.1007/s11433-019-1432-6

Zhang, Y., Yang, Y., Zhang, Y., Wu, L., and Guo, Z. (2024). Research on triplex redundant flight control system based on M1394B bus. Aerospace 11 (11), 909. doi:10.3390/aerospace11110909

Zhao, G., Wang, B., and Peng, Y. (2015). “Development of ground test equipment for autonomous system of deep space explorer,” in 2015 12th IEEE international conference on electronic measurement and instruments (ICEMI), 90–94. doi:10.1109/icemi.2015.7494210

Zhao, W., Rao, J., Hao, C., Jia, W., Dong, Z., Ma, X., et al. (2025a). A universal ground test equipment design for the Chang’e series spacecraft. Front. Astronomy Space Sci. 12, 1594802. doi:10.3389/fspas.2025.1594802

Zhao, W., Rao, J., Ma, M., Zhang, J., Li, S., Jia, W., et al. (2025b). A novel space intelligent computing and data processing architecture--the spacecraft payload health management unit (SPHMU). Front. Astronomy Space Sci. 12, 1657487. doi:10.3389/fspas.2025.1657487

Keywords: AXI (advanced eXtensible interface), deep space exploration, ground testing system, modular design, software defined aerospace payload testing platform

Citation: Yin G, Zhao W, Rao J, Zhang W, Kang J, Xie Y, Ma M, Du J, Liu X, Zhu Y, An J and Wang L (2026) A software defined aerospace payload testing platform based on AXI bus. Front. Astron. Space Sci. 13:1721412. doi: 10.3389/fspas.2026.1721412

Received: 09 October 2025; Accepted: 05 January 2026;
Published: 27 January 2026.

Edited by:

Massimiliano Galeazzi, University of Miami, United States

Reviewed by:

Mugundhan Vijayaraghavan, Indian Institute of Technology Kanpur, India
Harsha Avinash Tanti, Indian Institute of Technology Indore, India
Hao Chen, Xi’an University of Technology, China

Copyright © 2026 Yin, Zhao, Rao, Zhang, Kang, Xie, Ma, Du, Liu, Zhu, An and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Wenjie Zhao, zwjok123@126.com; Jianing Rao, Raojianing@nssc.ac.cn
