
ORIGINAL RESEARCH article

Front. Robot. AI, 16 December 2025

Sec. Field Robotics

Volume 12 - 2025 | https://doi.org/10.3389/frobt.2025.1694952

This article is part of the Research Topic "AI and Robotics for Smart Agriculture".

Custom UAV with model predictive control for autonomous static and dynamic trajectory tracking in agricultural fields

  • 1 Department of Biological Systems Engineering, School of Computing, University of Nebraska–Lincoln, Lincoln, NE, United States
  • 2 Department of Biological Systems Engineering, University of Nebraska–Lincoln, Lincoln, NE, United States
  • 3 School of Computing, University of Nebraska–Lincoln, Lincoln, NE, United States

Introduction: This study introduces a custom-built uncrewed aerial vehicle (UAV) designed for precision agriculture, emphasizing modularity, adaptability, and affordability. Unlike commercial UAVs restricted by proprietary systems, this platform offers full customization and advanced autonomy capabilities.

Methods: The UAV integrates a Cube Blue flight controller for low-level control with a Raspberry Pi 4 companion computer that runs a Model Predictive Control (MPC) algorithm for high-level trajectory optimization. Instead of conventional PID controllers, this work adopts an optimal control strategy using MPC. The system also incorporates Kalman filtering to enable adaptive mission planning and real-time coordination with a moving uncrewed ground vehicle (UGV). Testing was performed in both simulation and outdoor field environments, covering static and dynamic waypoint tracking as well as complex trajectories.

Results: The UAV performed figure-eight, curved, and wind-disturbed trajectories with root mean square error values consistently between 8 and 20 cm during autonomous operations, with slightly higher errors in more complex trajectories. The system successfully followed a moving UGV along nonlinear, curved paths.

Discussion: These results demonstrate that the proposed UAV platform is capable of precise autonomous navigation and real-time coordination, confirming its suitability for real-world agricultural applications and offering a flexible alternative to commercial UAV systems.

1 Introduction

Uncrewed Aerial Vehicles (UAVs) (Austin, 2011; Gundlach, 2014; Garg, 2021) have become transformative tools in agriculture, reshaping traditional practices through enhanced data collection, targeted input application, and improved efficiency. As agriculture shifts toward data-driven decision-making, UAVs play a critical role in remote sensing by providing high-resolution spatial and temporal data for assessing crop health, field variability, and resource management (Zhang and Zhu, 2023). Their ability to rapidly survey large areas enables near-real-time monitoring of vegetation indices, soil conditions, and plant development stages.

Within precision agriculture, UAVs facilitate site-specific management by enabling the precise delivery of water, fertilizers, and agrochemicals. This targeted approach minimizes resource waste, reduces environmental impact, and improves crop productivity. UAVs are also used extensively for crop scouting, plant classification, and monitoring of agronomic indicators such as biomass, canopy cover, and chlorophyll levels (Bendig et al., 2014; Kalischuk et al., 2019). When equipped with multispectral, thermal, or hyperspectral sensors, UAVs can detect early signs of water stress, pest infestations, weed intrusion, and plant diseases (Zhang et al., 2019; Gašparović et al., 2020), enabling timely interventions that reduce potential yield loss. The integration of advanced technologies, such as machine learning, deep learning (Muvva et al., 2021), and the Internet of Things (IoT), has further expanded UAV capabilities by enabling adaptive mission planning, automated decision-making, and seamless connection with farm management systems (Rejeb et al., 2022). Recent work even demonstrates autonomous landing on static and moving platforms (Muvva et al., 2022), underscoring the potential of cooperative aerial–ground systems.

Beyond the functionality of individual aerial platforms, the integration of UAVs with Uncrewed Ground Vehicles (UGVs) establishes a cooperative system that exploits the complementary capabilities of both agents. While UAVs deliver rapid, large-scale sensing and can also support repetitive tasks such as payload refilling or relay operations more efficiently, UGVs perform high-precision tasks on the ground, including targeted spraying, soil sampling, and mechanical weeding. Extensive research has focused on developing frameworks and coordination strategies for such UAV–UGV collaboration. Ou et al. (2024) developed a decentralized UAV–UGV collaboration framework that integrates an information consensus filtering approach with CBF–CLF control principles, enhancing cooperative localization accuracy and operational safety. Rao et al. (2025) presented a cooperative localization strategy that fuses deep learning–based object detection with Kalman filtering, achieving sub-meter positioning accuracy for UAV–UGV teams even in conditions where GNSS performance is limited. However, realizing the full potential of such UAV–UGV cooperative systems requires flexible and research-oriented aerial platforms with capabilities often constrained in commercial solutions.

Operating UAVs in agricultural fields is difficult due to strong winds, uneven terrain, and crop canopy effects that destabilize flight. This demands adaptive controllers that respond to disturbances instead of relying on fixed gains. Simulation is crucial for safe, repeatable testing before deployment. For example, Ahmed and Xiong (2024) proposed an adaptive impedance control method in the CoppeliaSim simulator, adjusting parameters such as stiffness in real time to improve disturbance rejection. Their tests across various trajectories and motion speeds showed better performance than PID and MRAC controllers. Similarly, Goldschmid and Ahmad (2025) demonstrate in simulation that a quadratic MPC significantly outperforms traditional PID-based controllers in step-response, circular, figure-eight, and obstacle-avoidance trajectory experiments on the Parrot Anafi platform, achieving lower tracking errors and smoother control performance. However, simulation alone is not enough: field experiments are needed to capture real environmental effects and validate system performance. Bridging the gap between controlled simulation and complex field conditions often requires hardware-level customization and flexible platforms, which commercial UAVs struggle to provide.

Despite these advances, commercial UAVs (e.g., DJI Inspire 2, Phantom series, Matrice platforms) present limitations for research and specialized applications (Seidaliyeva et al., 2024). While they offer robustness and ease of operation, their proprietary software and hardware ecosystems restrict low-level control access (Mohsan et al., 2023), payload customization, and algorithmic flexibility, capabilities essential for academic research and the development of novel autonomy frameworks (Yang et al., 2023). Furthermore, the rapid evolution of commercial UAV ecosystems, with frequent hardware revisions, API changes, and discontinued support, creates challenges for long-term research and system integration (Nex et al., 2022). Oguz et al. (2024) developed a streamlined, open-source UAV framework for swarm robotics research, demonstrating effective performance in controlled indoor environments. Their work, while valuable for establishing a reproducible experimental baseline, remains limited to laboratory conditions with constrained environmental variability. In contrast, our research extends this framework to real-world outdoor scenarios, addressing the increased challenges of environmental uncertainty, wind disturbances, and larger operational areas inherent to field testing.

Custom UAV development, in contrast, offers an open and adaptable framework that allows researchers to select flight controllers, sensors, computing modules, and airframes tailored to specific mission needs and budgets. It also enables rapid prototyping and iterative testing of novel control algorithms, perception systems, and navigation architectures. The use of open-source autopilot software such as ArduPilot (Peksa and Mamchur, 2024) further enhances the value of custom platforms by supporting advanced flight modes, parameter tuning, and the integration of experimental strategies such as Model Predictive Control (MPC) or reinforcement learning.

Developing a custom UAV, however, introduces engineering challenges in areas such as component selection, integration, power management, and software configuration (Bhat et al., 2024). These challenges require interdisciplinary expertise spanning mechanical design, electronics, and control systems. Nevertheless, many of these barriers can be mitigated through modular design principles and the adoption of community-supported hardware. Strategic planning allows researchers to streamline the development cycle, reducing integration effort and enabling focus on experimentation.

Another important consideration is regulatory compliance. UAV standards vary across regions, influencing design choices related to flight altitudes, weight limits, communication protocols, and operational safety. For example, UAVs used near urban areas or mountainous terrain may require different sensing and communication strategies compared to those operating in open farmland. Custom development provides the flexibility to adapt systems to local regulations while incorporating mission-specific features (Muthukumar, 2023).

Given these factors, this research presents the design of a custom UAV platform with modular communication, onboard intelligence, and support for advanced control strategies such as Model Predictive Control (MPC). The system accommodates both off-the-shelf and custom sensors, balancing adaptability, scalability, and cost-effectiveness. Although custom UAV development demands greater initial effort, the resulting platform offers resilience, flexibility, and research-oriented functionality, making it a compelling alternative where commercial UAVs fall short. To this end, the present work implements an MPC framework designed for real-time trajectory generation and tracking. This controller enables the UAV to proactively plan its motion, intercept the UGV’s route, and maintain a hovering position relative to the ground vehicle throughout its operation (Budiyanto et al., 2015; Falanga et al., 2019; 2020).

Validation of the system was carried out through both simulated and real-world testing. A software-in-the-loop (SITL) environment was constructed using AirSim (Shah et al., 2018), ArduPilot (Audronis, 2017), and ROS (Quigley et al., 2015), allowing for algorithm verification in a controlled, high-fidelity simulation setup prior to deployment. Furthermore, a physical test platform was built using the CubePilot flight controller, facilitating extensive outdoor testing in a 300-acre agricultural field. The UAV successfully tracked the UGV in real-time, demonstrating the applicability and robustness of the proposed solution.

The principal contributions of this research are summarized as follows:

• Development of a Fully Autonomous UAV Platform–A custom UAV was built from the ground up, integrating both the physical hardware and on-board software stack for autonomy.

• MPC-Based Trajectory Tracking–Implementation of a Model Predictive Controller for generating smooth, real-time control inputs to follow predefined or dynamically changing trajectories.

• UAV–UGV Coordination Using Sensor Fusion–A cooperative localization and control framework was developed, integrating Kalman Filtering with MPC to enable aerial-ground vehicle tracking.

• Field-Level Experimental Verification–Real-world deployment and testing of the UAV–UGV system in a large agricultural setting to establish its operational reliability and accuracy (see Figures 1, 2).

Figure 1
A drone hovers over a landscape with muddy brown terrain and scattered green foliage. Patches of water are visible across the ground, creating a swamp-like appearance.

Figure 1. A scene from the simulator.

Figure 2
Aerial view of a grassy field with a small drone and a wheeled robot. The image features two insets, each magnifying a device. The left inset shows a yellow, wheeled robot, while the right inset displays a quadcopter drone with blue and red propellers. There's a paved area visible in the bottom corner.

Figure 2. A scene from the field test.

2 Materials

2.1 Physical plant

2.1.1 Flight controller

The Cube Blue flight controller (Cube-Blue-FC, 2025) functions as the primary control unit of the UAV, chosen for its modular architecture, advanced performance, and reliability across diverse uncrewed aerial applications. As part of the CubePilot ecosystem, it is designed for seamless integration with a wide range of sensors and peripherals, allowing flexible configuration to meet different mission objectives. One of its defining features is the inclusion of triple redundant Inertial Measurement Units (IMUs) and dual barometers. This redundancy improves fault tolerance and ensures accurate state estimation, which is essential for stable flight under variable environmental conditions. The controller also supports multiple communication interfaces, such as CAN and I2C, facilitating efficient connectivity with GNSS receivers, telemetry links, and other onboard modules. With high-speed processing and robust sensor fusion capabilities, the Cube Blue provides precise real-time navigation and control. Its rugged build and adaptability make it particularly suited for complex UAV missions, including precision agriculture, where dependable performance and rapid decision-making are critical. The main hardware components of the custom UAV and their arrangement are shown in Figure 3.

Figure 3
A quadcopter drone sits on a wooden table. Labeled components include the flight controller, companion computer, GNSS, ESC, motors, propeller, and battery. The drone is inside an office environment.

Figure 3. Components of custom UAV.

2.1.2 Sensors

2.1.2.1 IMU

The Cube Blue flight controller integrates three redundant Inertial Measurement Units (IMUs), which are essential for estimating the UAV’s orientation, angular velocity, and linear acceleration. Each IMU contains a 3-axis gyroscope and a 3-axis accelerometer, providing six degrees of motion sensing. The redundancy improves fault tolerance by allowing the system to switch to alternate IMUs if one unit fails or delivers inconsistent readings. This multi-IMU setup significantly enhances reliability and stability, especially during aggressive maneuvers or in environments with vibration and potential sensor drift.

2.1.2.2 Barometer

Alongside the IMUs, the Cube Blue incorporates dual barometric pressure sensors for altitude estimation. These sensors measure atmospheric pressure to infer relative altitude, which is critical for stable takeoff, landing, and maintaining designated flight levels. The dual-barometer setup allows cross-checking of measurements, minimizing errors caused by sensor faults or environmental variations such as pressure fluctuations. This redundancy is especially valuable in low-altitude missions, where precise vertical control is required for applications like precision agriculture and close-proximity inspection.

2.1.2.3 GNSS

This study employed the HereLink GNSS module (Here3-GNSS, 2025) to deliver accurate and reliable positioning for the UAV. The receiver supports multiple satellite constellations, including GPS (United States), GLONASS (Russia), Galileo (EU), and BeiDou (China), collectively referred to as GNSS. Leveraging signals from several constellations simultaneously improves satellite visibility, shortens the time to first fix (TTFF), and increases overall positioning precision. Such multi-constellation capability is particularly advantageous in agricultural fields, where signals may be obstructed by trees, structures, or uneven terrain. Under optimal conditions and with correction services, the HereLink GNSS achieves centimeter-level accuracy, making it well-suited for high-precision UAV applications. Its rugged design ensures consistent performance across varying environmental conditions, including temperature shifts, wind, and dust exposure. This level of reliability is essential for autonomous missions requiring precise waypoint tracking, systematic area coverage, and repeatable flight paths.

2.1.3 Actuators

2.1.3.1 Motors

The UAV’s propulsion system is powered by four Readytosky 2212 920 KV brushless DC (BLDC) motors, chosen for their balance of thrust, efficiency, and mechanical reliability. With a KV rating of 920, each motor produces approximately 920 revolutions per minute (RPM) per volt, making them well-suited for stable flight in medium-lift quadcopter configurations. Paired with standard 10 × 4.5 propellers, the motors provide a sufficient thrust-to-weight ratio to support autonomous operations with onboard sensing and computing payloads. The 2212-series motors measure 28 mm in diameter and 24 mm in height (excluding the shaft) and are optimized for operation at a nominal 12 V, consistent with 3S LiPo battery systems. Both clockwise (CW) and counter-clockwise (CCW) variants are included, which counteract torque imbalance and enhance yaw stability. Pre-installed 3.5 mm bullet connectors simplify wiring by eliminating soldering requirements during integration.

2.1.3.2 ESCs

Motor speed regulation and power distribution are managed by QWinOut 30 A brushless electronic speed controllers (ESCs), each paired with a corresponding motor. Designed for 2S–4S LiPo compatibility, these ESCs support a continuous current of 30 A with short-duration bursts up to 40 A. An integrated 5 V/3 A Battery Elimination Circuit (BEC) supplies regulated power for the flight controller or auxiliary electronics. Pre-installed with SimonK firmware, the ESCs provide rapid throttle response and optimized signal timing tailored to multirotor platforms. This improves motor actuation precision, contributing to smoother dynamics and stable autonomous flight. For integration, each ESC includes 3.5 mm bullet connectors, allowing quick, solder-free connection to both the motors and the power distribution system. Reliability is further enhanced through built-in safety functions, including low-voltage cutoff, overheat protection, and throttle signal loss detection. Users can also adjust parameters such as motor timing, braking mode, startup profile, and cutoff thresholds, enabling fine-tuned performance for specific mission needs. With a lightweight design (25 g per unit), strong current-handling capacity, and straightforward installation, these ESCs are well suited for UAVs deployed in field applications such as precision agriculture and environmental monitoring.

2.1.3.3 Propellers

The UAV employs 10 × 4.5 inch (1045) two-blade propellers from Readytosky, chosen for their compatibility with 2212-size brushless motors and suitability for medium-class multirotor platforms. Manufactured from durable ABS plastic, the propellers provide an effective balance of strength and lightweight construction. Their aerodynamic profile ensures efficient thrust production with minimal vibration, enhancing overall flight stability and responsiveness. Each set includes both clockwise (CW) and counter-clockwise (CCW) variants, enabling proper pairing with opposing motors and reducing net torque effects during flight. The “1045” specification denotes a 10-inch diameter and 4.5-inch pitch, a combination well-suited for UAVs with 450–550 mm frames and moderate KV motors (800–1,100 KV). This size-pitch configuration offers a strong balance between lift capacity and efficiency, supporting reliable hovering while still permitting agile maneuvering.

2.1.4 Companion computer

In this study, the Raspberry Pi 4 served as the onboard companion computer, offering a compact and cost-efficient platform for autonomous UAV operations. Powered by a quad-core ARM Cortex-A72 processor and equipped with 8 GB of RAM, it provided sufficient computational capacity for real-time processing, sensor interfacing, and data logging. The board’s connectivity options—including USB 3.0/2.0, Gigabit Ethernet, Bluetooth, and Wi-Fi—enabled seamless integration with onboard peripherals and wireless ground systems. While multimedia outputs such as dual 4K HDMI were not utilized, the device’s processing resources were dedicated to executing autonomy scripts, handling sensor streams, and communicating with the Cube Blue flight controller via MAVLink protocols. Its lightweight form factor, low power requirements, and strong community support made the Raspberry Pi 4 particularly suitable for field-deployable UAVs, ensuring reliable performance in autonomous flight missions.

2.1.5 Other components

2.1.5.1 Chassis

An off-the-shelf quadcopter frame (29 × 18 × 6 cm, 454 g) made of PCB carbon fiber composite was used, offering strength, low weight, and stability. Its central PCB plate provides integrated solder points for seamless connection of the Cube Blue flight controller, PDB, and Raspberry Pi 4 companion computer, enabling autonomous control and real-time optimization. The frame also supports gimbal systems like a 2-axis brushless GoPro mount, allowing high-resolution imaging and FPV for monitoring and precision agriculture applications.

2.1.5.2 Power distribution board

The custom UAV’s power distribution board (65 × 65 mm, 4 mounting holes) manages power delivery with high reliability and flexibility. It supports up to 60 V (14S), 400 A loads, and provides 12 power pad pairs for up to 12 motors, enabling quad, hexa, or octo configurations. Integrated voltage/current sensors enable real-time monitoring, while dual DC-DC converters (5 V/5 A and 12 V/5 A) supply stable power to peripherals. Compatible with SmartAP and other controllers, the PDB ensures robust and adaptable performance for autonomous UAV missions.

2.1.5.3 Battery

To power the custom UAV, we selected an 11.1 V 5200 mAh LiPo battery (50C) that balances high energy density, compact size, and reliable power delivery. Measuring 5.16 × 1.65 × 1.14 inches and weighing 334.7 g, it adds minimal payload while enabling extended flight times for large-area missions. The 50C rating supports high current demands for rapid maneuvers, autonomous control, and sensor payloads. Seamlessly integrated with the UAV’s power system, it ensures stable voltage supply to all components, critical for reliable operation across varied conditions.

2.2 Software in the loop

2.2.1 ArduCopter

ArduCopter (Audronis, 2017), part of the open-source ArduPilot ecosystem, functions as the primary flight control firmware in this study. It offers a comprehensive autopilot stack designed for multirotor platforms, supporting both manual and autonomous modes such as stabilized flight, waypoint navigation, and guided missions. Its modular structure enables seamless integration with sensors, GNSS modules, companion computers, and actuators, making it adaptable across diverse UAV configurations.

In this work, ArduCopter is executed on the Cube Blue flight controller, where it handles low-level control loops, sensor fusion, and real-time trajectory execution. Features such as custom control tuning, mission scripting, and telemetry support were leveraged to implement and validate advanced strategies including optimal control. Moreover, its compatibility with MAVLink ensures reliable communication with the Raspberry Pi companion computer and ground control stations, enabling real-time monitoring, data logging, and command updates during both simulation and field trials.

2.2.2 AirSim

AirSim (Shah et al., 2018) is utilized in this research as the primary simulation platform for designing and validating autonomous UAV control strategies under controlled, repeatable conditions. Developed by Microsoft on the Unreal Engine, it offers high-fidelity physics, photorealistic rendering, and configurable sensor models, making it well-suited for aerial robotics studies. The platform supports multiple vehicle types and provides an extensible API that enables integration of custom control algorithms, perception pipelines, and mission frameworks.

For this work, AirSim is configured to simulate open-field environments and variable wind conditions relevant to agricultural UAV operations. The autonomous stack is deployed within these virtual scenarios prior to field experiments, ensuring a consistent software-in-the-loop (SITL) setup by linking AirSim with ROS 2 and the same control framework used on the physical UAV. This approach facilitates early-stage testing, debugging, and parameter optimization while avoiding the risks and logistical overhead of flight trials. Overall, AirSim significantly accelerates the development cycle and supports safe, reliable controller validation.

2.2.3 ROS2

Robot Operating System 2 (ROS 2) (Rico, 2022) serves as the middleware framework for high-level autonomy, data exchange, and modular integration within the UAV platform. Its distributed architecture supports communication among system components, including sensor drivers, perception modules, control algorithms, and logging utilities. Compared to ROS 1, ROS 2 offers enhanced real-time performance, which is particularly beneficial for latency-critical aerial robotics tasks.

In this work, ROS 2 is deployed on the Raspberry Pi companion computer, coordinating data between the control system, GNSS, IMU streams, and simulation environments such as AirSim. Custom ROS 2 nodes manage UAV state publishing, control command execution, and links to external monitoring interfaces. The framework’s scalability and modularity make it well-suited for future extensions, such as integrating additional sensors or enabling multi-robot coordination.

3 Methodology

In motion control literature, trajectory tracking refers to the problem where a vehicle is required to follow a reference path that is explicitly parameterized in time, i.e., both the spatial coordinates and their corresponding timing information are predefined. Conversely, path following focuses on converging to and moving along a desired geometric path without any preassigned timing law, allowing the vehicle to adjust its speed autonomously (He et al., 2025).

In our system (Figure 4), these two modes are distributed between agents: the UGV executes a waypoint-based path following task, where it sequentially follows predefined spatial references without explicit timing constraints, while the UAV performs real-time trajectory tracking, generating and tracking a time-varying trajectory based on the live UGV position data. The UAV maintains a hovering offset above the UGV, adapting its motion dynamically as the ground vehicle progresses along its path.

Figure 4
Diagram illustrating the operation of uncrewed vehicles. The left panel shows an uncrewed ground vehicle with an onboard computer, sensors, and a controller using ArduRover firmware. It communicates odometry and GNSS data via WiFi. The right panel depicts an uncrewed aerial vehicle with sensors, a flight controller using ArduCopter firmware, and the MAVLink protocol. Both systems utilize onboard computers and actuators for data processing and movement control.

Figure 4. Block diagram of the system.

3.1 Control

This section details the design and implementation of the optimal control strategy, namely, Model Predictive Control (MPC). It begins with the state-space formulation, specifying the selected system states and inputs and the mathematical model used to represent the UAV dynamics. Building on this representation, the controller derivation is presented, including the definition of the cost function and the finite-horizon optimization procedure used to compute control inputs over the prediction horizon.

The UAV state contains the linear displacements along the front-back ($x$), left-right ($y$), and up-down ($z$) axes, and the angular displacement ($\psi$) about the yaw axis.

$X = [x,\, y,\, z,\, \psi]^T \quad (1)$

The UAV input contains the linear velocities along the front-back ($\dot{x}$), left-right ($\dot{y}$), and up-down ($\dot{z}$) axes, and the angular velocity ($\dot{\psi}$) about the yaw axis.

$u = [\dot{x},\, \dot{y},\, \dot{z},\, \dot{\psi}]^T \quad (2)$

The continuous state-space equations can be seen in Equations 3, 4, and are shown in shorthand in Equation 5.

$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\psi} \end{bmatrix} \quad (3)$

$\begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\psi} \end{bmatrix} \quad (4)$

$\dot{X} = A_c X + B_c u, \qquad y = C_c X + D_c u \quad (5)$

The continuous-to-discrete state-space conversion is shown in Equation 6.

$A_d = e^{A_c T} \approx I + A_c T, \qquad B_d = \left( \int_{\tau=0}^{T} e^{A_c \tau} \, d\tau \right) B_c \approx B_c T, \qquad C_d = C_c, \qquad D_d = D_c \quad (6)$

The converted discrete state-space equations can be seen in Equations 7, 8; their short form is given in Equation 9.

$\begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix}_{t+1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix}_t + \begin{bmatrix} \delta t & 0 & 0 & 0 \\ 0 & \delta t & 0 & 0 \\ 0 & 0 & \delta t & 0 \\ 0 & 0 & 0 & \delta t \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\psi} \end{bmatrix}_t \quad (7)$

$\begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix}_t = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ \psi \end{bmatrix}_t + \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\psi} \end{bmatrix}_t \quad (8)$

$X_{t+1} = A_d X_t + B_d u_t, \qquad y_t = C_d X_t + D_d u_t \quad (9)$
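For this model, $A_d$ is the identity and $B_d$ is $\delta t$ times the identity, so one prediction step reduces to simple velocity integration. A minimal NumPy sketch of Equations 7, 9 (the sample time used here is an illustrative assumption, not a value from the paper):

```python
import numpy as np

dt = 0.1  # sample time in seconds (illustrative value)

# Discrete kinematic model of Eqs. 7-9: X_{t+1} = A_d X_t + B_d u_t
A_d = np.eye(4)        # position and yaw carry over unchanged
B_d = dt * np.eye(4)   # commanded velocities integrate over one step

# State [x, y, z, psi] and commanded velocities [vx, vy, vz, yaw_rate]
X = np.array([0.0, 0.0, 8.0, 0.0])
u = np.array([1.0, 0.5, 0.0, 0.0])

X_next = A_d @ X + B_d @ u
print(X_next)  # [0.1  0.05 8.   0.  ]
```

Because the model is purely kinematic, the flight controller (ArduCopter) is left to handle the underlying attitude and thrust dynamics, and the MPC reasons only about velocity commands.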

Accurate trajectory tracking benefits greatly from optimal control strategies, which provide advantages over conventional non-optimal methods such as PID controllers. For this reason, the present work employs an optimal control framework. In particular, an MPC approach was chosen for its robustness in trajectory tracking and its capability to adjust the vehicle’s motion by anticipating future waypoints.

$X_{a,t} = A X_{a,t-1} + B u_{t-1}, \qquad Y_{a,t} = C X_{a,t} + D u_t \quad (10)$

Let $n$ be the prediction horizon of the MPC. The MPC framework seeks to minimize a quadratic cost function, as shown in Equation 11. Let $r_t$ be the reference point and $y_{a,t}$ the current output of the UAV; the tracking error is then $e_t = r_t - y_{a,t} = r_t - C_d X_{a,t}$. The state cost consists of two components: a running cost (terms $i = 1$ to $n-1$) and a terminal cost (term $i = n$). In addition to minimizing the state error, it is also important to limit the control effort, since excessive control input can result in overly aggressive or unstable behavior; a control input cost term is therefore incorporated into the objective function. $Q$ and $R$ denote the state and control weighting matrices, and $S$ the terminal weight. A high state cost drives the UAV to reach the target location as quickly as possible, whereas a high control cost yields smoother, more conservative inputs. By balancing the penalties on trajectory deviation and control effort, the controller adapts dynamically to changes in the reference position, enabling smooth and reliable trajectory tracking throughout the mission.

$J = \frac{1}{2} e_{t+n}^{T} S\, e_{t+n} + \frac{1}{2} \sum_{i=1}^{n-1} e_{t+i}^{T} Q\, e_{t+i} + \frac{1}{2} \sum_{i=0}^{n-1} u_{t+i}^{T} R\, u_{t+i} \quad (11)$
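The text does not detail how Equation 11 is minimized. For a linear model without input constraints, one common approach is to stack the predictions over the horizon and solve the resulting quadratic program in closed form. The NumPy sketch below illustrates this batch solution; the function name, horizon length, sample time, and weights are our illustrative assumptions, not values reported in the paper:

```python
import numpy as np

def mpc_step(x0, ref_traj, A, B, Q, R, S, n):
    """Solve the unconstrained finite-horizon problem of Eq. 11 in
    batch form and return the first optimal input (receding horizon)."""
    nx, nu = B.shape
    # Stacked prediction: [X_{t+1}; ...; X_{t+n}] = F x0 + G [u_t; ...; u_{t+n-1}]
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(n)])
    G = np.zeros((n * nx, n * nu))
    for i in range(n):
        for j in range(i + 1):
            G[i * nx:(i + 1) * nx, j * nu:(j + 1) * nu] = \
                np.linalg.matrix_power(A, i - j) @ B
    # Block-diagonal weights: running cost Q, terminal cost S, input cost R
    Qbar = np.kron(np.eye(n), Q)
    Qbar[-nx:, -nx:] = S
    Rbar = np.kron(np.eye(n), R)
    # The cost is quadratic in the stacked inputs: minimized by a linear solve
    err0 = ref_traj.reshape(-1) - F @ x0
    U = np.linalg.solve(G.T @ Qbar @ G + Rbar, G.T @ Qbar @ err0)
    return U[:nu]  # apply only the first input, then re-plan

# Illustrative use with the kinematic model of Eqs. 7-9
dt, n = 0.1, 10                               # sample time and horizon (assumed)
A, B = np.eye(4), dt * np.eye(4)
Q, R, S = np.eye(4), 0.01 * np.eye(4), 10.0 * np.eye(4)
x0 = np.zeros(4)
ref = np.tile([1.0, 0.0, 8.0, 0.0], (n, 1))   # hover target: 1 m ahead, 8 m up
u0 = mpc_step(x0, ref, A, B, Q, R, S, n)      # first velocity command
```

In a receding-horizon loop, only the first input is applied each cycle and the optimization is repeated with the updated state, which is what gives MPC its ability to anticipate future waypoints.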

In this formulation, $X_a$ denotes the UAV state, while $X_g$ represents the UGV state. For path following, the UAV is directly assigned predefined waypoints. In contrast, for dynamic trajectory tracking, the UAV’s reference path is generated in real time based on the UGV’s current position, enabling continuous online tracking. The reference generator, expressed in Equation 12, uses the UGV state as input to compute the UAV’s target position.

To maintain consistent altitude, the UAV is commanded to fly at a fixed vertical offset relative to the UGV. The desired offset $z_f$ is set to 8 m above the UGV’s current position, ensuring line-of-sight visibility and stable flight during cooperative operations.

$\begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix} = \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ z_f \end{bmatrix} \quad (12)$
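Equation 12 amounts to a one-line reference generator. A minimal sketch (the function name is ours; the 8 m offset follows the text):

```python
import numpy as np

Z_OFFSET = 8.0  # vertical offset z_f above the UGV, per the text

def uav_reference(ugv_position, z_f=Z_OFFSET):
    """Equation 12: the UAV target is the UGV position plus a fixed
    vertical offset, keeping the UAV hovering directly above the UGV."""
    x_g, y_g, z_g = ugv_position
    return np.array([x_g, y_g, z_g + z_f])

print(uav_reference((10.0, 5.0, 0.0)))  # [10.  5.  8.]
```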

3.2 Kalman filtering

Kalman filtering was applied exclusively for dynamic trajectory tracking of the UGV. Although the UGV continuously transmits its positional data to the UAV over the network, a Kalman Filter (KF) was implemented on the UAV side to improve robustness and address variations in update frequency. In practice, GNSS data rates may fluctuate due to environmental conditions or communication delays. To maintain smooth and reliable tracking under such circumstances, the KF estimates the UGV’s state, generating a continuous and stable reference trajectory for the UAV to follow.

For state estimation, the UGV’s motion model was linearized and expressed in discrete-time state-space form, as shown in Equation 13. The state vector consists of the positional coordinates $[x_g, y_g, z_g]^T$. Although the UGV moves on flat terrain, the altitude component $z_g$ was included to enforce a constant vertical offset (typically 8 m) between the UAV and UGV, thereby accounting for terrain irregularities and ensuring consistent altitude control during tracking. Process and observation uncertainties are represented through the covariance matrix in Equation 14. This formulation encodes the confidence associated with each state variable, enabling the KF to properly weight measurements and achieve improved filtering and smoothing in real-time operation.

$$\begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix}_t = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix}_{t-1} + \begin{bmatrix} \delta t & 0 & 0 \\ 0 & \delta t & 0 \\ 0 & 0 & \delta t \end{bmatrix} \begin{bmatrix} \dot{x}_g \\ \dot{y}_g \\ \dot{z}_g \end{bmatrix}_{t-1} \qquad (13)$$

$$P_t = \begin{bmatrix} \sigma_{x_g}^2 & \rho_{x_g y_g} & \rho_{x_g z_g} \\ \rho_{x_g y_g} & \sigma_{y_g}^2 & \rho_{y_g z_g} \\ \rho_{x_g z_g} & \rho_{y_g z_g} & \sigma_{z_g}^2 \end{bmatrix} \qquad (14)$$

Here $\sigma_{x_g}^2$, $\sigma_{y_g}^2$, and $\sigma_{z_g}^2$ denote the position variances, and $\rho_{x_g y_g}$, $\rho_{y_g z_g}$, and $\rho_{x_g z_g}$ the covariances. Typically $\rho_{x_g y_g} = \rho_{y_g z_g} = \rho_{x_g z_g} = 0$.

3.2.1 Prediction

During the prediction step, the Kalman Filter estimates the UGV’s future state using its current position and velocity. This step is particularly important when GNSS updates are delayed or briefly unavailable, as it allows the system to propagate the state forward through the motion model and maintain a reasonable estimate of the UGV’s position. The corresponding state prediction equation is provided in Equation 15.

Simultaneously, the state covariance matrix is updated to represent the growing uncertainty over time. As shown in Equation 16, the process noise covariance Q accounts for uncertainties in the UGV’s dynamics and potential external disturbances. Together, these predictive updates enable smoother UAV tracking performance and preserve stability even when GNSS reception is intermittent.

$$X_{G,t}^{-} = A\, X_{G,t-1}^{+} + B\, u_{t-1} \qquad (15)$$
$$P_t^{-} = A\, P_{t-1}^{+}\, A^{T} + Q_t \qquad (16)$$

3.2.2 Update

The update step of the Kalman Filter is executed whenever new GNSS measurements from the UGV are received. This stage refines the predicted state, improving localization accuracy. Since the UGV employs RTK-enabled GNSS, the associated measurement uncertainty is low, allowing for highly reliable corrections to the estimated state.

The state correction is expressed in Equation 17, where $K_t$ is the Kalman gain, $Z_t$ is the GNSS-derived position measurement, and $H_t$ is the observation matrix mapping the predicted state into the measurement domain. The corresponding covariance update is given in Equation 18, which shrinks the state covariance to reflect the increased confidence after incorporating the GNSS data. By incorporating these measurements, the filter reduces estimation error and ensures that the UAV maintains an accurate, real-time estimate of the UGV’s position, thereby supporting reliable coordinated tracking.

$$X_{G,t}^{+} = X_{G,t}^{-} + K_t \left( Z_t - H_t\, X_{G,t}^{-} \right) \qquad (17)$$
$$P_t^{+} = \left( I - K_t H_t \right) P_t^{-} \qquad (18)$$
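A minimal NumPy sketch of the predict/update cycle of Equations 15–18, under the assumption that $A = I$, $B = \delta t \cdot I$ (the reported UGV velocity enters as the input $u$), and the GNSS observation matrix $H = I$; class and parameter names are illustrative, not the onboard implementation:

```python
import numpy as np

class UGVKalmanFilter:
    """Position-only Kalman filter for the UGV state [x, y, z]."""

    def __init__(self, dt, q=0.1, r=0.01):
        self.A = np.eye(3)
        self.B = dt * np.eye(3)
        self.H = np.eye(3)        # GNSS measures position directly
        self.Qn = q * np.eye(3)   # process noise covariance
        self.Rn = r * np.eye(3)   # RTK-GNSS measurement noise (small)
        self.x = np.zeros(3)
        self.P = np.eye(3)

    def predict(self, u):
        # Equations 15-16: propagate the state, grow the uncertainty
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Qn
        return self.x

    def update(self, z):
        # Equations 17-18: correct with a new GNSS fix z
        S = self.H @ self.P @ self.H.T + self.Rn
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x
```

Between GNSS packets the UAV would call `predict` at a fixed rate and run `update` only when a measurement arrives, which is how the filter bridges fluctuating update frequencies.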

3.3 Trajectory generator

To assess the effectiveness of the proposed MPC strategy, two static trajectories and one dynamic trajectory were designed: the Row-Crop Pattern, the Figure-Eight Trajectory, and the UGV Trajectory.

3.3.1 Row-Crop Pattern

The Row-Crop Pattern was chosen because of its direct relevance to agricultural operations, where ground and aerial vehicles frequently navigate in linear passes between crop rows. This path is generated by alternating X- and Y-axis values across multiple iterations, producing the characteristic back-and-forth motion typical of field coverage tasks. The primary objective of the MPC controller is to track this predefined path with high accuracy, demonstrating its suitability for structured agricultural missions. The row-crop trajectory generation procedure is shown in Algorithm 1.

Algorithm 1

Algorithm 1. Row Crop Pattern Trajectory.
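Algorithm 1 is not reproduced in this excerpt; a plausible minimal sketch of such a back-and-forth (boustrophedon) waypoint generator, with illustrative parameter names and default values, is:

```python
def row_crop_waypoints(rows=4, row_length=10.0, row_spacing=2.0,
                       alt=8.0, step=1.0):
    """Generate back-and-forth waypoints over parallel rows.

    The X-direction alternates on each row and Y is offset by the
    row spacing, mimicking field-coverage passes. All parameters
    are illustrative, not the paper's exact values.
    """
    waypoints = []
    n_steps = int(row_length / step)
    for r in range(rows):
        y = r * row_spacing
        xs = [i * step for i in range(n_steps + 1)]
        if r % 2 == 1:           # reverse direction on odd rows
            xs = xs[::-1]
        waypoints += [(x, y, alt) for x in xs]
    return waypoints
```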

3.3.2 Figure-eight trajectory

Alongside the Row-Crop Pattern, a Figure-Eight trajectory was developed to evaluate the UAV’s performance under more demanding flight conditions. This path introduces continuous turns and frequent directional changes, serving as a benchmark for assessing the controller’s responsiveness and stability during complex maneuvers. By requiring smooth transitions across curved segments, the Figure-Eight trajectory provides a rigorous test of the MPC framework’s ability to maintain accurate tracking. The figure-eight trajectory generation procedure is shown in Algorithm 2.

Algorithm 2

Algorithm 2. Figure Eight Trajectory.
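Algorithm 2 is likewise not reproduced here; one common way to sketch such a path is a lemniscate parameterization (an assumption, since the paper's exact construction is waypoint-based), which also reflects the Discussion's observation that more waypoints yield a smoother, more recognizable eight:

```python
import math

def figure_eight_waypoints(n=40, length=10.0, width=5.0, alt=8.0):
    """Sample n waypoints on a lemniscate (figure-eight) of the
    given overall length and width at a fixed altitude. Dimensions
    are illustrative."""
    pts = []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        x = (length / 2.0) * math.sin(t)
        y = (width / 2.0) * math.sin(2.0 * t)  # traces the crossover
        pts.append((x, y, alt))
    return pts
```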

3.3.3 UGV trajectory

The UGV is configured with a Cube Orange controller running ArduRover firmware, which stores a predefined row-crop trajectory. Once initiated through Mission Planner, the vehicle autonomously follows this path without requiring manual control. In practice, the UGV velocity is roughly 1–2 m/s. In parallel, a companion computer executes a script that continuously retrieves state information, including position and velocity, from the flight controller. These data are transmitted over a Wi-Fi link, allowing the UAV to access the UGV’s real-time coordinates for tracking and cooperative navigation. The complete procedure is shown in Algorithm 3.

Algorithm 3

Algorithm 3. UGV Operation: Autonomous Navigation and State Transmission.
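The transport and message format of the Wi-Fi link are not specified in the text; a minimal sketch of the state-transmission side, assuming a JSON-over-UDP scheme and a hypothetical UAV address, is:

```python
import json
import socket
import time

UAV_ADDR = ("192.168.1.10", 14560)  # hypothetical UAV companion address/port

def pack_state(pos, vel):
    """Serialize one UGV state sample (position [m], velocity [m/s],
    wall-clock timestamp) as a compact JSON datagram."""
    return json.dumps({"pos": pos, "vel": vel, "t": time.time()}).encode()

def unpack_state(payload):
    """Inverse of pack_state, run on the UAV side; the timestamp lets
    the Kalman filter handle late or irregular packets."""
    msg = json.loads(payload.decode())
    return msg["pos"], msg["vel"], msg["t"]

def send_state(sock, state_bytes, addr=UAV_ADDR):
    """Fire-and-forget UDP send; dropped packets are tolerated because
    the UAV-side filter bridges gaps between updates."""
    sock.sendto(state_bytes, addr)
```

A connectionless protocol fits this use case: the UAV's Kalman filter already compensates for missing or delayed samples, so per-packet reliability is unnecessary.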

4 Results

4.1 Autonomous takeoff and landing

This sub-section presents the results of the autonomous takeoff and landing trials conducted with our custom UAV. To perform operations such as spraying and inspection, the vehicle often needs to be positioned precisely at a specific point and must be capable of landing at that location. We therefore conducted this autonomous takeoff and landing test to evaluate how accurately our custom-built UAV can hover and land at a given location. To assess the system’s precision and repeatability, the vehicle was tasked with performing autonomous takeoff and landing maneuvers across five trials. These tests were designed to evaluate the consistency of the UAV’s performance in returning to a designated landing position after each flight. Figure 5 (left) illustrates one such flight, with the X-axis representing the front-back direction, the Y-axis the left-right direction, and the Z-axis the up-down direction.


Figure 5. Autonomous takeoff and landing performance showing flight sequence (left) and deviation with RMSE (right).

Table 1 summarizes the findings, highlighting deviations along the X- and Y-axes. In Figure 5 (right), the left subplot shows the deviation in the UAV’s landing position across five trials in the X-Y plane, where each point (T1–T5) represents the deviation from the intended landing spot along the lateral and longitudinal axes. This highlights the spatial accuracy of the UAV in autonomous return-to-land operations. The right subplot presents the Root Mean Square Error (RMSE) for each trial, measuring the overall deviation magnitude. For these trials, we excluded values along the up-down (vertical) axis, as standard GNSS sensors typically do not provide reliable altitude information at ground level, leading to inconsistencies in the measurement of vertical deviations upon landing. Additionally, we computed the total deviation as the root mean square (RMS) of the deviations along the X- and Y-axes. This RMS value provides a comprehensive measure of the overall precision of each landing relative to the designated target. The results indicate that the UAV achieved a maximum deviation of 28 cm and a minimum deviation of 10 cm from its initial location. These findings underscore the UAV’s capability for precise autonomous takeoff and landing, reflecting the system’s effectiveness in returning the vehicle to the original landing zone with minimal error.


Table 1. Autonomous takeoff and landing results.
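The per-trial deviation metric described above can be sketched as follows, reading "RMS of the deviations along the X and Y axes" literally (the Euclidean norm sqrt(dx² + dy²) is a common alternative convention):

```python
import math

def planar_rms(dx_cm, dy_cm):
    """RMS of the horizontal landing deviations for one trial:
    square root of the mean of the two squared offsets, with the
    vertical axis excluded as in the landing trials."""
    return math.sqrt((dx_cm ** 2 + dy_cm ** 2) / 2.0)
```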

4.2 Autonomous takeoff, navigation, landing and return to launch

In this scenario, the vehicle was programmed to perform an autonomous takeoff, navigate along either the X- or Y-axis, and then land at a predetermined position. Following this landing, the vehicle took off again and autonomously returned to its initial position to perform a final landing. This test was conducted to evaluate the UAV’s capability to execute precise autonomous takeoffs, navigate accurately along designated axes, and return to a specified landing point. Additionally, the vehicle’s ability to reliably return to its original position after completing its mission was assessed. To verify repeatability and robustness, the test was conducted five times. Figure 6 (left) illustrates one of the tests, in which the vehicle took off, navigated along the Y-axis, landed, and performed the return-to-launch operation. The flight path is shown in blue, and the scatter points mark each phase of the operation.


Figure 6. Autonomous takeoff, navigation, landing, and return-to-land performance showing mission sequence (left) and deviation with RMSE (right).

The results of these trials are presented in Table 2, which includes the deviations observed along the X-axis, Y-axis, and the root mean square error (RMSE) calculated over both axes. The data indicate that, upon returning to the original position after each trial, the UAV was at most 20 cm and at least 8 cm from the initial location. Figure 6 (right) visualizes the deviation in X-Y space for five autonomous flight trials (T1–T5), where the UAV performed takeoff, navigated to a target, landed, and returned to the original location. The scatter points represent the horizontal deviation from the intended return position, showing high repeatability and spatial accuracy across trials. The right subplot illustrates the Root Mean Square Error (RMSE) for each trial, providing a quantitative measure of landing precision. These results highlight the robustness and precision of the UAV’s autonomous navigation and positioning capabilities, demonstrating its effectiveness in maintaining spatial accuracy when returning to its starting point following mission completion.


Table 2. Autonomous takeoff, navigation and landing results.

4.3 Autonomous row crop pattern tracking

This subsection presents MPC performance on the Row-Crop Pattern. Figure 7 (right) illustrates MPC-based tracking of the Row-Crop Pattern. Red dots mark trajectory waypoints, and the UAV’s actual path is shown in blue. The UAV follows the planned trajectory with only minor deviations, attributed to external disturbances or transient control effects, confirming the effectiveness of the MPC approach. The figure also captures the UAV’s autonomous takeoff and return-to-launch (RTL), illustrating the complete end-to-end mission.


Figure 7. Autonomous row crop pattern tracking showing 2D view (left) and 3D view (right).

Figure 7 (left) depicts the deviation of the UAV’s trajectory from the planned waypoints. The X and Y-axes denote Left–Right and Front–Back directions (in meters). Red dots mark waypoints, the blue line shows the actual path, and green arrows indicate the direction of deviation. This visualization highlights both the accuracy of trajectory tracking and the locations where minor deviations occurred.

To assess the repeatability of the MPC-based trajectory tracking, five experimental trials were conducted. The outcomes of these trials are presented in Table 3, which reports the mean deviations along the X- and Y-axes together with the overall Root Mean Square Error (RMSE). Across all trials, the largest average deviation from the reference path was approximately 16 cm, while the smallest was about 9 cm. These findings highlight the reliability and consistency of the MPC controller in maintaining accurate trajectory tracking.


Table 3. Autonomous row-crop pattern navigation results.

4.4 Autonomous eight pattern tracking

This experiment aimed to assess the UAV’s capability to autonomously follow complex trajectories with high precision. For this purpose, a series of waypoints was arranged in a figure-eight pattern, creating a challenging path that demanded continuous changes in direction and coordinated motion across all axes. The UAV was required to track this trajectory autonomously using the implemented control framework.

The corresponding results are presented in Figure 8 (right). This figure provides a three-dimensional view of one representative trial, depicting the UAV’s trajectory across the X, Y, and Z-axes, where the red points denote the predefined waypoints and the blue curve illustrates the actual flight path. A comparison of the planned and executed trajectories indicates that the UAV closely adhered to the reference path. These results confirm the system’s ability to manage intricate navigation tasks, demonstrating smooth and accurate path following in real time.


Figure 8. UAV eight pattern trajectory tracking showing 2D trajectory (left) and 3D trajectory (right).

Figure 8 (left) shows the Root Mean Square Error (RMSE) analysis of the UAV’s trajectory relative to the reference waypoints during the figure-eight test. The planned trajectory is shown in green, the actual flight path in red, and blue arrows indicate error vectors pointing from the UAV’s path to the intended waypoints. These vectors highlight both the magnitude and direction of tracking deviations along the trajectory. For most of the path, the error vectors remain short, reflecting accurate tracking. Larger deviations appear near the upper and lower lobes of the figure-eight, where sharper curvature requires rapid control adjustments, increasing the chance of transient errors. This visualization clearly illustrates the spatial distribution of tracking error, demonstrating overall precision while pinpointing the most challenging segments. Such analysis provides useful feedback for refining control strategies in scenarios involving complex maneuvers.

Following the same procedure as in earlier experiments, the figure-eight trajectory was flown five times to assess repeatability and reliability. Table 4 summarizes the results, reporting deviations along the X- and Y-axes together with the Root Mean Square Error (RMSE) derived from these horizontal offsets. The data indicate that the UAV consistently tracked the prescribed figure-eight path with good spatial accuracy. Among the five trials, the largest deviation from the reference waypoints was approximately 35 cm, while the smallest was about 20 cm. These results demonstrate stable performance and the ability of the system to negotiate complex trajectories within moderate error bounds. In summary, the findings confirm the robustness of the autonomous navigation framework in handling curved, non-linear paths. The UAV effectively accommodated the demands of figure-eight maneuvers, reinforcing the validity of the implemented MPC-based control strategy for real-world navigation tasks.


Table 4. Autonomous eight pattern navigation results.

4.5 Autonomous UAV tracking autonomous UGV

This section reports the results of real-time autonomous tracking, where both the aerial and ground vehicles operated independently. Unlike a simple rectangular route, the UGV traverses a curved row-crop path resembling a figure-eight, which adds complexity due to continuous directional and curvature changes. The UAV effectively adjusted its motion in real time, maintaining successful tracking of the UGV across the full trajectory.

Figure 9 shows the 3D results of the UAV autonomously tracking a UGV. The axes correspond to Front–Back (X), Left–Right (Y), and Up–Down (Z) directions, with all units in meters. In the plot, the UGV’s trajectory is represented in orange and the UAV’s in blue. The UGV follows a predefined row-crop path with figure-eight curvature, introducing sharp turns and frequent directional changes. The UAV continuously adapts its motion in real time to align with the UGV’s updates. The results confirm that the UAV maintained close spatial alignment with the UGV for the duration of the mission, successfully completing the tracking task. Figure 10 illustrates the test field.


Figure 9. 3D trajectories of an autonomous UAV (blue) and UGV (red) with markers at distance intervals. Arrows show motion direction, and green dashed lines indicate UAV altitude relative to ground.


Figure 10. The UGV path at the Lincoln Sky Knights site (40.931627° N, 96.537558° W).

4.6 Ablation study

The ablation study primarily highlights two aspects: 1. the comparison between simulation and real-world performance, and 2. the effect of prediction horizon length on the MPC controller.

Figure 11 presents the comparison between the simulator and real-world results. It includes the root mean square error (RMSE) values for autonomous takeoff and landing, as well as for figure-eight trajectory tracking. The x-axis represents the test type, and the y-axis denotes the RMSE. As expected, the simulator results exhibit lower errors than those observed in real-world experiments. This discrepancy arises because the simulator operates under idealized conditions, whereas real-world scenarios are influenced by various factors such as wind gusts, sensor noise, actuator imperfections, and environmental disturbances.

Figure 11
Bar chart comparing simulation and real-world RMSE. Takeoff and landing: simulation 0.30 cm vs. real world 10.97 cm; trajectory tracking: simulation 26.81 cm vs. real world 34.98 cm.

Figure 11. Simulation vs. Real-World RMSE for UAV Tests.

Figure 12 illustrates the impact of varying the horizon period on the MPC controller. The x-axis represents the horizon period length, and the y-axis shows the corresponding RMSE. As the horizon period increases, the RMSE decreases, indicating that a longer prediction horizon enables the controller to anticipate future states more effectively and mitigate errors. However, this improvement comes at the cost of increased computational load. Additionally, a saturation effect can be observed—beyond a certain horizon length, the reduction in RMSE becomes marginal, suggesting diminishing returns from further increases in the horizon period.

Figure 12
Line graph of figure-eight RMSE versus MPC horizon period (ticks at 1, 5, 10, and 20), descending from about 26.75 cm at horizon 1 to about 24.75 cm at horizon 20.

Figure 12. Variation of RMSE with MPC horizon period.

5 Discussion

5.1 Trajectory

The figure-eight pattern was designed and tested with varying numbers of waypoints as well as different lengths and widths of the pattern. The number of waypoints varied between 20 and 50. As the number of waypoints increased, the trajectory became smoother and more closely resembled the figure-eight shape. Conversely, a lower number of waypoints resulted in a trajectory that deviated from the eight-like appearance. Additionally, different lengths and widths ranging from 5 m to 10 m were tested. While all sizes enabled the vehicle to achieve the figure-eight pattern, longer patterns were more visually distinct and pronounced compared to smaller ones.

5.2 Control cost

The state cost and control input cost play crucial roles in ensuring smooth trajectory tracking and achieving the goal pose efficiently. A high state cost penalizes errors, ensuring that the vehicle reaches each waypoint as closely as possible. In such cases, the vehicle generates higher velocities to minimize error quickly, resulting in more aggressive behavior. Conversely, a high control input cost constrains velocities, enabling the vehicle to approach waypoints more smoothly, albeit at the cost of increased time to reach each waypoint. The choice between these approaches depends on the mission objective—whether prioritizing rapid target achievement or conserving energy. In our case, we opted for a high control input cost to ensure the vehicle tracks the target smoothly rather than with aggressive maneuvers. Nonetheless, we experimented with both approaches.
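This trade-off can be made concrete with a one-step, single-integrator version of the controller (an illustrative simplification, not the paper's full MPC): minimizing 0.5·Q·(e − δt·u)² + 0.5·R·u² over the input u gives a closed-form command whose magnitude grows with the state weight Q and shrinks with the control weight R.

```python
def one_step_velocity(error, dt, Q, R):
    """Closed-form minimizer of 0.5*Q*(error - dt*u)**2 + 0.5*R*u**2
    for a single-integrator model x_{t+1} = x_t + dt*u.

    Setting d/du = -dt*Q*(error - dt*u) + R*u = 0 yields:
    """
    return dt * Q * error / (R + dt ** 2 * Q)
```

With a high Q the commanded velocity approaches error/δt (aggressive correction), while a high R keeps it small and smooth, matching the behavior observed in the field trials.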

5.3 Effects of wind

Wind conditions significantly influenced the experimental outcomes. During the testing period, wind speeds ranged between 21 km/h and 29 km/h, with occasional sudden gusts. These wind disturbances caused deviations in the intended figure-eight trajectory, resulting in a distorted or irregular pattern. The effect of wind was more pronounced under the high state cost configuration compared to the high control input cost configuration. In the high state cost scenario, the vehicle prioritized minimizing error by rapidly reaching each waypoint and transitioning to the next. This aggressive approach made the trajectory more prone to deviations, as the vehicle was less capable of compensating for sudden wind gusts, leading to a pattern with noticeable dips and irregularities. Conversely, under the high control input cost configuration, the vehicle moved at a slower pace, taking more time to reach each waypoint. This additional time allowed the vehicle to better adjust to wind disturbances, resulting in a smoother trajectory and greater resilience to gust-induced perturbations.

6 Conclusion

This study demonstrated the design, development, and validation of a custom UAV platform tailored for precision agriculture. The system was built with modularity, adaptability, and cost-effectiveness in mind, providing an open alternative to proprietary commercial solutions. A Cube Blue flight controller managed the low-level dynamics, while a Raspberry Pi 4 companion computer executed Model Predictive Control (MPC), enabling efficient trajectory planning and autonomous navigation.

In contrast to conventional autopilot frameworks that rely primarily on PID control and static waypoint missions, the proposed architecture integrates MPC with Kalman filtering to achieve dynamic, real-time tracking of ground vehicles navigating complex, curved paths. The UAV’s capacity to autonomously follow challenging trajectories—such as row-crop and figure-eight patterns—was validated through both simulation (AirSim) and real-world field trials.

Experimental findings confirmed reliable performance, with Root Mean Square Error (RMSE) ranging from 8 to 20 cm in standard takeoff, navigation, and landing tasks, and 20–35 cm for more demanding trajectories. These results underscore the platform’s precision, robustness, and suitability for agricultural operations.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

VM: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing. KJ: Formal Analysis, Validation, Data curation, Visualization, Writing – review and editing. YC: Data curation, Visualization, Validation, Writing – original draft, Writing – review and editing. SP: Supervision, Funding acquisition, Writing – review and editing. MW: Supervision, Writing – review and editing.

Funding

The authors declare that financial support was received for the research and/or publication of this article. This research was supported by: The United States Department of Agriculture AFRI-NIFA Grant Award No. 2021-67021-34411.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that Generative AI was used in the creation of this manuscript. Generative AI was used only for grammar verification and language checking.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ahmed, Z., and Xiong, X. (2024). Adaptive impedance control of multirotor uav for accurate and robust path following. Machines 12, 868. doi:10.3390/machines12120868

CrossRef Full Text | Google Scholar

Audronis, T. (2017). Designing purpose-built drones for ardupilot pixhawk 2.1: build drones with ardupilot. Packt Publishing Ltd.

Google Scholar

Austin, R. (2011). Unmanned aircraft systems: UAVS design, development and deployment. John Wiley and Sons.

CrossRef Full Text | Google Scholar

Bendig, J., Bolten, A., Bennertz, S., Broscheit, J., Eichfuss, S., and Bareth, G. (2014). Estimating biomass of barley using crop surface models (csms) derived from uav-based rgb imaging. Remote Sens. 6, 10395–10412. doi:10.3390/rs61110395

CrossRef Full Text | Google Scholar

Bhat, G. R., Dudhedia, M. A., Panchal, R. A., Shirke, Y. S., Angane, N. R., Khonde, S. R., et al. (2024). Autonomous drones and their influence on standardization of rules and regulations for operating: a brief overview. Results Control Optim. 14, 100401. doi:10.1016/j.rico.2024.100401

CrossRef Full Text | Google Scholar

Budiyanto, A., Cahyadi, A., Adji, T. B., and Wahyunggoro, O. (2015). “Uav obstacle avoidance using potential field under dynamic environment,” in 2015 international conference on control, electronics, renewable energy and communications (ICCEREC) (IEEE), 187–192.

CrossRef Full Text | Google Scholar

Cube-Blue-FC (2025). Cube pilot. Australia: Geelong.

Google Scholar

Falanga, D., Kim, S., and Scaramuzza, D. (2019). How fast is too fast? the role of perception latency in high-speed sense and avoid. IEEE Robotics Automation Lett. 4, 1884–1891. doi:10.1109/lra.2019.2898117

CrossRef Full Text | Google Scholar

Falanga, D., Kleber, K., and Scaramuzza, D. (2020). Dynamic obstacle avoidance for quadrotors with event cameras. Sci. Robotics 5, eaaz9712. doi:10.1126/scirobotics.aaz9712

PubMed Abstract | CrossRef Full Text | Google Scholar

Garg, P. K. (2021). Unmanned aerial vehicles: an introduction (mercury learning and information).

Google Scholar

Gašparović, M., Zrinjski, M., Barković, , and Radočaj, D. (2020). An automatic method for weed mapping in oat fields based on uav imagery. Comput. Electron. Agric. 173, 105385. doi:10.1016/j.compag.2020.105385

CrossRef Full Text | Google Scholar

Goldschmid, P., and Ahmad, A. (2025). Integrated multi-simulation environments for aerial robotics research. arXiv. doi:10.48550/arXiv.2502.10218

CrossRef Full Text | Google Scholar

Gundlach, J. (2014). Designing unmanned aircraft systems. Reston, VA, USA: American Institute of Aeronautics and Astronautics.

Google Scholar

He, L., Xie, M., and Zhang, Y. (2025). A review of path following, trajectory tracking, and formation control for autonomous underwater vehicles. Drones 9, 286. doi:10.3390/drones9040286

CrossRef Full Text | Google Scholar

Here3-GNSS (2025). Cube pilot. Australia: Geelong.

Google Scholar

Kalischuk, M., Paret, M. L., Freeman, J. H., Raj, D., Da Silva, S., Eubanks, S., et al. (2019). An improved crop scouting technique incorporating unmanned aerial vehicle–assisted multispectral crop imaging into conventional scouting practice for gummy stem blight in watermelon. Plant Dis. 103, 1642–1650. doi:10.1094/pdis-08-18-1373-re

PubMed Abstract | CrossRef Full Text | Google Scholar

Mohsan, S. A. H., Othman, N. Q. H., Li, Y., Alsharif, M. H., and Khan, M. A. (2023). Unmanned aerial vehicles (uavs): practical aspects, applications, open challenges, security issues, and future trends. Intell. Serv. Robot. 16, 109–137. doi:10.1007/s11370-022-00452-4

PubMed Abstract | CrossRef Full Text | Google Scholar

Muthukumar, A. (2023). “Custom and design of agri drone,” in Proceedings of ICCCI 2023. doi:10.1109/ICCCI56745.2023.10128266

CrossRef Full Text | Google Scholar

Muvva, K., Bradley, J. M., Wolf, M., and Johnson, T. (2021). “Assuring learning-enabled components in small unmanned aircraft systems,” in AIAA scitech 2021 forum. 0994.

Google Scholar

Muvva, V. V. R. M. K., Li, G., and Wolf, M. (2022). “Autonomous uav landing on a moving uav using machine learning,” in AIAA SCITECH 2022 forum. 0789.

Google Scholar

Nex, F., Armenakis, C., Cramer, M., Cucci, D. A., Gerke, M., Honkavaara, E., et al. (2022). Uav in the advent of the twenties: where we stand and what is next. ISPRS J. Photogrammetry Remote Sens. 184, 215–242. doi:10.1016/j.isprsjprs.2021.12.006

CrossRef Full Text | Google Scholar

Oguz, S., Heinrich, M. K., Allwright, M., Zhu, W., Wahby, M., Garone, E., et al. (2024). An open-source uav platform for swarm robotics research: using cooperative sensor fusion for inter-robot tracking. IEEE Access PP, 1. doi:10.1109/ACCESS.2024.3378607

CrossRef Full Text | Google Scholar

Ou, B., Liu, F., and Niu, G. (2024). Distributed localization for uav–ugv cooperative systems using information consensus filter. Drones 8, 166. doi:10.3390/drones8040166

CrossRef Full Text | Google Scholar

Peksa, J., and Mamchur, D. (2024). A review on the state of the art in copter drones and flight control systems. Sensors 24, 3349. doi:10.3390/s24113349

PubMed Abstract | CrossRef Full Text | Google Scholar

Quigley, M., Gerkey, B., and Smart, W. D. (2015). Programming robots with ROS: a practical introduction to the robot operating system. Sebastopol, CA: O’Reilly Media, Inc.

Google Scholar

Rao, V. V. R. M. K., Chawla, Y., Joseph, K. T., Pitla, S., and Wolf, M. (2025). “Cooperative localization of uavs in multi-robot systems using deep learning-based detection,” in AIAA science and technology forum and exposition, AIAA SciTech forum 2025 (American Institute of Aeronautics and Astronautics Inc, AIAA).

Google Scholar

Rejeb, A., Abdollahi, A., Rejeb, K., and Treiblmaier, H. (2022). Drones in agriculture: a review and bibliometric analysis. Comput. Electron. Agric. 198, 107017. doi:10.1016/j.compag.2022.107017

Rico, F. M. (2022). A concise introduction to robot programming with ROS2. Boca Raton, FL: Chapman and Hall/CRC.

Seidaliyeva, U., Ilipbayeva, L., Taissariyeva, K., Smailov, N., and Matson, E. T. (2024). Advances and challenges in drone detection and classification techniques: a state-of-the-art review. Sensors 24, 125. doi:10.3390/s24010125

Shah, S., Dey, D., Lovett, C., and Kapoor, A. (2018). “AirSim: high-fidelity visual and physical simulation for autonomous vehicles,” in Field and service robotics: results of the 11th international conference (Springer), 621–635.

Yang, Y., Xiong, X., and Yan, Y. (2023). UAV formation trajectory planning algorithms: a review. Drones 7, 62. doi:10.3390/drones7010062

Zhang, Z., and Zhu, L. (2023). A review on unmanned aerial vehicle remote sensing: platforms, sensors, data processing methods, and applications. Drones 7, 398. doi:10.3390/drones7060398

Zhang, L., Zhang, H., Niu, Y., and Han, W. (2019). Mapping maize water stress based on UAV multispectral remote sensing. Remote Sens. 11, 605. doi:10.3390/rs11060605

Keywords: autonomous UAV, model predictive control, Kalman filter, trajectory tracking, drones

Citation: Muvva VVRMKR, Joseph KT, Chawla Y, Pitla S and Wolf M (2025) Custom UAV with model predictive control for autonomous static and dynamic trajectory tracking in agricultural fields. Front. Robot. AI 12:1694952. doi: 10.3389/frobt.2025.1694952

Received: 29 August 2025; Accepted: 20 October 2025;
Published: 16 December 2025.

Edited by:

Johann Laconte, INRAE, France

Reviewed by:

Xiaofeng Xiong, University of Southern Denmark, Denmark
Christophe Cariou, INRAE, France

Copyright © 2025 Muvva, Joseph, Chawla, Pitla and Wolf. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Veera Venkata Ram Murali Krishna Rao Muvva, krishna@huskers.unl.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.