The Aerial Autonomy Stack (AAS) is a software stack to:
- Develop end-to-end drone autonomy with ROS2
- Simulate vision and control in software-in-the-loop, with YOLOv8 and PX4/ArduPilot
- Deploy on real drones with NVIDIA Orin/JetPack
For the motivation behind AAS and how it compares to similar projects, read RATIONALE.md
*Demo video: aerial-autonomy-stack.mp4*
- Support for multiple quadrotors and VTOLs based on either PX4 or ArduPilot
- ROS2-based autopilot interfaces (via XRCE-DDS and MAVROS)
- Support for YOLOv8 (with ONNX GPU Runtimes) and LiDAR Odometry (with KISS-ICP)
- Dockerized simulation based on `nvcr.io/nvidia/cuda:12.8.1-cudnn-runtime-ubuntu22.04`
- Dockerized deployment based on `nvcr.io/nvidia/l4t-jetpack:r36.4.0`
- 3D worlds for PX4 and ArduPilot software-in-the-loop (SITL) simulation
- Zenoh inter-vehicle ROS2 bridge
- Support for PX4 Offboard mode (e.g. CTBR/`VehicleRatesSetpoint` for agile, GNSS-denied flight)
- Support for ArduPilot Guided mode (i.e. `setpoint_velocity`, `setpoint_accel` references)
- Logs analysis with `flight_review` (`.ulg`), MAVExplorer (`.bin`), and PlotJuggler (`rosbag`)
- Steppable simulation interface for reinforcement learning
- Support for Gazebo's `WindEffects` (except PX4 VTOL)
AAS leverages the following frameworks:

- ROS2 Humble (LTS, EOL 5/2027)
- Gazebo Sim Harmonic (LTS, EOL 9/2028)
- PX4 1.16, interfaced via XRCE-DDS
- ArduPilot 4.6, interfaced via MAVROS
- YOLOv8 on ONNX Runtime 1.22 (latest stable releases as of 8/2025)
- L4T 36 (Ubuntu 22-based) / JetPack 6 (for deployment only, latest major release as of 8/2025)
> [!IMPORTANT]
> This stack is developed and tested on an Ubuntu 22.04 host (penultimate LTS, ESM 4/2032) with `nvidia-driver-575` and Docker Engine v28 (latest stable releases as of 7/2025), on an i9-13 with RTX3500 and an i7-11 with RTX3060. Note that an NVIDIA GPU is required.
To set up the requirements, read PREINSTALL.md: (i) Ubuntu 22 and Git LFS, (ii) the NVIDIA driver, (iii) Docker Engine, (iv) the NVIDIA Container Toolkit, and (v) an NVIDIA NGC API Key.
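If the PREINSTALL.md steps succeeded, a quick smoke test can confirm them; a minimal sketch (the last line reuses the stack's own simulation base image to verify GPU passthrough into containers):

```bash
nvidia-smi      # Driver visible on the host
docker --version # Docker Engine installed
git lfs version  # Git LFS installed
# GPU visible inside a container (same base image used by the simulation)
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.8.1-cudnn-runtime-ubuntu22.04 nvidia-smi
```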
```bash
# Clone this repo
mkdir -p ~/git
git clone git@github.com:JacopoPan/aerial-autonomy-stack.git ~/git/aerial-autonomy-stack
cd ~/git/aerial-autonomy-stack
```
> [!WARNING]
> The build script creates two ~20GB images (including many tools and artifacts for development). Building from scratch requires a good/stable internet connection (`Ctrl + c` and restart if necessary).
```bash
# Clone external repos (in github_clones/) and build the Docker images
cd ~/git/aerial-autonomy-stack/scripts
./sim_build.sh # The first build takes ~25', subsequent ones take seconds to minutes
```

```bash
# Run a simulation (note: ArduPilot SITL takes ~40s to be ready to arm)
cd ~/git/aerial-autonomy-stack/scripts
DRONE_TYPE=quad AUTOPILOT=px4 NUM_DRONES=2 WORLD=swiss_town ./sim_run.sh # Check the script for more options

# `Ctrl + b`, then `d` in each terminal once done
```
On a low/mid-range laptop (i7-11 with 16GB RAM and RTX3060), AAS can simulate 3 drones with camera and LiDAR at 70-80% of wall-clock speed.
Once "Ready to Fly", one can take off and control the drones from QGroundControl's "Fly View".
Available `WORLD`s:

- `apple_orchard`, a GIS world created using BlenderGIS
- `impalpable_greyness` (default), an empty world with simple shapes
- `shibuya_crossing`, a 3D world adapted from cgtrader
- `swiss_town`, a photogrammetry world courtesy of Pix4D / pix4d.com
To advance the simulation in discrete time steps, e.g. 1s, from a terminal on the host, run:

```bash
# Step the simulation forward by 250 steps, then pause again
docker exec simulation-container bash -c "gz service -s /world/\$WORLD/control --reqtype gz.msgs.WorldControl --reptype gz.msgs.Boolean --req 'multi_step: 250, pause: true'" # Adjust multi_step based on the value of max_step_size in the world's .sdf
```
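The same `WorldControl` service can also pause and resume a free-running simulation; a minimal sketch, mirroring the command above:

```bash
# Pause the simulation
docker exec simulation-container bash -c "gz service -s /world/\$WORLD/control --reqtype gz.msgs.WorldControl --reptype gz.msgs.Boolean --req 'pause: true'"
# Resume the simulation
docker exec simulation-container bash -c "gz service -s /world/\$WORLD/control --reqtype gz.msgs.WorldControl --reptype gz.msgs.Boolean --req 'pause: false'"
```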
To add or disable a wind field, from a terminal on the host, run:

```bash
# Note that a positive X blows from the West, a positive Y blows from the South
docker exec simulation-container bash -c "gz topic -t /world/\$WORLD/wind/ -m gz.msgs.Wind -p 'linear_velocity: {x: 0.0, y: 3.0}, enable_wind: true'"
docker exec simulation-container bash -c "gz topic -t /world/\$WORLD/wind/ -m gz.msgs.Wind -p 'enable_wind: false'"
```
> [!TIP]
> Tmux and Docker shortcuts:
>
> - Move between Tmux windows with `Ctrl + b`, then `n`, `p`
> - Move between Tmux panes with `Ctrl + b`, then the arrow keys
> - Enter copy mode to scroll back with `Ctrl + b`, then `[`, move with the arrow keys, exit with `q`
> - Split a Tmux window with `Ctrl + b`, then `"` (horizontal) or `%` (vertical)
> - Detach Tmux with `Ctrl + b`, then `d`

```bash
tmux list-sessions # List all sessions
tmux attach-session -t [session_name] # Reattach a session
tmux kill-session -t [session_name] # Kill a session
tmux kill-server # Kill all sessions
```
Docker hygiene:

```bash
docker ps -a # List containers
docker stop $(docker ps -q) # Stop all containers
docker container prune # Remove all stopped containers
docker images # List images
docker image prune # Remove untagged images
docker rmi <image_name_or_id> # Remove a specific image
docker builder prune # Clear the cache system wide
```
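For an overall picture of the disk used by images, containers, volumes, and the build cache, `docker system df` prints a summary:

```bash
docker system df # Per-category Docker disk usage
```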
To fly a first scripted mission:

```bash
cd ~/git/aerial-autonomy-stack/scripts
DRONE_TYPE=quad AUTOPILOT=px4 NUM_DRONES=1 ./sim_run.sh

# In aircraft 1's terminal
ros2 run mission mission --conops yalla --ros-args -r __ns:=/Drone$DRONE_ID -p use_sim_time:=true
# This mission is a simple takeoff, followed by an orbit, and landing
# It works for all combinations of AUTOPILOT=px4 or ardupilot, DRONE_TYPE=quad or vtol

# Finally, in the simulation's terminal
/simulation_resources/patches/plot_logs.sh # Analyze the flight logs
```
Read the banner comments in the `autopilot_interface` headers for command line examples (takeoff, orbit, reposition, offboard, land):

- `ardupilot_interface.hpp`: ArduPilot actions and services
- `px4_interface.hpp`: PX4 actions and services
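Purely to illustrate the pattern those banners document, a takeoff call might look like the line below; every name in it is hypothetical, copy the real commands from the headers:

```bash
# HYPOTHETICAL action name and type, for shape only; see the interface headers
ros2 action send_goal /Drone1/takeoff autopilot_interface/action/Takeoff "{altitude: 20.0}"
```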
Once flown from the CLI, implement your mission in `MissionNode.conops_callback()`.
Launching the `sim_run.sh` script with `MODE=dev` does not start the simulation; it mounts the `simulation_resources`, `aircraft_resources`, and `ros2_ws/src` folders as volumes, to more easily track, commit, and push changes while building and testing them within the containers.
```bash
# Develop within live containers
cd ~/git/aerial-autonomy-stack/scripts
MODE=dev ./sim_run.sh # Images are pre-built but the ros2_ws/src/ and *_resources/ folders are mounted from the host
```
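In dev mode, a typical edit/rebuild loop runs inside the aircraft container; a sketch, assuming a standard colcon workspace (the workspace path inside the container is an assumption, check `Dockerfile.aircraft` for the real location):

```bash
# Open a shell in the running aircraft container
docker exec -it aircraft-container bash
# Then, inside the container (assumed workspace path):
#   cd ~/aircraft_ws && colcon build --packages-select mission
```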
> [!NOTE]
> Project Structure

```
aerial-autonomy-stack
│
├── aircraft
│ ├── aircraft_ws
│ │ └── src
│ │ ├── autopilot_interface # Ardupilot/PX4 high-level actions (Takeoff, Orbit, Offboard, Land)
│ │ ├── mission # Orchestrator of the actions in `autopilot_interface`
│ │ ├── offboard_control # Low-level references for the Offboard action in `autopilot_interface`
│ │ ├── state_sharing # Publisher of the `/state_sharing_drone_N` topic broadcasted by Zenoh
│ │ └── yolo_inference # GStreamer video acquisition and publisher of YOLO bounding boxes
│ │
│ └── aircraft.yml.erb # Aircraft docker tmux entrypoint
│
├── scripts
│ ├── docker
│ │ ├── Dockerfile.aircraft # Docker image for aircraft simulation and deployment
│ │ └── Dockerfile.simulation # Docker image for Gazebo and SITL simulation
│ │
│ ├── deploy_build.sh # Build `Dockerfile.aircraft` for arm64/Orin
│ ├── deploy_run.sh # Start the aircraft docker on arm64/Orin
│ │
│ ├── sim_build.sh # Build both dockerfiles for amd64/simulation
│ └── sim_run.sh # Start the simulation
│
└── simulation
├── simulation_resources
│ ├── aircraft_models
│ │ ├── alti_transition_quad # ArduPilot VTOL
│ │ ├── iris_with_ardupilot # ArduPilot quad
│ │ ├── sensor_camera
│ │ ├── sensor_lidar
│ │ ├── standard_vtol # PX4 VTOL
│ │ └── x500 # PX4 quad
│ └── simulation_worlds
│ ├── apple_orchard.sdf
│ ├── impalpable_greyness.sdf
│ ├── shibuya_crossing.sdf
│ └── swiss_town.sdf
│
├── simulation_ws
│ └── src
│ └── ground_system # Publisher of topic `/tracks` broadcasted by Zenoh
│
└── simulation.yml.erb # Simulation docker tmux entrypoint
```

> [!IMPORTANT]
> These instructions are tested on a Holybro Jetson Baseboard kit that includes (i) a Pixhawk 6X autopilot and (ii) an NVIDIA Orin NX 16GB computer, connected via both serial and Ethernet.
> To set up (i) PX4's DDS UDP client, (ii) ArduPilot's serial MAVLink bridge, (iii) JetPack 6, (iv) Docker Engine, (v) the NVIDIA Container Toolkit, and (vi) an NVIDIA NGC API Key on Orin, read AVIONICS.md.
The Holybro Jetson Baseboard comes with (i) an integrated 4-way (Orin, 6X, RJ-45, JST) Ethernet switch and (ii) two JST USB 2.0 ports that can be connected to ASIX Ethernet adapters to create additional network interfaces.
Make sure to configure Orin, the 6X's XRCE-DDS, the IP radio, Zenoh, etc. consistently with your network setup; the camera acquisition pipeline should be set up in `yolo_inference_node.py`, and the LiDAR should publish on topic `/lidar_points` for KISS-ICP (if necessary, discuss in the Issues).
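Before pointing `yolo_inference_node.py` at a camera, it can help to verify the acquisition pipeline on its own; a minimal sketch, assuming a V4L2 camera at `/dev/video0` (your source element and caps will differ):

```bash
# List the formats the camera advertises (v4l2-ctl is in the v4l-utils package)
v4l2-ctl --device=/dev/video0 --list-formats-ext
# Grab 100 frames through a minimal GStreamer pipeline and discard them
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=100 ! videoconvert ! fakesink
```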
```bash
# On Jetson Orin NX, build for arm64 with TensorRT support
mkdir -p ~/git
git clone git@github.com:JacopoPan/aerial-autonomy-stack.git ~/git/aerial-autonomy-stack
cd ~/git/aerial-autonomy-stack/scripts
./deploy_build.sh # The first build takes ~1h (mostly to build onnxruntime-gpu from source)
```

```bash
# On Jetson Orin NX, start and attach the aerial-autonomy-stack (e.g., from ssh)
DRONE_TYPE=quad AUTOPILOT=px4 DRONE_ID=1 CAMERA=true LIDAR=false ./deploy_run.sh
docker exec -it aircraft-container tmux attach
```
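Once the container is up, a quick way to confirm the stack is alive without entering the tmux session (a sketch, assuming the container has ROS2 Humble at the standard path):

```bash
# List live ROS2 topics inside the aircraft container
docker exec -it aircraft-container bash -c "source /opt/ros/humble/setup.bash && ros2 topic list"
```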
*"You've done a man's job, sir. I guess you're through, huh?"*