Paper Study – Learning High-Speed Flight in the Wild

Git

Git: https://github.com/uzh-rpg/agile_autonomy
Paper: Learning High-Speed Flight in the Wild

Follow-up Posts

I later wrote three more blog posts that may be useful:
  • Generating expert trajectories in the simulation environment
  • Collecting drone simulation data in the simulator
  • Training the network to generate flight trajectories

Running the Code

Build Environment

Ubuntu18.04 + gcc6 + cuda11.3 + ROS melodic + anaconda3(conda 4.10.1) + python3.8 + open3d v0.10.0

Build Steps

Follow the Readme.md in the Git repository.

[Optional]

Optionally, you can switch the dependency downloads to https by changing src/agile_autonomy/dependencies.yaml to the following:

repositories:
  catkin_boost_python_buildtool:
    type: git
    url: https:
    version: master
  catkin_simple:
    type: git
    url: https:
    version: master
  eigen_catkin:
    type: git
    url: https:
    version: master
  eigen_checks:
    type: git
    url: https:
    version: master
  gflags_catkin:
    type: git
    url: https:
    version: master
  glog_catkin:
    type: git
    url: https:
    version: master
  mav_comm:
    type: git
    url: https:
    version: master
  minimum_jerk_trajectories:
    type: git
    url: https:
    version: master
  minkindr:
    type: git
    url: https:
    version: master
  minkindr_ros:
    type: git
    url: https:
    version: master
  numpy_eigen:
    type: git
    url: https:
    version: master
  rpg_common:
    type: git
    url: https:
    version: main
  rotors_simulator:
    type: git
    url: https:
    version: master
  rpg_mpc:
    type: git
    url: https:
    version: feature/return_full_horizon
  rpg_quadrotor_common:
    type: git
    url: https:
    version: master
  rpg_quadrotor_control:
    type: git
    url: https:
    version: devel/elia
  rpg_single_board_io:
    type: git
    url: https:
    version: master
  rpg_flightmare:
    type: git
    url: https:
    version: main
  rpg_mpl_ros:
    type: git
    url: https:
    version: master
  assimp_catkin:
    type: git
    url: https:
    version: master

[1] Download the Source Code

mkdir -p agile_autonomy_ws/src
cd agile_autonomy_ws
catkin init
catkin config --extend /opt/ros/melodic
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color
cd src
git clone https:
vcs-import < agile_autonomy/dependencies.yaml
cd rpg_mpl_ros
git submodule update --init --recursive

#install extra dependencies (might need more depending on your OS)
sudo apt-get install libqglviewer-dev-qt5

# Install external libraries for rpg_flightmare
sudo apt install -y libzmqpp-dev libeigen3-dev libglfw3-dev libglm-dev

# Install dependencies for rpg_flightmare renderer
sudo apt install -y libvulkan1 vulkan-utils gdb

[2] Install Open3D First

Build and install the C++ version of Open3D, v0.10.0, from source. Because we compile with gcc6, we do not use the latest Open3D v0.13.0.

git clone --recursive https:

# You can also update the submodule manually
git submodule update --init --recursive

# Switch to the specified version
git checkout v0.10.0

# On Ubuntu
util/install_deps_ubuntu.sh

mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=<open3d_install_directory> ..

# On Ubuntu
make -j$(nproc)
make install

[3] Fix the Open3D Search Paths

Because Open3D was installed to a custom prefix, the Open3D search paths in the source code must be updated accordingly.
Modify the Open3D search path in the following three files:

  • mpl_test_node/CMakeLists.txt
  • open3d_conversions/CMakeLists.txt
  • agile_autonomy/data_generation/traj_sampler/CMakeLists.txt

In each file, change the corresponding find_package(Open3D HINTS ...) line to:

find_package(Open3D HINTS /home/yourname/open3d_install/lib/cmake/)

[4] Start the Build

catkin build

At this point the build will still fail to find Open3D, with an error like:

CMake Error at /home/wxm/Documents/codes/agile_automomy_wxm/devel/share/open3d_conversions/cmake/open3d_conversionsConfig.cmake:173 (message):
  Project 'mpl_test_node' tried to find library 'Open3D'.  The library is
  neither a target nor built/installed properly.  Did you compile project
  'open3d_conversions'? Did you find_package() it before the subdirectory
  containing its code is included?

Fix: edit line 157 of agile_autonomy_ws/devel/share/open3d_conversions/cmake/open3d_conversionsConfig.cmake and append the Open3D library path to the existing path list (shown as * below):

foreach(path  *;/home/yourname/open3d_install/lib)

Then continue the build:

# Build and re-source the workspace
catkin build

[5] Error 2

error: missing template arguments before 'timing_spline'

Fix: change line 266 of agile_autonomy_ws/src/agile_autonomy/data_generation/traj_sampler/src/traj_sampler.cpp to:

Timing<double> timing_spline;

[6] Error 3

error: missing template arguments before '(' token
       Eigen::Quaternion(std::cos(-cam_pitch_angle_ / 2.0), 0.0,

Fix: change line 461 of agile_autonomy_ws/src/agile_autonomy/data_generation/agile_autonomy/src/agile_autonomy.cpp to:

Eigen::Quaterniond(std::cos(-cam_pitch_angle_ / 2.0), 0.0,
                        std::sin(-cam_pitch_angle_ / 2.0), 0.0);

[7] Runtime Errors

At runtime you may get errors about missing Python libraries; simply pip install them as prompted.

If cv_bridge fails at runtime with the error below, you need to build cv_bridge from source:

from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)

Build cv_bridge from source:

conda deactivate
mkdir cv_bridge_ws
cd cv_bridge_ws
mkdir src
cd src
git clone https:
# Find the version of cv_bridge matching your ROS distribution
apt-cache show ros-melodic-cv-bridge | grep Version
#     Version: 1.13.0-0bionic.20210921.205941
cd <cloned cv_bridge repo>
git checkout 1.13.0
cd ../..
catkin config --install
catkin build
source install/setup.bash --extend

[8] Set Up the Learning Environment

# Create your learning environment
roscd planner_learning
conda create --name tf_24 python=3.8
conda activate tf_24
conda install tensorflow-gpu
pip install rospkg==1.2.3 pyquaternion open3d opencv-python

[9] Download the Flightmare Rendering Environment

Now download the Flightmare standalone available at this link, extract it, and put it in the flightrender folder.

Running

1. Open a terminal and run:

cd agile_autonomy_ws
source devel/setup.bash
roslaunch agile_autonomy simulation.launch

2. Open another terminal and run:

cd agile_autonomy_ws
source devel/setup.bash
source cv_bridge_ws/install/setup.bash --extend
conda activate tf_24
python test_trajectories.py --settings_file=config/test_settings.yaml

3. The running interface consists of three windows in total.

Code Walkthrough

Test

The run executes two main commands. The first launches the simulation environment and generates simulation data. The second feeds the simulated sensor data into the trained network, which outputs flight trajectories used to control the drone, while statistics of the actual flight are recorded.

roslaunch agile_autonomy simulation.launch
python test_trajectories.py --settings_file=config/test_settings.yaml

Simulation Data Collection

A drone is spawned in the simulation environment. Given the drone's pose and parameters such as image width and height, Unity returns the simulated stereo RGB images and a depth image.
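As a rough illustration of this interaction (all names below are hypothetical placeholders, not the actual Flightmare API; the real query is implemented in C++ in getImageFromUnity()):

# Hypothetical sketch of one render query against the simulator.
def collect_frame(unity_bridge, pose, width=640, height=480, fov=90.0):
    unity_bridge.set_camera(width=width, height=height, fov=fov)  # image size / FOV
    unity_bridge.set_quad_pose(pose)                 # drone position + orientation
    left_rgb, right_rgb = unity_bridge.render_stereo()  # simulated stereo RGB pair
    gt_depth = unity_bridge.render_depth()           # ground-truth depth image
    return left_rgb, right_rgb, gt_depth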

Trajectory Testing

By default, 50 test runs are executed.
The log of one run is shown below:

==========================
     RESET SIMULATION
==========================
It worked well. (Arrived at 285 / 324)
Giving a stop from python

Unpausing Physics...

Placing quadrotor...

success: True
status_message: "SetModelState: set model state done"
Received call to Clear Buffer and Restart Experiment

Resetting experiment

Done Reset
Doing experiment 8
Reading pointcloud from ../data_generation/data/rollout_21-11-16_14-36-51/pointcloud-unity.ply
min max pointcloud
[29.90007973 29.90000534  6.71293974]
[-29.90000534  10.09995842  -1.48705816]
Reading Trajectory from ../data_generation/data/rollout_21-11-16_14-36-51/reference_trajectory.csv
Loaded traj ../data_generation/data/rollout_21-11-16_14-36-51/reference_trajectory.csv with 324 elems
Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Starting up!

Crashing into something!

Crashing into something!

It worked well. (Arrived at 263 / 324)
Giving a stop from python
experiment done
Rollout dir is ../data_generation/data/rollout_21-11-16_14-36-51

[ INFO] [1637053941.578871631, 1024.850000000]: [/hummingbird/autopilot] Switched to BREAKING state
[ INFO] [1637053942.599715262, 1025.360000000]: [/hummingbird/autopilot] Switched to HOVER state
[ INFO] [1637053944.262924978, 1025.628000000]: [/hummingbird/autopilot] OFF command received
[ INFO] [1637053944.263037491, 1025.628000000]: [/hummingbird/autopilot] Switched to OFF state
[ INFO] [1637053944.263474245, 1025.628000000]: Received off command, stopping maneuver execution!

[ INFO] [1637053944.263595630, 1025.628000000]: Switching to kOff
[ INFO] [1637053946.541124498, 1026.765000000]: Received off command, stopping maneuver execution!

[ INFO] [1637053946.541246332, 1026.766000000]: Switching to kOff
[ INFO] [1637053947.542290912, 1027.019000000]: [/hummingbird/rpg_rotors_interface] Interface armed
[ INFO] [1637053947.542323540, 1027.019000000]: [/hummingbird/autopilot] START command received
[ INFO] [1637053947.542396557, 1027.019000000]: [/hummingbird/autopilot] Absolute state estimate available, taking off based on it
[ INFO] [1637053947.542428793, 1027.019000000]: [/hummingbird/autopilot] Switched to START state
[ INFO] [1637053952.204707651, 1029.023000000]: Solving MPC with hover as initial guess.

[ INFO] [1637053960.008035347, 1032.918000000]: [/hummingbird/autopilot] Switched to BREAKING state
[ INFO] [1637053960.148160505, 1032.988000000]: [/hummingbird/autopilot] Switched to HOVER state
[ INFO] [1637053972.565306967, 1039.187000000]: Initiated Logging, computing reference trajectory and generating point cloud!

[ INFO] [1637053972.565415887, 1039.187000000]: Starting maneuver computation
Initiated acrobatic sequence
Creating directories in [/home/wxm/Documents/codes/agile_autonomy_ws/src/agile_autonomy/data_generation/agile_autonomy/../data]
[ INFO] [1637053974.602435105, 1040.204000000]: Find elevation
[ INFO] [1637053974.602561021, 1040.205000000]: Place trees
[ INFO] [1637053976.604247155, 1041.204000000]: Spawning [2255] trees, poisson mode is [2209].

[ INFO] [1637053976.604346633, 1041.205000000]: Incrementing seed to 70.

[ INFO] [1637053977.906371979, 1041.855000000]: Start creating pointcloud
[ INFO] [1637053977.906433051, 1041.855000000]: Scale pointcloud: [60.00, 20.00, 10.00]
[ INFO] [1637053977.906451500, 1041.855000000]: Origin pointcloud: [-0.00, 20.00, 1.89]
[ INFO] [1637054004.909340006, 1055.338000000]: Pointcloud saved
Opened new odometry file: /home/wxm/Documents/codes/agile_autonomy_ws/src/agile_autonomy/data_generation/agile_autonomy/../data/rollout_21-11-16_17-12-52/odometry.csv
Saving trajectory to CSV.

Trajectory filename: /home/wxm/Documents/codes/agile_autonomy_ws/src/agile_autonomy/data_generation/agile_autonomy/../data/rollout_21-11-16_17-12-52/reference_trajectory.csv
Saved trajectory to file.

[ INFO] [1637054004.926331444, 1055.347000000]: Gogogo!

[ INFO] [1637054004.926386841, 1055.347000000]: Switching to kAutopilot
[ INFO] [1637054005.026160529, 1055.397000000]: Maneuver computation successful!

[ INFO] [1637054005.026284428, 1055.397000000]: Maneuver computation took 16.2100 seconds.

[ INFO] [1637054005.152199453, 1055.460000000]: Selected trajectory #0
[ INFO] [1637054005.162462001, 1055.465000000]: Solving MPC with hover as initial guess.

[ INFO] [1637054005.163936578, 1055.466000000]: [/hummingbird/autopilot] Switched to COMMAND_FEEDTHROUGH state
[ INFO] [1637054005.283365602, 1055.525000000]: Selected trajectory #0
[ INFO] [1637054005.418361186, 1055.592000000]: Selected trajectory #0
[ INFO] [1637054005.546865663, 1055.656000000]: Selected trajectory #0
[ INFO] [1637054005.682420173, 1055.724000000]: Selected trajectory #0
[ INFO] [1637054005.814040633, 1055.790000000]: Selected trajectory #0
[ INFO] [1637054005.948414538, 1055.857000000]: Selected trajectory #1
[ INFO] [1637054006.080307769, 1055.923000000]: Selected trajectory #0
[ INFO] [1637054006.212933667, 1055.989000000]: Selected trajectory #0
[ INFO] [1637054006.349705622, 1056.057000000]: Selected trajectory #0
[ INFO] [1637054006.484988932, 1056.125000000]: Selected trajectory #0
[ INFO] [1637054006.615398050, 1056.190000000]: Selected trajectory #0
[ INFO] [1637054006.749158400, 1056.257000000]: Selected trajectory #0
[ INFO] [1637054006.884676191, 1056.325000000]: Selected trajectory #0
[ INFO] [1637054007.014193683, 1056.389000000]: Selected trajectory #0
[ INFO] [1637054007.150219775, 1056.457000000]: Selected trajectory #0
[ INFO] [1637054007.285815896, 1056.525000000]: Selected trajectory #0
[ INFO] [1637054007.416023906, 1056.590000000]: Selected trajectory #0
[ INFO] [1637054007.549148150, 1056.657000000]: Selected trajectory #0
[ INFO] [1637054007.683953972, 1056.724000000]: Selected trajectory #0
[ INFO] [1637054007.819368611, 1056.791000000]: Selected trajectory #0
[ INFO] [1637054007.949866909, 1056.857000000]: Selected trajectory #0
[ INFO] [1637054008.085218053, 1056.924000000]: Selected trajectory #0
[ INFO] [1637054008.216562474, 1056.990000000]: Selected trajectory #0
[ INFO] [1637054008.350844266, 1057.057000000]: Selected trajectory #0
[ INFO] [1637054008.485598554, 1057.123000000]: Selected trajectory #0
[ INFO] [1637054008.618634733, 1057.190000000]: Selected trajectory #0
[ INFO] [1637054008.751611357, 1057.257000000]: Selected trajectory #0
[ INFO] [1637054008.885042986, 1057.324000000]: Selected trajectory #0
[ INFO] [1637054009.016160422, 1057.389000000]: Selected trajectory #0
[ INFO] [1637054009.153358343, 1057.458000000]: Selected trajectory #0
[ INFO] [1637054009.284585798, 1057.523000000]: Selected trajectory #0
[ INFO] [1637054009.417833454, 1057.590000000]: Selected trajectory #0
[ INFO] [1637054009.554909065, 1057.658000000]: Selected trajectory #0
[ INFO] [1637054009.684010018, 1057.723000000]: Selected trajectory #0
[ INFO] [1637054009.822174455, 1057.792000000]: Selected trajectory #0
[ INFO] [1637054009.954099619, 1057.857000000]: Selected trajectory #0
[ INFO] [1637054010.087778279, 1057.924000000]: Selected trajectory #0
[ INFO] [1637054010.216160307, 1057.989000000]: Selected trajectory #0
[ INFO] [1637054010.355049595, 1058.058000000]: Selected trajectory #1
[ INFO] [1637054010.493161090, 1058.127000000]: Selected trajectory #0
[ INFO] [1637054010.617860860, 1058.189000000]: Selected trajectory #0
[ INFO] [1637054010.752438215, 1058.257000000]: Selected trajectory #0
[ INFO] [1637054010.888457315, 1058.324000000]: Selected trajectory #0
[ INFO] [1637054011.022363300, 1058.391000000]: Selected trajectory #0
[ INFO] [1637054011.152986033, 1058.457000000]: Selected trajectory #0
[ INFO] [1637054011.287158368, 1058.524000000]: Selected trajectory #0
[ INFO] [1637054011.419963912, 1058.590000000]: Selected trajectory #1
[ INFO] [1637054011.551984385, 1058.656000000]: Selected trajectory #0
[ INFO] [1637054011.689502372, 1058.725000000]: Selected trajectory #1
[ INFO] [1637054011.820987537, 1058.791000000]: Selected trajectory #1
[ INFO] [1637054011.955258672, 1058.858000000]: Selected trajectory #0
[ INFO] [1637054012.086914492, 1058.923000000]: Selected trajectory #0
[ INFO] [1637054012.222633486, 1058.991000000]: Selected trajectory #0
[ INFO] [1637054012.354479495, 1059.057000000]: Selected trajectory #0
[ INFO] [1637054012.488525237, 1059.124000000]: Selected trajectory #0
[ INFO] [1637054012.621122964, 1059.190000000]: Selected trajectory #0
[ INFO] [1637054012.753466361, 1059.256000000]: Selected trajectory #0
[ INFO] [1637054012.892174870, 1059.325000000]: Selected trajectory #0
[ INFO] [1637054013.024166737, 1059.391000000]: Selected trajectory #0
[ INFO] [1637054013.155947755, 1059.457000000]: Selected trajectory #0
[ INFO] [1637054013.290224788, 1059.524000000]: Selected trajectory #0
[ INFO] [1637054013.421112235, 1059.590000000]: Selected trajectory #0
[ INFO] [1637054013.559066533, 1059.658000000]: Selected trajectory #0
[ INFO] [1637054013.692403028, 1059.725000000]: Selected trajectory #0
[ INFO] [1637054013.822663259, 1059.790000000]: Selected trajectory #0
[ INFO] [1637054013.958871348, 1059.858000000]: Selected trajectory #0
[ INFO] [1637054014.090127192, 1059.923000000]: Selected trajectory #0
[ INFO] [1637054014.219357813, 1059.988000000]: Selected trajectory #0
[ INFO] [1637054014.354345334, 1060.055000000]: Selected trajectory #0
[ INFO] [1637054014.491573089, 1060.124000000]: Selected trajectory #0
[ INFO] [1637054014.625786710, 1060.191000000]: Selected trajectory #0
[ INFO] [1637054014.759140338, 1060.257000000]: Selected trajectory #1
[ INFO] [1637054014.892043398, 1060.324000000]: Selected trajectory #0
[ INFO] [1637054015.023581521, 1060.390000000]: Selected trajectory #0
[ INFO] [1637054015.158324608, 1060.457000000]: Selected trajectory #0
[ INFO] [1637054015.293084738, 1060.524000000]: Selected trajectory #0
[ INFO] [1637054015.423784131, 1060.589000000]: Selected trajectory #0
[ INFO] [1637054015.557652264, 1060.656000000]: Selected trajectory #0
[ INFO] [1637054015.692585591, 1060.724000000]: Selected trajectory #0
[ INFO] [1637054015.823870935, 1060.789000000]: Selected trajectory #0
[ INFO] [1637054015.959402968, 1060.857000000]: Selected trajectory #0
[ INFO] [1637054016.096347502, 1060.925000000]: Selected trajectory #0
[ INFO] [1637054016.231029848, 1060.992000000]: Selected trajectory #1
[ INFO] [1637054016.365640987, 1061.059000000]: Selected trajectory #0
[ INFO] [1637054016.493614867, 1061.123000000]: Selected trajectory #0
[ INFO] [1637054016.627199198, 1061.190000000]: Selected trajectory #0
[ INFO] [1637054016.761573804, 1061.257000000]: Selected trajectory #0
[ INFO] [1637054016.895532532, 1061.324000000]: Selected trajectory #0
[ INFO] [1637054017.029675794, 1061.391000000]: Selected trajectory #0
[ INFO] [1637054017.160851167, 1061.457000000]: Selected trajectory #0
Close Odometry File.

[ INFO] [1637054017.259214657, 1061.506000000]: Switching to kComputeLabels
[ WARN] [1637054017.467031923, 1061.610000000]: [/hummingbird/autopilot] Did not receive control command inputs anymore but last thrust command was high, will switch to hover

Code Logic

  • PlaNet is the network model. It takes as input the RGB image captured by the drone in the simulator, the IMU state (odometry), the depth image, and the reference trajectory, and it predicts output trajectories that are published on /hummingbird/trajectory_predicted.
  • AgileAutonomy subscribes to trajectory_predicted and stores the prediction as reference_trajectory. It then uses MpcController to generate control commands (quadrotor_common::ControlCommand), publishes them as ROS messages, and also feeds them into the state estimator state_predictor_ to estimate the drone's real-time pose (odometry); see the sketch after this list.
  • From the drone's current pose (odometry) and parameters such as image resolution and FOV, the simulator renders the stereo RGB images and a depth image as seen by the drone; this rendered depth serves as the ground-truth depth. The corresponding code is getImageFromUnity(). In addition, a depth image is computed from the stereo RGB pair using the SGM algorithm (sgm_gpu::SgmGpu).
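A minimal rospy sketch of the subscribe side of this flow, assuming the topic carries quadrotor_msgs/Trajectory messages (the real implementation is the C++ AgileAutonomy node, not this script):

#!/usr/bin/env python
import rospy
from quadrotor_msgs.msg import Trajectory  # assumed message type of the topic

def trajectory_cb(msg):
    # Cache the network's prediction as the new reference trajectory; the C++
    # node forwards it to MpcController and to the state predictor.
    rospy.loginfo("Received predicted trajectory with %d points", len(msg.points))

rospy.init_node("trajectory_listener")
rospy.Subscriber("/hummingbird/trajectory_predicted", Trajectory, trajectory_cb)
rospy.spin()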

Paper Notes

Abstract

Abstract: traditional autonomous navigation consists of three sequential steps: perception, mapping, and planning. This serial pipeline works at low speed, but at high speed the separated steps introduce processing latency and accumulate errors. The paper therefore proposes a "direct mapping" approach: noisy sensor data is fed directly into a neural network, which outputs a planned trajectory. This direct mapping significantly reduces processing latency and is more robust to noise. The network is trained on noisy simulation data and, once trained, transfers directly to real-world scenes. Experiments show that this end-to-end approach outperforms the traditional pipeline.

Method

MPC: trained via privileged learning.
Simulation environment: Flightmare + RotorS Gazebo + Unity


Privileged Expert

A traditional offline planning method computes a set of collision-free trajectories.
Inputs: the reference trajectory, the vehicle pose, and the environment point cloud.
Metropolis-Hastings sampling is used, and the top three trajectories are kept as the supervision targets for the subsequent network training.

Sensorimotor Agent

Network input: depth image + vehicle velocity and attitude + desired flight direction
Network output: a set of motion hypotheses (trajectories, each with an estimated collision risk)

Actions

The trajectories predicted by the network are projected onto the space of polynomial trajectories and ranked by their predicted collision cost. The trajectory with the lowest collision cost is sent to the MPC to control the drone; a minimal sketch of this step follows.
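A minimal numpy sketch of this selection step, under stated assumptions: the polynomial degree, horizon length, and array shapes are illustrative, not the paper's exact values.

import numpy as np

def select_action(candidates, costs, t, degree=5):
    """candidates: (K, N, 3) predicted positions; costs: (K,) collision costs;
    t: (N,) timestamps. Fits the lowest-cost candidate with per-axis
    polynomials and returns the coefficients for the tracking controller."""
    best = int(np.argmin(costs))                      # lowest collision cost wins
    coeffs = [np.polyfit(t, candidates[best][:, ax], degree) for ax in range(3)]
    return np.stack(coeffs)                           # shape (3, degree + 1)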

Method Details

The privileged expert

The privileged expert is a sampling-based motion planning algorithm. Given the platform state and the environment point cloud, it generates a collision-free trajectory describing the drone's desired states. A trajectory is assigned higher probability the farther it stays from obstacles and the closer it tracks the reference trajectory.
Metropolis-Hastings (M-H) sampling is used, with cubic B-splines on uniform knots for computationally efficient interpolation.
The expert cannot run in real time because of the heavy computational load the sampling introduces; a toy sketch of the sampling idea follows.
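A toy, self-contained sketch of Metropolis-Hastings sampling over trajectory waypoints, treating exp(-cost) as an unnormalized density; the cost below is a simple stand-in for the paper's actual objective, not its implementation.

import numpy as np

def cost(traj, obstacles, reference, w_ref=1.0, w_obs=5.0):
    # Low when the trajectory stays close to the reference and far from obstacles.
    ref_term = np.sum((traj - reference) ** 2)
    dists = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=-1)
    obs_term = np.sum(np.exp(-dists.min(axis=1)))
    return w_ref * ref_term + w_obs * obs_term

def mh_sample(reference, obstacles, iters=2000, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    traj = reference.copy()
    c = cost(traj, obstacles, reference)
    samples = []
    for _ in range(iters):
        proposal = traj + sigma * rng.standard_normal(traj.shape)
        c_new = cost(proposal, obstacles, reference)
        if rng.random() < np.exp(min(0.0, c - c_new)):  # Metropolis acceptance rule
            traj, c = proposal, c_new
        samples.append((c, traj.copy()))
    samples.sort(key=lambda s: s[0])
    return [t for _, t in samples[:3]]                  # keep the 3 best for supervision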

The student policy

Unlike the expert, the student policy produces collision-free trajectories in real time and needs only onboard sensor measurements.
The measurements consist of a depth image computed by SGM, the platform's velocity and attitude, and the desired flight direction; trajectories are obtained without any point cloud input. Two main challenges arise: first, only part of the environment is observable and the sensors are noisy; second, the distribution of good trajectories is multimodal: several motion hypotheses can each have high probability, yet their average has low probability.
We design a neural network to address both problems. The network has two branches that produce a latent encoding of the visual, inertial, and reference information, and it outputs three trajectories with their collision costs. A MobileNet-V3 backbone efficiently extracts features from the depth image, followed by a 1-D convolution producing a 32-dimensional feature vector. The platform's current velocity and attitude are concatenated with the desired reference direction and processed by a four-layer perceptron with LeakyReLU activations; a 1-D convolution again produces a 32-dimensional feature vector. The visual and state features are then concatenated and passed through another four-layer perceptron with LeakyReLU activations, which finally predicts, for each mode, one trajectory and its collision cost. In summary, the architecture receives a depth image, the platform velocity, the attitude (represented as a rotation matrix), and the reference direction, and from these inputs predicts trajectories annotated with collision costs. Unlike the expert, the network predicts only positions rather than the full state; this representation is more general than a B-spline and avoids heavy interpolation.
We train with supervision from the three trajectories produced by the expert. To handle the multi-hypothesis prediction, we use a relaxed winner-takes-all (R-WTA) loss. A rough sketch of such a network follows.
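A rough Keras sketch of this two-branch architecture, under loose assumptions: the resolution, layer widths, and trajectory parameterization are illustrative, and Dense layers stand in for the paper's 1-D convolutions.

import tensorflow as tf

K, N = 3, 10                                     # hypotheses, points per trajectory

depth = tf.keras.Input(shape=(224, 320, 3))      # depth image tiled to 3 channels
state = tf.keras.Input(shape=(1 + 9 + 3,))       # speed, rotation matrix, ref direction

# Visual branch: MobileNetV3 features reduced to a 32-d code.
backbone = tf.keras.applications.MobileNetV3Small(
    input_shape=(224, 320, 3), include_top=False, weights=None, pooling="avg")
img_feat = tf.keras.layers.Dense(32)(backbone(depth))

# State branch: four-layer MLP with LeakyReLU, reduced to a 32-d code.
x = state
for _ in range(4):
    x = tf.keras.layers.LeakyReLU()(tf.keras.layers.Dense(64)(x))
state_feat = tf.keras.layers.Dense(32)(x)

# Fusion head: another four-layer MLP predicting K trajectories plus costs.
h = tf.keras.layers.Concatenate()([img_feat, state_feat])
for _ in range(4):
    h = tf.keras.layers.LeakyReLU()(tf.keras.layers.Dense(128)(h))
out = tf.keras.layers.Dense(K * (3 * N + 1))(h)  # per mode: N xyz points + 1 cost
model = tf.keras.Model([depth, state], out)

The R-WTA loss then routes most of the gradient to the hypothesis closest to each expert trajectory, with a small residual weight on the other hypotheses, which keeps all modes alive without averaging them together.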

Training environment

We build scenes in the Flightmare simulator to collect training data. All environments are generated by dropping objects onto uneven, open ground. Two types of objects are generated: simulated trees and a set of convex shapes, placed at random.
We generate 850 scenes. For each scene we compute a globally collision-free trajectory from the start point to a goal 40 m ahead. The global trajectory is observable only by the expert, never by the student policy.
To ensure adequate coverage of the state space, we use DAgger (sketched below).
In simulation, depth maps are generated from the rendered stereo pair using SGM, and ground-truth state is used. In real-world tests, an Intel RealSense 435 provides both depth images and pose estimates.
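A toy, self-contained DAgger loop on a 1-D task, only to illustrate the idea used here: roll out with the student, relabel every visited state with the expert, and retrain on the aggregated dataset. Nothing below is the paper's actual code.

import numpy as np

def expert_action(s):
    return -0.5 * s                          # the expert drives the state to zero

def rollout(theta, steps=20):
    s, states = 2.0, []
    for _ in range(steps):
        states.append(s)
        s = s + theta * s                    # visit states under the *student* policy
    return states

theta, data = 0.0, []
for _ in range(5):                           # DAgger rounds
    for s in rollout(theta):
        data.append((s, expert_action(s)))   # expert relabels the student's states
    X = np.array([s for s, _ in data])
    y = np.array([a for _, a in data])
    theta = float(X @ y / (X @ X))           # least-squares refit of the student
print("learned gain:", theta)                # converges to the expert's -0.5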

Original: https://blog.csdn.net/wxm__/article/details/121220816
Author: SLAM On the Road
Title: Paper Study – Learning High-Speed Flight in the Wild
