With the advent of connected vehicle technology and widespread long-distance communication, it is now possible to leverage data from real-world operational cases to identify and characterize vehicle dynamics and control. However, off-road vehicle identification has received far less study than on-road vehicle identification. This study explores the application of Dynamic Mode Decomposition with Control (DMDc) to linear vehicle dynamics identification, linear human-driver-controller identification, and off-road vehicle identification problems. An implementation is tested using real-world data collected from the University of Illinois Modified R-Gator (UIMR), an autonomy-capable, drive-by-wire, 6x4, Ackermann-steered, wheeled off-road vehicle platform. The UIMR, partially developed by the Autonomous and Unmanned Vehicle Systems Lab of the Industrial and Systems Engineering department at the University of Illinois at Urbana-Champaign, is presented for the first time.
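The abstract above applies DMDc to linear system identification. As background, not the paper's implementation, the standard DMDc step fits x_{k+1} ≈ A x_k + B u_k by least squares over snapshot matrices; a minimal sketch with synthetic data (all matrices and values here are illustrative assumptions):

```python
import numpy as np

def dmdc(X, Xp, U):
    """Estimate A, B in x_{k+1} ≈ A x_k + B u_k from snapshot data.

    X  : (n, m) states x_1..x_m
    Xp : (n, m) time-shifted states x_2..x_{m+1}
    U  : (q, m) inputs u_1..u_m
    """
    Omega = np.vstack([X, U])           # stacked state/input data matrix
    G = Xp @ np.linalg.pinv(Omega)      # least-squares operator [A B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]

# Synthetic check: recover a known (A, B) from simulated trajectories
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
m = 50
X = np.zeros((2, m)); Xp = np.zeros((2, m))
U = rng.standard_normal((1, m))
x = np.array([1.0, -1.0])
for k in range(m):
    X[:, k] = x
    x = A_true @ x + B_true @ U[:, k]
    Xp[:, k] = x
A_est, B_est = dmdc(X, Xp, U)
```

With noise-free linear data the pseudoinverse solution recovers the true operators to numerical precision; real operational data, as studied in the paper, would require regularization or a truncated SVD.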
Modern mobile robots require precise and robust localization and navigation systems to accomplish their missions. The underwater environment, where Global Navigation Satellite Systems (GNSSs) cannot be exploited, makes the development of localization and navigation strategies particularly challenging. Maximum A Posteriori (MAP) strategies have been analyzed and tested to increase navigation accuracy and to take the entire history of the system state into account. In particular, a sensor fusion algorithm relying on a MAP technique for Simultaneous Localization and Mapping (SLAM) has been developed to fuse information from a monocular camera and a Doppler Velocity Log (DVL) and to include landmark points in the navigation framework. The proposed approach simultaneously localizes the vehicle, thanks to the onboard sensors, and maps the surrounding environment using information extracted from images acquired by a bottom-looking optical camera. Optical sensors provide constraints between the vehicle poses and the landmarks belonging to the observed scene, while the DVL measurements are employed to resolve the unknown scale factor and to guarantee correct vehicle localization even in the absence of visual features. After validating the solution through realistic simulations, an experimental campaign at sea was conducted at Stromboli Island (Messina), Italy. Finally, a mesh-creation algorithm based on the Poisson surface reconstruction method has been developed to obtain a smooth seabed surface.
The nuclear energy sector can benefit from mobile robots for remote inspection and handling, reducing human exposure to radiation. Advances in cyber-physical systems have improved robotic platforms in this sector through digital twin technology. Digital twins enhance situational awareness for robot operators, which is crucial for safety in the nuclear energy sector, and their value is anticipated to increase with the growing complexity of cyber-physical systems. We propose a multimodal immersive digital twin platform for cyber-physical robot fleets based on ROS-Unity3D. The system design integrates building information models, displays mission parameters, visualizes robot sensor data, and provides multi-modal user interaction through traditional and virtual reality interfaces. This enables real-time monitoring and management of fleets with heterogeneous robotic platforms. We performed a heuristic evaluation of our digital twin interface with robot operators from leading nuclear institutions, Sellafield Ltd and the Japan Atomic Energy Agency, who performed a simulated robot inspection mission. The three usability themes that emerged from this evaluation and inspired our design recommendations are the flexibility of the interface, the individuality of each robot, and adaptation to expanding sensor visualization capabilities.
This paper presents the development and implementation of a multi-UAV system for coverage path planning over multiple separated areas, capable of re-planning the collective mission in case of unexpected events. For this purpose, we present a distributed-centralized architecture that uses heuristic, computationally efficient methods to perform the planning/re-planning and decision-making tasks during mission execution. We performed a computational evaluation of the algorithms, comparing them with other proposals, together with experiments in simulated and real flights. The results show that the system can distribute tasks equitably and efficiently among the aircraft, even mid-flight when facing unexpected events, and exhibits higher computational efficiency than multiple proposals in the state of the art.
Point cloud registration is a vital task in 3D perception, with several different applications in robotics. Recent advancements have introduced neural-based techniques that promise enhanced accuracy and robustness. In this paper, we thoroughly evaluate well-known neural-based point cloud registration methods using the Point Clouds Registration Benchmark, which was developed to cover a large variety of use cases. Our evaluation focuses on the performance of these techniques when applied to real-complex data, which presents a more challenging and realistic scenario than the simpler experiments typically conducted by the original authors. With few exceptions, the results reveal a significant performance gap, with most neural-based methods performing poorly on real-complex data. However, amidst the underwhelming results, 3DSmoothNet stands out as an exception, demonstrating excellent performance across the majority of benchmark sequences. Remarkably, it outperforms a state-of-the-art feature extractor, namely FPFH, showcasing its effectiveness in capturing informative and discriminative features from point clouds. Given these results, we assert that while neural-based techniques could have advantages over conventional methods, further research is necessary to fully unlock their practical utility. We advocate for more extensive studies employing unbiased and realistic testing procedures, akin to the rigorous evaluation framework employed in this work. This paper represents a crucial step towards establishing a more robust literature in the field of point cloud registration. By exposing the limitations and advantages of existing neural-based methods, we aim to inspire future research and drive the development of more effective techniques for real-world applications.
Unmanned aerial vehicles (UAVs) offer many advantages over ground vehicles, including quadruped robots, owing to their high maneuverability when exploring complex and unknown environments. However, due to their limited computational capability, UAVs require lightweight yet accurate state estimation algorithms for reliable exploration. In this paper, we propose a segmented-map-based exploration system built on LiDAR-based state estimation for UAVs. The proposed system provides exploration, obstacle avoidance, and object detection with localization using 3D dense maps generated by tightly coupled LiDAR-Inertial Odometry (LIO). It is a hybrid system that can switch between guided and exploration modes, making it practical for search and rescue missions in disaster scenarios. The proposed LIO algorithm adapts to its surroundings, allowing fast and accurate state estimation in complex environments. The proposed exploration algorithm is designed to cover specific regions in the 3D dense map generated by the LIO, with the UAV determining whether map points fall within the coverage area. We tested the proposed system in both simulation and real-world environments and validated that it outperforms state-of-the-art algorithms in aspects such as localization accuracy and exploration efficiency in complex environments.
Multi-sensor fusion-based localization technology has achieved high accuracy in autonomous systems; improving its robustness is now the main challenge. The most commonly used sensors, LiDAR and cameras, are weather-sensitive, while FMCW radar has strong adaptability but suffers from noise and ghost effects. In this paper, we propose a heterogeneous localization method called Radar on LiDAR Map (RoLM), which enhances localization accuracy without relying on loop closures by mitigating the accumulated error of radar odometry in real time. Our approach embeds the data from both radar and LiDAR sensors into a density map. We calculate the spatial vector similarity with an offset to determine the corresponding place index within the candidate map and to estimate the rotation and translation. To refine the alignment, we utilize the Iterative Closest Point (ICP) algorithm to achieve optimal matching on the LiDAR submap. We conducted extensive experiments on the MulRan Radar Dataset, the Oxford Radar RobotCar Dataset, and our own dataset to demonstrate the feasibility and effectiveness of the proposed approach.
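The abstract above describes matching places by computing vector similarity under an offset, then refining with ICP. As an illustrative sketch only (the descriptor and data below are invented, not RoLM's actual formulation), an offset search can be realized as a brute-force circular shift of 1-D descriptors derived from a polar density map, where the best shift approximates the yaw offset between the two places:

```python
import numpy as np

def best_offset_similarity(query, candidate):
    """Compare two 1-D place descriptors under all circular shifts.

    query, candidate : (N,) column-wise summaries of a polar density map.
    Returns (best cosine similarity, shift index); the shift approximates
    the rotational offset between the two places.
    """
    q = query / (np.linalg.norm(query) + 1e-12)
    best_sim, best_shift = -1.0, 0
    for s in range(len(query)):
        c = np.roll(candidate, s)                # hypothesize yaw offset s
        c = c / (np.linalg.norm(c) + 1e-12)
        sim = float(q @ c)
        if sim > best_sim:
            best_sim, best_shift = sim, s
    return best_sim, best_shift

# A descriptor matched against a rotated copy of itself should score ~1.0
desc = np.array([0., 1., 3., 2., 0., 0., 1., 2.])
rotated = np.roll(desc, 3)
sim, shift = best_offset_similarity(desc, rotated)
```

In a full pipeline such as the one described, the recovered offset would seed the rotation estimate before ICP refinement on the LiDAR submap.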
This article presents a study on the world's first unmanned machine designed for autonomous forestry operations. In response to the challenges associated with traditional forestry operations, we developed a platform equipped with essential hardware components necessary for performing autonomous forwarding tasks. Through the use of computer vision, autonomous navigation, and manipulator control algorithms, the machine is able to pick up logs from the ground and manoeuvre through a range of forest terrains without the need for human intervention. Our initial results demonstrate the potential for safe and efficient autonomous extraction of logs in the cut-to-length harvesting process. We achieved a high level of accuracy in our computer vision system, and our autonomous navigation system proved to be highly efficient. This research represents a significant milestone in the field of autonomous outdoor robotics, with far-reaching implications for the future of forestry operations. By reducing the need for human labour, autonomous machines have the potential to increase productivity and reduce labour costs, while also minimizing the environmental impact of timber harvesting. The success of our study highlights the potential for further development and optimization of autonomous machines in the forestry industry.
Projections for future air mobility envisage intensely utilised airspace that does not simply scale up from existing systems with centralised air traffic control. This paper considers the implementation and test of a software and hardware framework for decentralised control of aerial vehicles within intensely used airspace. Up to 10 rotary wing vehicles of maximum all-up mass of 1 kg are flown in an outdoor volume with a length scale of 100 m with GPS and WiFi connectivity. Flight control is implemented using a Pixhawk 4 flight controller running the PX4 firmware, with guidance algorithms run on a separate onboard companion computer. Deconfliction is implemented using a simple elastic repulsion model with a guidance update rate of 10 Hz. Traffic structures are constructed from a path of directed waypoints and associated cross-sectional geometry. Junctions are implemented when two paths converge into one or when one path diverges into two. Agents engage with structures through execution of flow, merge, and swirl velocity rules. Calibration experiments showed that the worst-case latency in agents sharing position information was of the order of 0.5 s, made up of delays due to the finite guidance update rate, WiFi processing, and centralised message processing. A vehicle cruise speed of 2 m/s and a conflict radius of 2.5 m provided an acceptable compromise between experiment time efficiency (speed) and spatial efficiency (resolution) within the test volume. Results from recirculating junction experiments show that peak deconfliction activity occurs at the junction node; however, the biased distribution of agents within a corridor pushes the peak intensity ahead of the node. Use of meshed helical junction structures significantly reduces the intensity of conflict at the expense of reduced junction time efficiency.
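The elastic repulsion model named above can be pictured as each agent receiving a spring-like velocity push away from every neighbour inside the conflict radius. A minimal sketch, assuming a 2-D workspace and a unit spring gain (the gain and function names are illustrative, not from the paper):

```python
import math

CONFLICT_RADIUS = 2.5   # m, value reported in the experiments
GAIN = 1.0              # hypothetical spring gain (1/s)

def repulsion_velocity(own_pos, neighbour_positions):
    """Sum elastic repulsions from neighbours inside the conflict radius.

    Each neighbour closer than CONFLICT_RADIUS contributes a velocity
    pushing the agent directly away, scaled by the penetration depth.
    """
    vx, vy = 0.0, 0.0
    for nx, ny in neighbour_positions:
        dx, dy = own_pos[0] - nx, own_pos[1] - ny
        d = math.hypot(dx, dy)
        if 0.0 < d < CONFLICT_RADIUS:
            push = GAIN * (CONFLICT_RADIUS - d)   # spring compression
            vx += push * dx / d                   # unit vector away
            vy += push * dy / d
    return vx, vy

# A neighbour 1.5 m to the east yields a westward push of 1.0 m/s
v = repulsion_velocity((0.0, 0.0), [(1.5, 0.0)])
```

In the framework described, such a correction would be recomputed at the 10 Hz guidance update rate and blended with the flow, merge, and swirl velocity rules.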
This paper presents the design and development of a funnel-shaped Sparus Docking Station (SDS) intended for the non-holonomic, torpedo-shaped Sparus II Autonomous Underwater Vehicle (AUV). The SDS is equipped with sensors and batteries, allowing for stand-alone long-term deployment of the AUV. An inverted Ultra-Short BaseLine (USBL) system is used to locate the Docking Station (DS) as well as to provide long-term drift-less AUV navigation. The SDS observes the ocean currents using a Doppler Velocity Log (DVL) and is motorized to self-align with the current. Moreover, a docking algorithm accounting for the current is used to guide the robot during the docking maneuver. The paper reports experimental results of the docking maneuver in sea trials.
This paper fuses ideas from Reinforcement Learning (RL), Learning from Demonstration (LfD), and Ensemble Learning into a single paradigm. Knowledge from a mixture of control algorithms (experts) is used to constrain the action space of the agent, enabling faster RL refinement of a control policy by avoiding unnecessary explorative actions. The domain-specific knowledge of each expert is exploited; however, the resulting policy is robust against the errors of individual experts, since it is refined by an RL reward function without copying any particular demonstration. Our method has the potential to supplement existing RLfD methods when multiple algorithmic approaches are available to function as experts. We illustrate the method on a Visual Servoing (VS) task, in which a 7-DOF robot arm is controlled to maintain a desired pose relative to a target object. We explore four methods for bounding the actions of the RL agent during training: using a hypercube or a convex hull with modified loss functions, ignoring actions outside the convex hull, and projecting actions onto the convex hull. We compare the training progress of each method with and without the expert demonstrators. Our experiments show that using the convex hull with a modified loss function significantly improves training progress. Furthermore, we demonstrate faster VS error convergence while maintaining higher manipulability of the arm, compared to classical image-based VS, position-based VS, and hybrid-decoupled VS.
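Of the bounding methods listed above, the hypercube variant is the simplest to picture: the expert actions define per-dimension limits, and the agent's action is clipped into that box. A minimal sketch under that reading (the function names and sample actions are illustrative, not the paper's code; the convex hull variants would require a projection solver instead of a clip):

```python
import numpy as np

def hypercube_bounds(expert_actions):
    """Per-dimension min/max of the expert actions (axis-aligned box)."""
    A = np.asarray(expert_actions, dtype=float)
    return A.min(axis=0), A.max(axis=0)

def bound_action(action, lo, hi):
    """Clip an RL agent's proposed action into the expert hypercube."""
    return np.clip(action, lo, hi)

# Three hypothetical 2-D expert actions define the box [0.1,0.3]x[-0.2,0.4]
experts = [[0.1, -0.2], [0.3, 0.0], [0.2, 0.4]]
lo, hi = hypercube_bounds(experts)
a = bound_action(np.array([0.5, -1.0]), lo, hi)
```

The box keeps exploration inside the span of expert behaviour while remaining much looser than any single demonstration, which is why a convex hull, the tighter envelope the paper found most effective, can further restrict wasted exploration.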
A new form of waypoint navigation controller for a skid-steer vehicle is presented, consisting of a multiple-input, single-output nonlinear fuzzy angular velocity controller. The membership functions of the fuzzy controller employ a trapezoidal structure with a completely symmetric rule-base. Notably, Hierarchical Rule-Base Reduction (HRBR) is incorporated into the controller to select only the rules most influential on state errors. This is done by selecting inputs/outputs and generating a hierarchy of inputs using a Fuzzy Relations Control Strategy (FRCS). As with some traditional fuzzy controllers, the system provides coverage of the global operating environment; however, a rule for every possible combination of variables and states is no longer necessary. Consequently, HRBR fuzzy controllers effectively increase both the number of inputs and their associated fidelity without a dramatic increase in the rule-base. To contextualize the controller's performance, a background on vehicle dynamic modeling methodologies and an in-depth explanation of the related simulation model are provided. The proposed controller is then examined on test courses that assess the effects of steering disturbance, phase lag, and overshoot as expressed in Root Mean Square Error (RMSE), Max Error (ME), and Course Completion Time (CCT). Finally, simulation and experimental results for the controller's performance are compared with a state-of-the-art waypoint navigation vehicle controller, geometric pure pursuit. The fuzzy controller was found to outperform pure pursuit experimentally by 52.1 percent in RMSE, 26.8 percent in ME, and 1.07 percent in CCT, on average, validating the viability of the controller.
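The trapezoidal membership functions mentioned above are a standard fuzzy-logic building block: membership rises linearly, stays at 1 over a plateau, then falls linearly. A generic sketch of that textbook definition (the breakpoint values are arbitrary examples, not the controller's tuning):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], ramps a->b,
    plateau of 1 on [b, c], ramps down c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Example: membership of a heading-error value in a fuzzy set
# defined by breakpoints (0, 1, 2, 3)
mu = trapezoid(0.5, 0, 1, 2, 3)
```

In a fuzzy angular-velocity controller, each input (e.g. heading error) would be fuzzified through a bank of such functions before rule evaluation; HRBR's contribution, per the abstract, is pruning which rules need to exist at all.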
Earthquakes, fires, and floods often cause structural collapses of buildings. However, inspecting damaged buildings poses a high risk to emergency forces, or is even impossible. We present three recent selected missions of the Robotics Task Force of the German Rescue Robotics Center, in which both ground and aerial robots were used to explore destroyed buildings. We describe and reflect on the missions as well as the lessons learned from them. To make robots from research laboratories fit for real operations, realistic test environments were set up for outdoor and indoor use and tested in regular exercises by researchers and emergency forces. Based on this experience, the robots and their control software were significantly improved. Furthermore, top teams of researchers and first responders were formed, each with realistic assessments of the operational and practical suitability of robotic systems.
Simultaneous localization and mapping (SLAM) technology is ubiquitously employed in ground robots, unmanned aerial vehicles, and autonomous cars. This paper presents LTA-OM: an efficient, robust, and accurate LiDAR SLAM system. Employing FAST-LIO2 and Stable Triangle Descriptor as the LiDAR-IMU odometry and loop detection method, respectively, LTA-OM is implemented to be functionally complete, including loop detection and correction, false-positive loop closure rejection, long-term association mapping, and multi-session localization and mapping. One novelty of this paper is real-time long-term association (LTA) mapping, which exploits the direct scan-to-map registration of FAST-LIO2 and employs the corrected history map to constrain the mapping process globally. LTA leads to more globally consistent map construction and drift-less odometry at revisited places. We exhaustively benchmark LTA-OM and other state-of-the-art LiDAR systems on 18 data sequences. The results show that LTA-OM steadily outperforms other systems regarding trajectory accuracy, map consistency, and time consumption. The robustness of LTA-OM is validated in a challenging scene: a multi-level building with similar structures at different levels. In addition, a multi-session mode allows the user to store the current session's results, including the corrected map points, optimized odometry, and descriptor database, for future sessions. The benefits of this mode are additional accuracy improvement and consistent map stitching, which is helpful for life-long mapping. Furthermore, LTA-OM has valuable features for robot control and path planning, including high-frequency, real-time odometry, drift-less odometry at revisited places, and fast loop-closing convergence. Moreover, LTA-OM is versatile, being applicable to both multi-line spinning and solid-state LiDARs, and to both mobile robots and handheld platforms.
This paper presents a precise two-robot collaboration method for 3D self-localization relying on a single rotating camera and onboard accelerometers used to measure the tilt of the robots. This method allows for localization in GPS-denied environments and in the presence of magnetic interference, as well as in relatively (or totally) dark, unstructured, unmarked locations. At each step, one robot moves forward while the other remains stationary. The tilt angles of the robots obtained from the accelerometers and the rotational angle of the turret, combined with the video analysis, make it possible to continuously calculate the location of each robot. We describe the hardware setup used for the experiments and provide a detailed description of the algorithm that fuses the accelerometer and camera data and runs in real time on onboard micro-computers. Finally, we present 2D and 3D experimental results, which show that the system achieves an accuracy of 2% of the total travelled distance.
Glacial moulins (cylindrical meltwater drainage shafts) provide valuable insights into glacier dynamics, but they are inaccessible and hazardous environments for humans to study. Exploration with passive sensor probes has revealed the complex geometry of moulins; however, such exploration has been limited. To overcome these challenges, we propose a tethered robot capable of autonomously exploring and capturing data in glacial moulins. Our novel robot is equipped with a tether to support its motion. Combined with novel estimation and control algorithms, the tethered robot can safely and efficiently maneuver in confined, chimney-like structures such as moulins. Laboratory and field experiments confirm the feasibility of the proposed design, showing successful localization in environments with no access to positional measurements. Field trials on the Mer de Glace glacier demonstrate the robot's capabilities, descending into the largest moulin to depths of 25 m and using onboard sensors to reconstruct the moulin's shape. Two sampling mechanisms are presented and evaluated for extracting samples from the icy surface of the moulin. Our results show promising potential for future exploration of moulins, demonstrating the effectiveness of our tethered robot for safely gathering data from these hazardous environments.