Multi Robot Coordination for Efficient Search and Rescue through Deep Image Processing and Communication

dc.contributor.authorHassan, M Mahmudul
dc.contributor.departmentfi=Tietotekniikan laitos|en=Department of Computing|
dc.contributor.facultyfi=Teknillinen tiedekunta|en=Faculty of Technology|
dc.contributor.studysubjectfi=Tietotekniikka|en=Information and Communication Technology|
dc.date.accessioned2025-08-08T21:05:13Z
dc.date.available2025-08-08T21:05:13Z
dc.date.issued2025-07-29
dc.description.abstractThis thesis presents the design, implementation, and evaluation of decentralized multi-robot coordination for autonomous search and rescue (SAR) operations, leveraging the Robot Operating System (ROS 2 Humble) and Gazebo Fortress together with deep image processing, parallel communication, and Nav2 navigation in a semi-structured environment. The core components are four TurtleBot3 Waffle robots, each equipped with a 2D LiDAR and an RGB-D camera, deployed in a maze-like environment that mimics real-world challenges encountered in search and rescue operations. The aim of this research is to enable autonomous exploration of an unknown environment, precise detection and localization of the targets (red cylindrical structures), and efficient task allocation across the robots. Two localization techniques, Simultaneous Localization and Mapping (SLAM) and Adaptive Monte Carlo Localization (AMCL), are implemented and rigorously evaluated. While SLAM generates a dynamic map of an unknown environment in real time, AMCL loads a static pre-defined map from the Nav2 map server. Each method offers distinct advantages: AMCL delivers faster initialization, lower CPU usage, and deterministic navigation in familiar spaces, while SLAM provides dynamic adaptability at a higher computational load and with initially uncertain localization. Feature detection uses a pre-trained YOLOv8 model integrated via the ultralytics library. Real-time RGB images from simulated Intel RealSense R200 cameras undergo deep-learning-based processing for target identification, while depth images, intrinsic calibration, and odometry data are fused in a custom DepthToWorldConverter ROS 2 node, which applies quaternion-to-Euler conversions and yaw-based rotation matrices to map camera-frame detections into Gazebo world coordinates.
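The core of the depth-to-world conversion described above can be sketched as follows. This is a minimal illustrative version, not the thesis's actual DepthToWorldConverter node: the function names, pinhole intrinsics, and the assumption of a planar (yaw-only) robot pose are simplifications introduced here for clarity.

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Extract yaw (rotation about the Z axis) from an orientation quaternion,
    as used when reducing 3D odometry orientation to a planar heading."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def pixel_depth_to_world(u, v, depth, fx, fy, cx, cy, robot_x, robot_y, yaw):
    """Back-project a pixel with a depth reading into the camera frame using
    pinhole intrinsics, then rotate by the robot's yaw and translate by its
    odometry position to obtain planar world (Gazebo) coordinates."""
    # Camera optical frame: z forward, x right.
    x_cam = (u - cx) * depth / fx
    z_cam = depth
    # Planar robot frame: forward along the optical axis, left = -x_cam.
    fwd, left = z_cam, -x_cam
    # Yaw-based 2D rotation matrix applied to (fwd, left), then translation.
    wx = robot_x + fwd * math.cos(yaw) - left * math.sin(yaw)
    wy = robot_y + fwd * math.sin(yaw) + left * math.cos(yaw)
    return wx, wy
```

For example, a detection at the image center (u = cx) with a 2 m depth reading, seen by a robot at the origin facing along +x (yaw = 0), maps to world coordinates (2, 0).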
Target detections are shared across all robots via the /global_cylinder_detections topic. The system demonstrated object detection accuracy of approximately 95%. Under the AMCL configuration, a decentralized coordination manager evaluates each robot's shortest path using the ComputePathToPose action and selects the closest robot, to which it issues a NavigateToPose action. Detection and allocation states are visualized in a Tkinter GUI and monitored through rqt_graph, rviz2, and ROS 2 logs. Using the ROS 2 Data Distribution Service (DDS), standard Wi-Fi shows low message latency (0.05–0.24 s), while Bluetooth-like environments show significant delays (up to 1.1 s) that at times impact coordination. Performance benchmarks are evaluated under both SLAM and AMCL configurations and focus on critical metrics such as CPU load, target detection, obstacle avoidance, communication latency, localization accuracy, map coverage, task completion time, and system robustness under varying network conditions. In summary, this thesis provides a rigorous, technically robust, and scalable solution for autonomous multi-robot systems in SAR contexts, integrating advanced deep-learning perception, real-time coordinate transformation, decentralized decision making, situational awareness, and resilient navigation that mimics real-world deployment in SAR and other high-risk environments. The research not only demonstrates the feasibility of decentralized autonomous agents for complex operations, leveraging state-of-the-art image processing methods, communication frameworks, and localization techniques, but also establishes a foundation for safer and more efficient SAR operations.
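The closest-robot selection step described above can be sketched as a simple comparison of planned path lengths. This is an illustrative reduction, not the thesis's coordination manager: in the described system the lengths would come from per-robot ComputePathToPose results, and the robot names here are hypothetical.

```python
def allocate_closest_robot(path_lengths):
    """Given a mapping {robot_name: planned path length in metres} (None when
    path planning failed for that robot), return the name of the robot with
    the shortest path, or None if no robot produced a valid plan."""
    valid = {name: dist for name, dist in path_lengths.items() if dist is not None}
    if not valid:
        return None
    # The winner would then receive the NavigateToPose goal.
    return min(valid, key=valid.get)
```

For instance, with planned path lengths {"tb3_0": 4.2, "tb3_1": 1.3, "tb3_2": None}, the allocator selects "tb3_1".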
dc.format.extent69
dc.identifier.olddbid199709
dc.identifier.oldhandle10024/182737
dc.identifier.urihttps://www.utupub.fi/handle/11111/11170
dc.identifier.urnURN:NBN:fi-fe2025080781418
dc.language.isoeng
dc.rightsfi=Julkaisu on tekijänoikeussäännösten alainen. Teosta voi lukea ja tulostaa henkilökohtaista käyttöä varten. Käyttö kaupallisiin tarkoituksiin on kielletty.|en=This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.|
dc.rights.accessrightsavoin
dc.source.identifierhttps://www.utupub.fi/handle/10024/182737
dc.subjectMRS, SAR, Situational Awareness, AMCL, SLAM, Object detection, Decentralized task allocation, Parallel Communication, MARL
dc.titleMulti Robot Coordination for Efficient Search and Rescue through Deep Image Processing and Communication
dc.type.ontasotfi=Diplomityö|en=Master's thesis|

Files

Name: Hassan_MMahmudul_Thesis.pdf
Size: 1.54 MB
Format: Adobe Portable Document Format