Please use this identifier to cite or link to this item:
http://cmuir.cmu.ac.th/jspui/handle/6653943832/73605
Title: | Development of an autonomous mobile manipulator system to perform fetch and carry tasks |
Other Titles: | การพัฒนาระบบหุ่นยนต์เคลื่อนที่แบบอัตโนมัติเพื่อการหยิบจับและเคลื่อนย้ายวัตถุ (Development of an autonomous mobile robot system for grasping and transporting objects) |
Authors: | Isira Uthpala Eashwara Naotunna |
Authors: | Theeraphong Wongratanaphisan |
Issue Date: | May-2021 |
Publisher: | Chiang Mai : Graduate School, Chiang Mai University |
Abstract: | Traditional robots are widely used in industry because they are reliable, fast, and capable of precise motion in factory environments. In particular, mobile robots and manipulators play a significant role in enhancing productivity, reducing human involvement, and improving safety in industrial workspaces. To carry out tasks in more human-centered environments, a robot must be able to perceive its surroundings, plan its motions to avoid obstacles, manipulate objects, and use specific tools to achieve its goals. Perception, however, remains difficult owing to limitations in the technology and hardware required to process the collected data. To explore the limits of the open-source Robot Operating System (ROS) for mobile manipulation in object handling, this thesis develops a ROS-based mobile manipulator robot that exhibits autonomous and intelligent behavior while performing a fetch-and-carry task in an indoor environment. The system draws on robot kinematics, computer vision, visual servoing, Simultaneous Localization and Mapping (SLAM), navigation, and object manipulation. The robot's autonomous navigation is built on the ROS navigation stack with real-time appearance-based mapping and localization, while object detection and manipulation are implemented with an ArUco marker-based visual servoing technique. To minimize odometry errors, this work builds the robot's transform (TF) tree from Intel RealSense T265 tracking-camera data rather than following the common ROS approach.
Instead of relying on multiple sensor sources or laser-scanner data, as many existing systems do for autonomous navigation, this research configures the ROS navigation stack using only the Intel RealSense D435i camera, aided by the transform tree derived from the Intel RealSense T265 tracking camera. The developed system was tested and validated in several experiments. |
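The abstract's TF-tree idea follows the standard ROS frame convention (map → odom → base_link, per REP 105): the tracking camera supplies the odom → base_link pose, and a SLAM-derived map → base_link pose is reconciled into a map → odom correction. The thesis's actual implementation is not shown here; as a minimal, self-contained sketch of that pose algebra (pure 2-D poses, hypothetical numbers, no ROS dependencies):

```python
import math

def compose(a, b):
    """Compose two 2-D poses (x, y, theta): apply b in a's frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Invert a 2-D pose: rotate the translation back and negate the angle."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) - y * math.cos(t),
            -t)

# Hypothetical readings: the tracking camera gives odom -> base_link,
# appearance-based SLAM gives map -> base_link.
odom_to_base = (1.0, 0.5, math.pi / 2)
map_to_base = (3.0, 2.0, math.pi / 2)

# The correction broadcast on the TF tree: map -> odom,
# chosen so that (map -> odom) o (odom -> base_link) == map -> base_link.
map_to_odom = compose(map_to_base, invert(odom_to_base))
```

In a real ROS system this composition is handled by `tf2` with full 3-D quaternion transforms; the sketch only illustrates why a drift-corrected map → odom frame lets the navigation stack consume tracking-camera odometry directly.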
URI: | http://cmuir.cmu.ac.th/jspui/handle/6653943832/73605 |
Appears in Collections: | ENG: Theses |
Files in This Item:
File | Description | Size | Format
---|---|---|---
610631134 ISIRA UTHPALA EASHWARA NAOTUNNA.pdf | | 9.62 MB | Adobe PDF
Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.