About this sample
Words: 1078 | Pages: 2 | 6 min read
Published: Apr 11, 2019
An articulated robot can face difficulty even in the simple task of reaching and grasping an object. This may be due to:
a) errors in object identification
b) errors in distance calculation
c) inaccurate or missing calibration of the camera's extrinsic and intrinsic parameters
d) uncertainty in the robot's mechanical system.
Failing at any of the above four can cause complete failure of the task. This thesis presents an approach to track the robotic arm and reduce the error in localizing the object using computer vision techniques, with the objective of making the robot more accurate. The algorithm uses a vision system as the sensor for the manipulator arm, performing simultaneous localization and mapping in the robot's configuration space. Instead of a dedicated depth-sensing device such as a LiDAR, Kinect or ZED camera, this approach uses two webcams as a stereo vision system to calculate the depth/distance of the object, which makes the system cheaper. Using this vision system, a 3D reconstruction of the environment is built to identify the object to be grasped, and the manipulator's path is planned accordingly.
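The stereo depth calculation mentioned above can be sketched with the standard parallel-camera triangulation formula. The focal length, baseline and pixel coordinates below are illustrative assumptions, not values from the thesis:

```python
# Stereo depth sketch: with two parallel webcams of focal length f (pixels)
# separated by a baseline b (metres), a point seen at horizontal pixel
# positions x_left and x_right has depth Z = f * b / (x_left - x_right).

def stereo_depth(x_left: float, x_right: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth (metres) of a point from its disparity between the two images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Example: f = 700 px, baseline = 0.12 m, disparity = 14 px -> Z = 6.0 m
depth = stereo_depth(x_left=320.0, x_right=306.0, focal_px=700.0, baseline_m=0.12)
print(round(depth, 2))  # 6.0
```

In a real rig the images would first be rectified so that corresponding points lie on the same scanline, which is what makes this one-dimensional disparity formula valid.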
Robotic manipulators require sensors to sense the environment around them and move accordingly. It is therefore necessary that the robot uses its sensors (LiDAR, cameras, Kinect, gyroscope, accelerometer, etc.) and actuators (motors, etc.) efficiently to perform the task. To accomplish this, the robot should have prior knowledge of its destination point or of the next state to be reached, and of the path to be taken to reach that position. It should also know how the actuators must be driven given the environment, and how the world has changed according to the sensors. If this prediction step is even slightly inaccurate, there is no way to identify and correct the errors in the task, and the robot will be unable to complete it correctly.
Let us consider the simple task of identifying and locating an object in the area around the robot. The robot uses its cameras to identify and locate the object, plans a path according to the data obtained, and then commands its actuators to move accordingly. A small error in identification, path planning or actuator movement can lead to complete task failure. The following can be the causes of errors leading to task failure.
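One reason a small identification error can sink the whole task is that stereo depth error grows quadratically with distance. A minimal sketch, using the standard first-order approximation dZ ≈ Z²/(f·b)·d_disparity with illustrative camera parameters (not values from the thesis):

```python
# How a one-pixel disparity error grows with distance for a stereo rig.
# Assumed parameters: focal length 700 px, baseline 0.12 m (illustrative).

def depth_error(depth_m: float, pixel_err: float = 1.0,
                focal_px: float = 700.0, baseline_m: float = 0.12) -> float:
    """Approximate depth error dZ = Z^2 / (f * b) * d_disparity (metres)."""
    return depth_m ** 2 / (focal_px * baseline_m) * pixel_err

for z in (0.5, 1.0, 2.0):
    print(f"at {z} m, a 1 px error gives ~{depth_error(z) * 1000:.1f} mm of depth error")
```

So an object at 2 m is localized roughly sixteen times less accurately than one at 0.5 m for the same pixel error, which is why calibration and error compensation matter even for a "simple" reach-and-grasp task.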
Identifying these errors is the most complicated task, but to build a more robust system the errors must not only be identified but also corrected efficiently. Errors can be identified while the robot is online or offline. Error measurement is done by first collecting data from the sensors, checking whether the data is consistent, and identifying the robot's state with respect to the collected data.
Correcting or compensating these errors is the next task. Error compensation means feeding the measured error back to the controller, which adjusts the robot's state accordingly to reach the goal. Offline calibration first collects all the sensor data and then changes the robot's state according to it, while online calibration collects sensor data and updates the robot's state simultaneously. Offline calibration is simple to implement but has a drawback: once the data has been collected, it does not account for new errors that appear afterwards, whereas online calibration handles these as they arise. Online calibration therefore has higher runtime complexity than offline calibration, but it responds to errors faster.
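The feedback idea described above can be sketched as a simple proportional compensation loop: each cycle the controller measures the error between the sensed state and the goal and feeds a fraction of it back. The gain, states and step count are illustrative assumptions, not values from the thesis:

```python
# Minimal sketch of online error compensation: the error measured from the
# sensors is fed back to the controller, which corrects the state each cycle.

def compensate(state: float, goal: float, gain: float = 0.5, steps: int = 20) -> float:
    """Drive `state` toward `goal` by repeatedly feeding back the error."""
    for _ in range(steps):
        error = goal - state      # error measured from sensor data
        state += gain * error     # controller applies a proportional correction
    return state

final = compensate(state=0.0, goal=1.0)
print(round(final, 6))  # converges toward the goal of 1.0
```

Because the correction happens every cycle, new errors introduced mid-task are also absorbed, which is the advantage of online calibration noted above.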
This thesis concentrates on online calibration of the robotic arm, as it simultaneously captures data from the camera and updates the robot's state accordingly.
SLAM is an algorithm that simultaneously localizes the robot and builds a map of its environment. A SLAM system essentially alternates between predicting the robot's state from its motion and correcting that prediction with sensor observations.
Hence, using a SLAM system we can reduce the error in the robotic system, and this is also helpful for calibrating robotic arms. Tracking and calibrating an arm is challenging because of its high complexity: robotic arms are particularly inconsistent due to their structure and number of degrees of freedom, and the shape of the arm changes with every motion of the actuators. The vision sensor can be mounted on the arm, or outside it so as to view every joint of the arm completely. In this thesis I explain how two cameras can be used for 3D reconstruction of the world and for capturing depth data in place of expensive 3D sensing devices.
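The predict-and-correct cycle at the heart of SLAM can be illustrated with a toy one-dimensional Kalman-style filter. Everything here (noise values, motion commands, measurements) is an illustrative assumption, not the thesis's actual estimator:

```python
# Toy 1-D predict/update cycle in the spirit of SLAM/EKF localization:
# predict the state from the motion command, then correct it with a sensor
# measurement weighted by the Kalman gain.

def predict(x: float, p: float, u: float, q: float = 0.1):
    """Motion update: apply command u; process noise q grows uncertainty p."""
    return x + u, p + q

def update(x: float, p: float, z: float, r: float = 0.2):
    """Measurement update: blend in observation z with sensor noise r."""
    k = p / (p + r)                       # Kalman gain
    return x + k * (z - x), (1 - k) * p   # corrected state, shrunk uncertainty

x, p = 0.0, 1.0                           # initial estimate and uncertainty
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    x, p = predict(x, p, u)               # predict from motion
    x, p = update(x, p, z)                # correct from measurement
print(round(x, 2), round(p, 3))
```

Note how the uncertainty `p` grows at each prediction and shrinks at each measurement; this is the mechanism by which fusing camera observations with the arm's commanded motion keeps the localization error bounded.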