About this sample
Words: 831 | Pages: 2 | 5 min read
Published: Jul 15, 2020
When making their way through a crowd to reach a destination, people normally navigate the distance safely without giving much thought to what they are doing. They learn from the actions of other people and take note of obstacles to avoid. Robots, on the other hand, have a hard time with such navigational concepts.
Motion-planning algorithms will generate a tree of possible decisions that branches out until it locates good paths for navigation. A robot that has to walk through a room to reach a door, for instance, will have to produce a step-by-step search tree of possible movements and then decide the best path to the door, considering various obstacles. One disadvantage is that these algorithms rarely learn: Robots cannot leverage information about how they or other agents acted previously in similar environments.
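The branching-tree idea can be sketched with a toy grid world. The grid size, obstacle layout, and `plan` helper below are illustrative assumptions, not the researchers' implementation; a simple breadth-first expansion stands in for the planner's step-by-step decision tree.

```python
from collections import deque

def plan(start, goal, obstacles, width, height):
    """Grow a search tree of step-by-step moves (breadth-first) until the
    goal is reached, then walk parent links back to recover the path."""
    parent = {start: None}            # tree: each cell remembers how it was reached
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # backtrack through the tree to the root
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in obstacles and nxt not in parent):
                parent[nxt] = node    # branch the tree out to a new cell
                frontier.append(nxt)
    return None                       # no collision-free path exists

# A robot crossing a small room toward a "door" at (4, 0), with a wall in the way.
obstacles = {(2, 0), (2, 1), (2, 2)}  # wall with a gap at (2, 3)
path = plan((0, 0), (4, 0), obstacles, 5, 4)
```

The tree here covers the whole free space before committing to a path; the point of the MIT work is to avoid exactly that kind of exhaustive, memoryless expansion.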
Andrei Barbu, one of the researchers, affiliated with the Computer Science and Artificial Intelligence Laboratory (CSAIL), likened the robots’ situation to playing a game of chess. The decision trees branch out until the robots find the optimum way to navigate. But unlike chess players, the robots explore what the future looks like without learning much about their environment and other agents, according to Barbu. Navigation is just as complicated for a robot whether it is passing through the same crowd for the first time or for the thousandth time. Robots are always exploring, rarely observing, and never using what has happened in the past, Barbu claimed.
What the MIT researchers did was merge a motion-planning algorithm with a neural network, which then learns to recognize paths that could lead to the best results, and uses that information in guiding the robot’s movement in a certain environment. They have proven the effectiveness of their model in two scenarios: navigating through rooms laden with traps and narrow passages, and navigating areas while steering clear of collisions with other agents.
Yen-Ling Kuo, Barbu’s research colleague and a doctoral student at CSAIL, said the purpose of their research is to incorporate into the search space a new machine-learning model that knows how to make planning more efficient based on past experience. Existing motion planning algorithms explore an environment by rapidly expanding a tree of decisions that eventually covers an entire space. The robot then looks at the tree to find a way to reach its goal, for instance, a door. On the other hand, the model devised by the researchers offers a tradeoff between exploring the environment and making use of past experiences, Kuo claimed.
The learning process begins with a few examples. A robot using the model is trained to navigate similar environments in several ways. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model learns that when it is stuck in an environment and sees a doorway, going through the door is probably a good way out, Barbu pointed out. The model unifies the exploration behavior of earlier methods with this learned information. It builds on a motion planner called RRT*, created by MIT professors Sertac Karaman and Emilio Frazzoli and derived from the popular Rapidly-exploring Random Trees (RRT) algorithm. The planner grows a search tree while the neural network makes probabilistic forecasts about where the robot should go next. When the network makes a high-confidence prediction based on learned information, it guides the robot onto a new path. If the network is not confident, it lets the robot explore the environment instead, like a traditional planner.
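The confidence-gated switch between learned guidance and plain exploration can be sketched as follows. The `predict` stub, confidence threshold, workspace bounds, and step size are all invented for illustration; in the actual system the prediction comes from the trained neural network, not a hand-written rule.

```python
import random

GOAL = (9.0, 9.0)            # destination in a 10 x 10 workspace (assumed)
CONFIDENCE_THRESHOLD = 0.7   # above this, trust the learned guidance
STEP = 0.5                   # how far the tree extends per iteration

def predict(tree):
    """Stand-in for the learned network: returns a suggested sample point and
    a confidence score. Here it naively points at the goal once the tree has
    grown a little; the real system uses a trained neural network."""
    confidence = 0.9 if len(tree) > 20 else 0.2
    return GOAL, confidence

def sample_next(tree):
    """Confidence-gated sampling: follow the network when it is sure,
    otherwise fall back to uniform random exploration, as in plain RRT."""
    proposal, confidence = predict(tree)
    if confidence >= CONFIDENCE_THRESHOLD:
        return proposal                                    # exploit
    return (random.uniform(0, 10), random.uniform(0, 10))  # explore

def grow(iterations=50):
    """Grow an RRT-style tree from the start, steering the nearest node a
    short, clamped step toward each sampled target."""
    tree = [(0.0, 0.0)]
    for _ in range(iterations):
        tx, ty = sample_next(tree)
        near = min(tree, key=lambda n: (n[0] - tx) ** 2 + (n[1] - ty) ** 2)
        dx, dy = tx - near[0], ty - near[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist == 0:
            continue                   # a node already sits on the target
        s = min(STEP, dist) / dist     # clamp so we never overshoot the target
        tree.append((near[0] + s * dx, near[1] + s * dy))
    return tree
```

Early on, low confidence keeps the sampler uniform and the tree spreads out like a standard RRT; once the stub reports high confidence, new nodes march directly toward the goal instead.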
For instance, the researchers demonstrated the model in a simulation known as a “bug trap,” where a two-dimensional robot must escape from an inner chamber through a central narrow channel and reach its destination in a surrounding larger room. Blind alleys on either side of the channel can cause robots to lose their way. In such a scenario, the robot was trained on a few examples of how to escape different bug traps. When confronted with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and build a sense of its surroundings so it can quickly find its destination.