Robot Path Planning in a Dynamic Environment Using Deep Q-Learning
- Authors: Rifaqat Ali¹, Preeti Chandrakar²
- Affiliations: ¹ Department of Mathematics and Scientific Computing, National Institute of Technology Hamirpur, H.P., India; ² Department of CSE, National Institute of Technology Raipur, India
- Source: Robotics and Automation in Industry 4.0 , pp 9-33
- Publication Date: October 2024
- Language: English
Robot path planning is a core requirement of today's autonomous industry, as robots have become a crucial part of it. Planning a path in a dynamic environment that changes over time is a difficult challenge for a mobile robot: it must continuously avoid all obstacles in its path while planning a suitable trajectory from a given source point to a target point. In this study, we use Deep Q-Learning (Q-Learning with a neural network as the function approximator) to avoid obstacles that the user creates dynamically in the environment. The robot's main aim is to plan a path without colliding with any obstacle. The environment is simulated as a grid that initially contains the robot's starting and target locations, and the robot must plan an obstacle-free path between the given points. The user may introduce obstacles at any time during the simulation, making the environment dynamic. The accuracy of the planning is judged by the path the robot produces. Several neural-network architectures are compared in the study that follows. Simulation results are analyzed to evaluate the optimality of the path, and the robot is shown to plan a collision-free path in the dynamic environment.
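To make the setup concrete, the following is a minimal, self-contained sketch of Q-Learning on a grid with blocked cells. It uses a tabular Q-function rather than the neural network described in the chapter (replacing the table with a network approximator yields Deep Q-Learning); the grid size, reward values, and learning parameters are illustrative assumptions, not the chapter's settings. Obstacles are passed in as a set of cells, so new cells can be added to that set between training calls to mimic the user injecting obstacles into the environment.

```python
import random

# Tabular Q-learning sketch for grid path planning (hypothetical
# parameters; the chapter uses a neural-network Q-approximator).
# `obstacles` is a set of blocked (row, col) cells; adding cells to
# it between calls to train() mimics a dynamically changing world.

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def train(size, start, goal, obstacles, q=None, episodes=2000,
          alpha=0.5, gamma=0.9, eps=0.2):
    """Learn (or keep refining) Q-values for reaching `goal`."""
    if q is None:
        q = {}  # maps (state, action_index) -> estimated value
    for _ in range(episodes):
        s = start
        for _ in range(4 * size * size):  # step cap per episode
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))
            nr, nc = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
            if not (0 <= nr < size and 0 <= nc < size) or (nr, nc) in obstacles:
                ns, r, done = s, -1.0, False        # collision: stay, penalize
            elif (nr, nc) == goal:
                ns, r, done = (nr, nc), 10.0, True  # reached the target
            else:
                ns, r, done = (nr, nc), -0.1, False  # small step cost
            best_next = max(q.get((ns, i), 0.0) for i in range(4))
            old = q.get((s, a), 0.0)
            target = r + (0.0 if done else gamma * best_next)
            q[(s, a)] = old + alpha * (target - old)  # Bellman update
            s = ns
            if done:
                break
    return q

def greedy_path(q, size, start, goal, obstacles):
    """Roll out the greedy policy; return the path, or None if it fails."""
    path, s = [start], start
    for _ in range(size * size):
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        nr, nc = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
        if not (0 <= nr < size and 0 <= nc < size) or (nr, nc) in obstacles:
            return None  # policy not yet learned for this layout
        s = (nr, nc)
        path.append(s)
        if s == goal:
            return path
    return None
```

After an obstacle appears, passing the existing `q` table back into `train()` with the enlarged obstacle set lets the agent adapt its policy instead of learning from scratch, which is the same replanning loop the chapter's dynamic environment requires.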