Implementing Robotic Pick and Place with Non-visual Sensing Using Reinforcement Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

In this study, we focus on learning and carrying out pick and place operations on various objects moving on a conveyor belt in a non-visual environment, using proximity sensors. The problem under consideration is formulated as a Markov Decision Process and solved using Reinforcement Learning. Learning robotic manipulations from simple reward signals is still considered an unresolved problem. Our reinforcement learning algorithm is based on model-free off-policy training using Q-Learning. Training and testing are performed in a simulation-based testbed, demonstrating that our approach succeeds at pick and place operations in non-visual industrial setups.
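The abstract's core ingredients, an MDP solved with model-free off-policy Q-Learning, can be illustrated with a minimal tabular sketch. Everything below is a hypothetical toy example: the `step` environment, the 3-state "approach the object" chain, and all hyperparameter values are assumptions for illustration, not the paper's actual state/action spaces, sensor model, or reward design.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Model-free, off-policy temporal-difference update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy behaviour policy (off-policy w.r.t. the greedy target)
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy stand-in environment (hypothetical): a 3-state chain where action 1
# advances the gripper toward the object and action 0 holds position.
# Reaching the final state (a successful pick) yields reward 1 and ends
# the episode -- a stand-in for the "simple reward signal" in the abstract.
def step(s, a):
    s2 = min(s + 1, 2) if a == 1 else s
    done = s2 == 2
    return s2, (1.0 if done else 0.0), done

random.seed(0)
Q = q_learning(3, 2, step)
# Greedy policy over the two non-terminal states; it should learn to advance.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(2)]
print(policy)
```

The off-policy character is visible in the update: actions are chosen epsilon-greedily, but the bootstrap target always uses the greedy `max` over the next state's values.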

Original language: English
Title of host publication: 2022 6th International Conference on Robotics, Control and Automation, ICRCA 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 23-28
Number of pages: 6
ISBN (Electronic): 9781665481748
DOIs
Publication status: Published - 2022
Event: 6th International Conference on Robotics, Control and Automation, ICRCA 2022 - Xiamen, China
Duration: 26 Feb 2022 - 28 Feb 2022

Publication series

Name: 2022 6th International Conference on Robotics, Control and Automation, ICRCA 2022

Conference

Conference: 6th International Conference on Robotics, Control and Automation, ICRCA 2022
Country/Territory: China
City: Xiamen
Period: 26/02/22 - 28/02/22

Keywords

  • markov decision problem
  • non-visual
  • q-learning
  • reinforcement learning
  • robotic manipulations
