A Comparison of Two Reinforcement Learning Algorithms for Robotic Pick and Place with Non-Visual Sensing

    Research output: Contribution to journal › Article › peer-review

    11 Citations (Scopus)

    Abstract

    In this study, we perform a comparative analysis of two approaches we developed for learning to carry out pick-and-place operations on various objects moving on a conveyor belt in a non-visual environment, using proximity sensors. The problem under consideration is formulated as a Markov Decision Process and solved using Reinforcement Learning algorithms. Learning robotic manipulation from simple reward signals is still considered an unresolved problem. Our reinforcement learning algorithms are based on model-free off-policy training using Q-Learning and on-policy training using SARSA. Training and testing of both algorithms, along with a detailed comparative analysis, are performed in a simulation-based testbed.
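    The abstract contrasts off-policy Q-Learning with on-policy SARSA. As a minimal sketch (not the authors' implementation; the tabular state/action encoding and hyperparameters here are illustrative assumptions), the two update rules differ only in how they bootstrap the next state's value: Q-Learning uses the greedy maximum, while SARSA uses the action the policy actually took.

    ```python
    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        """Off-policy TD update: bootstrap with the greedy (max) value in s_next."""
        best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
        """On-policy TD update: bootstrap with the action actually taken in s_next."""
        Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])

    # Illustrative tabular Q-table: states and actions are plain integers.
    Q = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.0}}
    q_learning_update(Q, s=0, a=0, r=1.0, s_next=1, alpha=0.5, gamma=1.0)
    sarsa_update(Q, s=0, a=1, r=0.0, s_next=1, a_next=1, alpha=0.5, gamma=1.0)
    ```

    With the same transition, Q-Learning's target uses max(Q[1].values()) = 2.0, whereas SARSA's target uses Q[1][1] = 0.0, which is the usual intuition for why SARSA learns more conservative values under an exploratory policy.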

    Original language: English
    Pages (from-to): 526-535
    Number of pages: 10
    Journal: International Journal of Mechanical Engineering and Robotics Research
    Volume: 10
    Issue number: 10
    DOIs
    Publication status: Published - 2021

    Keywords

    • Markov decision problem
    • Q-learning
    • SARSA
    • non-visual
    • reinforcement learning
    • robotic manipulation
