Please use this identifier to cite or link to this item: http://lrcdrs.bennett.edu.in:80/handle/123456789/1403
Title: QC SANE: Robust Control in DRL Using Quantile Critic With Spiking Actor and Normalized Ensemble
Authors: Surbhi Gupta, Gaurav Singal, Deepak Garg
Keywords: Actor critic, Artificial neural networks, deep reinforcement learning (DRL), ensemble, Neurons, reinforcement learning (RL), robust control, Robustness, Sociology, spiking neural network (SNN), Statistics, Task analysis, Uncertainty
Issue Date: 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Series/Report no.: 9
Abstract: Recently introduced deep reinforcement learning (DRL) techniques in discrete time have resulted in significant advances in online games, robotics, and so on. Inspired by these developments, we propose an approach referred to as Quantile Critic with Spiking Actor and Normalized Ensemble (QC_SANE) for continuous control problems, which uses a quantile loss to train the critic and a spiking neural network (NN) to train an ensemble of actors. The NN performs internal normalization using the scaled exponential linear unit (SELU) activation function, which ensures robustness. An empirical study on multijoint dynamics with contact (MuJoCo)-based environments shows improved training and test results compared with the state-of-the-art approach: population coded spiking actor network (PopSAN).
URI: https://doi.org/10.1109/TNNLS.2021.3129525
http://lrcdrs.bennett.edu.in:80/handle/123456789/1403
ISSN: 2162-237X
Appears in Collections:Journal Articles_SCSET

Files in This Item:
File: 69.pdf (Restricted Access), 2.68 MB, Adobe PDF

Contact admin for Full-Text

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.