Please use this identifier to cite or link to this item: http://lrcdrs.bennett.edu.in:80/handle/123456789/1832
Full metadata record
DC Field | Value | Language
dc.contributor.author | Badal, Tapas | -
dc.contributor.author | Srivastava, Anugrah | -
dc.contributor.author | Saxena, Pawan | -
dc.date.accessioned | 2023-07-14T13:02:19Z | -
dc.date.available | 2023-07-14T13:02:19Z | -
dc.date.issued | 2022 | -
dc.identifier.issn | 0928-8910 | -
dc.identifier.uri | https://doi.org/10.1007/s10515-022-00323-3 | -
dc.identifier.uri | http://lrcdrs.bennett.edu.in:80/handle/123456789/1832 | -
dc.description.abstract | Violence detection and face recognition of the individuals involved in violence have a noticeable influence on the development of automated video surveillance research. With increasing risks in society and insufficient staff to monitor them, there is a growing demand for drone-based and computerized video surveillance. Violence detection is fast and can be used to selectively filter surveillance videos and to identify, or take note of, the individual causing the anomaly. Individual identification from drone surveillance videos of a crowded area is difficult because of rapid movement, overlapping features, and cluttered backgrounds. The goal is to develop a better drone surveillance system that recognizes the individuals implicated in violence and raises a distress signal so that help can be offered quickly. This paper applies recently developed deep learning techniques and proposes transfer learning with different deep learning-based hybrid models combined with LSTM for violence detection. Identifying individuals implicated in violence from drone-captured images faces major issues with variations in human facial appearance, so the paper uses a CNN model combined with image processing techniques. For testing, a drone-captured video dataset is developed for an unconstrained environment. Ultimately, features extracted by a hybrid of inception modules and residual blocks, fed to an LSTM architecture, yielded an accuracy of 97.33%, demonstrating its superiority over the other models tested. For the individual identification module, the best accuracy of 99.20% on our dataset was obtained by a CNN model with residual blocks trained for face identification. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. | en_US
dc.publisher | Springer | en_US
dc.subject | Deep learning | en_US
dc.subject | Drone surveillance videos | en_US
dc.subject | LSTM | en_US
dc.subject | Transfer learning | en_US
dc.subject | Violence detection | en_US
dc.subject | Violent individual | en_US
dc.title | UAV surveillance for violence detection and individual identification | en_US
dc.type | Article | en_US
dc.indexed | sc | en_US
dc.indexed | WC | en_US
Appears in Collections: Journal Articles_SCSET
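Note: the abstract above describes a per-frame CNN feature extractor whose features are fed to an LSTM for violence classification. The paper's exact hybrid (inception modules plus residual blocks) is not reproduced here; the snippet below is only a minimal, hypothetical PyTorch sketch of that general CNN-plus-LSTM pattern, assuming a ResNet-18 backbone as a stand-in feature extractor and a binary violent/non-violent label.

```python
# Illustrative sketch only: a generic CNN + LSTM video classifier in the spirit
# of the abstract. The actual architecture in the paper (inception modules +
# residual blocks + LSTM) and its training setup are not specified here.
import torch
import torch.nn as nn
import torchvision.models as models

class CNNLSTMViolenceClassifier(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        # Per-frame feature extractor; for transfer learning one would load
        # pretrained weights, e.g. weights=models.ResNet18_Weights.DEFAULT.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled features
        self.backbone = backbone
        # Temporal model over the sequence of per-frame features.
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                # clips: (batch, time, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.view(b * t, c, h, w))   # (b*t, 512)
        feats = feats.view(b, t, -1)                         # (b, t, 512)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                            # clip-level logits

# Example: classify a batch of 4 clips, each with 16 RGB frames at 224x224.
model = CNNLSTMViolenceClassifier()
logits = model(torch.randn(4, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```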

Files in This Item:
File | Size | Format
1221.pdf (Restricted Access) | 2.46 MB | Adobe PDF

Contact admin for Full-Text

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.