Please use this identifier to cite or link to this item: http://lrcdrs.bennett.edu.in:80/handle/123456789/246
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Verma, Madhushi
dc.contributor.author: Kaur, Manjit
dc.date.accessioned: 2023-03-22T04:04:57Z
dc.date.available: 2023-03-22T04:04:57Z
dc.date.issued: 2022
dc.identifier.citation: Deotale, D., Verma, M., Suresh, P., Kumar Jangir, S., Kaur, M., Ahmed Idris, S., & Alshazly, H. (2022). HARTIV: Human Activity Recognition Using Temporal Information in Videos. Computers, Materials & Continua, 70(2), 3919–3938. Tech Science Press. [en_US]
dc.identifier.issn: 1546-2218
dc.identifier.uri: http://lrcdrs.bennett.edu.in:80/handle/123456789/246
dc.description.abstract: Nowadays, the most challenging and important problem in computer vision is to detect human activities and recognize them, together with temporal information, from video data. The video datasets are generated using cameras available in various devices that can be in a static or dynamic position, and are referred to as untrimmed videos. Smarter monitoring has become a necessity, in which commonly occurring, regular, and out-of-the-ordinary activities can be automatically identified using intelligent systems and computer vision technology. In a long video, human activity may be present anywhere in the video, and such videos may contain a single activity or multiple activities. This paper presents a deep learning-based methodology to identify locally present human activities in video sequences captured by a single wide-view camera in a sports environment. The recognition process is split into four parts: first, the video is divided into different sets of frames; then, the human body parts in a sequence of frames are identified; next, the human activity is identified using a convolutional neural network; and finally, the time information of the observed postures for each activity is determined with the help of a deep learning algorithm. The proposed approach has been tested on two different sports datasets, ActivityNet and THUMOS. Three sports activities, namely swimming, cricket bowling, and high jump, have been considered in this paper and classified with temporal information, i.e., the start and end time of every activity present in the video. A convolutional neural network and long short-term memory are used for feature extraction and temporal action recognition from video data of sports activities. The outcomes show that the proposed method for activity recognition in the sports domain outperforms existing methods. [en_US] (An illustrative CNN-LSTM sketch follows this metadata record.)
dc.publisher: Tech Science Press [en_US]
dc.relation.ispartofseries: 70
dc.subject: Action recognition [en_US]
dc.subject: human activity recognition [en_US]
dc.subject: untrimmed video [en_US]
dc.subject: deep learning [en_US]
dc.subject: convolutional neural networks [en_US]
dc.title: HARTIV: Human activity recognition using temporal information in videos [en_US]
dc.type: Article [en_US]
dc.indexed: sc [en_US]
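
The abstract above outlines a two-stage architecture: a convolutional neural network extracts per-frame features and a long short-term memory network models their temporal ordering, from which the start and end time of each activity are derived. The sketch below (PyTorch) illustrates only that general CNN-LSTM pattern under stated assumptions; the class name, layer sizes, clip length, and the extra background class are illustrative choices, not taken from the paper.

import torch
import torch.nn as nn

class CNNLSTMSketch(nn.Module):
    """Hypothetical CNN + LSTM recognizer; not the authors' implementation."""
    def __init__(self, num_classes: int = 3, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        # Small per-frame CNN backbone (a stand-in for whatever backbone HARTIV uses).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # LSTM aggregates the per-frame features over time.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Per-frame scores over the activity classes plus an assumed background class.
        self.head = nn.Linear(hidden, num_classes + 1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)  # (batch, time, num_classes + 1) per-frame scores

# Example: two 16-frame clips at 112x112 resolution.
scores = CNNLSTMSketch()(torch.randn(2, 16, 3, 112, 112))
print(scores.shape)  # torch.Size([2, 16, 4])

Runs of consecutive frames assigned the same non-background class would then give start and end frame indices, which convert to start and end times via the video frame rate.
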
Appears in Collections: Journal Articles_SCSET

Files in This Item:
File: HARTIV_Human_Activity_Recognition_Using_Temporal_Information_in_Videos.pdf (Restricted Access)
Size: 1.17 MB
Format: Adobe PDF

Contact admin for Full-Text

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.