Please use this identifier to cite or link to this item: http://lrcdrs.bennett.edu.in:80/handle/123456789/246
Title: HARTIV: Human activity recognition using temporal information in videos
Authors: Verma, Madhushi
Kaur, Manjit
Keywords: Action recognition
human activity recognition
untrimmed video
deep learning
convolutional neural networks
Issue Date: 2022
Publisher: Tech Science Press
Citation: Deotale, D., Verma, M., Suresh, P., Kumar Jangir, S., Kaur, M., Ahmed Idris, S., & Alshazly, H. (2022). HARTIV: Human Activity Recognition Using Temporal Information in Videos. In Computers, Materials & Continua (Vol. 70, Issue 2, pp. 3919–3938). Computers, Materials and Continua (Tech Science Press).
Series/Report no.: 70
Abstract: Nowadays, one of the most challenging and important problems in computer vision is to detect human activities and recognize them, together with their temporal information, from video data. Such video datasets are generated using cameras available in various devices, which may be static or moving, and the resulting recordings are referred to as untrimmed videos. Smarter monitoring is increasingly necessary, in which commonly occurring, regular, and out-of-the-ordinary activities can be identified automatically using intelligent systems and computer vision technology. In a long video, a human activity may be present anywhere, and a single video may contain one or several activities. This paper presents a deep learning-based methodology to identify locally present human activities in video sequences captured by a single wide-view camera in a sports environment. The recognition process is split into four parts: first, the video is divided into sets of frames; then the human body parts in each sequence of frames are identified; next, the human activity is recognized using a convolutional neural network; and finally, the temporal extent of the observed postures for each activity is determined with the help of a deep learning algorithm. The proposed approach has been tested on two sports datasets, ActivityNet and THUMOS. Three sports activities, namely swimming, cricket bowling, and high jump, have been considered in this paper and classified together with their temporal information, i.e., the start and end time of every activity present in the video. A convolutional neural network and a long short-term memory network are used for feature extraction in temporal action recognition from video data of sports activities. The outcomes show that the proposed method for activity recognition in the sports domain outperforms existing methods.
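As a rough illustration of the CNN plus LSTM combination described in the abstract, the sketch below shows one common way to wire a per-frame convolutional feature extractor into an LSTM that labels every frame, so that the start and end time of an activity can be read off the predicted label sequence. This is a minimal sketch under assumed settings (layer sizes, a three-class output matching the three sports activities, and the per-frame labelling scheme are illustrative choices), not the authors' implementation.

    # Minimal sketch (assumptions noted above), PyTorch.
    import torch
    import torch.nn as nn

    class CNNLSTMActivityRecognizer(nn.Module):
        def __init__(self, num_classes=3, feature_dim=128, hidden_dim=64):
            super().__init__()
            # Small per-frame CNN feature extractor (assumed architecture).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, feature_dim), nn.ReLU(),
            )
            # LSTM aggregates the frame features over time.
            self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            # Per-time-step classifier: labelling every frame lets the start and
            # end of each activity be recovered from the label sequence.
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, clip):
            # clip: (batch, time, channels, height, width)
            b, t, c, h, w = clip.shape
            feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
            out, _ = self.lstm(feats)
            return self.classifier(out)  # (batch, time, num_classes)

    if __name__ == "__main__":
        model = CNNLSTMActivityRecognizer()
        frames = torch.randn(2, 16, 3, 112, 112)  # 2 clips of 16 RGB frames
        print(model(frames).shape)  # torch.Size([2, 16, 3])

A sequence of per-frame class predictions like this can be post-processed into (start time, end time, activity) segments by grouping consecutive frames that share the same predicted label.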
URI: http://lrcdrs.bennett.edu.in:80/handle/123456789/246
ISSN: 1546-2218
Appears in Collections:Journal Articles_SCSET

Files in This Item:
File: HARTIV_Human_Activity_Recognition_Using_Temporal_Information_in_Videos.pdf
Description: Restricted Access
Size: 1.17 MB
Format: Adobe PDF

Contact admin for Full-Text

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.