
A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data

Overview
Journal Sensors (Basel)
Publisher MDPI
Specialty Biotechnology
Date 2017 May 12
PMID 28492486
Citations 5
Abstract

Human activity recognition is an important area of computer vision, with a wide range of applications including ambient assisted living. This paper presents an activity recognition system based on skeleton data extracted from a depth camera. The system uses machine learning techniques to classify actions that are described by a small set of basic postures. The training phase creates several models, one for each number of clustered postures, by means of a multiclass Support Vector Machine (SVM) trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold: first, to perform activity recognition using features based on a small number of informative postures, extracted independently from each activity instance; second, to assess the minimum number of frames needed for adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall Detection dataset. The number of clusters needed to model each instance ranges from two to four. The proposed approach achieves excellent performance using only about 4 s of input data (~100 frames) and outperforms the state of the art when approximately 500 frames are used on the CAD-60 dataset. These results are promising for tests in real-world contexts.
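A minimal sketch of the posture-clustering pipeline described above is given below. It is not the authors' code: it assumes NumPy and scikit-learn, approximates X-means with a small BIC-style search over KMeans (scikit-learn provides no X-means), and uses SVC, whose underlying libsvm solver is SMO-based. The helper names key_postures and activity_features, the zero-padding of the feature vector, and the toy data are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): cluster the skeleton
# frames of one activity instance into a few representative postures, then
# classify activities with a multiclass SVM trained on those posture features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def key_postures(frames, k_min=2, k_max=4):
    """Pick k in [k_min, k_max] with the lowest BIC-like score and return the
    k cluster centroids (the 'basic postures'), ordered by first appearance."""
    n, d = frames.shape
    best_km, best_score = None, np.inf
    for k in range(k_min, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
        # crude BIC surrogate: distortion plus a model-complexity penalty
        score = km.inertia_ + 0.5 * k * d * np.log(n)
        if score < best_score:
            best_km, best_score = km, score
    first_seen = [np.argmax(best_km.labels_ == c) for c in range(best_km.n_clusters)]
    return best_km.cluster_centers_[np.argsort(first_seen)]

def activity_features(frames, k=4):
    """Flatten the key postures into a fixed-length vector, zero-padding
    when fewer than k postures are selected for this instance."""
    centroids = key_postures(frames, k_max=k)
    feat = np.zeros(k * frames.shape[1])
    feat[:centroids.size] = centroids.ravel()
    return feat

# Toy usage: three activity instances of ~100 frames, 60-D skeleton vectors
# (e.g. 20 joints x 3 coordinates), one instance per class.
rng = np.random.default_rng(0)
X = np.array([activity_features(rng.normal(loc=i, size=(100, 60))) for i in range(3)])
y = np.array([0, 1, 2])
clf = SVC(kernel="linear").fit(X, y)   # libsvm's SMO-based solver
print(clf.predict(X))
```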

Citing Articles

Human Activity Recognition Data Analysis: History, Evolutions, and New Trends.

Ariza-Colpas P, Vicario E, Oviedo-Carrascal A, Butt Aziz S, Pineres-Melo M, Quintero-Linero A. Sensors (Basel). 2022; 22(9).

PMID: 35591091; PMC: 9103712; DOI: 10.3390/s22093401.


The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition.

Fiorini L, Cornacchia Loizzo F, Sorrentino A, Rovini E, Di Nuovo A, Cavallo F. Sci Data. 2022; 9(1):218.

PMID: 35585077; PMC: 9117293; DOI: 10.1038/s41597-022-01324-3.


Human Activity Recognition via Hybrid Deep Learning Based Model.

Khan I, Afzal S, Lee J. Sensors (Basel). 2022; 22(1).

PMID: 35009865; PMC: 8749555; DOI: 10.3390/s22010323.


The collaborative mind: intention reading and trust in human-robot interaction.

Vinanzi S, Cangelosi A, Goerick C. iScience. 2021; 24(2):102130.

PMID: 33659886; PMC: 7890414; DOI: 10.1016/j.isci.2021.102130.


Prediction of Human Activities Based on a New Structure of Skeleton Features and Deep Learning Model.

Jaouedi N, Perales F, Buades J, Boujnah N, Bouhlel M. Sensors (Basel). 2020; 20(17).

PMID: 32882884; PMC: 7506930; DOI: 10.3390/s20174944.

References
1. Padilla-Lopez J, Chaaraoui A, Gu F, Florez-Revuelta F. Visual privacy by context: proposal and evaluation of a level-based visualisation scheme. Sensors (Basel). 2015; 15(6):12959-82. PMC: 4507699. DOI: 10.3390/s150612959.

2. Ni B, Pei Y, Moulin P, Yan S. Multilevel depth and image fusion for human activity detection. IEEE Trans Cybern. 2013; 43(5):1383-94. DOI: 10.1109/TCYB.2013.2276433.

3. Parisi G, Weber C, Wermter S. Self-organizing neural integration of pose-motion features for human action recognition. Front Neurorobot. 2015; 9:3. PMC: 4460528. DOI: 10.3389/fnbot.2015.00003.

4. Han J, Shao L, Xu D, Shotton J. Enhanced computer vision with Microsoft Kinect sensor: a review. IEEE Trans Cybern. 2013; 43(5):1318-34. DOI: 10.1109/TCYB.2013.2265378.

5. Zhu G, Zhang L, Shen P, Song J. An Online Continuous Human Action Recognition Algorithm Based on the Kinect Sensor. Sensors (Basel). 2016; 16(2):161. PMC: 4801539. DOI: 10.3390/s16020161.