Analysis of temporal coherence in videos for action recognition

Adel Saleh*, Mohamed Abdel-Nasser, Farhan Akram, Miguel Angel Garcia, Domenec Puig

*Corresponding author for this work

Research output: Chapter/Conference proceeding › Conference proceeding › Academic › peer-review

Abstract

This paper proposes an approach to improve the performance of activity recognition methods by analyzing the coherence of the frames in the input videos and then modeling the evolution of the coherent frames, which constitute a sub-sequence, to learn a representation of the videos. The proposed method consists of three steps: coherence analysis, representation learning and classification. Using two state-of-the-art datasets (Hollywood2 and HMDB51), we demonstrate that learning the evolution of sub-sequences instead of individual frames improves the recognition results and makes action classification faster.
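
The abstract outlines a three-step pipeline: coherence analysis of consecutive frames, representation learning over the resulting sub-sequences, and classification. Below is a minimal Python sketch of such a pipeline, assuming per-frame descriptors, a cosine-similarity coherence criterion, a rank-pooling-like linear fit to model the evolution of sub-sequences, and a linear SVM classifier; these concrete choices are illustrative assumptions, not necessarily the paper's actual method.

    # Minimal sketch of the three-step pipeline described in the abstract
    # (coherence analysis -> representation learning -> classification).
    # All concrete choices below -- per-frame descriptors, a cosine-similarity
    # coherence threshold, a rank-pooling-like linear fit over time, and a
    # linear SVM -- are illustrative assumptions, not the paper's method.
    import numpy as np
    from sklearn.svm import LinearSVC

    def split_into_coherent_subsequences(frame_feats, threshold=0.9):
        """Group consecutive frames into sub-sequences while the cosine
        similarity between neighbouring frame descriptors stays above
        the (assumed) coherence threshold."""
        unit = frame_feats / (np.linalg.norm(frame_feats, axis=1, keepdims=True) + 1e-8)
        sims = np.sum(unit[1:] * unit[:-1], axis=1)      # similarity of frame t with frame t-1
        subsequences, start = [], 0
        for t, s in enumerate(sims, start=1):
            if s < threshold:                            # coherence breaks: close current sub-sequence
                subsequences.append(frame_feats[start:t])
                start = t
        subsequences.append(frame_feats[start:])
        return subsequences

    def encode_evolution(subsequences):
        """Represent the video by the slope of a least-squares fit of the
        sub-sequence descriptors against time (a rank-pooling-like choice
        assumed here to model the evolution of the sub-sequences)."""
        means = np.array([s.mean(axis=0) for s in subsequences])   # one descriptor per sub-sequence
        t = np.arange(len(means), dtype=float)
        t = (t - t.mean()) / (t.std() + 1e-8)                      # standardised time index
        return (t[:, None] * (means - means.mean(axis=0))).sum(axis=0) / (len(t) + 1e-8)

    # Toy usage with random "videos": 20 clips, 100 frames each, 64-dim frame descriptors.
    rng = np.random.default_rng(0)
    videos = [rng.normal(size=(100, 64)) for _ in range(20)]
    labels = rng.integers(0, 2, size=20)

    X = np.array([encode_evolution(split_into_coherent_subsequences(v)) for v in videos])
    clf = LinearSVC().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))

Grouping frames into coherent sub-sequences before modeling their evolution reduces the number of time steps the classifier has to consider, which is consistent with the abstract's claim of faster action classification.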

Original language: English
Title of host publication: Image Analysis and Recognition - 13th International Conference, ICIAR 2016, Proceedings
Editors: Aurelio Campilho, Fakhri Karray
Pages: 325-332
Number of pages: 8
DOIs
Publication status: Published - 2016
Externally published: Yes
Event: 13th International Conference on Image Analysis and Recognition, ICIAR 2016 - Povoa de Varzim, Portugal
Duration: 13 Jul 2016 - 16 Jul 2016

Publication series

Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9730
ISSN: 0302-9743

Conference

Conference: 13th International Conference on Image Analysis and Recognition, ICIAR 2016
Country/Territory: Portugal
City: Povoa de Varzim
Period: 13/07/16 - 16/07/16

Bibliographical note

Funding Information:
This work was partly supported by Universitat Rovira i Virgili, Spain, and Hodeidah University, Yemen.

Publisher Copyright:
© Springer International Publishing Switzerland 2016.
