Automatic activity estimation through recognizing and handling objects in video sequences for image annotation and retrieval
Conference Paper
Abstract
Automatic estimation of human activities is a widely studied topic. The task becomes difficult, however, when activities must be estimated from a video stream, because human activities are dynamic and complex. Our contribution focuses on activity estimation based on object behavior through automatic analysis of video sequences; a further contribution is a tool for monitoring activities in a health-care environment. The activity estimation process was developed in four phases. The first phase detects interactions in the setting using a slit-scanning technique; the second recognizes objects with composite correlation filters; the third applies several criteria for activity estimation: when the behavior of the objects associated with an activity is validated, the estimation of that activity is confirmed. Each activity is linked to the objects handled, the date and time of the activity, and an activity description, all of which are recorded in a database. The last phase builds the activity representation, using indexes to retrieve the images related to each activity. The activities, which include hygiene, feeding, and taking of vital signs, are estimated with 92.72% accuracy.
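The first two phases of the pipeline can be illustrated with a minimal sketch. The function names, array sizes, and random test data below are hypothetical, and the composite filter shown is only a simple averaged-spectrum variant, assuming slit-scanning means stacking one pixel column per frame into a spatiotemporal image and that object recognition is done by locating a correlation peak in the frequency domain:

```python
import numpy as np

def slit_scan(frames, column):
    """Phase 1 sketch: stack one pixel column from each frame into a
    spatiotemporal image; motion crossing the slit appears as structure
    along the time axis."""
    return np.stack([frame[:, column] for frame in frames], axis=1)

def composite_filter(references):
    """Phase 2 sketch: a simple composite filter formed by averaging the
    conjugate spectra of several reference views of an object."""
    return np.mean([np.conj(np.fft.fft2(r)) for r in references], axis=0)

def correlate(image, filt):
    """Correlation plane computed in the frequency domain; a sharp peak
    indicates a likely match with the filtered object."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

# Synthetic demo data (hypothetical; the paper's real inputs are video frames).
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(10)]
scan = slit_scan(frames, column=32)   # spatiotemporal image, shape (64, 10)

ref = rng.random((32, 32))
filt = composite_filter([ref])
plane = correlate(ref, filt)
# Correlating a reference with its own filter gives an autocorrelation,
# whose peak sits at the zero-shift position (0, 0).
peak = np.unravel_index(np.argmax(plane), plane.shape)
```

In a full system the correlation peak height would be thresholded to decide whether an object is present before the behavior-validation criteria of phase three are applied.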