A network of sensor-based framework for automated visual surveillance

abstract

  • This paper presents an architecture for sensor-based, distributed, automated scene surveillance. The goal of the work is to employ wireless visual sensors scattered across an area to detect and track objects of interest and their movements, through the application of agents. The architecture consists of several units known as Object Processing Units (OPUs) that are wirelessly connected in clusters. Cluster heads communicate with Scene Processing Units, which are responsible for analyzing all the information the OPUs send. Object detection and tracking are performed by cooperative agents called Region Agents and Object Agents. The area under surveillance is divided into several sub-areas, with one camera assigned to each. A Region Agent (RA) is responsible for monitoring a given sub-area. First, background subtraction is performed on the scene captured by the camera. The computed foreground mask is then passed to the RA, which creates Object Agents dedicated to tracking the detected objects. Object detection and tracking are performed automatically on the OPU. The tracking information and foreground mask are sent to a Scene Processing Unit, which analyzes this information, determines whether a threat pattern is present in the scene, and takes appropriate action.
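
  The per-camera pipeline described in the abstract (background subtraction, foreground mask, a Region Agent spawning one Object Agent per detected blob) can be illustrated with a short, hypothetical Python sketch. The paper does not publish code; the RegionAgent and ObjectAgent class names, the use of OpenCV's MOG2 subtractor, and the nearest-centroid data association below are assumptions made for illustration, not the authors' implementation.

    import cv2
    import numpy as np

    class ObjectAgent:
        """Tracks one detected object; stores its centroid trajectory."""
        def __init__(self, object_id, centroid):
            self.object_id = object_id
            self.trajectory = [centroid]

        def update(self, centroid):
            self.trajectory.append(centroid)

    class RegionAgent:
        """Monitors one sub-area: subtracts the background and keeps one
        ObjectAgent per tracked foreground blob (illustrative logic only)."""
        def __init__(self, min_area=200):
            # MOG2 is an assumption; the paper only says "background subtraction".
            self.subtractor = cv2.createBackgroundSubtractorMOG2()
            self.agents = {}
            self.next_id = 0
            self.min_area = min_area

        def process(self, frame):
            mask = self.subtractor.apply(frame)
            # Drop the shadow label (127) that MOG2 writes into the mask.
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
            n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
            for i in range(1, n):  # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] < self.min_area:
                    continue
                c = (float(centroids[i][0]), float(centroids[i][1]))
                agent = self._nearest_agent(c)
                if agent is None:
                    self.agents[self.next_id] = ObjectAgent(self.next_id, c)
                    self.next_id += 1
                else:
                    agent.update(c)
            # In the paper's architecture, this mask and the tracking data
            # would be forwarded to a Scene Processing Unit.
            return mask

        def _nearest_agent(self, c, max_dist=50.0):
            # Naive nearest-centroid association, sufficient for a sketch.
            for agent in self.agents.values():
                px, py = agent.trajectory[-1]
                if np.hypot(c[0] - px, c[1] - py) < max_dist:
                    return agent
            return None

    if __name__ == "__main__":
        region = RegionAgent()
        for t in range(30):
            frame = np.zeros((240, 320, 3), dtype=np.uint8)
            frame[100:140, 10 + 5 * t:50 + 5 * t] = 255  # a moving bright square
            region.process(frame)
        print(f"{len(region.agents)} object agent(s) created")

  The threat-pattern analysis performed on the Scene Processing Unit side is omitted here, since the abstract does not specify how it is carried out.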

publication date

  • 2007-01-01