
06. 07. 2006  16:13 | 03_Sketches_&_Projects_4 , 12_Curated_posts

VTSC - Tech. Review

Video tracking systems are usually set up for object motion tracking or change detection. These systems are expected to run in real time, i.e. to analyze a live video stream and deliver their result immediately, without delay.
The main purposes of such systems are usually linked to video surveillance (persons, vehicles) or even object guidance (missiles).

PFTrack
Optibase
Logiware

A large set of academic (http://citeseer.ist.psu.edu/676131.html) and commercial references exists, exploiting a well-known set of distinct methods. As a rule, the better the algorithm, the heavier its CPU footprint.
Commercial products usually offer very good solutions by relying on dedicated hardware, which makes it possible to run high-performance algorithms in real time.

In the framework of this project, a set of pre-defined constraints must be taken into account:

-> The tracking system must interact with an existing robot control system developed at EPFL
-> It must be possible to use low-cost hardware for the cameras and the computers (video stream analysis)
-> It must be possible to combine several tracked-area activations, coming from several distinct cameras, in order to validate (or not) a single decision (see the sketch after this list)
-> The number of cameras must be maximized (in order to obtain a maximum of tracked configurations) while the number of computers needed to perform the video analysis must be minimized
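
To illustrate the third constraint, here is a minimal sketch of how activations coming from distinct cameras could be combined into one validated decision. It is written in Python for readability; the data structure, the camera/zone names and the "all required zones active" rule are assumptions made for the example, not the final system.

```python
# Minimal sketch: combining tracked-area activations from several cameras
# into one validated decision. Names and the validation rule are illustrative.
from dataclasses import dataclass

@dataclass
class ZoneActivation:
    camera_id: str   # which camera reported the activation
    zone_id: str     # which tracked area inside that camera's view
    active: bool     # is something currently detected in the zone?

def validate_decision(activations, required_zones):
    """Return True only if every required (camera, zone) pair is active."""
    active_pairs = {(a.camera_id, a.zone_id) for a in activations if a.active}
    return all(pair in active_pairs for pair in required_zones)

# Example: "3 persons are sitting around the table" approximated by three
# seat zones, seen by two different cameras, all being active at once.
reports = [
    ZoneActivation("cam-1", "seat-A", True),
    ZoneActivation("cam-1", "seat-B", True),
    ZoneActivation("cam-2", "seat-C", True),
]
needed = [("cam-1", "seat-A"), ("cam-1", "seat-B"), ("cam-2", "seat-C")]
print(validate_decision(reports, needed))  # True
```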

This set of constraints excludes the use of commercial solutions, which can be expensive and may be difficult to adapt to the scope of the experiment described here.

It also disqualifies open-source or freely available video analysis systems because of their lack of functionality: none of the tested projects was able to deal with several cameras connected to the same host computer, for example.
Some of them require a particular type of camera, compatible with specific drivers only (WDM for JMyron).

By developing a highly networked system based on commonly used technology (Microsoft DirectShow), we will be able to use any Windows-compatible webcam without any particular limitation. This also implies being able to access several camera video streams over USB from the same host computer.
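
As a rough sketch of this multi-camera access, the snippet below grabs frames from two USB webcams on the same Windows host. The post describes a DirectShow-based implementation; OpenCV's DirectShow backend (cv2.CAP_DSHOW) and the device indices 0 and 1 are only assumptions used here to illustrate the idea.

```python
# Sketch: reading several USB webcams from one host through the DirectShow
# backend of OpenCV (an assumption for this example, not the actual code base).
import cv2

CAMERA_INDICES = [0, 1]  # assumed device indices, adjust to the actual setup

captures = [cv2.VideoCapture(i, cv2.CAP_DSHOW) for i in CAMERA_INDICES]

try:
    for _ in range(100):  # grab a short burst of frames from each camera
        for i, cap in zip(CAMERA_INDICES, captures):
            ok, frame = cap.read()
            if ok:
                cv2.imshow(f"camera {i}", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    for cap in captures:
        cap.release()
    cv2.destroyAllWindows()
```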
The network layer will ensure that all video analysis data can be centralized in a dedicated application in charge of validating a given decision (e.g. "are 3 persons sitting around the table, true or false?") and of making this information available to the robot's controller application (EPFL), again over the network.
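
A possible sketch of this network layer is shown below. The JSON-over-UDP message format and the address (127.0.0.1:9000) are assumptions made for the example, not part of the actual system.

```python
# Sketch: each video analysis application pushes its zone activations to a
# central decision application over UDP as JSON. Format and address are
# hypothetical.
import json
import socket

DECISION_APP = ("127.0.0.1", 9000)  # hypothetical address of the central app

def send_activation(camera_id, zone_id, active):
    """Report one tracked-area state to the decision application."""
    message = json.dumps({"camera": camera_id, "zone": zone_id, "active": active})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), DECISION_APP)

def receive_activations():
    """Decision-application side: aggregate the reports (see the validation
    sketch above) before forwarding the result to the robot controller."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(DECISION_APP)
        while True:
            data, _ = sock.recvfrom(4096)
            yield json.loads(data.decode("utf-8"))
```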

The video analysis itself can be freely based on methods described in the numerous research papers found in the literature, making it possible to choose one method or another according to the CPU footprint we can afford for the application.
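
For instance, one of the cheapest methods found in the literature is plain frame differencing. The sketch below implements it with OpenCV; the library choice and the fixed thresholds are assumptions for the example only.

```python
# Sketch: frame-differencing change detection, a low-CPU-footprint method.
import cv2

THRESHOLD = 25            # per-pixel intensity difference counted as change
MIN_CHANGED_RATIO = 0.01  # fraction of changed pixels that triggers an activation

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no camera found")
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, previous)               # pixel-wise difference
    _, mask = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) / mask.size > MIN_CHANGED_RATIO:
        print("change detected")                     # an activation would be sent here
    previous = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```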

New video tracking methods may even be added later, making it possible to have a set of networked video tracking applications, each running a different video analysis algorithm.
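
One way to keep the algorithms interchangeable would be a small common interface that every networked tracking application implements. The sketch below is only one possible shape for it, with hypothetical class names.

```python
# Sketch: a common interface so each networked application can run a
# different analysis algorithm. Class and method names are illustrative.
from abc import ABC, abstractmethod
import numpy as np

class TrackingMethod(ABC):
    """Common interface: one frame in, one activation decision out."""

    @abstractmethod
    def process(self, frame) -> bool:
        """Analyze one grayscale frame, return True if the tracked area is active."""

class FrameDifferencing(TrackingMethod):
    """Cheap change detection, as in the sketch above."""

    def __init__(self, threshold=25, min_ratio=0.01):
        self.threshold = threshold
        self.min_ratio = min_ratio
        self.previous = None

    def process(self, frame) -> bool:
        active = False
        if self.previous is not None:
            diff = np.abs(frame.astype(int) - self.previous.astype(int))
            active = (diff > self.threshold).mean() > self.min_ratio
        self.previous = frame
        return active

# Each networked application would be given one TrackingMethod instance;
# swapping the algorithm does not change the capture or network code.
```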

Posted by fabric | ch at 6. 07. 2006 16:13