MoCA Project: Last edited 04.06.1998, Author: Rainer Lienhart

Automatic Movie Content Analysis    

detectfaces

Description

detects and localizes faces in frontal view

Options

-video, -v
    <video_directory>                     // e.g. e:/video/ForrestGump/
    <video_name>                          // e.g. ForrestGump.mpg
    <first frame# = first video frame>    // e.g. 4100
    <last frame#  = last video frame>     // e.g. 4200
    <step size = 3 frames>                // e.g. 5
    required
-fast, -f
    <int>
    Size (in pixels) of the smallest frontal-view face that can be detected; must be >= 20.
    For instance, "-fast 40" speeds up the search dramatically.
    default: 20
    optional
-color, -c
    Switch; use the color information of the image to accelerate the search:
    frontal faces are searched only in skin-color-like regions.
    optional
-rotate, -ro
    <step angle>  <# of steps in each direction>
    Also search for rotated faces. For instance, "-ro 10 1" means that the face detector
    is applied to the original image and to the images rotated by 10 degrees clockwise
    and counterclockwise (see the sketch after this list).
    optional
-report, -r
    Switch; write the detection results to the file
    "<video_directory>/measurements/detectFacesLocations/frame#.txt".
    optional
-out, -o
    Store the marked faces as JPEG images named "frame#.jpg"; Sun Solaris only.
    optional
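
The -rotate parameters can be read as follows: <# of steps in each direction> rotations of
<step angle> degrees are tried on each side of the original orientation. The sketch below is
a hypothetical Python helper (not part of the detectfaces program; the function name is
illustrative) that expands the two parameters into the list of angles at which the detector
would be applied:

   def rotation_angles(step_angle, steps_per_direction):
       """Expand the -rotate parameters into the list of angles (in degrees)
       at which the face detector is applied."""
       angles = [0]  # the unrotated image is always searched
       for k in range(1, steps_per_direction + 1):
           angles.append(k * step_angle)    # counterclockwise rotation
           angles.append(-k * step_angle)   # clockwise rotation
       return angles

   # "-ro 10 1": the detector runs on the original image and on the
   # images rotated by +10 and -10 degrees.
   print(rotation_angles(10, 1))   # [0, 10, -10]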
 

Example

detectfaces -v /opt/Movies/ViCAS/ForrestGump/  ForrestGump.mpg  4113 4500 3 -ro 10 1 -c -fast 40 -r 

This command line locates the frontal-view faces (at least 40 by 40 pixels large) in the movie ForrestGump from frame 4113 to 4500. Every 3rd frame is processed. The output looks like:

   frame 4113      Thu Jun  4 13:28:34 1998
   1       176     92      69      10
   frame 4116      Thu Jun  4 13:28:42 1998
   1       176     92      84      0
   frame 4119      Thu Jun  4 13:28:50 1998
   1       174     92      58      0
   frame 4122      Thu Jun  4 13:28:58 1998
   1       169     93      58      10
   frame 4125      Thu Jun  4 13:29:07 1998
   1       164     94      69      0
   ....
The output should be read in the following way: for each processed frame, a line "frame <no> <starting date/time of the search>" is written, followed by a list of localized faces, one per text line. The first integer is the running face number within the frame. It is followed by the position (x, y) of the face center and the size of the face. The last parameter gives the rotation angle at which the face was detected.
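
For further processing, output in this format can be parsed with a short script. The following
is a minimal sketch (a hypothetical Python helper, not part of the MoCA distribution); it
assumes the layout described above, one "frame ..." header line followed by one line per
localized face, and the file name in the usage comment is only an example:

   from typing import List, NamedTuple

   class Face(NamedTuple):
       frame: int   # frame number the face was found in
       index: int   # running face number within the frame
       x: int       # x coordinate of the face center
       y: int       # y coordinate of the face center
       size: int    # size of the face
       angle: int   # rotation angle at which the face was detected

   def parse_detections(path: str) -> List[Face]:
       """Read detectfaces output and return one record per localized face."""
       faces: List[Face] = []
       frame = None
       with open(path) as fh:
           for line in fh:
               fields = line.split()
               if fields and fields[0] == "frame":
                   frame = int(fields[1])          # "frame 4113   Thu Jun  4 ..."
               elif frame is not None and len(fields) >= 5:
                   no, x, y, size, angle = (int(f) for f in fields[:5])
                   faces.append(Face(frame, no, x, y, size, angle))
       return faces

   # e.g. faces = parse_detections("detections.txt")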
 
 
© 1998  Rainer Lienhart@informatik.uni-mannheim.de