
Summary of Selected INDECT Tests

It should be pointed out that when the project's research on the detection of threats requires experimental studies, the experiments are conducted on university campuses. The tests take place exclusively within the universities and directly adjacent areas, after obtaining all required approvals and permits from the people whose image and voice are recorded and stored. Prototype tools are tested in separated areas (e.g. the internal parking lot of the university) with the participation of people who have been informed about the tests and who, in accordance with the procedures, have given their informed consent. To satisfy personal-data constraints, a signed permission was received from all subjects (the consent template can be found in the public INDECT Deliverable D0.5). No personal data of participants were collected, and the participants' images were anonymized by using low-resolution cameras, insufficient to register a recognizable face image.

The project's research on the detection of threats by intelligent cameras, especially threats related to terrorism and serious criminal activity, can obviously be used by companies producing equipment for monitoring the safety of people in stadiums. INDECT, as a research project, will not test such equipment in stadiums, and its tools are not tested in locations accessible to the general public. There are no plans to test tools developed in the INDECT project at mass events such as next year's European Football Championship in Poland and Ukraine or the Olympic Games in the United Kingdom.

Car Plate Recognition Tests

Car plate recognition tests were performed at the AGH University of Science and Technology. The purpose of the tests was to compare human and machine ability to recognize car registration numbers in video material recorded with a CCTV camera. The installation of the CCTV camera was approved by the AGH authorities (the consent can be found in the public INDECT Deliverable D0.5). The video sequences used in the tests were compressed with the H.264 codec, and the material was recorded at the AGH car parking lot.

This research addresses the emerging area of optimizing video quality for both human observers and machine algorithms in video monitoring systems. In monitoring systems, the term Quality of Experience denotes the ability to recognize specific actions and detect specific objects. New video quality recommendations will have to be developed in order to assure an acceptable level of recognition/detection accuracy. This involves adjusting video codec parameters and controlling them continuously with respect to the current video characteristics and the intended recognition/detection tasks.

The tests were performed using 30 source video sequences, each showing a different car entering or leaving a parking lot. The cars used in the experiment are owned by INDECT and AGH employees, and a signed permission was received from every car owner. The permission allows the sequences to be used for research purposes and shared with the community. An example of the permission sheet is presented below:


The source sequences were 20 seconds long, with GoP = 30, only I and P frames, 25 FPS, and an average bitrate of 10 Mbit/s. All video sequences were encoded with the H.264/AVC video codec (x264 implementation), using the resolutions 1280×720, 704×576, 640×360 and 352×288 and the quantization parameters (QP) 33, 35, 37, 39, 41, 43, 45, 47, 49 and 51. As a result of this parameter selection, each source sequence SRC 1–30 was encoded into 30 different versions, HRC (Hypothetical Reference Circuit) 1–30, so the whole test set consists of 900 sequences.
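As an illustration, the parameter grid above can be expanded programmatically. The ffmpeg invocation below is only a sketch of how such HRC variants might be produced with the x264 encoder; the exact command line and file names used by the project are not documented here and are assumptions.

```python
from itertools import product

# Parameter space described above: 4 resolutions x 10 QP values = 40
# combinations, out of which 30 HRC variants were selected per source.
resolutions = ["1280x720", "704x576", "640x360", "352x288"]
qps = [33, 35, 37, 39, 41, 43, 45, 47, 49, 51]

def hrc_command(src, resolution, qp):
    # Illustrative ffmpeg/x264 call: fixed-QP encoding, GoP of 30,
    # B-frames disabled so only I and P frames are produced.
    return (f"ffmpeg -i {src} -c:v libx264 -qp {qp} -g 30 -bf 0 "
            f"-s {resolution} {src}_qp{qp}_{resolution}.mp4")

grid = list(product(resolutions, qps))
commands = [hrc_command("SRC01.avi", res, qp) for res, qp in grid]
```

Running one such command per (resolution, QP) pair, for each of the 30 source sequences, yields the 900-sequence test set described above.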

The tests were performed using a web-based interface. In the whole experiment, complete answers were gathered from 30 subjects. Apart from the plate number, testers also had to specify the car colour and brand. Playback of the video sequences could be controlled: play, pause, stop, navigate and full-screen mode were allowed. The obtained results were stored in a database for further processing. The interface is presented in the picture below:


The obtained results require some processing, including identification of reliable testers and interpretation of the results.
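One plausible screening step is to compare each tester's answers with the ground-truth plates and keep only sufficiently accurate testers. This is a sketch, not the project's actual procedure: the function name, data layout and 50% threshold are assumptions.

```python
def screen_testers(answers, ground_truth, min_accuracy=0.5):
    """answers: {tester: {sequence_id: plate_string}};
    ground_truth: {sequence_id: plate_string}.
    Returns testers whose normalised answers match the ground truth
    often enough, together with their accuracy."""
    reliable = {}
    for tester, responses in answers.items():
        if not responses:
            continue
        correct = sum(
            1 for seq, plate in responses.items()
            if ground_truth.get(seq) == plate.replace(" ", "").upper()
        )
        accuracy = correct / len(responses)
        if accuracy >= min_accuracy:
            reliable[tester] = accuracy
    return reliable
```

Normalising case and spacing before comparison avoids penalising testers for harmless formatting differences in their typed answers.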


Automatic Tracking of Moving Objects


A two-camera setup was developed: a megapixel fixed camera is used for a general overview of the protected area and for automatic image analysis, while a second, moving pan-tilt-zoom (PTZ) camera is used for automatic targeting of selected objects. Background modelling is performed to localize moving objects. Object coordinates are converted into pan-tilt-zoom coordinates for the moving camera by applying a geometric transformation between the ground plane, image pixel coordinates and the cameras' positions in the real 3D world.
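The coordinate conversion can be sketched as follows: a homography maps pixels of the fixed camera's image to ground-plane coordinates, and simple trigonometry then yields pan and tilt angles for the moving camera. The matrix and camera position used below are placeholders, not the project's calibration data.

```python
import math

def pixel_to_ground(H, u, v):
    # Apply a 3x3 homography H (image plane -> ground plane, metres)
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def ground_to_pan_tilt(cam_pos, target):
    """cam_pos: (x, y, height) of the PTZ camera above the ground
    plane; target: (x, y) on the ground plane. Returns angles in
    degrees (negative tilt = looking down)."""
    cx, cy, cz = cam_pos
    dx, dy = target[0] - cx, target[1] - cy
    pan = math.degrees(math.atan2(dy, dx))
    tilt = -math.degrees(math.atan2(cz, math.hypot(dx, dy)))
    return pan, tilt
```

In practice the homography would be estimated from a few point correspondences between the fixed camera's image and surveyed ground positions.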

The system operator can select a moving object detected in the left view (e.g. a red car bounded by a blue box), and the PTZ camera automatically zooms in on it and follows its location. The PTZ camera can also be quickly positioned at any point in the scene by clicking on it in the left view. This solution can be extended with multiple moving PTZ cameras, in which case the optimal camera for displaying a particular point is selected automatically, taking into account obscuration by buildings, etc.


Automatic Detection of Threats Based on Video Analysis

The dangerous-event detection process is as follows. First, image analysis is performed to detect moving objects. Each object is separated from the background and its movement is analysed. Then, based on changes of movement speed and direction, various potentially dangerous events (such as robbery or assault) can be detected.

In the presented sample, the event “robbery” is defined as:

- first, a meeting of two moving objects A and B occurs,

- then, a rapid change of speed and direction of A's movement happens (escape),

- it is followed by B running after A (moving fast in the direction of A's escape).
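The three stages above can be expressed as a toy rule over object trajectories. The thresholds, function names and track layout below are illustrative assumptions, not the project's actual detector.

```python
import math

def _speed(p, q):
    # Average speed between two track samples (t, x, y)
    (t1, x1, y1), (t2, x2, y2) = p, q
    return math.hypot(x2 - x1, y2 - y1) / (t2 - t1)

def detect_robbery(a, b, meet_dist=1.5, jump_factor=3.0):
    """a, b: time-aligned tracks [(t, x, y), ...] of objects A and B.
    Returns True when the three-stage pattern occurs:
    meeting -> A's sudden escape -> B chasing A."""
    met = escaped = False
    for i in range(1, len(a) - 1):
        if not met:
            # stage 1: A and B come close together (meeting)
            dist = math.hypot(a[i][1] - b[i][1], a[i][2] - b[i][2])
            met = dist < meet_dist
        elif not escaped:
            # stage 2: A's speed jumps sharply (escape)
            escaped = (_speed(a[i], a[i + 1])
                       > jump_factor * max(_speed(a[i - 1], a[i]), 1e-6))
        else:
            # stage 3: B also speeds up, running after A
            if (_speed(b[i], b[i + 1])
                    > jump_factor * max(_speed(b[i - 1], b[i]), 1e-6)):
                return True
    return False
```

A production system would additionally check that B's movement direction matches A's escape direction, as the definition above requires.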

When such a potentially dangerous event is automatically detected by the video analysis algorithm, a video clip containing the whole event is transmitted to the security system operator for verification. The operator verifies the alarm, confirms the event and decides what should be done next.


Crowd Observation at the Bus (or Tram) Stop

A camera with a module detecting people in a dangerous area is installed. In this example the dangerous area is the railway track: if a person appears on the track, the system reacts and a red stripe pops up.
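Checking whether a detected person has entered the marked danger area reduces to a point-in-polygon test. Below is a minimal ray-casting sketch; the zone coordinates and function names are hypothetical, not taken from the project.

```python
def in_danger_zone(point, polygon):
    """Ray-casting test: is `point` (x, y) inside `polygon`, a list
    of (x, y) vertices marking the danger area (e.g. the railway
    track) in image coordinates?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle on each polygon edge crossed by a horizontal ray
        if (y1 > y) != (y2 > y) and \
                x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside
```

The system would run this test on the foot point of every detected person in each frame and raise the red-stripe alert when it returns True.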





Left Luggage Detection

A “regular” left-luggage detection. In this case, the project is testing a custom algorithm on a widely used benchmark film from the conference:

PETS 2006 – a collection of test recordings from the 9th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, New York, USA (2006).



Crowd Behaviour Analysis using Quad-Copter

Recordings for analysing the behaviour of the crowd, made with a quad-copter.






GUT Video and Audio Recordings

Three recording sessions were organized and performed by GUT, an important step in INDECT development.

The gathered material serves as a reference for audio and video processing algorithms aimed at automatic detection of threats and dangerous events such as gunshots, screams, breaking glass, explosions, and panic reactions in a crowd. All participants were GUT students who signed an “Informed Consent Form”.

The first session took place in the northern courtyard of the Main Building of the Gdansk University of Technology. The equipment used in the recording comprised 2 AVS units, 1 digital PTZ camera, 2 fixed digital cameras and an alarm signal gun. The material was recorded at the highest possible quality (27 Mbit/s), with Ground Truth annotations for the experiments. In addition, two Canon XHG1 cameras were used to record the scenes at a resolution of 1440×1080 and 25 frames per second. The test group consisted of 16 persons, and 30 minutes of material was recorded. During this time, 5 potentially hazardous situations were arranged; each hazardous situation consisted of a gunshot followed by an outbreak of panic in the group. In contrast, a typical situation shows the group not reacting to the shooter, representing the case in which the shot is an acceptable activity (e.g. a celebration).

The second recording session was done between the buildings of the Faculty of Electronics, Telecommunications and Informatics. During this session, the MD4-200 quadro-copter developed by Micro-drones GmbH was used to record the crowd reaction scenes from the air.

The third recording session was organized by GUT in cooperation with GHP at a shooting range in Warsaw. An effort was made to record real weapon shots and explosion sounds to serve as sound samples for automatic recognition of events. Numerous types of hand weapons were used and recorded at varying microphone distances. The recorded sounds were collected and catalogued.


Reversible Privacy Protection

The reversible privacy protection tool is tested with respect to two important performance aspects. Firstly, tests are conducted on static images in order to assess the achievable reconstruction quality. The test material consists of commonly used files from existing image databases, e.g. the SIPI database from the University of Southern California (USC). The development of the fundamental image reconstruction methods has been the main focus of the project research so far.

The second performance measure represents the efficiency of the applied protection, i.e. how well the system detects which fragments of the video stream should be protected. The current prototype uses known pattern detection methods, e.g. a cascaded AdaBoost detector for single-frame face and licence plate detection. The target system will use more sophisticated methods based on joint object/pattern detection and tracking. Since this functionality is still an early work in progress, a standardized evaluation of this aspect has not yet been relevant. Early tests for algorithm development have been conducted on video material recorded in the AGH offices, capturing the research team only. A dedicated evaluation scenario will be developed as soon as the tracking sub-system is ready.
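The “reversible” aspect can be illustrated with a keyed XOR stream over the pixel bytes of a protected region: with the key, the original pixels are restored exactly, while viewers without the key see only noise. This is a conceptual sketch only; the project's actual reconstruction method is not described here.

```python
import random

def mask_region(pixels, key):
    """Reversibly scramble a flat list of 8-bit pixel values with a
    key-seeded XOR stream. Applying the same function with the same
    key restores the original values (XOR is its own inverse)."""
    stream = random.Random(key)
    return [p ^ stream.randrange(256) for p in pixels]
```

Because the keystream is deterministic for a given key, `mask_region(mask_region(data, k), k) == data`; a real system would use a proper cipher rather than Python's `random` module.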

Face Recognition Search

In the scenario connected with the face recognition problem, the project conducted an analysis of the available solutions. The project also had to gather information about how photographs are taken in the manner used by the police. This knowledge is necessary to create a proper database of photos for future tests, and it is important when choosing a set of test subjects so as to eliminate the typical face recognition problems.

The project also concentrated on establishing a database structure to store photos in binary format. This was essential for conducting effective performance tests and, eventually, for accepting photos obtained from the Polish Police.

During the development of the face recognition system, many photos and videos of people in different poses, under different lighting, etc. are needed for researching efficient methods and for testing. The aim was to take numerous photos and prepare them for use in research and system building.

For accurate face recognition, photos must be properly prepared or chosen. One factor that can improve the face recognition rate is to use only images that contain a face in a vertical position. In a scenario where the face detection system receives many frames from a camera, the project is able to choose only the images that are best suited for the face recognition system.
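One simple way to select frames with an upright face, assuming eye landmarks are available from the detector, is to estimate the roll angle from the eye positions. The function names, threshold and data layout here are hypothetical, not the project's code.

```python
import math

def face_roll_deg(left_eye, right_eye):
    # Roll angle of the face, estimated from eye landmarks (x, y):
    # 0 degrees means the eyes are level, i.e. the face is vertical.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def pick_upright_frames(frames, max_roll=10.0):
    """frames: list of (frame_id, left_eye, right_eye) tuples.
    Keep only frames where the face is close to vertical."""
    return [fid for fid, le, ri in frames
            if abs(face_roll_deg(le, ri)) <= max_roll]
```

Frames passing this filter would then be forwarded to the recognition stage, sidestepping part of the pose problem described below.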

When testing face recognition algorithms, particular attention should be given to the proper selection of photos. Two main problems arise during the recognition process:

  • The illumination problem arises when comparing photos taken under various lighting conditions. In the case of a significant difference in lighting, a face may not be properly recognized.
  • The pose problem appears when the face is turned left or right relative to the body axis; an algorithm working on a two-dimensional image cannot recognize it. The police take two kinds of face photos. The first are frontal photos, in which the person looks straight at the camera and the whole face is visible. The second kind is the profile photo, in which the person's ear is also visible; profile photos are taken separately for both sides of the face. The police are interested in taking photos of unknown, suspect and deceased persons. A detailed description and the characteristics of such images are available in Order No. 64 of the Chief of Police, dated 17 March 2003, on the acquisition, processing and use of information by the Police.

Both a database and an interface were created, and their functionality was fully sufficient for preliminary tests. As a consequence of the research mentioned above and of the development of other system modules, a need arose to modify and enhance the database and to implement additional interfaces. An application responsible for face detection and face cropping was completed successfully. Moreover, the structure of this application is similar to the analogous application written in Python, which eases the process of testing.

Over 270 photos and several videos were taken, prepared and added to the project's photo resources. They are used intensively in current research, system development and testing.
