Sensor data fusion for road obstacle detection: A validation framework. In: Sensor Fusion and its Applications

Document type
BOOK CHAPTER (CO)
Language
English
Author
LABAYRADE ; PERROLLAZ ; GRUYER ; AUBERT
Abstract
Obstacle detection is an essential task for autonomous robots. In particular, in the context of Intelligent Transportation Systems (ITS), vehicles (cars, trucks, buses, etc.) can be considered as robots; the development of Advanced Driver Assistance Systems (ADAS), such as collision mitigation, collision avoidance, pre-crash or Adaptive Cruise Control, requires reliable road obstacle detection systems. Various detection approaches have been proposed, depending on the sensor involved: range sensors such as radar (Skutek et al., 2003) or laser scanners (Labayrade et al., 2005; Mendes et al., 2004), cooperative detection systems (Griffiths et al., 2001; Von Arnim et al., 2007), or vision systems. In this last field, monocular vision generally exploits the detection of specific features such as edges, symmetry (Bertozzi et al., 2000), color (Betke & Nguyen, 1998; Yamaguchi et al., 2006) or even saliency maps (Michalke et al., 2007). However, most monocular approaches rely on the recognition of specific objects, such as vehicles or pedestrians, and are therefore not generic. Stereovision is particularly suitable for obstacle detection (Bertozzi & Broggi, 1998; Labayrade et al., 2002; Nedevschi et al., 2004; Williamson, 1998), because it provides a three-dimensional representation of the road scene.

A critical point for the targeted automotive applications is reliability: the detection rate must be high, while the false detection rate must remain extremely low. So far, experiments and assessments of existing systems show that a single sensor is not enough to meet these requirements: due to the high complexity of road scenes, no single-sensor system can currently reach the expected 100% detection rate with no false positives. Multi-sensor approaches and fusion of data from various sensors must therefore be considered to improve performance. Various fusion strategies can be envisaged, such as merging heterogeneous data from various sensors (Steux et al., 2002). More specifically, many authors have proposed cooperation between an active sensor and a vision system, for instance a radar with monocular vision (Sugimoto et al., 2004), or a laser scanner with a camera (Kaempchen et al., 2005) or with a stereovision rig (Labayrade et al., 2005). Cooperation between monocular and stereovision has also been investigated (Toulminet et al., 2006).

Our experiments in the automotive context have shown that using one sensor specifically to validate the detections provided by another is an efficient scheme that can lead to a very low false detection rate while maintaining a high detection rate. The principle is to tune the first sensor to provide overabundant detections (so that no plausible obstacle is missed), and then to post-process with the second sensor to confirm the existence of the previously detected obstacles. In this chapter, such a validation-based sensor data fusion strategy is proposed, illustrated and assessed. The chapter is organized as follows: the validation framework is presented in Section 2; the following sections show how it can be implemented with two specific sensors, namely a laser scanner providing detection hypotheses and a stereovision rig validating these detections.
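To make the validation scheme concrete, here is a minimal, purely illustrative Python sketch. The `Obstacle` fields and the `detect_candidates` and `is_confirmed` callables are hypothetical stand-ins for the laser processing and the stereovision criteria detailed in the chapter, not the authors' actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Obstacle:
    x: float      # longitudinal position (m) -- hypothetical fields
    y: float      # lateral position (m)
    width: float  # estimated width (m)

def validation_fusion(
    laser_scan: List[Tuple[float, float]],
    stereo_frame: object,
    detect_candidates: Callable[[List[Tuple[float, float]]], List[Obstacle]],
    is_confirmed: Callable[[Obstacle, object], bool],
) -> List[Obstacle]:
    # The first sensor is tuned to over-detect, so that no plausible
    # obstacle is missed; the second sensor then confirms or rejects
    # each hypothesis, which keeps the false detection rate low.
    candidates = detect_candidates(laser_scan)
    return [obs for obs in candidates if is_confirmed(obs, stereo_frame)]
```

The design choice is asymmetric on purpose: missed detections by the first sensor are unrecoverable, whereas its false alarms can still be filtered out by the validation step.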
Section 3 deals with the laser scanner raw data processing: 1) clustering of the laser points into targets (a simple instance is sketched below); and 2) a tracking algorithm to estimate the dynamic state of the objects and to monitor their appearance and disappearance. Section 4 presents the stereovision sensor and the validation criteria, together with an experimental evaluation of the system. Finally, Section 5 shows how the framework can be implemented with other kinds of sensors; experimental results are also presented. Section 6 concludes.
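As an illustration of the first processing step of Section 3, a simple distance-threshold clustering of scan points into targets might look as follows. The `max_gap` threshold and the (x, y) point format are assumptions for the sketch, not the chapter's actual algorithm.

```python
import math

def cluster_laser_points(points, max_gap=0.5):
    """Group scan points (x, y) into targets: a new cluster starts
    whenever the gap between consecutive points exceeds max_gap (m).
    Points are assumed ordered by scan angle."""
    clusters, current = [], []
    for p in points:
        if current and math.dist(p, current[-1]) > max_gap:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters

# Example: two nearby impacts and one distant impact -> two targets
print(cluster_laser_points([(1.0, 0.0), (1.1, 0.1), (5.0, 2.0)]))
# [[(1.0, 0.0), (1.1, 0.1)], [(5.0, 2.0)]]
```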
Keywords
Cooperative fusion ; Target detection and tracking ; Belief theory
Publisher
SCIYO
