Automotive Collision Avoidance System Field Operational Test Program
FIRST ANNUAL REPORT

9 DATA FUSION (Task C1)

9.1 Requirements Definition and Architecture Development (Task C1A)

Objectives

The objective of this task is to develop performance and interface requirements and the architecture for the data fusion subsystem.

Approach

The approach is to gather information on the data provided by each sensor subsystem, its performance specifications, and its confidence measures, together with the requirements of the subsystems that consume the output of the data fusion subsystem, and to use this information to develop performance and interface requirements. This information will also be used to select the fusion algorithms and to set requirements on the data fusion architecture.

Milestones and Deliverables

The initial data fusion architecture and performance requirements definition was completed and presented at a meeting at HRL on 9/16/99.

Work Accomplished

HRL developed performance and interface requirements for the data fusion subsystem, which have been incorporated into the Data Fusion requirements.

Research Findings

The main research finding of this task is that the data fusion subsystem must be robust and able to detect and handle situations when there is missing or invalid data.

Plans through December 2000

This task has been completed.

9.2 Initial Algorithm Development (Task C1B)

Objectives

The objective of this task is to develop fusion algorithms to fuse radar, lane tracking, GPS/map, and host vehicle sensors to produce a robust estimate of the host lane geometry, host state, driver distraction level, and environmental state.

Approach

The data fusion subsystem can be divided into four main functional subunits:

  1. Host lane geometry estimation: The data fusion subsystem provides an estimate of the forward lane geometry of the current host vehicle lane by fusing forward lane geometry estimates from the vision sensor subsystem, map-based subsystem, and scene-tracking subsystem with curvature estimates based upon vehicle dynamics sensors. Since vehicle motion along the road makes forward road geometry a quantity that varies dynamically with time, HRL needed a dynamic recursive estimation approach such as the Kalman filter. Kalman filters perform recursive estimation using both a model-based update of the state variables and an update of the state estimates using a weighted version of the new measurements. Fusion is done with Kalman filters because they provide a natural framework for fusing incomplete and inaccurate information from multiple sources and can improve accuracy and robustness to stochastic errors (e.g., sensor noise) by acting as a form of "low-pass" filter. A fundamental issue in fusing different forms of information about forward lane geometry in a Kalman filter framework is the choice of a good road model. HRL investigated several different road models (parabolic, single-clothoid, spline) and chose a "higher-order" road model after extensive testing on simulated and some real data.
  2. Host state estimation: The data fusion subsystem provides a "fused" host state estimate by fusing information from vision and scene-tracking subsystems. Host state primarily consists of host vehicle offset and orientation in its lane. HRL used a Kalman filter approach for host state estimation as well for reasons discussed above. In addition, since host vehicle sideslip angle needs to be estimated in the process model, this parameter was also included in the state-space representation of the Kalman filter.
  3. Driver distraction estimation: The approach to estimating driver distraction is based on determining whether, and what type of, secondary task the driver is performing. HRL then uses a fuzzy rule-based system to estimate driver distraction depending on the type of task and when the task was initiated.
  4. Environmental state estimation: When used to interpret environment state, the data fusion subsystem detects and reports conditions indicative of slippery road surfaces. Data on conditions is used to modify the expected braking intensity the driver will achieve when responding to an alert. In turn, the expected intensity has an impact on the timing of the alerts. Our approach is to develop a rule-based system to indicate road conditions and an associated confidence measure as a function of windshield wiper activity and outside temperature.
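To make the recursive-estimation idea concrete, the following is a minimal sketch of a one-state Kalman filter sequentially fusing curvature measurements from two hypothetical sources (e.g., vision and map). All names, noise values, and the constant-curvature process model are illustrative assumptions, not the ACAS implementation:

```python
# Illustrative sketch only: a scalar Kalman filter fusing noisy
# curvature measurements (1/m) from two assumed sources.
import random

def kalman_fuse(z_vision, z_map, r_vision=4e-8, r_map=9e-8,
                q=1e-9, x0=0.0, p0=1e-6):
    """Recursively fuse two curvature measurement streams."""
    x, p = x0, p0
    estimates = []
    for zv, zm in zip(z_vision, z_map):
        # Predict: constant-curvature model, process noise q.
        p += q
        # Update with each sensor in turn (sequential fusion).
        for z, r in ((zv, r_vision), (zm, r_map)):
            k = p / (p + r)      # Kalman gain
            x += k * (z - x)     # weighted measurement update
            p *= (1.0 - k)       # reduced uncertainty
        estimates.append(x)
    return estimates

# Usage: true curvature 1/500 m^-1, corrupted by sensor noise.
random.seed(0)
true_c = 1.0 / 500.0
zv = [true_c + random.gauss(0, 2e-4) for _ in range(200)]
zm = [true_c + random.gauss(0, 3e-4) for _ in range(200)]
est = kalman_fuse(zv, zm)
# The fused estimate converges toward the true curvature, with
# variance lower than either raw measurement stream alone.
```

The weighting in the update step is what makes the filter a natural fusion framework: a noisier source (larger r) automatically receives a smaller gain.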

Milestones and Deliverables

The first milestone for this task is the Preliminary Data Fusion Algorithm Demonstration. This demonstration, which is scheduled for December 2000, will be an offline (i.e., non-real-time) demonstration of all four parts of the data fusion subsystem: host lane geometry estimation, host-state estimation, driver distraction level estimation, and environment state estimation.

Although not part of the official list of program deliverables, a preliminary version of the data fusion software was delivered to GM for insertion into the EDV in September 2000. Also, a model of the data fusion subsystem was provided to PATH for use in the PATH simulator.

Work Accomplished

HRL has developed and implemented initial versions of algorithms for host lane geometry, host state, driver distraction and environment state estimation. These algorithms were chosen and developed after an extensive literature survey and testing of several competing, promising approaches. For example, as discussed above, HRL tested several commonly used road models and compared errors in estimating road geometry in both a recursive (Kalman) and a non-recursive (least-squares) framework. This performance evaluation demonstrated that conventional "single-clothoid" road models have estimation errors that would not meet the system performance requirements. This motivated us to develop a higher-order road model that was amenable to state-space representation in a Kalman filter framework.

We have completed development and implementation of this novel road model and evaluated its performance. Results show that this model is superior to a conventional "single-clothoid" road model in that it has smaller road geometry estimation errors, especially during sharp transitions in road curvature. Fig. 9.1 shows a simulation scenario used to evaluate these road models. The simulated road geometry is shown in the left half of the figure, while the clothoid coefficients c0 and c1 are shown in the right half. The transition points (changes in the c1 coefficient) are also shown in Fig. 9.1. A host vehicle is simulated to traverse the road at a speed of 20 m/sec with a look-ahead distance of 100 m. Road geometry information is provided as offsets of 10 points along the road, spaced 10 m apart, starting in front of the host vehicle. The sampling rate is 10 Hz. Kalman filters based on the single-clothoid and new road models use these offsets as measurements to estimate road geometry. The estimated road geometry is compared to the simulated road geometry and errors are computed. Figure 9.2 shows the mean and maximum estimation errors of the single-clothoid and new road models as a function of time (x-axis).
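For illustration, the offset measurement model under the conventional single-clothoid approximation, y(x) = c0*x^2/2 + c1*x^3/6 with x the look-ahead distance, can be sketched as below. The coefficient values and the simple least-squares recovery are assumptions for illustration, not the report's simulation code:

```python
# Sketch of the single-clothoid offset model: lateral offset y at
# look-ahead distance x is y(x) = c0*x^2/2 + c1*x^3/6, where c0 is
# curvature (1/m) and c1 its rate of change. Values are illustrative.
import numpy as np

def clothoid_offsets(c0, c1, xs):
    """Lateral offsets of road points at look-ahead distances xs (m)."""
    xs = np.asarray(xs, dtype=float)
    return c0 * xs**2 / 2.0 + c1 * xs**3 / 6.0

def fit_clothoid(xs, offsets):
    """Non-recursive least-squares recovery of (c0, c1) from offsets."""
    xs = np.asarray(xs, dtype=float)
    A = np.column_stack([xs**2 / 2.0, xs**3 / 6.0])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(offsets), rcond=None)
    return coeffs  # [c0_hat, c1_hat]

# 10 points spaced 10 m apart, as in the simulation scenario above.
xs = np.arange(10.0, 101.0, 10.0)
y = clothoid_offsets(c0=1.0 / 500.0, c1=1e-5, xs=xs)
c0_hat, c1_hat = fit_clothoid(xs, y)
```

A single-clothoid fit like this is exact only while c1 is constant; at a transition point the model is momentarily wrong over the look-ahead window, which is precisely where the report observes the largest estimation errors.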

Figure 9.1 Simulation Scenario Used to Evaluate These Road Models

Figure 9.2 Mean And Maximum Estimation Errors

The performance of this model is currently being evaluated on roads obtained from the NavTech database.

HRL has also developed an adaptive Kalman filter approach for road geometry and host state estimation which is superior to a conventional Kalman filter. The adaptive Kalman filter performs better during sharp transitions in road geometry compared to a conventional Kalman filter. Performance evaluation using real data is in progress.
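One common way to make a Kalman filter adaptive is to inflate the process noise when the normalized innovation becomes large, as it does at a sharp road-geometry transition. The scalar sketch below illustrates this general mechanism; it is an assumed formulation for illustration, not necessarily HRL's:

```python
# Illustrative adaptive Kalman filter: boost process noise q when the
# innovation exceeds a gate, so the filter re-converges quickly after
# an abrupt change in the underlying state. Parameters are assumed.
def adaptive_kalman(zs, r=1e-2, q_base=1e-5, boost=50.0, gate=3.0):
    x, p = 0.0, 1.0
    out = []
    for z in zs:
        innov = z - x
        s = p + q_base + r  # predicted innovation variance
        # Adapt: a gated (large) innovation suggests a model
        # transition, so temporarily inflate the process noise.
        q = q_base * boost if innov * innov > gate * gate * s else q_base
        p += q                # predict
        k = p / (p + r)       # gain
        x += k * innov        # update
        p *= (1.0 - k)
        out.append(x)
    return out

# Usage: a step change in the underlying state (0 -> 1 at sample 50),
# standing in for a sharp curvature transition.
zs = [0.0] * 50 + [1.0] * 50
est = adaptive_kalman(zs)
```

Running the same data with boost=1.0 reproduces a conventional (non-adaptive) filter, whose fixed gain tracks the step noticeably more slowly.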

We developed a fuzzy rule-based algorithm to estimate driver distraction. The data fusion subsystem provides an estimate of driver distraction by monitoring whether the driver is performing a secondary task. In our working model, there are two major categories of secondary tasks that may affect driver situation awareness. The first is a simple task that requires just one glance to complete the necessary visual aspect. The second is complex and requires many short sampling glances away from the forward view. For the first category, once the control is activated, the amount of distraction left to predict is insignificant; in other words, the activation of the control essentially follows the single-glance distraction time. In complex secondary tasks the driver's vision is time-shared with the primary driving task: the driver cyclically samples the task, activates the control, and returns to the forward view for as many glancing cycles as are needed to complete the task (adjusting the radio, perhaps, or turning on the air conditioning).

The domain knowledge assumes that the first activation of any of the controls (FACT) for such tasks follows the first glance and predicts a high degree of distraction for the next 8-10 seconds. The elapsed time since the FACT (1stAct), defined as the difference between the current time and the time of the FACT, is used to predict the coming level of driver distraction for a given complex task such as radio knob adjustments. The longer the 1stAct, the less predictable the driver distraction level for the remaining glance time; in other words, the strength of the 1stAct is inversely proportional to its length. To predict the driver distraction level, fuzzy rules are based on the strength of the 1stAct and the Duration, as depicted by the matrix shown in Table 9.1. "Duration" refers to the current cycle of control activation and is quantized as long, normal, short, or off; the strength of the 1stAct is quantized as off, weak, medium, or strong. Performance evaluation of the driver distraction estimation algorithm is in progress using simulated inputs.

Table 9.1 Driver Distraction Level

Radio, HVAC & DVI with knob adjustments:

                          Duration
  1stAct      long     normal    short    off      fault
  off         LOW      LOW       LOW      NONE     NONE
  weak        MED      MED       LOW      NONE     NONE
  medium      MED      HIGH      MED      LOW      NONE
  strong      MED      HIGH      HIGH     MED      NONE
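The Table 9.1 lookup can be sketched as a simple table-driven function. The thresholds used here to quantize the elapsed 1stAct time into strength bins are assumed values for illustration only:

```python
# Table-driven sketch of the Table 9.1 matrix: distraction level as a
# function of 1stAct strength (rows) and Duration (columns).
DURATIONS = ("long", "normal", "short", "off", "fault")
TABLE = {
    "off":    ("LOW", "LOW",  "LOW",  "NONE", "NONE"),
    "weak":   ("MED", "MED",  "LOW",  "NONE", "NONE"),
    "medium": ("MED", "HIGH", "MED",  "LOW",  "NONE"),
    "strong": ("MED", "HIGH", "HIGH", "MED",  "NONE"),
}

def strength_of_1st_act(elapsed_s):
    """Quantize elapsed time since first activation.

    Thresholds are assumptions; the report only states that strength
    is inversely proportional to elapsed time over an 8-10 s horizon.
    """
    if elapsed_s is None:      # no first activation observed
        return "off"
    if elapsed_s < 3.0:        # recent FACT: strongest predictor
        return "strong"
    if elapsed_s < 6.0:
        return "medium"
    if elapsed_s < 10.0:
        return "weak"
    return "off"               # beyond the prediction horizon

def distraction_level(elapsed_s, duration):
    return TABLE[strength_of_1st_act(elapsed_s)][DURATIONS.index(duration)]
```

For example, a normal-duration activation two seconds after the FACT maps to a HIGH distraction level, while any activation with no observed FACT maps to the "off" row.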

The environment state estimation algorithm detects and reports conditions indicative of slippery road surfaces. Data on conditions is used to modify the expected braking intensity the driver will achieve when responding to an alert. In turn, the expected intensity has an impact on the timing of the alerts. HRL defined road conditions as dry, dry-icy, wet, or icy. They are provided at a confidence level specified as none, low, medium or high. Both the road conditions and their associated confidence levels are derived first from windshield wiper activity and then refined using outside temperature measurements, as shown by the matrix in Table 9.2. Performance evaluation of the environment state estimation algorithm is in progress using simulated inputs.

Table 9.2 Environment State Estimation

Road condition based on wiper activity and temperature:

                             wiper not active         wiper active
                             above       below        above       below
                             freezing    freezing     freezing    freezing
  Road surface condition     DRY         DRY-ICY      WET         ICY
  Confidence level           HIGH        LOW          HIGH        MED
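The Table 9.2 matrix amounts to a small rule-based function. The sketch below assumes a Celsius freezing threshold and is illustrative only:

```python
# Rule-based sketch of the Table 9.2 matrix: road condition and
# confidence level from wiper activity and outside temperature.
def environment_state(wiper_active, temp_c, freezing_c=0.0):
    """Return (road_condition, confidence) per the Table 9.2 matrix."""
    below_freezing = temp_c < freezing_c
    if not wiper_active:
        # Dry pavement, but possible ice when below freezing.
        return ("DRY-ICY", "LOW") if below_freezing else ("DRY", "HIGH")
    # Wipers active implies precipitation: wet, or icy when freezing.
    return ("ICY", "MED") if below_freezing else ("WET", "HIGH")
```

For example, active wipers with an outside temperature of -5 C yield ("ICY", "MED"), which would lower the expected braking intensity and hence advance alert timing.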

Research Findings

  1. The "new" road model is superior to a conventional single-clothoid road model as it produces smaller road geometry estimation errors, especially during sharp transitions in road curvature. In some of the simulation studies, during a transition from a straightaway to a 300 m curvature segment, the single-clothoid road model had errors of about one-half of a lane width, while the new model had maximum errors of less than one-quarter of a lane width. Better road geometry estimation should translate into lower errors in distinguishing in-path from out-of-path targets.
  2. The adaptive Kalman filter performs better during sharp transitions in road geometry compared to a conventional Kalman filter. This allows the system to respond rapidly to changing road curvature and could once again provide increased accuracy in determining host vehicle path and reducing errors in detecting in-path vs. out-of-path targets. The same approach is also applicable to host state estimation where the system will have better ability in tracking host state accurately during transitions and lane-change maneuvers. Performance of this approach will be evaluated on real data in the near future.

Plans through December 2000

Plans for the next six months are to work with GM to collect synchronized data from all of the sensor subsystems so that we can test the performance of the fusion algorithms on real data. The real data will also be used to refine the fusion algorithms to improve performance.

9.3 Real-time Algorithm Development (Task C1C)

Objective

The objective of this task is to develop real-time versions of the algorithms developed in Task C1B for integration into pilot and deployment vehicles.

Approach

To develop real-time versions of the algorithms developed in Task C1B, our approach is first to port the algorithms onto the real-time hardware platform specified by GM for the data fusion subsystem. After porting the algorithms, we will evaluate algorithm real-time performance to determine if there are portions of the fusion algorithm that must be tuned or modified to meet real-time processing requirements.

Milestones and Deliverables

The first milestone for this task is the Data Fusion Algorithm Demonstration, which is scheduled for the end of April 2001.

Work Accomplished

This task has not yet started.

Plans Through December 2000

This task is scheduled to start in October 2000. We will begin porting of the data fusion algorithms to hardware specified by GM and begin real-time performance evaluation.

Figure 9.3 Task C1 Schedule


