Automotive Collision Avoidance System Field Operational Test Program
FIRST ANNUAL REPORT

10 TRACKING AND IDENTIFICATION (Task C2)

The objectives of the Tracking and Identification Task are to:

  1. Refine the path estimation and target identification algorithms,
  2. Incorporate vision- and GPS-derived information,
  3. Integrate the algorithms into the FOT vehicle system, and
  4. Support FOT deployment.

Significant progress was made under this task during the first program year. Delphi is largely responsible for the conventional target path estimation (Task C2A) and radar-based scene tracking (Task C2B) activities associated with the Tracking and Identification Task. GM is responsible for the enhanced GPS approach (Task C2C). This section provides a summary of the major activities that were initiated and the results that were achieved.

10.1 Conventional Approach Development (Task C2A)

Objectives

Adaptive Cruise Control (ACC) and Forward Collision Warning (FCW) systems require the ability to robustly resolve and identify both stationary and moving ‘target’ vehicles that are in the motion path of the Host vehicle. The performance of these systems is affected by their ability (a) to estimate the relative inter-vehicular path motion (i.e.: range, relative speed, radius of curvature, etc.) between the host vehicle, the roadway ahead of the host, and all of the appropriate targets (i.e.: roadside objects, and in-lane, adjacent-lane, and crossing vehicles, etc.); and (b) to predict the mutual intersection of these motion paths. In addition, these systems must be robust in the presence of the various types of driving behavior (e.g.: in-lane weaving/drift, lane change maneuvers, etc.) and roadway conditions (e.g.: straight roads, curved roads, curve entry/exit transitions, intersections, etc.) that are encountered in the ‘real-world’ environment.

During the previous ACAS Program (1995-1998), significant activities were undertaken by Delphi to improve our existing path estimation and in-path target selection algorithms. The target selection approach used a single active forward-looking radar sensor augmented with a yaw rate sensor. The forward-looking radar sensor provided target range, range rate, and angular position information. The yaw rate sensor was used to estimate the roadway curvature ahead of the Host vehicle. Delphi’s first-generation target discrimination algorithms were used to identify overhead bridge objects and to discriminate between moving cars and trucks. The Target / Host kinematics were evaluated to determine target motion status (i.e.: oncoming, stopped, moving, cut-in and cut-out, etc.), and geometric relationships were employed to determine which of the valid roadway objects fell within the Host’s forward projected path. The improved algorithms yielded very good results, but they were prone to false alarms during curve entry/exit scenarios and during host lane changes.
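As a purely illustrative sketch of this geometric approach (not Delphi’s actual algorithm), the host’s forward path can be approximated as a circular arc whose curvature is derived from the yaw rate, and each radar target can be tested against a half-lane-width corridor around that arc:

    import math

    def in_path_status(yaw_rate, host_speed, target_range, target_angle,
                       lane_half_width=1.8):
        """Minimal in-path check.  Curvature comes from the yaw rate sensor
        (curvature = yaw rate / speed); the target's lateral offset from the
        predicted arc is compared with a half-lane threshold.  Angles in
        radians, distances in meters, speed in m/s; all values illustrative."""
        curvature = yaw_rate / max(host_speed, 0.1)    # 1/m; guard low speeds
        x = target_range * math.cos(target_angle)      # host frame: x forward
        y = target_range * math.sin(target_angle)      # y to the left
        y_path = 0.5 * curvature * x * x               # arc approximation
        return abs(y - y_path) <= lane_half_width      # True -> in-path

Because the yaw rate reflects only the host’s current motion, a sketch like this inherits exactly the weaknesses noted above: during a curve entry the measured yaw rate lags the road ahead, and during a lane change it is misread as road curvature.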

Approach

In the current ACAS FOT program, four complementary host and road state estimation approaches are being developed. The complementary approaches are as follows: (a) vision-based road prediction (Task B2), (b) GPS-based road prediction (Task C2C), (c) radar-based scene tracking (Task C2B), and (d) yaw-rate-based road and host state estimation (Task C2A). These four road and host state estimation approaches are being correlated and fused by the Data Fusion Task (C1) and provided parametrically to the Tracking and Identification Task. The fused road and host state information provides an improved estimate of the roadway shape/geometry in the region ahead of the Host vehicle, and an improved estimate of the Host vehicle’s lateral position and heading within its own lane. This information is being incorporated into the Tracking and Identification functions to provide more robust roadside object discrimination and improved performance at long range, during lane change maneuvers, and during road transitions. In addition, a new radar-based roadside object discrimination algorithm is being developed to cluster and group roadside stationary objects, and the first-generation truck discrimination algorithms developed during the previous ACAS program are being enhanced. Furthermore, a new yaw-rate-based host lane change detection algorithm is being developed.
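The fusion itself is performed under Task C1, and its method is not described here. Purely as an illustrative sketch, one common way to combine several parametric estimates of a shared quantity such as road curvature is inverse-variance weighting:

    def fuse_curvature(estimates):
        """Illustrative inverse-variance fusion of road curvature estimates
        from several sources (vision, GPS/map, scene tracking, yaw rate).
        Each entry is (curvature [1/m], variance).  This is a sketch of the
        general technique, not the Data Fusion Task's actual algorithm."""
        weights = [1.0 / var for _, var in estimates]
        fused = sum(w * c for (c, _), w in zip(estimates, weights)) / sum(weights)
        return fused, 1.0 / sum(weights)

    # Example: four sources agreeing on a gentle left curve of roughly 1/800 m
    sources = [(0.00120, 4e-7), (0.00131, 9e-7), (0.00125, 2e-7), (0.00118, 1.6e-6)]
    print(fuse_curvature(sources))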

Work Accomplished and Research Findings

Under this task, accomplishments have been made in the areas of Path Algorithm Development, Host Lane Change Detection, Roadside Distributed Stopped Object Detection, and simulated Road Scenario generation. In addition, significant progress has been made on the testing and integration of the various Target Tracking and Identification sub-systems.

Path Algorithm Development

During the first year of the program, enhancements to the target selection algorithms were developed to improve performance during curve transitions and host lane changes. Modifications were made to compute target lateral lane positions using the road and Host state derived from the radar based scene tracking sub-system (Task C2B), and to use this information to better distinguish between in-lane and adjacent-lane vehicles. Improvements were also made to shift the target selection zone to the adjacent lane during host lane changes, and to alter the zone’s characteristics while the host is settling into the new lane.

Host Lane Change Detection Development

Prior to the start of the ACAS FOT program, Delphi began an effort to develop and evaluate alternative host lane change classifiers. The classifiers were designed to satisfy two requirements: (a) a lane change must be detected before it is approximately 50% complete; and (b) false lane-change detections, whose cost is very high, must be kept to a minimum. A variety of neural network classifiers, decision-tree classifiers, and individual template-matching classifiers were constructed. In addition, ensemble classifiers consisting of various combinations of these individual classifiers were also constructed. The inputs to each classifier have included various combinations of yaw-rate data, heading angle data, and lateral displacement data (Figure 10.1); the outputs denote whether the host vehicle is currently making a left lane-change, a right lane-change, or being driven in-lane.


Figure 10.1 Sample Input Data - (a) Yaw Rate, (b) Heading Angle, (c) Lateral Displacement

During the past year, refinements have been made to the core host lane change detection algorithms. Thus far, an ensemble classifier consisting of three neural networks has shown the most promise. Tests on a very limited amount of data suggest that this classifier can detect approximately 50% of lane changes while generating on the order of 5-10 false alarms per hour of driving. Delphi is continuing to investigate techniques for improving this performance. In addition, the neural network ensemble classifier is currently being incorporated into the target tracking and identification simulation.
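The report does not specify how the three networks’ outputs are combined; the sketch below assumes a simple voting rule in which a lane change is declared only on a unanimous vote, reflecting the high cost assigned to false detections:

    from collections import Counter

    def ensemble_vote(classifiers, features):
        """Hypothetical voting ensemble.  Each classifier maps a feature
        window (yaw rate, heading angle, lateral displacement samples) to
        'left_lane_change', 'right_lane_change', or 'in_lane'.  A lane
        change is declared only when the vote is unanimous."""
        votes = Counter(clf(features) for clf in classifiers)
        label, count = votes.most_common(1)[0]
        if label != "in_lane" and count < len(classifiers):
            return "in_lane"   # non-unanimous lane-change votes are rejected
        return label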

Roadside Distributed Stopped Object Detection

During the past year, an effort was initiated to detect roadside distributed stopped objects (DSOs) using various linear and curve fit approaches. Examples of distributed stopped objects include guardrails, rows of parked vehicles, and rows of fence posts. Two advantages of having this information are that it allows: (a) discrimination of false targets from real targets during curve transitions and pre-curve straight segments, and during host lane changes; and (b) utilization of the geometry of the distributed stopped object to aid in predicting curves in the region ahead of the host vehicle.

This task is still in the earliest stages of development. Several algorithms have been tried, with varying results. Much of the work has focused on finding useful ways to separate radar returns associated with DSOs from the other stopped object returns. In the early algorithms, it has been assumed that distributed stopped objects will produce returns that form a distinguishable line, and the focus of these algorithms has been to find that line amid all of the stationary object returns. Other algorithm efforts have concentrated on defining the geometry of the DSO to aid in predicting the location of the road edge. Figure 10.2 shows an example of Delphi’s DSO clustering approach. The figure depicts stopped objects taken from a single frame of data that was collected with the HEM ACC2 radar during a road test. The circles in the figure represent stopped object returns that were seen for the first time in the current frame. The squares represent "persistent" stopped object returns (i.e.: returns that have appeared on enough successive scans to be considered real objects). The triangles represent formerly persistent radar returns that have disappeared momentarily and are being "coasted" by the radar tracker. Color-coding is used to denote the radar track stage of each return.

Figure 10.2 Distributed Object Detection Example

In this figure, the Host Vehicle is on a road with a guardrail on the right side, approaching a left turn, and will then encounter a T-intersection. Some cars are parked along the other road. The algorithm was able to detect the guardrail and not be distracted by the parked cars. Work on this effort will continue as time permits.
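As a simple stand-in for the line-finding idea described above (the actual Delphi clustering is more involved), a least-squares line can be fit to the persistent stationary returns and accepted only if enough returns lie close to it:

    import math

    def fit_guardrail_line(points, tol=1.0, min_inliers=5):
        """Illustrative DSO test: fit a least-squares line to persistent
        stationary radar returns (x forward, y left, meters) and keep it only
        if at least min_inliers returns lie within tol meters of the line.
        Thresholds are hypothetical; real returns would first be screened by
        track status and range."""
        n = len(points)
        if n < min_inliers:
            return None
        mx = sum(x for x, _ in points) / n
        my = sum(y for _, y in points) / n
        sxx = sum((x - mx) ** 2 for x, _ in points)
        sxy = sum((x - mx) * (y - my) for x, y in points)
        if sxx < 1e-9:
            return None                      # returns stacked at one range
        slope = sxy / sxx                    # guardrails run roughly along x
        intercept = my - slope * mx
        denom = math.hypot(slope, -1.0)
        inliers = [p for p in points
                   if abs(slope * p[0] - p[1] + intercept) / denom <= tol]
        return (slope, intercept) if len(inliers) >= min_inliers else None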

Road Scenario Generator

During the past year, DDE has developed a Matlab™ based road scenario generator that propels the host and various scene targets along different predefined road scenarios. The model includes a host steering controller, radar model, and yaw rate and speed sensor models. The scenario generator also allows host and target weaving and lane change behavior to be specified. These simulated scenarios are used to evaluate the Target Tracking and Identification algorithms.
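A minimal skeleton of such a generator might look as follows; the actual Matlab™ model adds the steering controller, the radar and sensor noise models, and the weaving and lane change behaviors, so this sketch shows only the core propagation loop:

    import math

    def run_scenario(curvature_profile, speed, dt=0.1, steps=600):
        """Propel a host along a road whose curvature is a function of
        traveled arc length, emitting an idealized (noise-free) yaw rate
        'sensor' sample each step.  Purely illustrative."""
        x = y = heading = dist = 0.0
        for _ in range(steps):
            curvature = curvature_profile(dist)   # 1/m at current arc length
            yaw_rate = curvature * speed          # rad/s on the lane center
            heading += yaw_rate * dt
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            dist += speed * dt
            yield {"t": dist / speed, "x": x, "y": y, "yaw_rate": yaw_rate}

    # Example: 200 m straight, then a 500 m-radius left curve
    profile = lambda s: 0.0 if s < 200.0 else 1.0 / 500.0
    for sample in run_scenario(profile, speed=30.0):
        pass   # feed samples to the tracking and identification algorithms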

Delphi Engineering Development Vehicles

During the past year, DDE has modified three of its engineering development vehicles that are being used to support the ACAS FOT Program. These vehicles are: (a) 1994 Toyota Lexus LS400, (b) 1994 GM Cadillac Seville, and (c) 1998 Opel Vectra. These vehicles have been modified to provide the basic functionality of fully integrated ACC and FCW systems.

Lexus LS400

The Lexus LS400 was DDE’s first attempt at developing a completely integrated FCW system. The planning and build of this vehicle was initiated during the negotiation of the first ACAS Cooperative Agreement, and it was completed and demonstrated prior to the ACAS contract award date of January 1995. This demonstration vehicle was then used extensively during the first ACAS Program in order to further the understanding of the pertinent underlying issues associated with collision avoidance technologies. During the past year, the Lexus has been upgraded. The vehicle has been rewired, and the serial interfaces between the vehicle interface processor and the radar, yaw rate, and target selection processor have been converted to CAN. In addition, an HE Microwave (HEM) ACC2 radar and Delphi’s vision-based lane tracking processor and camera have been integrated on the vehicle. This updated sensor suite has been used to collect correlated radar, vision, and vehicle data to support the Data Fusion, Vision Based Lane Tracking, and Target Selection tasks.

Opel Vectra

The Opel Vectra is a radar/laser based ACC vehicle that was developed internally within Delphi in 1998. In February of this year, the Vectra ACC vehicle was prepared for a May delivery to UMTRI. The Vectra was upgraded with a HEM ACC1 pilot radar, and its control and target selection algorithms were refined. In addition, various CAN bus termination problems were resolved. The vehicle was used by UMTRI for various FOT related data collection and human use studies, and it will be later used by Delphi to support its FOT related ACC integration tasks.

Cadillac Seville

The Cadillac was Delphi’s second generation integrated FCW system. The planning and build of this vehicle was initiated during the contract negotiation phase of the first ACAS Program. It was completed and demonstrated after the ACAS Program contract award date of January 1995. This demonstration vehicle has proved to be a useful learning tool in expanding the knowledge base of the underlying issues associated with collision avoidance technologies.

This vehicle is currently equipped with two centralized processors, a Target Selection Processor (TSP) and a Vehicle Interface Processor (VIP). The TSP and VIP are specialized hardware components designed by Delphi. The VIP is the primary interface between all of the vehicle subsystems. It provides a platform to implement Delphi’s FCW threat assessment algorithms and to control the Driver-Vehicle Interface (DVI). The DVI warning cues include: (a) a customized audio system with capabilities to mute the audio and generate various warning tones, (b) a tactile response in the form of a short-duration brake pulse, and (c) visual warning cues generated on an improved Delphi Eyecue™ color re-configurable HUD.

During the past year, the Cadillac has been upgraded. The vehicle has been rewired, and the serial interfaces between the vehicle interface processor and the radar, yaw rate, and target selection processor have been converted to CAN. In addition, the following new components have been integrated on the vehicle and interfaced to the Target Selection Processor (TSP): (a) an ACC2 radar, (b) Delphi’s vision-based lane tracking processor, and (c) a real-time PC104 implementation of Delphi’s radar-based scene tracking algorithms (Task C2B). This vehicle will serve as the primary test bed and data collection platform for Delphi’s target selection and scene tracking subsystems.

The HEM ACCA radar that is targeted for the FOT prototype vehicle is currently being integrated on the bench with Delphi’s Target Selection Processor. During the next month, the ACCA radar will be installed and integrated on the Cadillac EDV. Subsequently, a series of new field tests will be held to characterize the performance of the ACCA radar and target selection subsystems.

Diagnostic Tools and Sub-System Validation

Each Delphi EDV test vehicle has been equipped with a suite of tools that is used to (a) observe near real-time and real-time system behavior while performing system integration on laboratory bench hardware; (b) evaluate real-time system performance while performing on-road vehicle testing; (c) perform in-depth ACC/FCW system data analysis and quantify ACC/FCW system performance; and (d) iterate, refine, and validate key algorithm improvements (i.e.: ACC Control Algorithms, Scene Tracking Algorithms, Target Selection Algorithms, etc.) with real on-road data, both in simulation and on the lab bench.

Figure 10.3 summarizes Delphi’s Target Tracking and Identification validation and refinement process and the suite of tools that are used. The data collection and validation process can be performed in real-time on a vehicle, on lab bench hardware, and in the PC environment. A PC-type laptop computer is used to interface to the CAN bus and to host the various data collection tools. The tools consist of various graphically oriented custom Delphi data collection utilities and the commercial Canalyzer™ CAN bus utility by Vector CANtech. In addition, a video system (i.e.: camera, 8mm video recorder, and mixer) is used to mix time-stamped video with the graphical output from the Delphi diagnostic tools.

Figure 10.3 Tracking and Identification Validation and Refinement Process

Delphi’s custom utilities and tools are used to dynamically record and time stamp internal performance results and interfaces for various key ACC/FCW subsystems, and to graphically depict the target environment and road geometry in front of the ACC/FCW vehicle. The Vector Canalyzer™ CAN bus utility is used to dynamically record and time stamp all of the system’s CAN bus messages and events, in real-time. The data recorded by both the Canalyzer™ and the custom Delphi tools, as well as by Delphi’s Matlab™ based road scenario generator, is used to build up a scenario database that can be played back through the real-time hardware in a laboratory setting for more detailed post-processing engineering investigations. The scenario database can also be replayed through Delphi’s Target Tracking and Identification sub-systems to refine and iterate the key algorithm components.
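Conceptually, the replay step reduces to a loop of the following form; the frame fields shown are hypothetical, since the actual message formats are defined by the CAN interfaces:

    def replay_scenario(frames, target_selection, log):
        """Push time-stamped recorded (or simulated) frames through a target
        selection routine and log the outputs so that successive algorithm
        revisions can be compared on identical data.  A sketch only."""
        for frame in frames:                  # frames sorted by timestamp
            result = target_selection(frame["radar_tracks"],
                                      frame["yaw_rate"],
                                      frame["speed"])
            log.append((frame["t"], result))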

Figure 10.4 depicts the Delphi graphical target display in a split screen video format. The middle and lower left portion of the display contains text describing the radar, target selection, and optionally the ACC controller status. The target features of the primary in-path target are highlighted in red. The upper portion of the figure depicts a real-time graphical representation of the detected radar scene targets (i.e.: synthetically generated by the laptop computer). The Host vehicle’s perceived lane boundaries, based on the predicted road model, are also graphically drawn. The exterior color of the rectangles is based on relative target speed. For example, green rectangles denote targets "moving away" from the Host vehicle, red rectangles denote targets that the Host vehicle is "closing on", magenta rectangles denote "oncoming" targets, yellow rectangles denote targets that are "matched in speed" to the Host, and white rectangles depict "stationary" targets. The relative size of the rectangular-shaped "targets" is based on the target range and in-path target status. The "narrow" rectangle boxes denote "non-primary in-path" targets. The "large" rectangle with the dark blue center denotes the "primary in-path target".

Figure 10.4 Delphi Diagnostic Tracking and Identification Display

The lower right portion of the figure is used to display real-time video imagery of the roadway environment ahead of the Host vehicle. This video-based diagnostic system is an extremely useful tool. It provides a mechanism to review lengthy time segments of "on-road" data and to isolate those time segments that had marginal or questionable performance. Once identified, the voluminous files of more detailed sensor and system data, recorded together with the video, can be carefully investigated to determine the precise cause of any observed anomalies or unusual results.
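The report gives the color assignments but not the underlying thresholds; the sketch below reconstructs the classification with a hypothetical 0.5 m/s speed threshold:

    def target_color(range_rate, host_speed, threshold=0.5):
        """Map target kinematics to the display colors described above.
        range_rate is the radar range rate (m/s, positive opening); for an
        in-path target, ground speed = host_speed + range_rate.  The
        threshold value is illustrative, not Delphi's."""
        ground_speed = host_speed + range_rate
        if abs(ground_speed) <= threshold:
            return "white"       # stationary
        if ground_speed < 0.0:
            return "magenta"     # oncoming
        if range_rate > threshold:
            return "green"       # moving away
        if range_rate < -threshold:
            return "red"         # closing
        return "yellow"          # matched in speed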

Integration and Test

A preliminary design review of all of the Tracking and Identification sub-systems was held in June 2000. The sub-system interfaces between the Tracking and Identification sub-systems and the other vehicle sub-systems (i.e.: radar, data fusion, threat assessment, etc.) have been defined and they will be mimicked and implemented on EDV test vehicles.

In addition, Delphi has developed a radar tracking and target selection test plan with over 15 distinct scenarios. The scenarios include moving and stationary roadway and roadside objects, and cover both normal driving and lane change maneuvers on straight, curved, and curve entry/exit type roadways. The scenarios also include varying types of test vehicles and objects (sedans, SUVs, trucks, motorcycles, bridges, and roadside clutter). The performance of the HEM ACC2 radar tracker and Delphi’s target selection algorithms has been evaluated against real and simulated sensor and system performance data that matches many of the critical test scenarios.

During the first part of the year, a three-day field test was held on Los Angeles area freeways and at the Camarillo airport. The collected data was analyzed to track down software bugs and identify performance problems. The target selection algorithms were refined and iterated off-line by replaying the collected data through the target selection simulation. The HEM tracker algorithms were also enhanced.

During the Spring of 2000, another three-day field test was held to collect real time sensor and system performance data and to evaluate both ACC2 radar tracker and target selection algorithm improvements. Significant improvements were observed in the ACC2 radar tracker’s performance against stopped objects. In addition, improved target selection performance during host lane change and curve entry/exit maneuvers was observed when additional road and host state data was used (i.e.: from scene tracking or lane tracking).

Plans through December 2000

During the next six months, the development of all of the key target selection algorithms will continue (i.e.: path algorithms, distributed stationary object clustering, truck grouping, and host lane change detection). In addition, the ACCA sensor will be integrated on the Delphi EDV Cadillac test vehicle. The performance of the ACCA sensor and Target Selection subsystems will then be benchmarked against the Target Selection test scenarios. Key areas for algorithm improvement will be identified, and algorithm refinements will be made via simulation and bench tests. Furthermore, correlated ACCA sensor, vehicle sensor, vision, target selection, and high accuracy GPS data will be collected to support the development of the Target Selection, Scene Tracking, and Data Fusion tasks, and to provide ground truth.

Figure 10.5 Task C2A Schedule

10.2 Scene Tracking Approach Development (Task C2B)

Objectives

Scene tracking is an enhancement to the conventional path prediction process in which preceding vehicles are classified as being in-lane or not in-lane. The conventional yaw rate based road estimation approach cannot reliably predict changes in road curvature ahead of the host, since the road curvature is assumed to be constant. Moreover, the conventional yaw rate based approach also assumes that the host is not weaving in lane or changing lanes. In the scene tracking approach the paths of the preceding vehicles are observed in order to estimate the upcoming forward road curvature. This approach assumes that most of the preceding vehicles are staying in their lanes, and that there are reasonable constraints on the rate at which the road curvature can change. In addition to estimating the upcoming road shape, the scene tracking approach also estimates the angular orientation of the host vehicle in its lane, thereby accounting for in-lane weaving or lane changing by the host.

Approach

Two scene tracking approaches are currently under consideration: (1) the original ‘parallel’ approach; and (2) the newer ‘unified’ approach.

In the parallel approach, separate target tracking filters estimate the curvature of the trajectory of each target, along with the target’s heading angle. The curvature-at-range information from all of the targets is then combined in a road curvature estimation filter, in which parameters in a road curvature model are estimated. A host path angle filter combines all of the targets’ heading angle information and the road curvature estimates to estimate the host vehicle’s path angle, which is the angle between the host’s longitudinal axis and the local lane tangent. Finally, the path angle and road curvature estimates are used, along with the target coordinates, to estimate the lateral lane position of each target relative to the host’s lane.
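As an illustrative sketch of that final step (the filter formulations themselves are not given in this report), the target coordinates can be rotated by the host’s path angle and compared against an arc approximation of the estimated road:

    import math

    def target_lane_offset(x, y, curvature, host_path_angle):
        """Estimate a target's lateral offset from the host's lane center,
        given its position (x forward, y left, host frame), the estimated
        road curvature (1/m, positive for left curves), and the host path
        angle (radians, positive when the host points left of the lane
        tangent).  Sign conventions and the arc model are assumptions."""
        xr = x * math.cos(host_path_angle) - y * math.sin(host_path_angle)
        yr = x * math.sin(host_path_angle) + y * math.cos(host_path_angle)
        y_lane = 0.5 * curvature * xr * xr   # lane center at range xr
        return yr - y_lane                   # near zero -> in the host's lane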

In the unified approach, a single unified filter estimates all of the quantities above. Separate determination of target lane changes may be necessary in order to keep those targets from corrupting the road shape estimates.

Work Accomplished

The primary accomplishments to date include: (1) development of the core part of the unified approach; (2) experimentation with rules to identify and reject maneuvering targets and outliers in the unified approach; (3) conversion of the scene tracking algorithm from Matlab™ to the C language; (4) implementation of a real-time scene tracking algorithm on a PC104 computer; (5) integration of scene tracking software with baseline RCAP path algorithms; and (6) evaluation and tuning of algorithms with real world radar target data.

Plans through December 2000

Work is proceeding on several fronts: (1) improving the rejection of maneuvering targets in the unified approach, particularly without adding parallelism, (2) improving the performance of the unified approach when the host changes lanes, and (3) developing a new way of combining target information in the parallel approach.

Figure 10.6 Task C2B Schedule

10.3 Enhanced GPS Approach Development (Task C2C)

Objectives

The objectives of this task are to develop and implement a path prediction system capable of aiding the radar in eliminating irrelevant targets and assisting in classifying detected targets as obstacles/non-obstacles, using dead reckoning, differential GPS, and a digitized roadway map database.

Approach

In this approach, path prediction is achieved by continuously estimating the location of the vehicle on the road, matching the vehicle location to a point on a road in the stored roadway map, tracking the path traversed by the vehicle, and extracting the upcoming road geometry from the map. The objectives of this task are met using DGPS, dead reckoning sensors, and a digitized road map. The overall functional block diagram of this subsystem is shown in Figure 10.7.

Figure 10.7 Functional Diagram of the Map Based Path Prediction System

DGPS is used to compute the heading and distance traversed by the vehicle. The accuracy of the heading and distance determination is further enhanced by computing the heading angle and distance relative to the previous position of the host vehicle. Despite the benefits that DGPS-based systems offer, they are seriously affected by outages in GPS signals that occur in the presence of tunnels and tall buildings, among other things. In order to overcome this shortcoming, the developed approach is augmented with dead reckoning sensors: wheel speed sensors and the odometer are used for distance measurement, and the yaw rate sensor, compass, and differential wheel sensors are used for angle measurement.
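A minimal sketch of the dead reckoning update, assuming distance from the wheel speed/odometer and heading change from the yaw rate sensor (a compass or differential wheel speeds could substitute for the latter):

    import math

    def dead_reckon(x, y, heading, wheel_speed, yaw_rate, dt):
        """Propagate the vehicle position through a GPS outage.  Position in
        meters, heading in radians, wheel_speed in m/s, yaw_rate in rad/s.
        Illustrative only; a fielded system would also estimate and correct
        sensor biases."""
        heading += yaw_rate * dt
        ds = wheel_speed * dt
        x += ds * math.cos(heading)
        y += ds * math.sin(heading)
        return x, y, heading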

The combination of dead reckoning and DGPS with the map database has been explored to obtain a map-based path prediction system. DGPS, when used in conjunction with the map database, can provide fairly accurate path prediction except during GPS signal outages. At such times, dead reckoning is expected to carry on the task of path prediction.
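The map matching step itself can be illustrated by snapping the estimated position to the nearest stored road segment; this is a minimal sketch, not the GM-developed map-matching scheme, which would also use heading and path history to disambiguate nearby roads:

    import math

    def match_to_map(position, road_polylines):
        """Snap a DGPS/dead-reckoned position (x, y in a local metric frame)
        to the nearest point on any stored road polyline.  road_polylines is
        a hypothetical {road_id: [(x, y), ...]} structure."""
        best = (float("inf"), None, None)
        px, py = position
        for road_id, pts in road_polylines.items():
            for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
                dx, dy = x2 - x1, y2 - y1
                seg2 = dx * dx + dy * dy
                t = 0.0 if seg2 == 0.0 else max(0.0, min(1.0,
                    ((px - x1) * dx + (py - y1) * dy) / seg2))
                qx, qy = x1 + t * dx, y1 + t * dy
                d = math.hypot(px - qx, py - qy)
                if d < best[0]:
                    best = (d, road_id, (qx, qy))
        return best   # (distance to road, matched road id, matched point)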

The above discussion has assumed the availability of an accurate map database (a major component of the discussed system) that meets the design specifications. It should be noted that such a database is not commercially available at the present time. Within the limited scope of the ACAS FOT project, AssistWare has been contracted to aid in the development of maps that are superior to those commercially available.

The AssistWare system (Figure 10.8) is an integrated forward road geometry measurement system consisting of two parts – a vision system and an enhanced map development system. The vision system is a version of AssistWare’s commercial Safe-Trac lane tracking product, and is meant as the fallback vision system for the ACAS FOT project. It differs from the other three vision systems in that it uses a short-range visual field (39 deg. field of view), which provides the ability to obtain highly robust measurements of lane width, and of vehicle position and orientation in the lane, under a wide range of ambient conditions. It is expected to give the highest performance of the vision systems within the limited scope in which it operates, for two reasons. First, the short range gives a resolution and contrast advantage over the long-range systems. Second, the system has been in development for several years and has thousands of hours of testing behind it.

Figure 10.8 AssistWare System Components

As part of the development of superior maps, AssistWare is in the process of developing a digital mapping module along with on-the-fly map generation and refinement that allows the creation of more accurate maps by multiple traversals over the route. A software module will be developed to facilitate the integration of AssistWare’s map matching method with the General Motors developed map-matching scheme.

Work Accomplished

The system specification has been completed. It covers the processor hardware and software; the signal interfaces and CAN messages between the sensors and the processor; the signal interfaces and CAN messages between this subsystem and other subsystems; the CAN messages for fault diagnostics and subsystem status; the sensors (DGPS and dead reckoning); and the roadway map database. All of the subsystem components, including a preliminary version of the AssistWare system, have been integrated into the GM Engineering Development Vehicle.

Software development of the sensor drivers is complete, and sensor tests have yielded satisfactory performance results. The development and implementation of the algorithms that integrate DGPS and the digitized roadway maps, which form the basis for retrieving forward road geometry, is complete. Limited testing of this implementation has been conducted, and the results are very promising. To enable continuous improvement of the algorithms, a laboratory setup for testing and tuning is available for use.

Math-based path prediction models using DGPS, dead reckoning, and roadway maps have been designed and developed for incorporation into the simulation of the overall system model being developed by UC Berkeley-PATH. These models have been provided to PATH, along with parameters for error estimates of standard GPS signals, DGPS, and map databases obtained from realistic situations.

The initial integrated AssistWare system was delivered to GM in late February 2000. The vision portion of this system is complete and for the most part is unchanged from the commercial Safe-Trac system. The major development for this system is in the map processor. The CAN interface hardware is located in this processor. All CAN hardware and message protocol development for this system is complete. The enhanced map system development is currently in progress. The system is installed on the GM engineering development vehicle. A second system remains at AssistWare where further development is taking place.

Figure 10.9 Task C2C Schedule

