Automatic Critical Event Extraction and Semantic Interpretation by Looking-Inside
Abstract
Data-driven systems are becoming prevalent in driver assistance, and with large-scale data collection efforts such as the 100-Car Study and the second Strategic Highway Research Program (SHRP2), there is a need for automatic extraction of critical driving events and semantic characterization of driving. This is especially necessary for videos looking at the driver, since manual extraction and annotation is time-consuming and subjective. This labeling process is often overlooked and undervalued, even though data mining is the first critical step in the design and development of machine-vision-based algorithms for predictive safety systems. In this paper, we define and implement quantitative measures of the vocabularies often used by data reductionists when labeling videos looking at the driver. This is demonstrated on a significantly large amount of data, containing almost 200 minutes (600,000 frames total from two looking-in videos) of multiple drivers, collected by UCSD-LISA. We qualitatively show the advantages of automatically extracting such information on this relatively large-scale data.
Authors
- Sujitha Martin (University of California, San Diego)
- Eshed Ohn-Bar (University of California, San Diego)
- Mohan M. Trivedi (University of California, San Diego)
Topic Areas
Data Mining and Data Analysis, Driver Assistance Systems, Human Factors
Session
Th-D8 » Data Mining and Data Analysis VI (15:30 - Thursday, 17th September, San Borondón B2)