Miniaturized Sensors for Autonomous Driving
The ongoing development of driver assistance systems toward autonomously driving cars is a current trend in the automotive industry. This shifts the driver's task from active driving to passive monitoring of the system, and as a result many new scientific questions arise in the field of human-machine interaction. For example, it must be ensured that the driving style of the autonomous vehicle is comfortable for the driver and all passengers. To this end, the interior of the car needs to be captured in all three dimensions at high resolution.
Conventional sensor systems can generally fulfill this task only to a limited extent. Consequently, the scope of the work within the joint project COMFYDrive is the development and investigation of a new 3D multi-sensor system for monitoring the interior of an autonomously driving car. For the first time, the system will combine three principles capable of generating three-dimensional information: array cameras, stereoscopy, and active pattern projection.
The system is based on two miniaturized array cameras with a field of view of 70° (diagonal) and an f-number smaller than 3. The compound eyes of insects served as the archetype for the design of the array camera. The micro objective itself consists of two freeform micro lens arrays that are molded onto a lithographically structured glass substrate.
In order to prevent optical crosstalk between neighboring imaging channels, a three-dimensional aperture array is placed above a commercial CMOS image sensor. The freeform micro lens arrays are manufactured by combining an ultra-precision micro-machining process with a step-and-repeat micro-replication technique. This approach allows the cost-effective realization of a large quantity of elements at wafer-level scale.
The imaging optics has a height of less than 2 mm and is therefore well suited for integration into the interior of a car. Each camera module generates 15 x 9 partial images that are subsequently transformed via image processing into a reconstructed image of the full scene or into corrected tile images. Based on these image data, three-dimensional point clouds can be generated at high resolution. Initially, these data will be used to collect information on the condition of the passengers with regard to the current driving situation. In the future, feedback on the driving style of the autonomous car will also be an option.
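To illustrate the processing step described above, the following Python sketch shows how a 15 x 9 grid of partial images could be assembled into one mosaic, and how depth might be recovered from the disparity between neighboring channels via classic stereo triangulation. All function names and parameters here are illustrative assumptions, not the project's actual pipeline, which would additionally perform distortion correction, parallax compensation, and sub-pixel registration.

```python
import numpy as np

def stitch_tiles(tiles, rows=9, cols=15):
    """Assemble a rows x cols grid of equally sized partial images
    (row-major list of 2-D arrays) into one mosaic image.
    Hypothetical sketch: real reconstruction also corrects
    distortion and per-channel parallax."""
    h, w = tiles[0].shape
    out = np.zeros((rows * h, cols * w), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)          # grid position of tile i
        out[r * h:(r + 1) * h, c * w:(c + 1) * w] = tile
    return out

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic stereo triangulation: Z = f * b / d, where f is the
    focal length in pixels, b the channel baseline, d the disparity."""
    return focal_px * baseline_mm / disparity_px

# Example: 135 dummy tiles of 4 x 4 pixels form a 36 x 60 mosaic.
tiles = [np.full((4, 4), i, dtype=float) for i in range(9 * 15)]
mosaic = stitch_tiles(tiles)            # shape (36, 60)

# A 10 px disparity with f = 500 px and b = 20 mm gives Z = 1000 mm.
z = depth_from_disparity(10.0, 500.0, 20.0)
```

Applying the triangulation per pixel of a dense disparity map would yield the three-dimensional point cloud mentioned above; the active pattern projection serves to make that disparity estimation robust on textureless interior surfaces.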
Authors: Jens Dunkel, Alexander Oberdörster, Christin Gassner, Andreas Reimann, Andreas Brückner