Driving to the Future: How Autonomous Vehicles are Changing the Automotive Landscape

Safety first. According to the National Highway Traffic Safety Administration, 94% of serious US traffic accidents are caused by human error. Removing the human from human error would go a long way toward eliminating traffic accidents. The challenge: moving from driver assistance to full automation will take time, investment, and a lot of testing. Getting there will require a combination of hardware sensors and software algorithms to process the data and make decisions, ideally faster and with fewer errors than humans. This transition represents a major economic opportunity for vehicle OEMs and technology developers, with the market segment projected to grow at a CAGR of 26.2% and reach $65.3 billion in market size by 2027.
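
As a quick sanity check on that projection, here is the compounding arithmetic behind it. The 2020 start year and the implied base-year size are assumptions for illustration; the article cites only the 26.2% CAGR and the $65.3 billion 2027 endpoint.

```python
# Back-of-the-envelope check on the market projection above. The 2020
# start year is an assumption for illustration; only the 26.2% CAGR and
# the $65.3B endpoint come from the cited forecast.
CAGR = 0.262           # compound annual growth rate
TARGET_2027 = 65.3     # projected market size, $B
START_YEAR, END_YEAR = 2020, 2027

implied_base = TARGET_2027 / (1 + CAGR) ** (END_YEAR - START_YEAR)
print(f"Implied {START_YEAR} market size: ${implied_base:.1f}B")
# -> about $12.8B, which compounds to $65.3B by 2027 at 26.2% a year
```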

Cameras, radars, and LiDAR: The eyes of autonomous vehicles

Sensor technology is the first of the many components that enable autonomous driving. Sensors detect the various objects surrounding the vehicle and act as the eyes of the autonomous vehicle. For self-driving applications, these sensors must not only detect objects but also precisely determine their position, speed, and trajectory, providing meaningful input to the vehicle’s software system.

Three main types of sensors are currently on the market – cameras, radars, and LiDAR – each with its own unique properties, but all aiming at the same objective: detecting objects with high precision under real-world conditions. It is relatively easy to detect traffic signals using cameras (by color), but far more challenging to distinguish the various moving objects on the road, say a pedestrian from a cyclist, and identify their motion and direction of travel. Sensor cost is currently a major hurdle to overcome before autonomous vehicles can be produced for the mass market: Velodyne’s LiDAR alone can cost up to $75,000, and an entire hardware system (cameras, radars, LiDAR, processing chips, etc.) can cost around $150,000.
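
To make the “easy” camera task concrete, here is a minimal sketch of color-based signal detection. Production perception stacks use trained detectors; the HSV thresholds, the pixel-count cutoff, and the input file name below are all illustrative assumptions.

```python
# Minimal sketch of color-based traffic-signal detection, the "easy"
# camera task described above. Thresholds and file name are illustrative.
import cv2
import numpy as np

frame = cv2.imread("intersection.jpg")  # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis in OpenCV (0-180), so combine two bands.
red_lo = cv2.inRange(hsv, np.array([0, 120, 120]), np.array([10, 255, 255]))
red_hi = cv2.inRange(hsv, np.array([170, 120, 120]), np.array([180, 255, 255]))
red_mask = cv2.bitwise_or(red_lo, red_hi)

if cv2.countNonZero(red_mask) > 500:  # assumed pixel-count cutoff
    print("Possible red signal in view")
```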

Waymo’s fully self-driving ride on public roads: encountering a school bus. Source: Waymo

As with optical hardware in other applications, the key technological differentiators center on sensor performance: range, speed, weather tolerance, and reliability. Cameras can detect and identify objects, but they operate well only under ideal conditions and are limited in range. Tesla has developed a camera-based system for Autopilot and supplemented it with a forward-facing radar to extend its sensing range in adverse weather conditions.
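
One generic way such a camera-plus-radar setup can combine its two range estimates is inverse-variance weighting, sketched below. This is a textbook fusion technique, not Tesla’s proprietary logic; all numbers are illustrative.

```python
# Generic sketch of fusing a camera range estimate with a radar range
# estimate by inverse-variance weighting. A textbook technique, not
# Tesla's actual fusion logic; all numbers are illustrative.

def fuse(range_cam: float, var_cam: float,
         range_radar: float, var_radar: float) -> tuple[float, float]:
    """Return the fused range estimate and its variance."""
    w_cam = 1.0 / var_cam
    w_radar = 1.0 / var_radar
    fused = (w_cam * range_cam + w_radar * range_radar) / (w_cam + w_radar)
    return fused, 1.0 / (w_cam + w_radar)

# In fog the camera estimate is noisy (large variance), so the fused
# result leans on the radar.
print(fuse(range_cam=48.0, var_cam=25.0, range_radar=51.0, var_radar=1.0))
# -> (~50.9 m, variance ~0.96)
```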

While radars have a long detection range, they lack the ability to distinguish objects from one another. Radars are also less expensive than LiDAR and are the enabling technology behind the adaptive cruise control systems already on the market. Metawave, a radar developer backed by several auto OEMs, recently demonstrated technology that detects vehicles and measures their speed up to 300 meters ahead.
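
The physics behind radar speed measurement is the Doppler shift: a target moving relative to the radar shifts the returned frequency in proportion to its radial speed. The sketch below applies the standard relation v = f_d · c / (2 · f_carrier); Metawave’s actual signal chain is not public, and the 77 GHz carrier is a typical automotive value, not a confirmed spec.

```python
# How a radar recovers relative speed from the Doppler shift.
# Illustrative physics only; the 77 GHz carrier is a typical
# automotive radar value, not a specific vendor's spec.

C = 3.0e8         # speed of light, m/s
F_CARRIER = 77e9  # typical automotive radar carrier, Hz

def doppler_speed(doppler_shift_hz: float) -> float:
    """Radial speed implied by a measured Doppler shift."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

# A ~15.4 kHz shift at 77 GHz corresponds to about 30 m/s (~108 km/h).
print(f"{doppler_speed(15.4e3):.1f} m/s")
```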

LiDAR essentially combines the strengths of radars and cameras, providing a wide field of view (even a full 360 degrees) and generating 3D images. Waymo, for instance, claims its LiDAR system can detect human interactions, like hand gestures from cyclists, and feed that information into vehicle operation. LiDAR’s main weakness is performance loss in poor optical conditions such as fog or heavy rain.
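
Those 3D images come from converting each LiDAR return, a range plus the beam’s azimuth and elevation, into a Cartesian point. The sketch below shows that core conversion; real sensors additionally correct for vehicle motion and calibration.

```python
# How one LiDAR return (range + beam angles) becomes a 3D point, the
# building block of the 3D images described above. A minimal sketch;
# real sensors also correct for vehicle motion and calibration.
import math

def lidar_to_xyz(range_m: float, azimuth_deg: float,
                 elevation_deg: float) -> tuple[float, float, float]:
    """Convert a range/angle return to Cartesian coordinates."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A return 40 m out, 30 degrees to the left, slightly below the sensor.
print(lidar_to_xyz(40.0, azimuth_deg=30.0, elevation_deg=-2.0))
```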

Software: The brain of autonomous vehicles

If sensors act as the eyes of autonomous vehicles, then artificial intelligence software is the brain that processes the sensor data and enables self-driving capability. This “brain” comprises five major elements: image processing, simulation, mapping, security, and autonomous operation. The software must process the images produced by vehicle sensors in real time, including generating a 3D view of the vehicle’s environment. Companies are building databases of 3D maps so that autonomous vehicles can combine their sensor data with the mapping database to determine their precise location. Because self-driving is a safety-critical operation, simulation software aims to reduce the risks of on-road testing, while cybersecurity software is more important than ever to reduce potential attack risks.
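
A skeleton of the loop those elements imply might look like the sketch below. Every function and data structure here is a hypothetical illustration of the architecture, not any vendor’s actual API.

```python
# Skeleton of the perception-to-action loop implied by the five software
# elements above. All names are hypothetical illustrations; the stubs
# return dummy data so the sketch runs end to end.

def process_images(camera_frames, lidar_points):
    """Image processing: fuse raw sensor data into a 3D scene model."""
    return {"objects": [], "free_space": lidar_points}

def localize(scene, hd_map):
    """Mapping: match the live scene against a prebuilt 3D map."""
    return {"x": 0.0, "y": 0.0, "heading": 0.0}

def plan_and_act(scene, pose):
    """Autonomous operation: choose steering/throttle commands."""
    return {"steer": 0.0, "throttle": 0.1}

def drive_tick(camera_frames, lidar_points, hd_map):
    scene = process_images(camera_frames, lidar_points)
    pose = localize(scene, hd_map)
    return plan_and_act(scene, pose)

# Simulation and security wrap this loop: simulators feed it synthetic
# sensor data before road testing, and security hardens its inputs.
print(drive_tick(camera_frames=[], lidar_points=[], hd_map={}))
```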

Beyond data-processing software that is reactive in nature, autonomous vehicles also need to be able to predict and anticipate human behavior, say a child running into the street to pick up a ball. This is a major challenge for machines to replicate, and one that Perceptive Automata is working on: the company is developing software that detects subtle visual cues such as body movement, eye contact, and posture, and is training the software to react to these behaviors accordingly.
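
As a toy illustration only, one could imagine scoring the cues named above with a simple logistic model, as sketched below. Perceptive Automata’s actual models are proprietary; the features, weights, and example values here are invented.

```python
# Toy illustration of scoring pedestrian intent from visual cues.
# Perceptive Automata's actual models are proprietary; the features,
# weights, and values below are invented for illustration.
import math

def crossing_probability(body_motion: float, eye_contact: float,
                         facing_road: float) -> float:
    """Logistic score over normalized cues in [0, 1]."""
    # Invented weights: motion toward the curb dominates.
    score = 2.5 * body_motion + 1.0 * eye_contact + 1.5 * facing_road - 2.0
    return 1.0 / (1.0 + math.exp(-score))

# A pedestrian striding toward the curb while looking at the vehicle.
p = crossing_probability(body_motion=0.9, eye_contact=0.8, facing_road=0.7)
print(f"Estimated crossing probability: {p:.2f}")  # high -> slow down
```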

Changing automotive landscape

The complexity of the hardware and the sheer amount of software technology fall outside the core expertise of auto OEMs, which explains why they are paying closer attention to autonomous driving technology startups. From chip makers to tech giants, incumbents previously uninvolved in the automotive sector are now investing in and/or acquiring startups developing self-driving technologies. Below is a list of illustrative examples of recent investments by technology incumbents in autonomous vehicles.

A parting thought: as more hardware and software are required, cars will get more expensive. But there is an upside. Will we still have to pay for insurance? If there is an accident, who is at fault? The driver won’t be the one paying an extra premium for being at fault in an accident.