Imagine you're a passenger in a vehicle driving along an expressway. A bus cuts into your path unexpectedly to pick up a passenger, cyclists dart in and out of traffic with heart-stopping agility, and pedestrians step into the road seemingly at random. Now imagine attempting to drive through this organized mayhem without using your eyes. Impossible, right?
However, this is exactly the problem that self-driving car engineers are solving. Self-driving cars don't "see" the way humans do, with two eyes. Instead, they're fitted with a suite of superhuman senses: a collection of cameras, radar, and lasers that together build a constantly updating, 360-degree digital map of the world.
So, how does this incredible technology work? Let’s dissect the three primary “senses” of an autonomous vehicle.
1. The Eyes: A Network of Cameras
Like us, a self-driving vehicle senses the world primarily through sight. These vehicles are festooned with cameras, often more than a dozen, each with a different task.
• What they do: Cameras are great at detecting colours, reading text, and distinguishing shapes. They read stop signs, traffic lights, and lane markings on the road, and they can spot pedestrians, cyclists, and other cars. Sophisticated AI software, known as computer vision, processes the camera feeds in real time to work out what each object is (a toy sketch of this detection loop appears after this list).
• Their weakness: Like human eyes, cameras can be fooled by bad conditions. Thick fog, heavy rain, or direct sun glare at sunset can effectively blind them. They are also poor at judging the precise distance and speed of far-away objects, which is why they need help from the other senses.
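To make the detection loop concrete, here is a minimal sketch using OpenCV's bundled Haar-cascade pedestrian detector as a toy stand-in for the far more capable neural networks production systems actually run (the `dashcam.mp4` filename is just a placeholder):

```python
import cv2

# A classic pedestrian detector that ships with OpenCV. Real vehicles run
# sophisticated neural networks; this only shows the shape of the pipeline.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

capture = cv2.VideoCapture("dashcam.mp4")  # placeholder video file
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is a bounding box (x, y, width, height)
    # around a person-like shape in the current frame.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.05, 3):
        print(f"Pedestrian-like object at x={x}, y={y}, size {w}x{h}")
capture.release()
```

The key idea is the loop itself: every frame from every camera is analyzed, many times per second, before the car decides anything.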
2. The Motion Detector: Radar
Radar has been used for decades in airplanes and ships, and it's a crucial part of a self-driving car's sensory toolkit.
• What it does: Radar systems emit radio waves that reflect off objects and return to the sensor. By measuring how long the waves take to come back, the vehicle can precisely determine an object's distance, speed, and direction of travel (the short sketch after this list shows the basic arithmetic). Radar is excellent at picking up other vehicles, even distant ones.
• Its strength: Unlike cameras, radar performs reliably in difficult conditions. It "sees" through rain, fog, snow, and darkness without issue. It's the vehicle's trusty, all-weather motion detector.
• Its weakness: Radar can tell that something is present and how fast it's moving, but not what it is. A large plastic bag blowing across the highway and a small child can look much the same to a radar sensor.
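To see why the timing works, here is a minimal sketch of the two core radar calculations: range from a pulse's round-trip time, and radial speed from the Doppler shift of the returning wave (77 GHz is a common automotive radar band; the example echo numbers are illustrative):

```python
# Radio waves travel at the speed of light, so even a round trip of a few
# hundred metres takes well under a microsecond.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the target: the wave covers the gap twice (out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

def speed_from_doppler(emitted_hz: float, shift_hz: float) -> float:
    """Radial speed of the target from the Doppler shift of the echo.
    Positive means the target is closing in on the sensor."""
    return SPEED_OF_LIGHT * shift_hz / (2 * emitted_hz)

# An echo returning after 400 nanoseconds puts the target about 60 m away.
print(range_from_echo(400e-9))           # ~59.96 m
# A 77 GHz radar seeing a +5.13 kHz shift: target closing at ~10 m/s.
print(speed_from_doppler(77e9, 5.13e3))  # ~9.99 m/s
```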
3. The Depth Mapper: LiDAR (Light Detection and Ranging)
This is the tech that gives autonomous cars a genuine superpower. LiDAR is the rotating cylindrical device you often spot on the roof of autonomous test cars.
• What it does: LiDAR works like radar but uses invisible laser beams rather than radio waves. The unit rotates, emitting millions of laser pulses per second. The pulses reflect off everything in the environment (other vehicles, buildings, curbs, pedestrians) and bounce back to the sensor, yielding an extremely detailed, real-time 3D map of the car's surroundings (a sketch of the geometry appears after this list).
• Its strength: LiDAR delivers accurate 3D depth perception. It generates a "point cloud" so rich in detail that it can not only distinguish a person from a tree but even pick up the subtle motion of a pedestrian's arms and legs to anticipate which direction they may step.
• Its weakness: LiDAR is currently the priciest sensor in the suite. Like cameras, it can also be degraded by very heavy rain or fog, which scatter the laser beams.
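Here is a rough sketch of how a single laser return becomes one point in that cloud, assuming the sensor reports each return as a distance plus the beam's horizontal (azimuth) and vertical (elevation) angles:

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one laser return (distance plus beam angles) into an
    (x, y, z) point in the car's coordinate frame. Millions of these
    per second accumulate into the 3D "point cloud"."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return from 20 m away, 15 degrees to the left, 2 degrees below horizontal:
print(lidar_return_to_point(20.0, 15.0, -2.0))
# ~(19.31, 5.17, -0.70): 19.3 m ahead, 5.2 m left, 0.7 m below the sensor.
```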
Putting It All Together: The AI “Brain” and Sensor Fusion
Having these three senses is one thing, but making sense of all that data is where the magic happens. That's where the car's central AI "brain" takes over, using a process known as "sensor fusion."
In sensor fusion, the AI takes the data streams from all the sensors and merges them into a single, highly precise model of reality. For instance, the camera may detect a person-like object, the radar measures its speed and distance, and the LiDAR builds a detailed 3D model of it. By combining this data, the AI can say with near certainty: "That is a pedestrian, 30 meters away, moving towards the road at 5 km/h." A toy sketch of this merging step follows.
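Every name below is hypothetical, and real systems weigh each sensor's uncertainty with probabilistic techniques such as Kalman filters rather than simply combining values, but the sketch shows the basic idea of building one track from three sensors:

```python
from dataclasses import dataclass

@dataclass
class FusedTrack:
    label: str         # what it is, from the camera's computer vision
    distance_m: float  # how far away, from radar/LiDAR ranging
    speed_mps: float   # how fast it's moving, from the radar Doppler shift
    heading: str       # where it's going, from LiDAR point-cloud motion

def fuse(camera_label, radar_distance_m, radar_speed_mps, lidar_heading):
    """Merge one object's readings from all three sensors into a single
    track. This toy version just combines each sensor's best estimate."""
    return FusedTrack(camera_label, radar_distance_m,
                      radar_speed_mps, lidar_heading)

# 1.4 m/s is roughly 5 km/h.
track = fuse("pedestrian", 30.0, 1.4, "toward_road")

# A grossly simplified decision rule built on top of the fused model:
if (track.label == "pedestrian" and track.heading == "toward_road"
        and track.distance_m < 50):
    print("Plan: slow down and prepare to stop.")
```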
From this continually updated model, the AI then decides whether to slow down, switch lanes, or come to a complete stop, often far faster and more accurately than a human driver ever could. The challenge of driving on a chaotic expressway is enormous, but by equipping a car with superhuman senses, engineers are moving closer to a solution every day.