Tesla upgrades radar for Model S

Electric car maker Tesla has announced an update to its Autopilot software, version 8.0. Owners of Tesla cars delivered after October 2014 can download the update within two weeks.

After careful consideration, Tesla now believes the radar can be used as a primary control sensor, without requiring the camera to confirm visual image recognition. This is a non-trivial and counter-intuitive problem, because of how strange the world looks in radar. Photons at radar wavelengths travel easily through fog, dust, rain and snow, but anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar.

The first part of solving that problem is having a more detailed point cloud. Software 8.0 unlocks access to six times as many radar objects with the same hardware, with much more information per object.
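To make that concrete, here is a minimal sketch of what a denser radar point cloud might look like as data. These are not Tesla's actual structures; the field names (range_m, azimuth_rad, range_rate_mps, rcs_dbsm) are illustrative of what automotive radars typically report per detection.

```python
# Hypothetical radar frame structures, assumed for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class RadarObject:
    range_m: float          # distance to the reflector
    azimuth_rad: float      # bearing relative to the car's heading
    range_rate_mps: float   # Doppler-measured range rate (negative = closing)
    rcs_dbsm: float         # radar cross-section, a proxy for reflectivity

@dataclass
class RadarFrame:
    timestamp_s: float          # frames arrive roughly every 0.1 s
    objects: List[RadarObject]  # Software 8.0 exposes ~6x more per frame
```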

The second part consists of assembling those radar snapshots, which are taken every tenth of a second, into a 3D “picture” of the world. It is hard to tell from a single frame whether an object is moving or stationary, or to distinguish spurious reflections. By comparing several contiguous frames against the vehicle's velocity and expected path, the car can tell whether an object is real and assess the probability of collision.
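The sketch below illustrates that frame-comparison idea under simple assumptions; it is not Tesla's actual tracker. It assumes detections are already matched across frames and that each carries a Doppler range rate, and the helper names are hypothetical.

```python
FRAME_PERIOD_S = 0.1         # snapshots arrive every tenth of a second
MIN_PERSISTENT_FRAMES = 3    # spurious reflections rarely persist this long

def is_stationary(range_rate_mps: float, ego_speed_mps: float,
                  tol_mps: float = 0.5) -> bool:
    """A stationary object directly ahead closes at exactly the car's own
    speed, so its range rate is approximately -ego_speed."""
    return abs(range_rate_mps + ego_speed_mps) < tol_mps

def is_real(frames_seen: int) -> bool:
    """Treat an object as real only once it persists across several
    contiguous frames."""
    return frames_seen >= MIN_PERSISTENT_FRAMES

def time_to_collision_s(range_m: float, range_rate_mps: float) -> float:
    """Seconds until the range closes to zero at the current rate;
    infinite if the object is not closing."""
    return range_m / -range_rate_mps if range_rate_mps < 0 else float("inf")
```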

The third part is a lot more difficult. When the car approaches an overhead highway road sign positioned on a rise in the road, or a bridge where the road dips underneath, the radar often reports what looks like a collision course. The navigation data and the height accuracy of the GPS are not sufficient to know whether the car will pass under the object. By the time the car is close enough for the road pitch to change, it is too late to brake.
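A hedged sketch of why GPS altitude alone cannot resolve this case: the numbers below are assumptions, but consumer GPS vertical error is commonly several meters, which is on the same order as a bridge's clearance itself.

```python
GPS_VERTICAL_ERROR_M = 5.0   # assumed altitude uncertainty, illustrative

def clearance_is_ambiguous(object_height_m: float, car_height_m: float,
                           altitude_error_m: float = GPS_VERTICAL_ERROR_M) -> bool:
    """True when the estimated gap between the car's roof and the overhead
    object is smaller than the altitude uncertainty, i.e. the car cannot
    tell whether it will pass underneath or is on a collision course."""
    estimated_gap_m = object_height_m - car_height_m
    return estimated_gap_m < altitude_error_m

# Example: a sign estimated 5 m above the road over a 1.5 m tall car leaves
# a 3.5 m gap, well inside a ~5 m altitude error band, so navigation data
# alone cannot rule out a collision.
```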