A Tesla Model 3 Tear-Down After a Hardware Retrofit
By Junko Yoshida & Maurizio Di Paolo Emilio 06.22.2020
Tesla Model 3 is a three-year-old model. And yet, with software updates and a hardware swap (from HW2.5 to HW3.0), Tesla is promising to keep the Model 3 relevant and get it ready for a full self-driving future.
At least, that’s their promise.
A mildly cultish cadre of tech-savvy consumers in Silicon Valley are hooked on Tesla and can’t get enough. They love the car, they like its electric propulsion and they follow Elon Musk’s tweets religiously.
Above all, they admire Tesla’s clean, elegant vehicle architecture, designed from the ground up. Tesla can add new features and even upgrade vehicle performance almost magically, by over-the-air (OTA) software updates. None of the other car OEMs — their vehicles tied to legacy platforms — have engineered so sweeping a method of software-based vehicle update.
Tesla fans tend to worry less about Tesla’s controversial “Autopilot” feature. Their focus is neither on what it does nor on what it doesn’t. They prefer to concentrate on what Autopilot can become, someday, as promised by Tesla. In addition to a series of software updates, Tesla upped the ante last year by rolling out a hardware swap — from Tesla’s HW2.5 to HW3.0.
With HW3.0, Elon Musk claimed in a tweet: “All cars being produced have all the hardware necessary, compute and otherwise, for full self-driving.” We will see what exactly Musk means by “full self-driving.”
What intrigued us is this Tesla-touted transition to HW3.0. What’s under the hood of a Model 3 today, and how will it transform?
Model 3 is a smaller and more affordable EV, first produced in mid-2017. Thanks to the introduction of its home-grown SoC last year, Tesla is promising Model 3 buyers that if they purchase the Full Self-Driving (FSD) software bundle, they’ll get an HW2.5-to-HW3.0 retrofit with a simple service center appointment.
Just to be clear, though, the FSD package today does not yet make a Tesla capable of driving without human intervention. Right now, it’s a series of incremental Autopilot upgrades. Further, the current $7,000 FSD package is scheduled to go up by about $1,000 on July 1st, according to Musk’s tweeted announcement last month.
In this latest “Under the hood” series with System Plus Consulting (Nantes, France), we take a deeper look inside the Tesla Model 3, with a focus on the automotive sensors and the Autopilot ECU Tesla deployed inside Model 3.
Affordability comes first
The computing power inside cars is an increasingly important feature. A significant amount of computing power is necessary to enable optimal driver assistance and automated driving and to activate safety features.
To optimize automated driving, many car OEMs and Tier Ones are adopting a variety of sensors such as cameras, radar, lidar and ultrasonic sensors, so that vehicles can detect objects in their surroundings. All the data derived from these sensors must be fused together, and this is where the control unit comes into play.
Given the “cutting-edge” image maintained by Tesla, the general public is forgiven if they believe all the hardware pieces inside Model 3 are technically the most advanced available on the market.
A peek under the hood, though, reveals that Tesla’s primary design goal for Model 3 was to reduce the cost of ADAS, making the model “affordable,” explained System Plus CEO Romain Fraux.
For automotive sensors in Model 3, Tesla is using eight cameras, one radar and 12 ultrasonic sensors. Model 3 uses no lidar, true to Musk’s famous claim that lidar is “a fool’s errand.”
In the following pages, System Plus shares highlights of the sensors and the computing unit under the Model 3’s hood.
Sensors for ADAS applications
The sensor package designed into Tesla Model 3 includes eight cameras that provide 360-degree visibility around the car within a radius of 250 meters, and 12 ultrasonic sensors that complete this vision system. Together, they allow the detection of hard and soft objects at nearly twice the distance and accuracy of the previous system. The package also incorporates a forward-facing radar system with improved processing capabilities. It provides additional data about the surrounding environment on a redundant wavelength that can see through heavy rain, fog, dust and even the car ahead.
On the camera front, four cameras face forward, supporting the radar, and have different characteristics. The main one covers 250 meters but with a very narrow angle of view; the others cover shorter distances (150, 80 and 60 meters) but with a wide-angle view of the environment around the car, and these are the ones used to read road signs. The other four cameras face the sides and rear of the car and can see up to 100 meters away.
Sonar, on the other hand, uses ultrasound to detect obstacles within a radius of 8 meters around the car. It works at any speed and also monitors the blind spot. The data collected by the sonar is also used by Autopilot to manage automatic lane changes during overtaking. Finally, GPS is used to determine the position of the car relative to the road.
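Taken together, the quoted figures sketch the coverage envelope of the suite. Here is a minimal Python sketch of that envelope; the sensor labels are our own shorthand, not Tesla terminology, and the ranges are simply the numbers quoted in this article:

```python
# Illustrative model of the Model 3 sensor ranges cited in the article.
# Labels are our own shorthand; ranges are in meters.
SENSOR_RANGE_M = {
    "main forward camera": 250,
    "forward camera (mid)": 150,
    "forward camera (wide)": 80,
    "forward camera (fisheye)": 60,
    "side/rear cameras": 100,
    "forward radar": 250,      # long-range mode, per the Continental figures cited in the article
    "ultrasonic sonar": 8,
}

def sensors_covering(distance_m):
    """Return the sensors whose quoted maximum range reaches distance_m."""
    return [name for name, r in SENSOR_RANGE_M.items() if r >= distance_m]

# At 120 m out, only the long-range forward-facing sensors still see the object.
print(sensors_covering(120))
```

The exercise makes the redundancy argument visible: at long range, the overlap is between the narrow forward camera and the radar, which is exactly where the radar’s “redundant wavelength” earns its keep.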
Forward cameras
For forward vision, Tesla has developed a tri-camera module with three On Semiconductor image sensors. Model 3 also uses two forward-looking side cameras, two rearward-looking side cameras and a rear-view camera.
All eight cameras designed into Model 3 are based on the same 1.2-megapixel image sensor, released by On Semiconductor in 2015. “They are low cost. They are neither new nor high resolution,” observed Fraux.
Sourcing all eight image sensors from the same supplier means “Tesla must be trying to get the better purchasing price,” Fraux noted.
The Triple Forward Camera of the Tesla Model 3 features three ON Semiconductor AR0136A CMOS image sensors with 3.75 µm pixel size and 1280×960 (1.2 Mp) resolution. It provides front image capture up to 250 meters, used by the Tesla Model 3 Driver Assist Autopilot Control Module Unit.
To add context to Tesla’s tri-camera module, System Plus compared it to a triple camera module designed by ZF, one of the largest tier-one automotive suppliers. ZF’s S-Cam4 comes with two solutions, one with a mono camera and the other with a triple camera.
S-Cam4, the triple camera version of ZF’s module, features Omnivision CMOS image sensors and Mobileye’s EyeQ4 vision processors.
The PCB mounting technology used by Tesla differs from BMW’s, as shown in the image above. BMW prefers an isolated combination of sensors on three different PCBs. In contrast, Tesla’s triple forward camera module embeds all three CMOS sensors on a single PCB without the processing SoC.
ZF’s S-Cam4 includes Mobileye’s vision processing capabilities.
By choosing mature image sensors from On Semiconductor and adding no post-processing, Tesla made its camera module “not about having the newest image sensors, but all about cost,” System Plus observed. The firm estimates the cost of ZF’s tri-camera at $165, while Tesla’s tri-camera comes in at $65.
Tesla chose to go with a proven radar module from Continental. Inside Continental’s ARS4-B are a 77GHz radar chipset and a 32-bit MCU, both provided by NXP Semiconductors. System Plus’ Fraux noted that although several chip companies — including MediaTek and Texas Instruments — have claimed entry into the automotive radar market, NXP and Infineon are the undisputed big two. Continental is a key player among radar module vendors. Its ARS4-B “can be found in at least 15 other vehicles, including the Audi Q3, VW Tiguan, Nissan Rogue and others,” explained Fraux.
The Continental ARS4-B radar system is used for forward collision warning, emergency brake assist, collision mitigation and adaptive cruise control (ACC). An important element is the simultaneous measurement of long range (up to 250 m, with an accuracy of ±0.2 m) and short range (up to 70 m), as well as the relative speed of and angle between two objects.
The system consists of two electronic boards that include an NXP Semiconductors microcontroller and a Broadcom Ethernet transceiver. The radio-frequency (RF) board is built with an asymmetrical structure on a PTFE/FR4 hybrid substrate equipped with planar antennas.
The NXP Semiconductor 77 GHz multi-channel radar transceiver chipset, consisting of four receivers, two transmitters, and an associated voltage controlled oscillator (VCO), is used as a high-frequency transmitter and receiver.
Tesla Computer Unit
Tesla has developed a custom “liquid-cooled dual computing platform” consisting of its Autopilot and information computers. They are built onto two different boards in the same module, explained System Plus’ CEO Fraux.
On one side, there is the infotainment electronic control unit (ECU) or MCU. And on the other side, there is the Autopilot ECU. In HW2.5, originally installed in Model 3, Tesla’s Autopilot was still enabled by Nvidia’s SoCs and GPU.
Tesla integrated complete modules from several manufacturers: Nvidia’s high-performance integrated circuits for the GPU, Intel for the processor, NXP and Infineon for the microcontrollers, Micron Technology, Samsung and SK Hynix for the memory, and STMicroelectronics for the audio amplifiers.
Evolution of Autopilot ECU
System Plus notes that the evolution of Tesla’s computer has been taking place on the Autopilot ECU. With HW2.5, Tesla incorporated two Nvidia Parker SoCs, one Nvidia Pascal GPU and one Infineon TriCore CPU. By moving onto HW3.0, Tesla has integrated two newly designed Tesla SoCs, two GPUs, two neural network processors and one lock-step CPU.
On one hand, the zFAS — Audi A8’s central driver assistance controller — “comes with no redundancy, and is really expensive,” Fraux observed. On the other hand, Tesla’s version, using two of its own SoCs, offers redundancy.
HW2.5 vs. HW3.0
Fraux said Tesla crammed more components into HW3 (4,746 components) on the same size board, compared with HW2.5 (4,681 components).
For HW3, the processor count shrank from four chips (Nvidia and Infineon) in HW2.5 to two Tesla SoCs.
The technology node used by the Tesla SoC in HW3 is 14nm, compared with 16nm for Nvidia’s HW2.5 processors. At the time of the HW3 rollout, Fraux observed, “This was the first time a 14nm FinFET process was used in a car.”
Return on Investment for internally designed SoCs?
The car industry has rarely seen automakers internally designing their own automotive ASICs for their vehicles. It’s “a big risk,” said Fraux, “unless you have a talented hardware design team internally.” Given the automotive market today, this won’t be an easy decision to make, he added.
And yet, Tesla isn’t alone. A number of car OEMs aspire to do what Tesla has done with the development of its own Autopilot processor.
But does it really pay to spend a lot of R&D dollars and design your own ASIC just for your own car models?
“If you want to keep a good margin and go for volume production, it could make sense,” observed Fraux. Over the last several years, as more and more electronic components came into vehicles, it was a real shock for many car OEMs to learn that leading chip vendors like Nvidia and Intel are accustomed to keeping big margins on their SoCs, CPUs and GPUs. If OEMs don’t want to keep renegotiating prices with chip vendors over the next five years, they might find it easier to develop their own SoCs and control their own destiny.
System Plus estimates that Tesla’s HW2.5, consisting of three Nvidia chips and an Infineon MCU, costs $280. In contrast, Tesla’s HW3.0, based on Tesla’s two SoCs, costs $190.
Assuming a carmaker spends $150 million for the design cost on its own processor, with no evolution on component pricing and the annual production of 400,000 units, noted Fraux, “Our quick estimate shows that you can recover your investment in four years.”
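Fraux’s estimate is easy to sanity-check. A back-of-the-envelope Python sketch using only the figures quoted above (the $150 million design cost and the 400,000-unit annual volume are his stated assumptions, not disclosed Tesla numbers):

```python
# Back-of-the-envelope payback estimate for an in-house automotive SoC,
# using the figures quoted in the article. The design cost and annual
# volume are Fraux's assumptions, not Tesla data.
design_cost_usd = 150_000_000   # one-time R&D spend on the custom SoC
cost_hw25_usd = 280             # Nvidia-based HW2.5 computer, per System Plus
cost_hw30_usd = 190             # Tesla-SoC-based HW3.0 computer, per System Plus
annual_units = 400_000          # assumed annual production volume

savings_per_unit = cost_hw25_usd - cost_hw30_usd   # $90 saved per car
annual_savings = savings_per_unit * annual_units   # $36M saved per year
payback_years = design_cost_usd / annual_savings   # ~4.2 years

print(f"Savings: ${savings_per_unit}/car, ${annual_savings:,}/year; "
      f"payback in {payback_years:.1f} years")
```

The $90-per-car saving against a $150 million outlay lands at just over four years, consistent with Fraux’s “recover your investment in four years.”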