Tesla reveals information about the company’s Dojo Custom AI Supercomputer at this year’s Hot Chips

OSTN Staff

Tesla revealed more information about the company’s Dojo microarchitecture at this year’s Hot Chips 34. The company is building an AI supercomputer to train on video data gathered from its vehicle lines. The website Serve The Home recently published a plethora of slides breaking down the custom AI supercomputer.

An in-depth look at the upcoming Tesla Dojo AI Supercomputer at Hot Chips 34

Dojo is Tesla’s custom supercomputer platform, built for AI machine-learning training on the video data delivered by its fleet of vehicles. The bulk of Dojo’s workload is video-based training, which is far more demanding than working with static images or plain text. That matters because Tesla needs a hefty amount of compute to push its self-driving vehicles toward general autonomy.

[Hot Chips 34 slides: path to general autonomy, technology-enabled scaling, the training tile and its building blocks, the V1 Dojo Interface Processor with its PCIe and Z-plane topologies, the Tesla Transport Protocol, the Dojo NIC, remote DMA, the Dojo training matrix, compute/memory/I/O, model execution, and the end-to-end training workflow with video-based training, data-loading needs, and disaggregated data loading and resources.]

The most significant technological advancement in Dojo is its system-on-wafer packaging. Each training tile integrates twenty-five D1 dies and draws 15 kilowatts. Alongside the twenty-five D1 dies, Tesla also incorporates forty much smaller I/O dies.

Cooling and power delivery are integrated directly into each Dojo training tile. The system scales by attaching tiles to adjacent tiles over links totaling 9 TB/s, rather than by stringing together conventional servers.
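
For context, here is a back-of-envelope sketch using only the figures quoted above; the even per-die power split is an assumption for illustration, not a Tesla specification:

```python
# Back-of-envelope numbers for one Dojo training tile, taken from the
# Hot Chips 34 figures quoted in the article. The per-die split is a
# simple division for illustration, not an official specification.

D1_DIES_PER_TILE = 25          # compute dies on the wafer
IO_DIES_PER_TILE = 40          # smaller I/O dies alongside them
TILE_POWER_KW = 15             # power delivered to one training tile
TILE_LINK_TB_S = 9             # aggregate tile-to-tile link bandwidth

power_per_d1_die_w = TILE_POWER_KW * 1000 / D1_DIES_PER_TILE
print(f"Approx. power budget per D1 die: {power_per_d1_die_w:.0f} W")
print(f"Dies per tile (compute + I/O): {D1_DIES_PER_TILE + IO_DIES_PER_TILE}")
print(f"Tile-to-tile link bandwidth: {TILE_LINK_TB_S} TB/s")
```

Dividing the tile budget evenly would put roughly 600 W behind each D1 die, which illustrates why cooling is built into the tile itself.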

Tesla’s first-generation Dojo Interface Processor puts high-bandwidth memory on a PCIe card that can sit either standalone or in a host server. Dojo uses the in-house Tesla Transport Protocol (TTP) as its interface, offering extreme levels of bandwidth. TTP can also run over Ethernet, exposing a uniform address space across the Tesla system or bridging traffic back to the interface card.
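
The details of TTP are not public, but the general idea of one flat address space whose endpoints are reached either over the native link or through an Ethernet bridge can be sketched as follows. Every name, window size, and routing rule here is hypothetical and purely illustrative, not Tesla’s actual protocol:

```python
# Illustrative only: a flat address space carved into fixed-size windows,
# one per endpoint, reachable either over a native link or via an
# Ethernet bridge. Layout, sizes, and class names are hypothetical.

WINDOW_SIZE = 1 << 32          # assume 4 GiB of address space per endpoint


def global_address(endpoint_id: int, offset: int) -> int:
    """Map (endpoint, local offset) into one system-wide address space."""
    assert 0 <= offset < WINDOW_SIZE
    return endpoint_id * WINDOW_SIZE + offset


class EthernetBridge:
    """Decides whether an endpoint is reached natively or over Ethernet."""

    def __init__(self, native_endpoints: set):
        self.native_endpoints = native_endpoints

    def route(self, endpoint_id: int) -> str:
        return "native" if endpoint_id in self.native_endpoints else "ethernet"


bridge = EthernetBridge(native_endpoints={0, 1, 2, 3, 4})
print(hex(global_address(endpoint_id=7, offset=0x1000)), bridge.route(7))
```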

A PCIe host server in the Dojo system packs as many as five of these interface cards, providing 4.5 TB/s of bandwidth to each training tile.
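
Assuming the 4.5 TB/s figure is shared evenly across a full complement of five cards (an inference for illustration, not a number from the slides), the per-card share works out roughly as follows:

```python
# Hypothetical even split of the quoted per-tile bandwidth across the
# maximum of five Dojo Interface Processor cards in one PCIe host.
CARDS_PER_HOST = 5
TILE_BANDWIDTH_TB_S = 4.5

per_card_tb_s = TILE_BANDWIDTH_TB_S / CARDS_PER_HOST
print(f"~{per_card_tb_s * 1000:.0f} GB/s per card if shared evenly")
```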

While companies like Apple, Google, and Amazon lean on other companies’ hardware, Tesla insists on building its own, chips included. It will be interesting to see what Tesla offers in the future and whether it ventures beyond autonomous vehicles.

The post Tesla reveals information about the company’s Dojo Custom AI Supercomputer at this year’s Hot Chips by Jason R. Wilson appeared first on Wccftech.
