
Tesla announced its own D1 Dojo ASIC chip for AI training purposes

Artificial intelligence (AI) is the next big thing in the market, and tech companies are investing heavily to secure a clear lead in the future AI-driven market. And who doesn’t know that Tesla values AI more than anything? The company’s entire EV business depends on the AI capabilities of its vehicles.

In its latest presentation, Tesla said it believes AI has limitless possibilities and that its systems are becoming smarter than the average human. To speed up its AI software workloads, the company announced the D1 Dojo, a custom application-specific integrated circuit (ASIC) built for AI training, which it presented today.

Many companies are building ASICs for AI workloads, and the list includes not only start-ups but also big names like Amazon, Baidu, Intel, and NVIDIA. However, not everyone has the right formula, and no single design satisfies every workload perfectly, meaning there is still an open market for AI training hardware. That is why Tesla opted to develop its own ASIC for AI training.

The chip, called the D1, forms part of the Dojo supercomputer used to train AI models at Tesla HQ. It is manufactured by TSMC on a 7 nm semiconductor node, reportedly packs over 50 billion transistors, and has a huge die size of 645 mm².


According to Tesla, its latest AI chip delivers impressive performance: up to 362 TeraFLOPs at FP16/CFP8 precision, or about 22.6 TeraFLOPs for single-precision FP32 tasks. Remarkably, for optimized FP16 data types Tesla has even managed to beat the current leader in compute power, NVIDIA, whose A100 Ampere GPU is capable of producing “only” 312 TeraFLOPs at FP16 workloads.
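To put the quoted peak-throughput figures side by side, here is a quick back-of-envelope comparison. These are arithmetic checks on the marketing numbers stated above, not measured benchmarks:

```python
# Compare the quoted peak-throughput figures (all in TeraFLOPs).
d1_fp16 = 362.0    # Tesla D1, FP16/CFP8 peak (as quoted)
d1_fp32 = 22.6     # Tesla D1, FP32 peak (as quoted)
a100_fp16 = 312.0  # NVIDIA A100, FP16 peak (as quoted)

# D1's FP16 lead over the A100, and its FP16-to-FP32 ratio.
print(f"D1 vs A100 at FP16: {d1_fp16 / a100_fp16:.2f}x")   # ~1.16x
print(f"D1 FP16 vs FP32 ratio: {d1_fp16 / d1_fp32:.1f}x")  # ~16.0x
```

In other words, on paper the D1 is roughly 16% ahead of the A100 for FP16 work, while its FP32 throughput is about one sixteenth of its FP16 figure, a typical trade-off for hardware optimized around low-precision training formats.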

Tesla has built a mesh of functional units (FUs) interconnected to form one massive chip. Each FU contains a 64-bit CPU with a custom ISA; the CPU is a superscalar implementation with 4-wide scalar and 2-wide vector pipelines. According to reports, each functional unit can perform one TeraFLOP of BF16 or CFP8 compute and 64 GigaFLOPs of FP32 compute, and has 512 GB/s of bandwidth in each direction of the mesh.



Nivedita Bangari
I am a software engineer by profession and technology is my love, learning and playing with new technologies is my passion.