Elon Musk's liquid-cooled 'Gigafactory' AI data centers get a plug from Supermicro CEO — Tesla and xAI's new supercomputers will have 350,000 Nvidia GPUs, both will be online within months

Charles Liang of Supermicro and Elon Musk in gigafactory
(Image credit: Charles Liang)

Elon Musk's Texas Tesla Gigafactory is expanding to contain an AI supercomputer cluster, and Supermicro's CEO is a big fan of the cooling solution. Charles Liang, founder and CEO of Supermicro, took to X (formerly Twitter) to celebrate Musk's use of Supermicro's liquid cooling technology for both Tesla's new cluster and xAI's similar supercomputer, which is also on the way.

Pictured together among server racks, Liang and Musk are looking to "lead the liquid cooling technology to large AI data centers." Liang estimates that Musk's push toward liquid-cooled AI data centers "may lead to preserving 20 billion trees for our planet," presumably referring to the energy savings that could be realized if liquid cooling were adopted at data centers worldwide.

AI data centers are well known for their massive power draws, and Supermicro hopes to reduce this strain by pushing liquid cooling. The company claims direct liquid cooling can cut the electricity costs of cooling infrastructure by up to 89% compared to air cooling.
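To put the claimed up-to-89% figure in perspective, here is a rough sketch of what it implies for a yearly cooling bill. Every input below (IT load, cooling overhead fraction, electricity price) is a hypothetical placeholder for illustration, not a figure from the article or from Supermicro:

```python
# Illustrative only: all numbers below are assumptions, not reported figures.

def annual_cooling_cost(it_load_mw, cooling_overhead, price_per_mwh, dlc_reduction=0.0):
    """Estimate yearly cooling electricity cost for a data center.

    cooling_overhead: cooling power as a fraction of IT load
    (e.g. 0.4 is a plausible air-cooling overhead).
    dlc_reduction: fractional cut in cooling electricity, here the
    up-to-89% figure Supermicro claims for direct liquid cooling.
    """
    cooling_mw = it_load_mw * cooling_overhead * (1 - dlc_reduction)
    hours_per_year = 24 * 365  # 8760
    return cooling_mw * hours_per_year * price_per_mwh

air = annual_cooling_cost(130, 0.4, 50)          # air-cooled baseline
dlc = annual_cooling_cost(130, 0.4, 50, 0.89)    # with the claimed 89% cut
print(f"air: ${air:,.0f}/yr  DLC: ${dlc:,.0f}/yr")
```

Whatever the absolute numbers turn out to be, the DLC cost scales linearly to 11% of the air-cooled baseline under this claim.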

In an earlier post, Liang clarified that Supermicro's goal is "to boost DLC [direct liquid cooling] adoption from <1% to 30%+ in a year." Musk is deploying Supermicro's cooling at scale for his Tesla Gigafactory supercomputer cluster. The new expansion to the existing Gigafactory will house 50,000 Nvidia GPUs plus additional Tesla AI hardware to train Tesla's Full Self-Driving feature.

The expansion is turning heads thanks to the supermassive fans under construction to chill the liquid-cooling system, which Musk also recently highlighted in an X post of his own. Musk estimates the Gigafactory supercomputer will draw 130 megawatts on deployment, growing to 500 MW once Tesla's proprietary AI hardware is also installed. Musk claims that the facility's construction is nearly complete and that it should be ready for deployment in the next few months.
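For a sense of scale, the two power figures quoted above translate into yearly energy consumption as follows. Continuous full-load operation is an assumption made here purely for illustration:

```python
# Back-of-the-envelope energy totals for the article's quoted power draws
# (130 MW at deployment, up to 500 MW later), assuming continuous operation.

HOURS_PER_YEAR = 24 * 365  # 8760

for draw_mw in (130, 500):
    twh = draw_mw * HOURS_PER_YEAR / 1_000_000  # MWh -> TWh
    print(f"{draw_mw} MW continuous ≈ {twh:.2f} TWh/year")
```

At 500 MW that works out to roughly 4.4 TWh per year, which is why the efficiency of the cooling loop matters so much at this scale.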

Tesla's Gigafactory supercomputer cluster is not to be confused with Musk's other multi-billion-dollar supercomputer cluster, the X/xAI supercomputer, which is also currently under construction. That's right: Elon Musk is building not one but two of the world's largest GPU-powered AI supercomputer clusters. The xAI supercomputer is better known than Tesla's, with Musk having already ordered 100,000 of Nvidia's H100 GPUs. xAI will use its supercomputer to train Grok, X's quirky AI chatbot alternative that is available to X Premium subscribers.

Also expected to be ready "within a few months," the xAI supercomputer will likewise be liquid-cooled by Supermicro and already has a planned upgrade path to 300,000 Nvidia B200 GPUs next summer. According to recent reports, getting the xAI cluster online is a slightly greater priority for Musk than Tesla's, as Musk reportedly directed Nvidia in June to divert thousands of GPUs originally ordered for Tesla to X instead. The move reportedly delayed construction of Tesla's supercomputer cluster by months, but like so much Musk-centric news, exaggeration is highly likely.

Dallin Grimm
Contributing Writer

Dallin Grimm is a contributing writer for Tom's Hardware. He has been building and breaking computers since 2017, serving as the resident youngster at Tom's. From APUs to RGB, Dallin has a handle on all the latest tech news. 

  • MacZ24
The only real advantage of advanced nodes is AI (games don't count), so it is no surprise everybody is all in on it. (Otherwise, why prevent China from accessing the latest nodes?)

But all these gigantic clusters of GPUs for chatbots, that are not reliable and still need experts to look at their outputs to make sure they haven't soiled the sheets, leave me dubious. I'm not convinced they will produce any productivity gain.

    There are a lot of uses of AI that make a lot more sense than that, IMHO. And probably these don't need trillions of parameters to be useful.
    Reply
  • Vanderlindemedia
Chatbots? Lmao. 350k GPUs are used for completely different tasks. It's likely for Tesla and space-related things.
    Reply
  • ThomasKinsley
    Everyone rushing out to buy these GPUs is going to have egg on their face when they're obsolete in 2 years.
    Reply
  • CmdrShepard
    This may lead to preserving 20 billion trees for our planet❤️
    You know what else would have preserved those trees?

    Not making that Gigafactory, and not filling it with thousands of racks with power hungry servers and GPUs

    Literally not clearing the large chunk of land and pouring concrete on it to build the factory / datacenter would have probably saved a lot of trees.
    Reply
  • Tonet666
    "The more you buy the more you save." - Jensen Huang :LOL::ROFLMAO:
    Reply
  • Tonet666
    ThomasKinsley said:
    Everyone rushing out to buy these GPUs is going to have egg on their face when they're obsolete in 2 years.
Jensen will release a new version of those GPUs that are 10-20% faster but at a 2x price increase. :ROFLMAO:
    Reply
  • usertests
    ThomasKinsley said:
    Everyone rushing out to buy these GPUs is going to have egg on their face when they're obsolete in 2 years.
    If a certain event happens in Taiwan, those GPUs could retain or even increase in value. :-)
    Reply
  • watzupken
Elon Musk took Jensen's "The more you buy, the more you save" marketing very seriously. In the end, it only ended up saving Nvidia. In reality, the more you buy, the more you spend, because this hardware is not powered by air.
    Reply
  • Flayed
    I think the Tesla self-driving feature needs all the GPUs he can get.
    Reply