Despite advocating for an industry-wide moratorium on AI training, Elon Musk is said to have launched a major artificial intelligence project within Twitter. According to Business Insider, the company has already purchased approximately 10,000 GPUs and hired DeepMind AI talent for the project, which involves a large language model (LLM).
Elon Musk’s AI project is still in its early stages.
According to the report, the acquisition of significant additional computational power signals his commitment to advancing the project. The generative AI's exact purpose is unknown, but potential applications include improving search functionality and generating targeted advertising content.
At this time, it is unknown what specific hardware Twitter purchased. Despite Twitter's ongoing financial problems, which Elon Musk describes as an 'unstable financial situation,' the company has reportedly spent tens of millions of dollars on these compute GPUs. The GPUs will most likely be deployed in one of Twitter's two remaining data centers, with Atlanta the most likely location. Notably, Elon Musk shut down Twitter's primary data center in Sacramento in late December, reducing the company's compute capabilities.
In addition to purchasing GPU hardware, Twitter is hiring engineers for its generative AI project. Earlier this year, the company hired Igor Babuschkin and Manuel Kroiss from Alphabet's AI research subsidiary DeepMind. Since at least February, Elon Musk has been actively seeking AI talent to compete with OpenAI's ChatGPT.
OpenAI trained its ChatGPT bot on Nvidia's A100 GPUs and continues to run it on these machines. Nvidia has since released the A100's successor, the H100 compute GPU, which is several times faster at roughly the same power. Twitter's AI project will most likely use Nvidia's Hopper H100 or similar hardware, though this is speculation. Given that the company has yet to decide what its AI project will be used for, it is difficult to estimate how many Hopper GPUs it will require.