Despite advocating for an industry-wide pause on AI training, Elon Musk has reportedly kicked off a major artificial intelligence project within Twitter. The company has already purchased roughly 10,000 GPUs and recruited AI talent from DeepMind for the project, which involves a large language model (LLM), reports Business Insider.
One source familiar with the matter said that Musk's AI project is still in its initial phase. However, acquiring a significant amount of additional computational power suggests his commitment to advancing it, according to another person. Meanwhile, the exact purpose of the generative AI is unclear, but potential applications include improving search functionality or generating targeted advertising content.
At this point, it is unknown exactly what hardware Twitter procured. However, the company has reportedly spent tens of millions of dollars on these compute GPUs despite its ongoing financial troubles, which Musk has described as an 'unstable financial situation.' The GPUs are expected to be deployed in one of Twitter's two remaining data centers, with Atlanta being the most likely destination. Interestingly, Musk closed Twitter's primary data center in Sacramento in late December, which obviously reduced the company's compute capabilities.
In addition to buying GPU hardware for its generative AI project, Twitter is hiring more engineers. Earlier this year, the company recruited Igor Babuschkin and Manuel Kroiss, engineers from AI research firm DeepMind, a subsidiary of Alphabet. Musk has been actively seeking talent in the AI industry to compete with OpenAI's ChatGPT since at least February.
OpenAI used Nvidia's A100 GPUs to train its ChatGPT bot and continues to use these machines to run it. Nvidia has since launched the A100's successor, the H100 compute GPU, which is several times faster at around the same power. Twitter will likely use Nvidia's Hopper H100 or comparable hardware for its AI project, though this is speculation on our part. Considering that the company has yet to determine what its AI project will be used for, it is hard to estimate how many Hopper GPUs it might need.
When large companies like Twitter buy hardware, they negotiate special rates because they procure thousands of units. Meanwhile, when purchased individually from retailers like CDW, Nvidia's H100 boards can cost north of $10,000 per unit, which gives an idea of how much the company might have spent on hardware for its AI initiative.
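To put those figures together, here is a rough back-of-envelope sketch. The retail price and GPU count come from the reporting above, but the volume discount is a purely hypothetical assumption; none of these are confirmed purchase details.

```python
# Back-of-envelope estimate of a possible GPU spend.
# Inputs are assumptions drawn from the article, not confirmed figures.

RETAIL_PRICE_USD = 10_000   # approximate retail price of one H100 board
GPU_COUNT = 10_000          # reported number of GPUs purchased
BULK_DISCOUNT = 0.25        # hypothetical volume discount for a bulk buyer

def estimated_spend(units: int, unit_price: float, discount: float) -> float:
    """Total cost after applying a flat volume discount."""
    return units * unit_price * (1 - discount)

cost = estimated_spend(GPU_COUNT, RETAIL_PRICE_USD, BULK_DISCOUNT)
print(f"Estimated spend: ${cost:,.0f}")  # → Estimated spend: $75,000,000
```

Even with a generous discount, the total lands comfortably in the "tens of millions of dollars" range the report describes.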