Meta Reveals Its Plans To Reimagine Its AI Infrastructure
This includes a first-generation custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of the company's 16,000-GPU supercomputer for AI research
By Teena Jose
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
The social networking giant Meta has revealed plans to advance its AI infrastructure, centred on the development of an in-house chip designed specifically for running AI models. The announcement covers a first-generation custom silicon chip, a new AI-optimized data center design and the second phase of the company's 16,000-GPU supercomputer for AI research.
In a recent blog post titled 'Reimagining our infrastructure for the AI age', Santosh Janardhan, VP and head of infrastructure, stated, "Our artificial intelligence (AI) compute needs will grow dramatically over the next decade as we break new ground in AI research, ship more cutting-edge AI applications and experiences across our family of apps, and build our long-term vision of the metaverse. We are executing on an ambitious plan to build the next generation of Meta's AI infrastructure and today, we're sharing some details on our progress."
"Alongside the custom silicon chip for AI models, Meta is also working on an AI-optimized data center design and the second phase of a massive 16,000 GPU supercomputer dedicated to AI research. These initiatives aim to facilitate the development and deployment of larger and more sophisticated AI models at scale," he added.
As per the post, the company is reimagining how coding is done with the deployment of CodeCompose, a generative AI-based coding assistant developed to boost developer productivity throughout the software development lifecycle.
The centerpiece of Meta's infrastructure advancements is MTIA (Meta Training and Inference Accelerator), its in-house custom accelerator chip family. According to Meta, MTIA is designed specifically for inference workloads and offers superior compute power and efficiency compared to CPUs. Combining MTIA chips with GPUs is expected to deliver improved performance, reduced latency and greater efficiency for each workload. Meta's blog post acknowledged that its first MTIA chip stumbled with high-complexity AI models, but noted that it handled low- and medium-complexity models more efficiently than competitor chips.
Meta also announced plans to redesign its data centers with modern AI-oriented networking and cooling systems, with the first facility set to break ground this year. The new design is expected to be 31 per cent cheaper and built twice as fast as the company's current data centers.
"We're always focused on delivering long-term value and impact to guide our infrastructure vision. We believe our track record of building world-class infrastructure positions us to continue leading in AI over the next decade and beyond," the blog noted.