Meta, the parent company of Facebook, Instagram, and WhatsApp, has begun testing its first in-house chip designed for training artificial intelligence (AI) systems, marking a significant step toward reducing reliance on external suppliers such as Nvidia, sources told Reuters.
The chip is currently being deployed on a small scale, with plans to expand production if the test proves successful. This move is part of Meta’s broader strategy to lower infrastructure costs as it invests heavily in AI technologies.
A Push for Custom AI Hardware
Meta has projected total expenses of up to $119 billion in 2025, with up to $65 billion of that allocated to AI infrastructure development. Developing its own AI chips is a key part of its effort to control costs while expanding the company's AI capabilities.
One of the sources said the new chip is a dedicated AI accelerator, meaning it is designed specifically for AI tasks, unlike the general-purpose GPUs traditionally used for AI workloads. This specialization could make it more power-efficient than conventional chips.
Manufacturing with TSMC
Meta is collaborating with Taiwan Semiconductor Manufacturing Company (TSMC) to produce the chip, according to a source. The company recently completed its first “tape-out”, an important stage in chip development that involves creating an initial design and sending it for fabrication.
Tape-out is a costly and time-consuming process, often requiring tens of millions of dollars and several months to complete. If the test deployment fails, Meta would need to diagnose the issues and repeat the process, further delaying its rollout of custom AI chips.
Meta and TSMC have declined to comment on the development.
Part of the MTIA Series
The new chip is the latest addition to Meta’s Meta Training and Inference Accelerator (MTIA) series, an initiative aimed at building AI-specific hardware. The program has faced setbacks in the past, including the cancellation of a previous training chip at a similar stage of development.
Despite earlier struggles, Meta successfully deployed an MTIA inference chip last year to power recommendation systems on Facebook and Instagram, helping determine which content appears on users’ feeds.
Meta executives have indicated that by 2026, the company aims to use its own chips for AI training—the intensive process of feeding AI models large datasets to improve their performance. The new training chip will first support recommendation systems before expanding to generative AI tools, such as the chatbot Meta AI.
“We’re working on how we would do training for recommender systems and then, eventually, how to think about training and inference for generative AI,” Meta’s Chief Product Officer Chris Cox said at a recent technology conference.
Balancing In-House Chips and Nvidia GPUs
Meta’s attempt to develop AI hardware comes after a previous failure with an in-house inference chip, which performed poorly in early testing. Following that setback, the company pivoted back to Nvidia GPUs, placing orders worth billions of dollars in 2022.
Meta remains one of Nvidia’s largest customers, using its high-performance GPUs to train AI models for recommendations, advertising, and the Llama foundation model series. These units also power inference tasks for over 3 billion daily users across Meta’s platforms.
However, the future of AI chip demand has come under scrutiny in recent months. Researchers have questioned the long-term scalability of AI models, with some suggesting that simply increasing computing power and data may not lead to continued breakthroughs.
This skepticism gained traction following the January launch of low-cost AI models from Chinese startup DeepSeek, which prioritize computational efficiency through optimized inference techniques rather than relying on ever-larger datasets.
DeepSeek’s innovations triggered a brief downturn in AI stocks, including Nvidia, whose shares fell by nearly 20% before recovering most of their value. While investors still view Nvidia’s chips as the industry standard, the company has faced renewed concerns over the long-term sustainability of AI hardware scaling.
The Road Ahead
Meta’s push into custom AI chips reflects a broader industry trend, with major tech firms seeking greater control over their AI infrastructure. While the success of Meta’s new training chip remains uncertain, a successful deployment could significantly reduce costs and lessen its dependence on third-party suppliers like Nvidia.
For now, Meta continues to balance its AI strategy between in-house development and reliance on external GPU manufacturers, as it positions itself at the forefront of the next generation of AI-driven technology.