Meta (formerly Facebook) is building its first-generation custom silicon chip for running artificial intelligence (AI) models, saying its AI compute needs will grow dramatically over the next decade as it breaks new ground in AI research.
Called MTIA (Meta Training and Inference Accelerator), the in-house custom accelerator chip will provide greater compute power and efficiency than CPUs, and is customised for the company's internal workloads.
“By deploying both MTIA chips and GPUs, we’ll deliver better performance, decreased latency and greater efficiency for each workload,” said Santosh Janardhan, VP and Head of Infrastructure at Meta.
The company also plans a new AI-optimised data centre design and the second phase of its 16,000-GPU supercomputer for AI research.
“These efforts — and additional projects still underway — will enable us to develop larger, more sophisticated AI models and then deploy them efficiently at scale,” Janardhan added.
The next-generation data centre will have an AI-optimised design, supporting liquid-cooled AI hardware and a high-performance AI network connecting thousands of AI chips for data-centre-scale AI training clusters.