
Facebook parent Meta said on Monday it has launched a new set of free software tools for artificial intelligence applications that could make it easier for developers to switch back and forth between different underlying chips.

Meta’s new open-source AI platform is based on an open-source machine learning framework called PyTorch, and can help code run up to 12 times faster on Nvidia’s flagship A100 chip or up to four times faster on AMD’s MI250 chip, it said.

But just as important as the speed boost is the flexibility the software can provide, Meta said in a blog post.

Software has become a key battleground for chipmakers seeking to build an ecosystem of developers around their chips. Nvidia’s CUDA platform has been the most popular so far for artificial intelligence work.

However, once developers tailor their code for Nvidia chips, it is difficult to run it on graphics processing units, or GPUs, from Nvidia competitors like AMD. Meta said its software is designed to let developers swap between chips easily, without being locked into one vendor.

“The unified GPU back-end support gives deep learning developers more hardware vendor choices with minimal migration costs,” Meta said in its blog post.
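As a rough illustration of what “minimal migration costs” can look like in practice, here is a short PyTorch sketch; this is generic, vendor-agnostic device selection, not Meta’s new tool. PyTorch’s ROCm builds expose AMD GPUs through the same “cuda” device API used for Nvidia GPUs, so code written this way does not hard-code a vendor.

```python
import torch

# Pick whatever accelerator the installed PyTorch build supports.
# On Nvidia hardware this resolves to a CUDA device; on PyTorch's ROCm
# builds, AMD GPUs are exposed through the same "cuda" device name.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(1024, 1024, device=device)
y = x @ x  # the same matrix multiply runs unchanged on either vendor's GPU
print(y.device)
```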

Nvidia and AMD did not immediately return requests for comment.

Meta’s software is made for AI work called inference, which is when machine learning algorithms that have previously been trained on huge amounts of data are called on to make quick judgments, such as deciding whether a photograph is of a cat or a dog.
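To illustrate the inference step described above, here is a minimal, hypothetical PyTorch sketch; the tiny classifier and its “cat”/“dog” labels are stand-ins for a model already trained on a large dataset, not part of Meta’s release.

```python
import torch
import torch.nn as nn

# A stand-in image classifier; a real system would load weights
# that were already trained on a large labeled dataset.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 2),  # two classes: "cat" and "dog"
)
model.eval()  # switch to inference mode (disables training-only behavior)

# A dummy batch standing in for one preprocessed 224x224 RGB photo.
image = torch.rand(1, 3, 224, 224)

with torch.no_grad():  # no gradients needed when only making predictions
    logits = model(image)
    prediction = ["cat", "dog"][logits.argmax(dim=1).item()]

print(prediction)
```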

“This is a software effort that is multi-platform. And it’s a testament to the importance of software, particularly for deploying neural networks in machine learning for inference,” said David Kanter, a founder of MLCommons, an independent group that measures AI speed.

Kanter added that this new Meta AI platform would be “good for customer choice.”

© Thomson Reuters 2022

