New Benchmark Tests Speed of Systems Training ChatGPT-like Chatbots

The MLPerf benchmark is based on GPT-3, the large language model behind ChatGPT, OpenAI’s viral chatbot developed with backing from Microsoft.

On Tuesday, MLCommons, the consortium that develops benchmark tests for artificial intelligence (AI) technologies, announced the results of a new test that measures how quickly systems can train the algorithms behind chatbots like ChatGPT, and Nvidia came out on top.

The MLPerf benchmark is based on GPT-3, the large language model behind ChatGPT, OpenAI’s viral chatbot backed by Microsoft. Because of the model’s size, however, the benchmark includes only a representative portion of it.

“This was our most expensive benchmark to date,” said MLCommons Executive Director David Kanter, according to Reuters. “We spent over 600,000 hours of accelerator compute time developing it, as well as some incredibly talented engineers.”

Kanter declined to reveal the development cost, saying only that it ran into the millions of dollars. Only Nvidia and Intel’s Habana Labs submitted results for the benchmark, and the fastest times came from systems built with Nvidia’s newest H100 chips; Nvidia remains the undisputed leader in hardware for training AI.

Nvidia’s largest system, submitted in partnership with AI cloud firm CoreWeave, used 3,584 H100 chips and completed the training task in 10.94 minutes. Habana Labs, the AI chip business acquired by Intel, finished the benchmark in 311.945 minutes using a much smaller system fitted with 384 Gaudi2 chips.
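
For a rough sense of scale, the two results can be compared by total accelerator time (number of chips multiplied by wall-clock minutes). The short sketch below runs that arithmetic on the figures quoted above; chip-minutes is an illustrative measure chosen here, not an official MLPerf metric, and it ignores differences in chip price, power draw, and system cost.

```python
# Back-of-the-envelope comparison of total accelerator time (chip-minutes)
# for the two MLPerf GPT-3 submissions described in the article.
# Illustrative only: not an official MLPerf metric.

systems = {
    "Nvidia H100 (with CoreWeave)": {"chips": 3584, "minutes": 10.94},
    "Intel Habana Gaudi2": {"chips": 384, "minutes": 311.945},
}

for name, s in systems.items():
    chip_minutes = s["chips"] * s["minutes"]
    print(f"{name}: {s['chips']} chips x {s['minutes']} min "
          f"= {chip_minutes:,.0f} chip-minutes")
```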

According to Jordan Plawner, Intel’s senior director of AI products, the results show the promise of Gaudi2, which is due to receive a software update in September that will increase its speed.

“The Habana results will be faster by 1.5X to 2X. So that’s when we expect Habana Gaudi2 to be more competitive and less expensive than H100,” according to Plawner.
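
Taken at face value, a 1.5X to 2X speedup over the 311.945-minute result would put Gaudi2 somewhere between roughly 156 and 208 minutes on the same 384-chip system. A minimal sketch of that projection, assuming the speedup applies uniformly to the benchmark time:

```python
# Rough projection of the Gaudi2 benchmark time after the software update
# Plawner describes (a claimed 1.5X to 2X speedup). Purely illustrative:
# assumes the speedup applies uniformly to the 311.945-minute result above.

baseline_minutes = 311.945  # Gaudi2 result on the 384-chip system

for speedup in (1.5, 2.0):
    projected = baseline_minutes / speedup
    print(f"{speedup}X faster -> ~{projected:.0f} minutes")
```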

Plawner would not specify how much a Gaudi2 device costs, but he suggested that the industry needs a second supplier of AI training chips and that the MLPerf results show Intel can fill that role.

