

New AI benchmark examines speed of replies to user inquiries

File photo: Visitors look at a Go1 quadruped robot by Unitree Robotics at the World Artificial Intelligence Cannes Festival (WAICF) in Cannes, France, February 10, 2023.

SAN FRANCISCO, March 27 - Artificial intelligence benchmarking organization MLCommons on Wednesday unveiled a new set of tests and results that measure how fast top-of-the-line hardware can run AI applications and respond to users.

The two new benchmarks established by MLCommons measure the speed at which AI chips and systems can generate responses from powerful AI models packed with data. The results roughly indicate how quickly an AI application such as ChatGPT can deliver an answer to a user query.
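At its simplest, response speed of the kind these benchmarks capture is the wall-clock time between submitting a prompt and receiving the reply. The sketch below illustrates that idea in Python; `dummy_model` is a hypothetical stand-in for a real inference endpoint, not part of MLPerf.

```python
import time

def time_response(generate, prompt):
    """Measure the wall-clock latency of a single model response.

    `generate` is any callable that takes a prompt string and returns
    text; here it stands in for a real model inference call.
    """
    start = time.perf_counter()
    reply = generate(prompt)
    latency = time.perf_counter() - start
    return reply, latency

# Hypothetical stand-in for a real model call.
def dummy_model(prompt):
    return f"Echo: {prompt}"

reply, latency = time_response(dummy_model, "What is MLPerf?")
print(f"latency: {latency:.4f}s")
```

Real benchmarks such as MLPerf aggregate many such measurements under controlled load rather than timing a single call.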

One of the new benchmarks adds the ability to measure the speed of a question-and-answer scenario for large language models. It is built on Llama 2, a model with 70 billion parameters developed by Meta Platforms (META.O).

MLCommons officials have also added a second benchmark to the organization's MLPerf suite: a text-to-image generation test based on Stability AI's Stable Diffusion XL model.

Servers powered by Nvidia's H100 processors, built by the likes of Alphabet's Google (GOOGL.O), Supermicro (SMCI.O) and Nvidia (NVDA.O) itself, comfortably won both new benchmarks on raw performance. Several server manufacturers submitted designs based on Nvidia's less powerful L40S processor.

Server maker Krai submitted a design for the image generation test using a Qualcomm (QCOM.O) AI chip that consumes considerably less power than Nvidia's cutting-edge processors.

Intel (INTC.O) also submitted a design based on its Gaudi2 accelerator chips. The company described the results as "solid."

Raw performance is not the only measure that matters when deploying AI systems. Advanced AI chips consume enormous amounts of energy, and one of the most critical challenges for AI companies is deploying chips that deliver the best performance for the least energy.

MLCommons provides a dedicated benchmark category for assessing power usage.
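The efficiency trade-off described above can be summarized as throughput divided by power draw. The sketch below shows that arithmetic with made-up numbers; neither the figures nor the comparison reflect actual MLPerf submissions.

```python
def perf_per_watt(queries_per_second, watts):
    """Illustrative efficiency metric: throughput per unit of power.

    A higher value means more work done per watt consumed.
    """
    return queries_per_second / watts

# Hypothetical comparison of two systems (numbers are invented).
fast_hungry = perf_per_watt(1000.0, 700.0)   # high throughput, high power
slow_frugal = perf_per_watt(400.0, 150.0)    # lower throughput, low power
print(fast_hungry, slow_frugal)
```

Under these invented numbers the lower-power system does more work per watt, which is why efficiency can matter as much as raw speed.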

