Hyena code

For all the fervor over OpenAI’s chatbot program ChatGPT and its successor technology, GPT-4, the programs are, at the end of the day, just software applications. And like all applications, they have technical limitations that can make their performance sub-optimal.

In a paper published in March, artificial intelligence (AI) scientists at Stanford University and Canada’s MILA institute for AI proposed a technology that could be far more efficient than GPT-4 — or anything like it — at gobbling vast amounts of data and transforming it into an answer.

Known as Hyena, the technology achieves accuracy equivalent to attention-based models on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code can handle amounts of text that make GPT-style technology simply run out of memory and fail.
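
Under the hood, the Hyena paper replaces attention with long convolutions evaluated via the fast Fourier transform, at a cost of roughly n log n rather than n squared. The NumPy sketch below is illustrative only, under that assumption: the function name and the decaying filter are invented for this example, and the real Hyena operator layers implicitly parameterized filters and gating on top of the FFT trick shown here.

```python
import numpy as np

def fft_long_conv(u, h):
    """Causal long convolution via the FFT in O(n log n) time.

    Illustrative sketch only: the actual Hyena operator also uses
    implicitly parameterized filters and data-controlled gating,
    but the FFT route below is what sidesteps the n-by-n matrix.
    """
    n = len(u)
    fft_size = 2 * n                    # zero-pad so the circular FFT
    u_f = np.fft.rfft(u, n=fft_size)    # convolution becomes a plain
    h_f = np.fft.rfft(h, n=fft_size)    # (causal) linear convolution
    return np.fft.irfft(u_f * h_f, n=fft_size)[:n]

# A 65,536-token sequence is fine here because memory stays O(n);
# a 65,536 x 65,536 attention matrix in fp32 would need about 16 GB.
n = 65_536
u = np.random.randn(n)                  # stand-in for one channel of embeddings
h = np.exp(-np.arange(n) / 1024.0)      # a decaying long filter (made up)
y = fft_long_conv(u, h)
print(y.shape)                          # (65536,)
```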

“Our promising results at the sub-billion parameter scale suggest that attention may not be all we need,” write the authors. That remark plays on the title of a landmark 2017 AI paper, ‘Attention Is All You Need’, in which Google scientist Ashish Vaswani and colleagues introduced the world to Google’s Transformer AI program. The Transformer became the basis for every one of the recent large language models.

But the Transformer has a big flaw. It uses something called “attention,” in which the program takes the information in one group of symbols, such as words, and moves it into a new group of symbols, such as the answer you see from ChatGPT, which is the output. Because attention compares every symbol with every other symbol, its cost in compute and memory grows quadratically with the length of the input.
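
For readers who want the mechanics, here is a minimal NumPy sketch of scaled dot-product attention; the variable names are illustrative, not drawn from any particular implementation. The thing to notice is the n-by-n score matrix S, which is where the quadratic cost, and the memory blow-up on long inputs, comes from.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Scaled dot-product attention from 'Attention Is All You Need'.

    S is an n-by-n matrix: every token is compared with every other
    token, so compute and memory grow quadratically with length n.
    """
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                # (n, n) similarity scores
    S = S - S.max(axis=-1, keepdims=True)   # stabilize the softmax
    A = np.exp(S)
    A = A / A.sum(axis=-1, keepdims=True)   # each row sums to 1
    return A @ V                            # weighted mix of the values

n, d = 1024, 64
Q = np.random.randn(n, d)                   # queries, keys, and values
K = np.random.randn(n, d)                   # for n tokens of width d
V = np.random.randn(n, d)
print(naive_attention(Q, K, V).shape)       # (1024, 64)
```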

April 20, 2023

Written by Tiernan Ray

Click to read the entire article on ZDNet
