OpenAI faces diminishing returns with latest AI model

OpenAI is facing diminishing returns with its latest AI model while navigating the pressures of recent investments.

According to The Information, OpenAI’s next AI model – codenamed Orion – is delivering smaller performance gains compared to its predecessors.

In employee testing, Orion reportedly achieved the performance level of GPT-4 after completing just 20% of its training. However, the transition from GPT-4 to the anticipated GPT-5 is said to exhibit smaller quality improvements than the leap from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” the report states. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

Early stages of AI training usually yield the most significant improvements, while subsequent phases typically result in smaller performance gains. Consequently, the remaining 80% of training is unlikely to deliver advancements on par with previous generational improvements.
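To make the shape of those diminishing returns concrete, here is a minimal sketch assuming a hypothetical power-law loss curve, a shape commonly reported for LLM training runs. The toy_loss function and all of its constants are invented for illustration and do not reflect Orion’s actual metrics:

```python
# Illustrative only: a toy power-law loss curve, NOT OpenAI's actual
# training data. Assumes loss falls as L(t) = a * t^(-b) + c, a common
# empirical shape for LLM training curves.

def toy_loss(t: float, a: float = 2.0, b: float = 0.35, c: float = 1.0) -> float:
    """Hypothetical loss after completing fraction t of training (0 < t <= 1)."""
    return a * t**-b + c

start, early, full = toy_loss(0.01), toy_loss(0.20), toy_loss(1.0)
gain_early = start - early  # improvement during the first 20% of training
gain_late = early - full    # improvement during the remaining 80%

print(f"first 20% of training: {gain_early:.2f} loss reduction")
print(f"final 80% of training: {gain_late:.2f} loss reduction")
```

Under these made-up constants, the first fifth of the run removes several times more loss than the remaining four-fifths combined, which is exactly the dynamic the report describes: matching GPT-4 at 20% of training does not imply a GPT-3-to-GPT-4-sized leap by the end.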

The slowdown comes at a pivotal time for OpenAI, following a recent funding round in which the company raised $6.6 billion. With that financial backing come heightened investor expectations, as well as technical challenges that complicate traditional scaling approaches in AI development.

If Orion does not meet expectations when it eventually launches, OpenAI’s future fundraising efforts may not attract the same level of interest.

The limitations highlighted in the report underline a significant challenge confronting the entire AI industry: the diminishing availability of high-quality training data and the necessity to maintain relevance in an increasingly competitive field.

According to a paper (PDF) published in June, AI firms will deplete the pool of publicly available human-generated text data between 2026 and 2032. The Information notes that developers have “largely squeezed as much out of” the data that has enabled the rapid AI advancements of recent years.

To address these challenges, OpenAI is fundamentally rethinking its AI development strategy.

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” explains The Information.

As OpenAI navigates these challenges, the company must balance innovation with practical application and investor expectations. However, the ongoing exodus of leading figures from the company won’t help matters.

(Photo by Jukan Tateisi)

See also: ASI Alliance launches AIRIS that ‘learns’ in Minecraft

