OpenAI Releases New “Reasoning” AI Model, o1

Image by Ishmael Daro, from Flickr

Reading time: 3 min

  • Written by: Kiara Fabbri, Multimedia Journalist

  • Fact-Checked by: Justyn Newman, Lead Cybersecurity Editor

OpenAI today announced its latest AI model, o1, the first in a new family of “reasoning” models designed to handle complex problems faster and more accurately than previous models.

Alongside o1, the company is also releasing a smaller and more affordable version called o1-mini. This release is being described as a “preview,” signaling that the technology is still in its early stages.

o1 was previously known by the code name “Strawberry,” and, as TechCrunch notes, OpenAI plans to release further models in the series.

Jerry Tworek, OpenAI’s research lead, told The Verge that the training behind o1 differs significantly from previous models, though the company has been unclear about the specifics.

Unlike its predecessors, which were designed to mimic patterns from training data, o1 uses reinforcement learning, a method that teaches the system to solve problems through rewards and penalties.
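
OpenAI has not published the specifics of how o1 was trained, so the following is only a toy Python sketch of the general reward-and-penalty idea, with all names and numbers invented for illustration: a simulated grader scores each attempt, and the learner gradually shifts toward the strategy that earns more reward.

    import random

    # Toy illustration only, not OpenAI's actual training setup.
    # The "agent" learns which of two answer strategies earns more reward.
    values = {"strategy_a": 0.0, "strategy_b": 0.0}   # learned value estimates
    counts = {"strategy_a": 0, "strategy_b": 0}

    def attempt(strategy):
        """Simulated grader: strategy_b solves the problem more often."""
        success_rate = 0.3 if strategy == "strategy_a" else 0.8
        return 1.0 if random.random() < success_rate else -1.0  # reward or penalty

    for step in range(1000):
        # Mostly exploit the best-looking strategy, occasionally explore.
        if random.random() < 0.1:
            choice = random.choice(list(values))
        else:
            choice = max(values, key=values.get)
        reward = attempt(choice)
        counts[choice] += 1
        # Nudge the running estimate toward the observed reward.
        values[choice] += (reward - values[choice]) / counts[choice]

    print(values)  # strategy_b ends up with the clearly higher value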

One of the most striking features of o1 is its ability to solve multi-step problems and write code with a higher degree of accuracy compared to earlier models.

For instance, in a qualifying exam for the International Mathematical Olympiad (IMO), o1 outperformed OpenAI’s previous GPT-4o model, solving 83% of the problems compared to GPT-4o’s 13%.

This leap in performance is attributed to o1’s new training process, which incorporates what OpenAI refers to as a “chain of thought” mechanism, allowing the model to break down and solve problems step-by-step.
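
That chain of thought is not shown to users, and calling the model looks much like calling earlier models. The sketch below assumes OpenAI’s official Python SDK and API access to the o1-preview model mentioned later in this article; the model works through the intermediate steps internally, and only the final answer comes back.

    from openai import OpenAI

    # Minimal sketch, assuming the official openai Python SDK and API access
    # to o1-preview. The step-by-step reasoning happens inside the model;
    # the response contains only the final answer.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o1-preview",
        messages=[
            {
                "role": "user",
                "content": "A train leaves at 9:40 and covers 150 km at 60 km/h. "
                           "When does it arrive?",
            }
        ],
    )

    print(response.choices[0].message.content)  # multi-step word problem -> "12:10"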

Tworek notes that while the model has reduced its tendency to “hallucinate,” or provide inaccurate information, the issue has not been entirely resolved, as reported by The Verge.

In addition to its mathematical prowess, o1 ranked in the 89th percentile in competitive programming contests, demonstrating its potential as a tool for developers and researchers alike.

According to The Verge, what stood out was how intentionally o1 seemed to mimic human-like thought. Phrases like “I’m curious,” “I’m thinking,” and “Let me see” gave the illusion of a thought process, even though the model isn’t actually thinking. So why make it seem as if it is?

The Verge reports that, according to Tworek, this interface is meant to show how the model takes more time to process and explore problems in greater depth.

Despite these advancements, OpenAI acknowledges that o1 still has limitations. While it excels in complex problem-solving, it is less adept at handling factual knowledge about the world. Additionally, it lacks some of the features that make GPT-4o highly versatile, such as the ability to browse the web or process files and images.

Moreover, o1’s new capabilities come at a cost. The model is significantly more expensive to use than GPT-4o. On OpenAI’s API, o1-preview costs $15 per 1 million input tokens and $60 per 1 million output tokens, which is three to four times higher than the cost of GPT-4o.
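
To get a rough sense of what those rates mean per request, the back-of-the-envelope calculation below uses only the prices quoted above; real bills can run higher, since the tokens o1 spends on its hidden reasoning also count as output.

    # Back-of-the-envelope cost estimate using the o1-preview API prices
    # quoted above: $15 per 1M input tokens, $60 per 1M output tokens.
    INPUT_PRICE = 15 / 1_000_000    # USD per input token
    OUTPUT_PRICE = 60 / 1_000_000   # USD per output token

    def o1_preview_cost(input_tokens, output_tokens):
        """Estimated cost in USD of a single o1-preview request."""
        return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

    # Example: a 2,000-token prompt that produces 5,000 output tokens
    print(f"${o1_preview_cost(2_000, 5_000):.2f}")  # -> $0.33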

While the technology is still in its early stages, the release of o1 highlights ongoing advancements in AI’s ability to tackle complex tasks, offering potential benefits across various fields that require advanced problem-solving capabilities.
