
When does it make sense to use o1 vs GPT-4o?

Sep 17, 2024.

OpenAI has announced its latest innovation: the o1 series of models. These aren't just another incremental update; they represent a significant step forward in AI's ability to reason and solve complex problems.

There are two reasoning models available at the moment:

  • o1-preview: Designed to reason about hard problems using broad general knowledge about the world.
  • o1-mini: A faster and cheaper version of o1, particularly for coding, math, and science tasks where extensive general knowledge isn't required.

Let's explore what makes these models special and how they perform on advanced reasoning.

What sets o1 models apart?

The o1 models are designed with a unique approach where they think before they answer. Unlike previous models that might rush to provide a response, o1 models take their time to process information, consider multiple angles, and arrive at well-reasoned conclusions. This approach mirrors human problem-solving more closely.

The o1 models employ a system of reasoning tokens, which simulates the process of thinking before responding.

When presented with a prompt, the model generates a series of reasoning tokens. These tokens represent the model's internal thought process and allow it to explore multiple solution strategies, weighing the pros and cons of each approach.

After this internal deliberation, the model produces its final answer in the form of visible completion tokens. Once the response is generated, the reasoning tokens are discarded. This process mimics how humans work through each step of their thinking before giving an answer.
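To make the accounting concrete, here is a toy sketch of how reasoning tokens fit into a usage report. `bill_completion` is a hypothetical helper, not part of any SDK; the dictionary shape loosely mirrors what the API reports, where hidden reasoning tokens are counted as completion (output) tokens even though they never appear in the response.

```python
def bill_completion(visible_tokens: int, reasoning_tokens: int) -> dict:
    """Summarize token usage for one o1-style completion.

    Reasoning tokens are generated, counted as output, and then
    discarded from the visible response.
    """
    return {
        # Total billed output = visible answer + hidden reasoning.
        "completion_tokens": visible_tokens + reasoning_tokens,
        "completion_tokens_details": {"reasoning_tokens": reasoning_tokens},
        # Only this portion is ever shown to the user.
        "visible_tokens": visible_tokens,
    }

usage = bill_completion(visible_tokens=200, reasoning_tokens=1500)
print(usage["completion_tokens"])  # 1700
```

The key takeaway: a short visible answer can still consume many output tokens if the model reasoned at length before responding.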

Reasoning Tokens

The diagram illustrates the flow of information through the o1 model over three conversational turns. Each turn consists of three key components: input, reasoning, and output.

Turn 1: Initiating the Conversation

  • Input: The model receives an initial prompt or question.
  • Reasoning: The o1 model engages its internal reasoning process, analyzing the input and formulating a response.
  • Output: The model generates its response based on the input and its reasoning.

Turn 2: Building on Previous Context

  • Input: A new input arrives, building upon the context established in Turn 1.
  • Reasoning: The model reasons over this new input while considering the previous turn's information.
  • Output: Another response is generated, maintaining continuity with the ongoing conversation.

Turn 3: Managing Token Limits

  • Input: The conversation continues with another input.
  • Reasoning: The model reasons over this input, considering all previous context.
  • Output: The model generates a response, but now we see a crucial new element: truncated output.

A dashed line represents the context window, which for o1 models is set at 128,000 tokens. This window plays a critical role in managing the conversation's flow and the model's memory.

As the conversation progresses, each input, reasoning process, and output contributes to the total token count. The model maintains this history to provide context for future responses. Once the token limit is approached, the model begins to truncate older parts of the conversation to make room for new information.

Accessing o1 models

For Individual Users

  • ChatGPT Plus and Teams Subscribers: Both o1-preview and o1-mini are available. Simply navigate to the model picker in your ChatGPT interface and select your preferred o1 variant.

  • ChatGPT Free Users: While not initially available, OpenAI has announced plans to democratize access to o1-mini for all free users in the future. Stay tuned for updates on this exciting development!

For Organizations

  • ChatGPT Enterprise and Education Customers: Both o1 models are integrated into your existing OpenAI suite. This rollout ensures that businesses and educational institutions can leverage o1's advanced reasoning capabilities in their operations and research.

For Developers

  • APIs: OpenAI has made o1-preview and o1-mini available to developers through the OpenAI API.

  • Third Party Access: For those preferring alternative ecosystems, o1 models are also accessible through reputable third-party services. Notable platforms include Microsoft Azure AI Studio and GitHub Models, offering additional integration options with existing workflows.
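A minimal request sketch for developers, assuming the OpenAI Python SDK. `build_o1_request` is a hypothetical helper for illustration; note that at launch the o1 models did not support system messages, streaming, or custom temperature, so the payload is just a plain user message.

```python
def build_o1_request(prompt: str, model: str = "o1-preview") -> dict:
    """Build a Chat Completions payload for an o1 model.

    o1 takes a single user message; no system role at launch.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_o1_request("How many primes are there below 100?")

# To actually send it (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**payload)
# print(response.choices[0].message.content)
```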

Comparison Table: o1 vs GPT-4o

This table provides a side-by-side comparison of key features and capabilities of o1 and GPT-4o. It's important to note that while this comparison highlights significant differences, both models are powerful tools with their own strengths and ideal use cases.

| Feature | o1 | GPT-4o |
| --- | --- | --- |
| Architecture | Transformer-based with reasoning tokens | Transformer-based |
| Primary Strength | Advanced reasoning, especially in STEM fields | Broad knowledge and general language tasks |
| Reasoning Approach | Structured reasoning with "thinking" phase | Pattern recognition and statistical correlations |
| Performance in Math (IMO qualifying exam) | 83% accuracy | 13% accuracy |
| Coding Ability (Codeforces percentile) | 89th percentile | Not specified |
| Safety Measure (Jailbreak test score) | 84/100 | 22/100 |
| Specialized Capabilities | Excels in logical reasoning and multi-step problem-solving | Wide range of capabilities across various domains |
| Context Window | 128,000 tokens | 32,000 tokens |
| Handling of Uncertainty | Designed to recognize and communicate uncertainties | May provide confident answers even with uncertainty |
| Transparency of Thought Process | Reasoning tokens provide insight into "thought process" | Internal processes largely opaque |
| Multimodal Capabilities | Currently text-only | Text and image inputs |
| Availability | ChatGPT Plus, Team, Enterprise, and API (with limitations) | Widely available across OpenAI's platforms |

How to prompt o1 for better results?

Because the o1 models are designed to reason before responding, prompts work differently than they did with GPT-4o and similar models. The o1 models perform better when prompts are clear and specific.

With earlier models, we often wrote prompts in general language and appended instructions like "Explain your answer" or "Show calculations." With o1 models, those instructions are not needed, because the model already reasons and explains itself. Removing such general instructions can in fact improve the o1 model's performance.

In prompts, include only the relevant information, without general statements, to prevent the model from overthinking and generating an overcomplicated response.

If your prompt has several sections or different kinds of input, use delimiters to separate those parts. This helps the model understand the structure of the input and generate a more coherent response.
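One way to apply delimiters is to assemble the prompt programmatically. The XML-style tag names below are an arbitrary choice for illustration; any consistent delimiter scheme (triple quotes, section headers, etc.) works just as well.

```python
def build_prompt(instructions: str, context: str, question: str) -> str:
    """Join distinct prompt parts with explicit delimiters so the
    model can tell instructions, context, and question apart."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n\n"
        f"<context>\n{context}\n</context>\n\n"
        f"<question>\n{question}\n</question>"
    )

prompt = build_prompt(
    instructions="Answer using only the context below.",
    context="Revenue grew 12% year over year in Q2.",
    question="What was the Q2 revenue growth?",
)
print(prompt)
```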

Let's take an example to understand what prompt works better and how to iteratively improve the prompts.

For GPT-4o and other general models, we might write a prompt like this:

Write a function to efficiently calculate the Fibonacci sequence.
Comment the code liberally to explain what each piece does and why it's written that way.

But for o1 models, we can write a prompt like this:

Write a function to calculate the Fibonacci sequence and comment the code liberally.

See the difference? We don't need to ask for an explanation of the code, because the model already reasons and explains itself.
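For reference, the kind of output the shorter prompt above might elicit looks something like this. It is one possible implementation written here for illustration; the model's actual response will vary.

```python
def fibonacci(n: int) -> list[int]:
    """Return the first n Fibonacci numbers."""
    # Seed the sequence with its first two values.
    sequence = [0, 1]
    # Each new term is the sum of the previous two, so we only
    # ever need the tail of the list to extend it.
    while len(sequence) < n:
        sequence.append(sequence[-1] + sequence[-2])
    # Slice in case n < 2, so we never return more than n values.
    return sequence[:n]

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```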

The future of reasoning in LLMs

The future of LLMs is moving beyond just text generation into the realm of true reasoning. With models like OpenAI's o1, we are seeing the early steps toward AI that can think in more human-like ways. This has the potential to revolutionize industries, improve daily life, and even enhance our own problem-solving skills.

A user recently launched g1, an experimental open-source prototype built on Llama 3.1 70B that produces o1-like reasoning chains. While o1 uses reinforcement learning for advanced reasoning, g1 demonstrates how clever prompting can enhance an existing model's logic.

This is just the start, and we can expect more capable reasoning models in the future.

FAQs

Q: Is ChatGPT o1 Free?

At the moment, OpenAI o1 models are only available on ChatGPT Paid tiers and for Usage Tier 5 API customers. OpenAI has plans to bring access to o1 models on Free tiers at a later time.

Q: What are the limitations of o1 models?

Currently, o1 models lack some features like web browsing and image processing that are available in other models.

Q: Is o1 multimodal?

No, o1 is currently text-only.

Q: How much does o1 cost for API usage?

  • $15.00 per 1 million input tokens
  • $60.00 per 1 million output tokens
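Using the rates above, per-request cost is easy to estimate. One caveat, noted earlier in the post: the hidden reasoning tokens count as output tokens, so the example figures below assume they are billed at the output rate.

```python
INPUT_RATE = 15.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 60.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, visible_tokens: int,
                  reasoning_tokens: int) -> float:
    """Estimate the dollar cost of one o1-preview request,
    counting reasoning tokens at the output rate."""
    output_tokens = visible_tokens + reasoning_tokens
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# 1,000 prompt tokens, 500 visible answer tokens, 4,500 reasoning tokens:
print(f"${estimate_cost(1_000, 500, 4_500):.3f}")  # $0.315
```

Notice that reasoning tokens dominate the bill here even though the visible answer is short.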

Q: What measures has OpenAI taken to ensure o1's safety?

OpenAI has implemented enhanced safety training, rigorous testing, and collaborations with government AI safety institutes.


The introduction of OpenAI's o1 models marks a significant milestone in the development of artificial intelligence. As we continue to explore and harness the power of these models, we're not just advancing technology; we're reimagining the future of human-AI collaboration. The question now is not just what these models can do, but how we'll use them to shape a better, smarter future for all.
