What are the differences between the GPT-4o and Claude 3 Opus LLM models, and which follows instructions best?
Overview
| | GPT-4o | Claude 3 Opus |
|---|---|---|
| Provider (organization responsible for this model) | OpenAI | Anthropic |
| Input context window (total number of tokens the input context can accommodate) | 128K | 200K |
| Maximum output tokens (maximum number of tokens the model can produce in one operation) | 2K | 4.1K |
| Release date (initial release date of the model) | May 12, 2024 | March 4, 2024 |
| Knowledge cutoff (latest date for which the model's information is considered reliable and current) | 2023/10 | |
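For reference, the sketch below shows how these limits come into play when calling each model. It is a minimal example, assuming the official `openai` and `anthropic` Python SDKs, API keys in the usual environment variables, and the `gpt-4o` / `claude-3-opus-20240229` model identifiers; adjust these to your setup and keep `max_tokens` within each model's maximum output limit from the table above.

```python
# Minimal sketch (assumes the official `openai` and `anthropic` Python SDKs
# and API keys in OPENAI_API_KEY / ANTHROPIC_API_KEY; model IDs may change).
from openai import OpenAI
import anthropic

prompt = "Summarize the difference between a context window and maximum output tokens."

# GPT-4o: 128K-token context window; cap the reply with max_tokens.
openai_client = OpenAI()
gpt4o_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,  # must stay within the model's maximum output tokens
)

# Claude 3 Opus: 200K-token context window; max_tokens is a required field.
anthropic_client = anthropic.Anthropic()
opus_reply = anthropic_client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

print(gpt4o_reply.choices[0].message.content)
print(opus_reply.content[0].text)
```

Running the same prompt through both models side by side like this is also a simple way to judge for yourself which one follows your instructions more faithfully.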
Pricing
| | GPT-4o | Claude 3 Opus |
|---|---|---|
| Input (cost of the tokens sent to the model) | $0.01 | $15.00 |
| Output (cost of the tokens produced by the model) | $0.02 | $75.00 |
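As a rough illustration of how these prices translate into a per-request cost, the sketch below multiplies token counts by the listed prices. The table does not state the token unit behind each figure, so the unit is left as a parameter (an assumption) that you should set from each provider's pricing page.

```python
# Minimal cost-estimate sketch. The table above does not state the token unit
# for each price, so unit_tokens is an assumption (commonly per 1K or per 1M).
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float,
                  unit_tokens: int) -> float:
    """Return the dollar cost of one request."""
    return (input_tokens / unit_tokens) * input_price + \
           (output_tokens / unit_tokens) * output_price

# Example: 10K input tokens and 1K output tokens against Claude 3 Opus,
# assuming its listed $15.00 / $75.00 prices are per 1M tokens.
print(estimate_cost(10_000, 1_000, 15.00, 75.00, unit_tokens=1_000_000))
# -> 0.225  ($0.15 for input + $0.075 for output)
```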
Benchmark
| | GPT-4o | Claude 3 Opus |
|---|---|---|
| MMLU (assesses knowledge in zero-shot and few-shot settings) | 88.7 | 88.2 |
| MMMU (comprehensive benchmark covering multiple disciplines and modalities) | 69.1 | 59.4 |
| HellaSwag (demanding sentence-completion benchmark) | 95.4 | |
| Arena Elo (LMSYS Chatbot Arena ranking) | 1287 | 1251 |
5000+ teams use Lunary to build reliable AI applications
Building an AI chatbot?
Open-source GenAI monitoring, prompt management, and magic.
- Open Source
- Self Hostable
- 1-line Integration
- Prompt Templates
- Chat Replays
- Analytics
- Topic Classification
- Agent Tracing
- Custom Dashboards
- Score LLM responses
- PII Masking
- Feedback Tracking