Compare Gemini 1.5 Flash to Gemini Pro

Overview
| | Gemini 1.5 Flash | Gemini Pro |
|---|---|---|
| Provider: Organization responsible for this model. | Google | Google |
| Input Context Window: The total number of tokens the input context window can accommodate. | 1M tokens | 33K tokens |
| Maximum Output Tokens: The maximum number of tokens this model can produce in one operation. | 8.2K tokens | 8.2K tokens |
| Release Date: The initial release date of the model. | May 21st, 2024 | December 2023 |
| Knowledge Cutoff: The latest date for which the information provided is considered reliable and current. | November 2023 | Not specified. |
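The context window and output limits above show up directly when calling the models. Below is a minimal sketch, assuming the google-generativeai Python SDK and an API key in a GOOGLE_API_KEY environment variable (both assumptions, not part of this comparison): it counts prompt tokens against Gemini 1.5 Flash's 1M-token input window and caps generation at the 8.2K-token output limit from the table.

```python
# Minimal sketch, assuming the google-generativeai SDK is installed
# (pip install google-generativeai) and GOOGLE_API_KEY is set.
# Model name and limits follow the Overview table above.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")

prompt = "Summarize the differences between Gemini 1.5 Flash and Gemini Pro."

# Check how much of the ~1M-token input context window the prompt uses.
token_count = model.count_tokens(prompt).total_tokens
print(f"Prompt uses {token_count} of ~1,000,000 input tokens")

# Cap generation at the model's ~8.2K maximum output tokens.
response = model.generate_content(
    prompt,
    generation_config={"max_output_tokens": 8192},
)
print(response.text)
```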
Pricing
| | Gemini 1.5 Flash | Gemini Pro |
|---|---|---|
| Input: Costs associated with the data input to the model. | $0.35 per million tokens | $0.50 per million tokens |
| Output: Costs associated with the tokens produced by the model. | $0.70 per million tokens | $1.50 per million tokens |
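Because both models are priced per million tokens, the cost of a request is a simple linear combination of its input and output token counts. The sketch below uses the rates from the Pricing table; the request size (10,000 input and 1,000 output tokens) is a hypothetical example, not a figure from this page.

```python
# Worked cost example using the per-million-token rates from the Pricing table.
# The request size (10,000 input / 1,000 output tokens) is hypothetical.
RATES_PER_MILLION = {
    "gemini-1.5-flash": {"input": 0.35, "output": 0.70},
    "gemini-pro": {"input": 0.50, "output": 1.50},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request for the given model."""
    rates = RATES_PER_MILLION[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

for model in RATES_PER_MILLION:
    cost = request_cost(model, input_tokens=10_000, output_tokens=1_000)
    print(f"{model}: ${cost:.4f} per request")

# gemini-1.5-flash: 10,000 * $0.35/1M + 1,000 * $0.70/1M = $0.0035 + $0.0007 = $0.0042
# gemini-pro:       10,000 * $0.50/1M + 1,000 * $1.50/1M = $0.0050 + $0.0015 = $0.0065
```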
Benchmark
| | Gemini 1.5 Flash | Gemini Pro |
|---|---|---|
| MMLU: Measures knowledge acquired during pretraining across 57 subjects, evaluated in zero-shot and few-shot settings. | 78.9 | 71.8 |
| MMMU: Comprehensive benchmark covering multiple disciplines and modalities. | 56.1 | 47.9 |
| HellaSwag: A demanding benchmark for commonsense sentence completion tasks. | Not specified. | Not specified. |
| Arena Elo: Ranking metric from the LMSYS Chatbot Arena. | 1231 | 1209 |
Building an AI chatbot?
Open-source GenAI monitoring, prompt management, and magic.
Open Source
Self Hostable
1-line Integration
Prompt Templates
Chat Replays
Analytics
Topic Classification
Agent Tracing
Custom Dashboards
Score LLM responses
PII Masking
Feedback Tracking