What are the differences between the Vicuna 13B and Gemini 1.5 Flash LLM models, and which one follows instructions best?
Overview
| | Vicuna 13B | Gemini 1.5 Flash |
|---|---|---|
| Provider (organization responsible for the model) | LMSYS | Google |
| Input context window (total number of tokens the input context can accommodate) | 2K | 1M |
| Maximum output tokens (most tokens the model can produce in one response) | Not specified | 8.2K |
| Release date (initial release of the model) | March 30, 2023 | May 24, 2024 |
| Knowledge cutoff (latest date for which the model's information is considered reliable and current) | July 2023 | |
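
The most practical difference in the table above is the context window: 2K tokens versus 1M tokens. As a rough, vendor-agnostic illustration, the sketch below uses the common approximation of about 4 characters per token to check whether a prompt is likely to fit; the `CONTEXT_WINDOWS` values, the 4-characters-per-token ratio, and the `fits_context` helper are illustrative assumptions, not part of either model's API.

```python
# Rough context-window budget check (illustrative only).
# Assumes ~4 characters per token, a common rule of thumb;
# real tokenizers vary by model and language.

CONTEXT_WINDOWS = {
    "vicuna-13b": 2_048,            # roughly the 2K window listed above
    "gemini-1.5-flash": 1_000_000,  # roughly the 1M window listed above
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(text: str, model: str, reserved_for_output: int = 512) -> bool:
    """Check whether the prompt plus an output budget fits the model's window."""
    return estimate_tokens(text) + reserved_for_output <= CONTEXT_WINDOWS[model]

prompt = "Summarize the attached report." + " lorem ipsum" * 2_000
print(fits_context(prompt, "vicuna-13b"))        # False: ~6K estimated tokens
print(fits_context(prompt, "gemini-1.5-flash"))  # True: far below 1M
```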
Pricing
| | Vicuna 13B | Gemini 1.5 Flash |
|---|---|---|
| Input (cost per 1M input tokens) | Not specified | $0.35 |
| Output (cost per 1M output tokens) | Not specified | $0.70 |
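
To make the per-million-token prices concrete, here is a minimal sketch of the cost arithmetic for a single request at the listed Gemini 1.5 Flash rates. It assumes the $0.35 and $0.70 figures apply uniformly per 1M tokens; real pricing may be tiered by prompt length and can change over time.

```python
# Back-of-the-envelope request cost at the rates listed in the pricing table.
# Assumes flat per-1M-token pricing; actual billing may differ.

INPUT_PRICE_PER_M = 0.35   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.70  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request given input and output token counts."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt that returns a 1,000-token answer.
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0042
```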
Benchmark
| | Vicuna 13B | Gemini 1.5 Flash |
|---|---|---|
| MMLU (multitask knowledge and reasoning across 57 subjects, zero-shot and few-shot) | 52.1 | 78.9 |
| MMMU (comprehensive benchmark covering multiple disciplines and modalities) | Not specified | 56.1 |
| HellaSwag (demanding sentence-completion benchmark) | Not specified | Not specified |
| Arena Elo (ranking metric for the LMSYS Chatbot Arena) | 1041 | 1231 |
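
The Arena Elo gap can be translated into an expected head-to-head win rate with the standard Elo expectation formula. The snippet below applies that formula to the two ratings above; it is a generic Elo calculation, not a result published by LMSYS for this specific pairing.

```python
# Convert an Elo rating gap into an expected win probability using the
# standard Elo expectation formula: E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400)).

def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

gemini_elo, vicuna_elo = 1231, 1041
print(f"{expected_win_rate(gemini_elo, vicuna_elo):.1%}")  # ~74.9% expected win rate for Gemini 1.5 Flash
```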