What are the differences between the Mistral 8x7B Instruct and GPT-3.5 Turbo 1106 LLM models, and which one follows instructions best?
Overview
| | Mistral 8x7B Instruct | GPT-3.5 Turbo 1106 |
|---|---|---|
| Provider — the organization responsible for this model | Mistral | OpenAI |
| Input Context Window — the total number of tokens the input context window can accommodate | 32K | 16K |
| Maximum Output Tokens — the maximum number of tokens this model can produce in one operation | 4.1K | 16K |
| Release Date — the initial release date of the model | December 11, 2023 | November 6, 2023 |
| Knowledge Cutoff — the latest date for which the model's information is considered reliable and current | 2023/12 | |
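The context-window figures above bound how much text you can send in a single request. As a minimal sketch, the check below estimates whether a prompt plus a reserved output budget fits a model's window; the 4-characters-per-token heuristic is an assumption for illustration, not a real tokenizer.

```python
# Illustrative context-window check. The 4-chars-per-token estimate is a
# rough heuristic (an assumption), not the models' actual tokenizers.

CONTEXT_WINDOWS = {
    "mistral-8x7b-instruct": 32_000,  # 32K input context
    "gpt-3.5-turbo-1106": 16_000,     # 16K input context
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(model: str, prompt: str, max_output_tokens: int) -> bool:
    """True if the estimated prompt tokens plus the reserved output
    budget fit inside the model's context window."""
    window = CONTEXT_WINDOWS[model]
    return estimate_tokens(prompt) + max_output_tokens <= window

print(fits_context("gpt-3.5-turbo-1106", "Hello " * 100, 4_000))  # → True
```

For production use you would swap the heuristic for the model's real tokenizer, since actual token counts vary by text and tokenizer.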
Pricing
| | Mistral 8x7B Instruct | GPT-3.5 Turbo 1106 |
|---|---|---|
| Input — costs associated with the data input to the model | $0.70 | $0.00 |
| Output — costs associated with the tokens produced by the model | $0.70 | $0.00 |
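Per-token pricing multiplies out per request. The sketch below assumes the listed prices are USD per 1M tokens (the table does not state the unit explicitly), so treat the figures as illustrative.

```python
# Request-cost sketch, assuming prices are USD per 1M tokens
# (an assumption; the pricing table does not state its unit).

def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float,
                 output_price_per_m: float) -> float:
    """Cost in USD for one request at per-1M-token pricing."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Mistral 8x7B Instruct at $0.70 in / $0.70 out per 1M tokens:
cost = request_cost(10_000, 2_000, 0.70, 0.70)
print(f"${cost:.4f}")  # → $0.0084
```

At these rates, a request with 10K input tokens and 2K output tokens costs under a cent.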
Benchmark
| | Mistral 8x7B Instruct | GPT-3.5 Turbo 1106 |
|---|---|---|
| MMLU — assesses LLMs' ability to apply knowledge in zero-shot and few-shot scenarios | 70.6 | |
| MMMU — comprehensive benchmark covering multiple disciplines and modalities | | |
| HellaSwag — a demanding benchmark for sentence completion tasks | 87.6 | |
| Arena Elo — ranking metric for LMSYS Chatbot Arena | 1114 | 1068 |
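An Arena Elo gap can be read as a head-to-head win expectation using the standard Elo expected-score formula. This is general Elo math, not LMSYS's exact aggregation pipeline, so take the resulting probability as a rough interpretation of the gap.

```python
# Translate an Arena Elo gap into an expected head-to-head score,
# using the standard Elo formula (not LMSYS's exact methodology).

def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Mistral 8x7B Instruct (1114) vs GPT-3.5 Turbo 1106 (1068):
p = elo_expected_score(1114, 1068)
print(round(p, 3))  # → 0.566
```

A 46-point Elo gap therefore corresponds to only a modest edge: roughly a 57% expected win rate, not a decisive advantage.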
5000+ teams use Lunary to build reliable AI applications
Compare more models
Building an AI chatbot?
Open-source GenAI monitoring, prompt management, and magic.
- Open Source
- Self Hostable
- 1-line Integration
- Prompt Templates
- Chat Replays
- Analytics
- Topic Classification
- Agent Tracing
- Custom Dashboards
- Score LLM responses
- PII Masking
- Feedback Tracking