What are the differences between OpenChat 3.5 0106 and Mixtral 8x22b Instruct v0.1, and which of the two LLMs follows instructions better? The comparison below covers their providers, context windows, pricing, and benchmark results.
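One practical way to probe the instruction-following question is to send both models the same constrained prompt through an OpenAI-compatible endpoint and compare the replies side by side. A minimal sketch follows; the base URL and model identifiers are placeholders for whichever provider hosts these models for you.

```python
# Side-by-side instruction-following check: send the same prompt to both
# models through an OpenAI-compatible endpoint and compare the outputs.
# The base_url and model identifiers below are placeholders -- substitute
# the provider that actually serves these models for you.
from openai import OpenAI

client = OpenAI(base_url="https://example-llm-provider.com/v1", api_key="YOUR_KEY")

PROMPT = (
    "List three uses of a paperclip. "
    "Answer with exactly three bullet points and no introduction."
)

for model in ("openchat/openchat-3.5-0106", "mistralai/Mixtral-8x22B-Instruct-v0.1"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=200,
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
```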
Overview
| | OpenChat 3.5 0106 | Mixtral 8x22b Instruct v0.1 |
|---|---|---|
| Provider (organization responsible for the model) | OpenChat | Mistral |
| Input Context Window (total number of tokens the input context can hold) | 8.2K | 64K |
| Maximum Output Tokens (maximum number of tokens the model can produce in one response) | 4.1K | Not specified. |
| Release Date (initial release of the model) | November 1, 2023 | Not specified. |
| Knowledge Cutoff (latest date for which the model's information is considered reliable and current) | 2024/1 | Not specified. |
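The context-window and output-token limits above cap how much text a single request can carry. Below is a minimal pre-flight check, assuming the 8.2K and 64K figures correspond to roughly 8,192 and 65,536 tokens and using a crude 4-characters-per-token estimate; for exact counts you would run each model's own tokenizer.

```python
# Rough check that a prompt plus the requested completion fits in each
# model's context window. The 8,192 / 65,536 limits are the table's 8.2K
# and 64K figures; the 4-characters-per-token ratio is only a heuristic.

CONTEXT_WINDOW = {
    "openchat-3.5-0106": 8_192,
    "mixtral-8x22b-instruct-v0.1": 65_536,
}

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits(model: str, prompt: str, max_output_tokens: int = 1_024) -> bool:
    """True if the prompt plus the requested completion should fit in the window."""
    return estimate_tokens(prompt) + max_output_tokens <= CONTEXT_WINDOW[model]

prompt = "Summarize the following report:\n" + "lorem ipsum " * 3_000
for model in CONTEXT_WINDOW:
    print(model, "fits" if fits(model, prompt) else "does not fit")
```

With this example prompt, the estimate exceeds OpenChat 3.5 0106's 8.2K window but fits comfortably within Mixtral's 64K window.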
Pricing
| | OpenChat 3.5 0106 | Mixtral 8x22b Instruct v0.1 |
|---|---|---|
| Input (cost per 1M input tokens) | Not specified. | $2.00 |
| Output (cost per 1M output tokens) | Not specified. | $6.00 |
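With per-token pricing, a request's cost is just a weighted sum of its input and output tokens. The sketch below estimates a single-request cost for Mixtral 8x22b Instruct v0.1 from the table's $2.00 and $6.00 per-1M-token figures; OpenChat 3.5 0106 pricing is not listed, so it is omitted.

```python
# Back-of-the-envelope request cost for Mixtral 8x22b Instruct v0.1,
# using the table's $2.00 / $6.00 per-1M-token input / output rates.

INPUT_PRICE_PER_M = 2.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 6.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 3,000-token prompt with a 500-token reply.
print(f"${request_cost(3_000, 500):.4f}")  # -> $0.0090
```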
Benchmark
| | OpenChat 3.5 0106 | Mixtral 8x22b Instruct v0.1 |
|---|---|---|
| MMLU (multitask knowledge and reasoning in zero-shot and few-shot settings) | 65.8 | 77.8 |
| MMMU (multi-discipline, multimodal understanding) | Not specified. | Not specified. |
| HellaSwag (commonsense sentence completion) | Not specified. | Not specified. |
| Arena Elo (ranking on the LMSYS Chatbot Arena) | 1091 | 1147 |
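Arena Elo scores are easiest to read as expected head-to-head win rates: under the standard Elo formula, the 56-point gap between 1147 and 1091 corresponds to roughly a 58% chance that Mixtral's answer is preferred in a random comparison. A small sketch of that calculation:

```python
# Interpreting the Arena Elo gap: under the standard Elo model, the
# expected probability that the higher-rated model wins a head-to-head
# comparison is 1 / (1 + 10 ** ((r_low - r_high) / 400)).

def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that model A is preferred over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

p = expected_win_rate(1147, 1091)  # Mixtral 8x22b vs. OpenChat 3.5 0106
print(f"{p:.1%}")  # ~58.0% of head-to-head votes expected to favor Mixtral
```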