Mixtral 8x7b
32K context
Description
The Mixtral 8x7b Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts (SMoE) model.
Pricing
- Dedicated Endpoints: Priced by instance type and number of GPUs; you can find the details on the pricing page. You can also contact us to reserve GPUs.
- Serverless Endpoints: $0.50 per million tokens for Mixtral 8x7b, billed pay-as-you-go.
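To make the pay-as-you-go rate concrete, here is a minimal sketch of the serverless cost at the listed rate of $0.50 per million tokens (the helper function is illustrative, not part of any Lepton SDK):

```python
PRICE_PER_MILLION_TOKENS = 0.50  # USD, the listed Mixtral 8x7b serverless rate

def serverless_cost(tokens: int) -> float:
    """Return the pay-as-you-go cost in USD for a given token count."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(serverless_cost(250_000))    # 0.125 -> a quarter-million tokens costs 12.5 cents
print(serverless_cost(3_000_000))  # 1.5   -> three million tokens costs $1.50
```

Note that both prompt and generated tokens typically count toward usage, so budget for the full round trip.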
Create a Dedicated Endpoint
Beyond the serverless endpoints, Lepton provides a simple way to create a dedicated endpoint for Mixtral 8x7b: a fully managed endpoint for your own use cases. If this model is what you are looking for, head over to our dashboard to create your endpoint.
Playground
The playground exposes the following generation settings:
- System prompt: instructions prepended to the conversation to steer the model's behavior.
- Temperature: controls sampling randomness; lower values make output more deterministic.
- Max tokens: the maximum number of tokens the model may generate in its response.
- Top P: nucleus sampling; the model samples only from the tokens within this cumulative probability mass.
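The playground settings map onto standard sampling parameters. As an illustration, here is a hypothetical request payload in the common OpenAI-style chat completions shape; the model identifier and field names are assumptions for the sketch, so check the API documentation for the exact schema:

```python
import json

# Hypothetical payload sketch; field names follow the common chat-completions
# convention and are not taken from Lepton's documented API.
payload = {
    "model": "mixtral-8x7b",  # assumed model identifier
    "messages": [
        # "System prompt" in the playground becomes the system message here.
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain mixture-of-experts in one sentence."},
    ],
    "temperature": 0.7,  # sampling randomness (playground: Temperature)
    "max_tokens": 256,   # cap on generated tokens (playground: Max tokens)
    "top_p": 0.9,        # nucleus sampling cutoff (playground: Top P)
}

print(json.dumps(payload, indent=2))
```

In practice you would POST this payload, with an API key, to the serverless or dedicated endpoint URL shown in your dashboard.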