Serverless endpoints
Built on top of the Lepton platform, we provide a variety of serverless endpoints for popular open-source models. You can experiment with the models directly on our Built With Lepton page, or use the APIs to integrate them into your own application.
Sample Usage of Mistral-7b with Serverless Endpoints
1 Install dependencies for using LLM Serverless Endpoints
Our LLM Serverless Endpoints are fully compatible with OpenAI's API spec, so you can use the OpenAI Python SDK to call them. To begin, install the OpenAI Python SDK:
pip install -U openai
2 Import dependencies and set up the ENV variables
Point the client at the service hosted by Lepton and pass your API token, and setup is done.
import os
import openai
client = openai.OpenAI(
    base_url="https://mistral-7b.lepton.run/api/v1/",
    api_key="<YOUR_LEPTONAI_TOKEN>",
)
You can find your API token under Dashboard - Settings. More LLM models are available at Serverless Endpoints.
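Rather than hardcoding the token in source code, you may prefer to read it from an environment variable. A minimal sketch (the variable name `LEPTON_API_TOKEN` is our assumption; use whatever name you export in your shell):

```python
import os

# LEPTON_API_TOKEN is an assumed variable name; match whatever you
# export in your shell. Falling back to the placeholder keeps the
# script runnable, but real requests need a real token.
token = os.environ.get("LEPTON_API_TOKEN", "<YOUR_LEPTONAI_TOKEN>")
```

You would then pass `api_key=token` when constructing the client, keeping the secret out of version control.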
3 Make chat completion requests
Now let's make a completion request to the model and see the response.
completion = client.chat.completions.create(
    model="mistral-7b",
    messages=[
        {"role": "user", "content": "say hello"},
    ],
    max_tokens=128,
    stream=True,
)

for chunk in completion:
    if not chunk.choices:
        continue
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="")
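If you want the full reply as a single string rather than printing chunks as they arrive, the loop above can be wrapped in a small helper that concatenates the deltas. A minimal sketch (the helper name `collect_stream` is ours):

```python
def collect_stream(chunks):
    """Concatenate the text deltas from a streamed chat completion.

    Mirrors the printing loop above: skips keep-alive chunks with no
    choices and deltas whose content is None.
    """
    parts = []
    for chunk in chunks:
        if not chunk.choices:
            continue
        content = chunk.choices[0].delta.content
        if content:
            parts.append(content)
    return "".join(parts)
```

Calling `collect_stream(completion)` on the streamed response above would return the model's whole answer at once.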
This is a simple example of making a completion request to the model. As mentioned above, other SOTA models are available as well; check Serverless Endpoints for details or experiment with them on our Built With Lepton page.
Usage and billing
Serverless Endpoints usage is shown under Dashboard - Setting - Billing. Usage is billed by the number of tokens processed.
For the pricing of each model, please refer to Pricing Page.
Rate Limit
The rate limit for the Serverless Endpoints is 10 requests per minute across all models under the Basic Plan. If you need a higher rate limit, please add a payment method under Settings and upgrade to the Standard Plan. If you are looking for a tailored model API service or have any other questions, please contact us.
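If a request exceeds the rate limit, retrying with exponential backoff is a common way to stay within it. A minimal sketch of a generic retry wrapper (the function name and parameters are ours; in practice you would catch the SDK's rate-limit error, e.g. `openai.RateLimitError`, instead of a bare `Exception`):

```python
import time


def with_backoff(call, retries=3, base_delay=1.0):
    """Retry `call` with exponential backoff between attempts.

    Useful when a request hits the per-minute rate limit: waits
    base_delay, then 2x, 4x, ... before each retry, and re-raises
    after the final attempt fails.
    """
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception:  # narrow this to the SDK's rate-limit error in real code
            if attempt == retries:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

For example, `with_backoff(lambda: client.chat.completions.create(...))` would retry a rate-limited request a few times before giving up.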