Integration with Vercel

This guide shows how to integrate Lepton AI with Vercel, allowing you to call Public Models or your own models from your Vercel projects.

Installation and Configuration

  1. Go to the Vercel Integration page and click the Add Integration button.
  2. Select your Vercel account and the project(s) you want to integrate, then click the Install button.
  3. In the opened installation page, select your Lepton workspace and click the Continue button.
  4. Select the project(s) you want to connect and click the Install button.
  5. Afterwards, you will be redirected to the integration page, where you can connect or disconnect projects by clicking the Configure button. You can also manage project access by clicking the Manage Access button.

How Integration Works

We will create an environment variable LEPTON_API_TOKEN in each connected Vercel project, containing your Lepton API token. You can use this environment variable in your project to call Public Models or your own models.
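
For example, any server-side code in the project can read the token like a normal environment variable. A minimal sketch (the helper file and its name are illustrative, not created by the integration):

// lib/lepton.ts (hypothetical helper)
// Read the token provided by the integration (or by .env.local, see below).
const token = process.env.LEPTON_API_TOKEN;

if (!token) {
  // Fail fast if the variable is missing, e.g. in a misconfigured environment.
  throw new Error('LEPTON_API_TOKEN is not set');
}

export const leptonApiToken = token;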

Guide: LLM Chatbot

Here is an example of how to use the LLM API from Public Models in a Vercel project with Next.js. All source code is in this GitHub repository.

Deploy on Vercel

The easiest way to do this is to click on the Deploy button below and follow the step-by-step instructions to create the project and add the integration.

Deploy with Vercel

Manual local setup

1. Create a Next.js project

npx create-next-app@latest nextjs-with-lepton
cd nextjs-with-lepton

2. Install Dependencies

npm install ai openai

Note: the example below relies on the OpenAIStream and StreamingTextResponse helpers from the ai package, which are available in v2 and v3; newer major versions replace them with a different streaming API.

3. Add your Lepton API token to the .env.local file

For local development, create a .env.local file in the project root and add the following (in deployed projects, the integration sets this variable for you):

LEPTON_API_TOKEN=your_lepton_api_token

You can get your Lepton API token from the Lepton AI Dashboard.

4. Create API route handler and page

Two files are needed: the route handler at app/api/chat/route.ts and the chat page at app/page.tsx. First, the route handler:

import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const runtime = 'edge';

const openai = new OpenAI({
  // Use the LEPTON_API_TOKEN environment variable
  apiKey: process.env.LEPTON_API_TOKEN!,
  // Use the OpenAI-compatible LLM endpoint provided by Lepton AI
  baseURL: 'https://mixtral-8x7b.lepton.run/api/v1/',
});

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json();

  // Request a streaming completion from the Lepton endpoint based on the messages
  const response = await openai.chat.completions.create({
    model: 'mixtral-8x7b',
    stream: true,
    messages: messages,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
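
Then app/page.tsx. The full page is in the GitHub repository linked above; the sketch below shows the core pattern, using the useChat hook from the ai package (it posts to /api/chat by default):

'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  // useChat manages the message list, the input state, and the POST to /api/chat
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <main>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
      </form>
    </main>
  );
}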

5. Start the development server

npm run dev

Open http://localhost:3000 with your browser to see the result.
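
You can also exercise the API route directly. For example, this request (assuming the dev server is running on port 3000) streams the model's reply back as plain text:

curl http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'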
