Lepton AI
Lepton enables developers and enterprises to run AI applications efficiently, in minutes, and at production-ready scale. With Lepton you can:
- Build models in a Python-native way, without needing to learn containers or Kubernetes (see the sketches after this list),
- Debug and test models locally, and deploy them to the cloud with a single command,
- Consume models in any application with a simple, flexible API,
- Choose from heterogeneous hardware to best suit each application,
- Scale horizontally to handle large workloads.
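To give a taste of the Python-native workflow, here is a minimal sketch of a Photon, the SDK's building block for turning plain Python code into a service. It is illustrative rather than definitive, using the `Photon` class and `Photon.handler` decorator from the `leptonai` SDK:

```python
# counter.py -- an illustrative Photon sketch in the spirit of the Quickstart.
from leptonai.photon import Photon


class Counter(Photon):
    # init() runs once when the deployment starts up.
    def init(self):
        self.count = 0

    # Each handler method becomes an HTTP endpoint of the deployment.
    @Photon.handler
    def add(self, x: int) -> int:
        self.count += x
        return self.count
```

The same Photon can be debugged locally and then deployed to the cloud with the `lep` CLI; the Quickstart walks through the exact commands. Once it is running, the model can be consumed from any application, for example with the SDK's `Client` helper (a sketch assuming a local run on the default port):

```python
from leptonai.client import Client, local

# Connect to a locally running Photon; for a cloud deployment you would
# point the Client at your workspace and deployment instead.
c = Client(local())
print(c.add(x=3))  # handlers are exposed as plain Python methods
```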
If you are new to Lepton, we recommend starting with the Quickstart guide.
For more resources, check out our SDK codebase on GitHub. Whenever you have a question, feel free to file an issue.