The anatomy of a photon

In the Quickstart, we used a pre-packaged photon, hf:gpt2. Building your own photon takes only a few steps: a handful of decorators applied to regular Python classes and functions. In this section, we will walk through the anatomy of a photon, including the Python class of the same name and the packaging and deployment process.

The Python Photon class

Lepton defines a basic Python class called Photon. A Photon class defines a set of handlers, which can be called as Python methods from the Python client and are usually also referred to as "endpoints" in web frameworks like FastAPI. The Photon class also defines a set of Python dependencies, which are automatically installed when the photon is deployed. The dependencies are defined as a list of strings, each of which is a valid pip package specifier.

To write our own photon, we derive from the Photon class and implement the following:

  • an init(self) method. This method is called by the photon runtime when the photon is first loaded. It initializes the photon, and can be used to load models, set up variables, etc.
  • one or more methods decorated with @Photon.handler. These become the handlers of the photon and can be called from the Python client. The @Photon.handler decorator takes an optional string argument, which is the name of the handler. Handlers take arguments and return values that are FastAPI compatible.

Here is a simple example implementing a counter photon that exposes two handlers, add and sub. They add or subtract the input number from the counter and return the new counter value.

from leptonai.photon import Photon

class Counter(Photon):
    # The init method implements any custom initialization logic we need.
    def init(self):
        self.counter = 0

    # When no name is specified, the handler name is the method name.
    @Photon.handler
    def add(self, x: int) -> int:
        self.counter += x
        return self.counter

    # Or, we can specify a name for the handler.
    @Photon.handler("sub")
    def sub(self, x: int) -> int:
        return self.add(-x)

Debugging photons locally

When debugging the photon class, we can run it directly in the same Python process, and the handlers are available as class methods. For example, we can run the following code to test the Counter class:

c = Counter()
# we can call c.init() manually, but it is not necessary.
# The init() method will be called automatically when a handler is called for the first time.
c.add(x=3) # This will return 3
c.add(x=5) # This will return 8
c.sub(x=2) # This will return 6

All the calls are simply wrapped Python methods, so the local pdb debugger works as well, letting us debug the code with ease.

Packaging and deploying

When we are satisfied with the code logic, we can package the photon up with the following command:

lep photon create -n counter -m counter.py

Think of a photon as a container image, but much more lightweight: it records only the information absolutely necessary to run an algorithm, a Python program, etc. Unlike a container image, it doesn't have to be a full-fledged operating system image, which usually takes hundreds of megabytes if not gigabytes. Instead, it records only the Python class and the dependencies, which usually take up just a few kilobytes.

This creates a photon with the name counter, with the model spec pointing at the counter.py file. Lepton will automatically find the Python class Counter in the file. If we have multiple classes defined in the same file, we can specify the class we want to run after the filename, like -m counter.py:Counter, but this is not necessary if there is only one class.

Once we have the photon packaged, we can run it locally with:

lep photon run -n counter --local

The difference between a local run and directly instantiating the Counter class in a Python program is that a local run happens in a separate process, and we can test it with the Python client:

from leptonai.client import Client, local

c = Client(local())
print(f"Add 3, result: {c.add(x=3)}") # This will return 3
print(f"Add 5, result: {c.add(x=5)}") # This will return 8
print(f"Sub 2, result: {c.sub(x=2)}") # This will return 6

Now that it is running as a standalone service, you can also use the curl command to access it:

curl -X POST -H "Content-Type: application/json" -d '{"x": 3}' http://localhost:8080/add

Remote Deployment

When we are satisfied with the photon, we can deploy it to the remote platform with the following command (assuming we have already logged in with lep login):

lep photon push -n counter

This will push the photon to the remote platform. Think of this as the equivalent of pushing an image to a container registry, but much faster. We can then list the photons in the workspace with lep photon list, and we should see the photon we just pushed:

lep photon list

Let's launch the photon with lep photon run:

lep photon run -n counter

This will create a deployment for the photon. By default, if we don't specify a name, one will be chosen for the deployment, with the first choice being the same as the photon name. We can then inspect the status of the deployment with:

lep deployment status -n counter

Calling the deployment is the same as calling the locally run photon with the client, except that we need to specify how we reach the deployment: the workspace, the deployment name, and the token used to access it:

from leptonai.client import Client

# You can obtain your current workspace id via CLI as `lep workspace id`
workspace_id = "xxxxxxxx"
deployment_name = "counter"
# You can obtain your current workspace's token via CLI as `lep workspace token`
token = "xxxxxxxx"

c = Client(workspace_id, deployment_name, token)
print(f"Add 3, result: {c.add(x=3)}") # This will return 3
print(f"Add 5, result: {c.add(x=5)}") # This will return 8
print(f"Sub 2, result: {c.sub(x=2)}") # This will return 6

Adding Dependencies

The Counter class does not have any dependencies, but most of the time we will need to import other packages. Instead of going as far as building a full Docker image, Lepton provides a lightweight way to install everyday Python and Linux packages. This is done via the requirement_dependency (for Python pip packages) and system_dependency (for system apt packages) fields in the photon class. Here's a minimal example demonstrating how to use them:

import subprocess
from leptonai.photon import Photon

class My(Photon):
    # The dependencies you usually write in requirements.txt
    requirement_dependency = ["numpy", "torch"]

    # The dependencies you usually do `apt install` with
    system_dependency = ["wget"]

    @Photon.handler
    def run(self):
        """
        An example handler to verify that the dependencies are installed.
        """
        import numpy  # This should succeed
        import torch  # This should succeed
        p = subprocess.Popen(["wget", "--version"],
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        out, _ = p.communicate()
        return out

When we push and run the photon remotely, the required dependencies are installed automatically.

Note that you can use most formats supported by pip to specify packages. For example, to install the Segment Anything library, you can use the format suggested by the library's installation instructions:

requirement_dependency = ["git+https://github.com/facebookresearch/segment-anything.git"]

Adding extra files

Sometimes we need to add extra files to the photon, such as a model checkpoint, a configuration file, etc. We can do this by adding the extra_files field to the photon class. It takes one of two forms:

A dictionary of {"remote_path": "local_path"} pairs, where remote_path is relative to the photon's working directory at runtime, and local_path points to the file in the local file system. If local_path is relative, it is resolved against the current working directory of the local environment.

A list of local paths (relative to the current working directory of the local environment) to be included. The remote path will be the same as the local path.

Here's an example directory layout to demonstrate how to use them:

.
├── counter.py
└── sample.txt

Using self-defined docker images

Sometimes we need to use a self-defined Docker image for extra flexibility. We can do this by adding the image field to the photon class. It takes a string, which is the name of the Docker image. The image should be available in the Docker registry of the platform. For example, if we have a Docker image with a name and tag like ssusb/anaconda3:latest, we can use it with the following code:

from leptonai.photon import Photon

class Counter(Photon):
    # The image field specifies the custom Docker image to run in.
    image = "ssusb/anaconda3:latest"

    def init(self):
        pass

Once the Docker image is specified, the dependencies and extra files will be installed into that image. The photon will run as the user lepton inside the image, and the working directory will be /home/lepton.

Congrats! You now know the anatomy of photons and how to write and deploy your own photon. For more details about photons, you can check out the advanced sections of the documentation.

Lepton AI

© 2024