File System

File System is a persistent remote storage service that offers POSIX file system semantics. It can be mounted as a POSIX file system to workloads like Endpoints, Dev Pods, and Batch Jobs.

Default File System

Every workspace comes with a shared file system by default. This means all your workloads can access it, no matter where they’re running in the workspace. Think of it as a network drive—similar to systems like NFS or Amazon EFS—where you can store and retrieve files from any workload.


Node Group Specific File System

For enterprise users with reserved computing capacities (which we call node groups in Lepton), we provide dedicated file systems. These deliver much better performance than a typical shared NFS setup or the default file system mentioned above.

Node group specific file systems also come with local caching, which makes them much better suited for heavy tasks such as training large models on terabytes or even petabytes of data, or workloads with frequent read/write operations.


Local Storage

Local storage uses the SSDs or HDDs directly attached to a physical server. It’s optimized for tasks that need quick data access—like reading or writing large files during model training or inference. For example, if you’re working with big models or need to load multiple components on the fly, local storage can give you a noticeable performance boost.

We currently support local storage for reserved node groups. If you want to use local storage with an on-demand node group, please contact us.


File System Cache

With file system cache, you can keep copies of frequently used files and folders on your node group’s dedicated file system. This will make those files faster to access while still letting you use the shared file system for other workloads.

Integration with Your Own Storage System

Besides the two types of file systems we provide, you can also integrate your own existing file system by contacting us.

Usage and Management

File System Mount

When creating a workload, you can specify the file system to be mounted to it. A workload can have multiple file system mounts.

Here are a few parameters you need to specify when mounting a file system:

  • File System: The file system to be mounted. You can choose from the list of available file systems. To mount local storage, you need to make sure the node group option is selected when creating the workload. Otherwise, the list of local storage options will not be visible in the file system dropdown.
  • Mount From: The path within the file system to be mounted. For example, if you want to mount the root of the file system, you can specify /. Or, if you want to mount a specific folder, you can specify the path to that folder within the file system. For local storage, you cannot specify the mount from path, as the whole local storage block is always mounted.
  • Mount As: The path within the workload where the file system will be mounted. For example, if you want to mount the file system to /mnt/data, you can specify /mnt/data as the mount as path.
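
The same mount can also be configured from the CLI when creating a workload. The sketch below is illustrative only: the pod name, resource shape, and storage path are placeholders, and the exact flag names and mount syntax may differ between CLI versions, so check lep pod create --help for the options your version supports.

# Create a pod with a file system path mounted into the container
# (hypothetical values; the --mount flag is assumed to take the form
# <mount from>:<mount as>).
$ lep pod create -n my-pod --resource-shape gpu.a10 \
    --mount /my-dataset:/mnt/data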

File and Folder Management

1. Using the Dashboard

To manage files and folders within the default or node group specific file storage, you can use the dashboard. From the dashboard, you can upload, download, delete, and create files and folders in the file storage directly from your local machine.

You can also upload files from cloud storage services such as AWS S3 and Cloudflare R2. On the File System page, click Upload File and then choose From Cloud to open a dialog where you can select the cloud storage service and fill in the required information.

For AWS S3, you need to provide the following information:

  • Bucket Name: The name of the bucket you want to upload from.
  • Access Key ID: The access key ID of the AWS account.
  • Secret Access Key: The secret access key of the AWS account.
  • Destination Path: The path in the file system you want to upload to.

For Cloudflare R2, you need to provide the following information:

  • Endpoint URL: The S3 API URL of the Cloudflare R2 bucket. It can be found on the bucket's settings page under Bucket Details. Do not include the bucket name in the URL. It should look like https://xxxxxxxxx.r2.cloudflarestorage.com.
  • Bucket Name: The name of the bucket you want to upload from.
  • Access Key ID: The access key ID of the Cloudflare R2 API Token. You can manage and create R2 tokens by clicking the Manage R2 API Tokens button on the bucket's settings page.
  • Secret Access Key: The secret access key of the Cloudflare R2 API Token.
  • Destination Path: The path in the file system you want to upload to.

Local storage is not accessible via the dashboard; you can only manage its files from within a workload. For example, you can create a pod with local storage mounted and then use the pod to manage the files within it.
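
For illustration, once a pod has the local storage mounted (at /mnt/local in this hypothetical example), its contents behave like any other POSIX directory and can be managed with ordinary shell commands from the pod's terminal:

# Inside the pod, the mounted local storage is a regular directory
# (/mnt/local is a hypothetical mount path).
$ mkdir -p /mnt/local/checkpoints
$ cp /tmp/model-checkpoint.pt /mnt/local/checkpoints/
$ ls /mnt/local/checkpoints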

2. Using CLI tools

You can also manage files and folders within the file storage using the CLI tools, which support uploading, downloading, deleting, and creating files and folders. Here are a few examples:

# Upload a file to the file storage
$ lep storage upload /local/path/to/a.txt /remote/path/to/a.txt
# Upload a folder to the file storage (rsync is only available on the standard and enterprise plans)
$ lep storage upload -r -p --rsync /local/path/to/folder /remote/path/to/folder
# Download a file from the file storage
$ lep storage download /remote/path/to/a.txt /local/path/to/a.txt
# Remove a file from the file storage
$ lep storage rm /remote/path/to/a.txt
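
Beyond upload, download, and remove, the CLI typically also lets you list and create remote directories. The subcommands below are assumptions based on common versions of the lep CLI and may differ in yours; run lep storage --help to see exactly what is available.

# List files and folders in the file storage (subcommand names may vary by CLI version)
$ lep storage ls /remote/path/to/folder
# Create a folder in the file storage
$ lep storage mkdir /remote/path/to/new-folder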

For more information on how to manage files and folders using the CLI tools, refer to the CLI documentation.
