Experimental GPU SMR on Kubernetes
SMR of GPU workloads in Kubernetes is still experimental! Some of the jank involved in setting up nodes is planned to be automated/smoothed out soon.
Unsupported Configurations
Ubuntu 24.04: While technically supported, we cannot guarantee functionality due to its use of a newer libelf version, which requires linking against glibc 2.38+. Most containers may not operate correctly on this system as a result, since we use the system libelf inside the containers, which effectively enforces a minimum libc version requirement on container images.
CUDA Versions >12.4: We officially support CUDA versions 12.0 through 12.4. Newer versions may work, but we have not conducted thorough testing. Due to the nature of our APIs, it can be challenging to determine whether issues arise from version mismatches or other factors.
glibc Versions <2.31: We do not support glibc versions lower than 2.31. While we plan to transition to static binaries for some components, a minimum of glibc 2.31 will still be required for the short term. Additionally, our Kubernetes systems currently use CRIU 4.0, which also mandates at least glibc 2.31.
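A quick way to check the glibc version on a node is to look at the version string printed by ldd:

```bash
# Print the glibc version on this node; it must be 2.31 or newer
ldd --version | head -n 1
```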
Setup Cedana Shim for GPU Support
Step 0: Get access to a K8s Node
We currently support Ubuntu 22.04 by default; CentOS-based systems are partially supported and being actively tested.
If you have access to EKS, start by creating a cluster with a GPU node.
If you don't have a Kubernetes cluster, but have a GPU VPS or a GPU installed on an Ubuntu Linux box:
Update your drivers to match the prerequisites.
Ensure you have libcuda.so on your system:
ldconfig -v | grep libcuda
Install Kubernetes on it. We recommend using k3sup.
a. k3sup install --local will set up a k3s cluster for you. Note: /var/lib/rancher/k3s/data/current/bin/ contains the containerd shim we will need to replace.
b. Ensure you copy the export KUBECONFIG commands from the output and paste them into your .bashrc.
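A minimal sketch of that flow, assuming k3sup's standard install script; the KUBECONFIG path printed by k3sup on your machine is the one you should actually copy:

```bash
# Install k3sup, then bring up a single-node k3s cluster on this machine
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/

k3sup install --local

# k3sup prints an `export KUBECONFIG=...` line when it finishes; the path below
# is illustrative, copy the one from your own output into ~/.bashrc
echo 'export KUBECONFIG=$HOME/kubeconfig' >> ~/.bashrc
```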
Step 1: Download the Cedana Containerd Shim
First, download the Cedana fork of the containerd shim.
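Something along these lines, assuming a released binary; the URL below is a placeholder, so use the download location provided by Cedana:

```bash
# Placeholder URL: substitute the actual Cedana shim release location
wget -O containerd-shim-runc-v2 https://example.com/cedana/containerd-shim-runc-v2
chmod +x containerd-shim-runc-v2
```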
Step 2: Stop the Kubernetes Containerd Service
If you are using Shim v2, stop the containerd service before replacing the shim.
Note: this is not strictly required in a test cluster, but in a production cluster make sure containerd is down so that no requests get misassigned and dropped, just in case.
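Assuming systemd-managed services, stopping containerd looks roughly like this (on a k3s node the shim is managed by the k3s service instead):

```bash
# Stock containerd node
sudo systemctl stop containerd

# k3s node (containerd is embedded in the k3s service)
sudo systemctl stop k3s
```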
Step 3: Install the New Shim Binary
Move the downloaded shim to the appropriate directory.
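On a k3s node the destination is the directory noted in Step 0; on stock containerd the shim usually sits alongside the containerd binary. The binary name below assumes the conventional shim v2 naming, so confirm it against the artifact you downloaded:

```bash
# k3s: shim lives in the k3s data directory noted in Step 0
sudo cp containerd-shim-runc-v2 /var/lib/rancher/k3s/data/current/bin/containerd-shim-runc-v2

# Stock containerd: check where the existing shim is installed first
which containerd-shim-runc-v2
```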
Step 4: Restart the Containerd Service
Start the containerd service again.
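Again assuming systemd:

```bash
sudo systemctl start containerd   # or: sudo systemctl start k3s
```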
Step 5: Using Shim v1 (if applicable)
If you are using Shim v1, replace the binary with Shim v2, and then update the containerd config:
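A rough sketch of the config change on stock containerd, pointing the runc runtime at the v2 shim; the plugin path and config location can differ between containerd versions and k3s, so treat this as illustrative rather than the exact Cedana-required configuration:

```bash
# Illustrative only: the runc runtime should point at the v2 shim. Edit the existing
# runtimes section of /etc/containerd/config.toml (path differs on k3s) so it reads:
#
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#     runtime_type = "io.containerd.runc.v2"
#
sudo ${EDITOR:-vi} /etc/containerd/config.toml
sudo systemctl restart containerd
```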
Prerequisites
Before installing Cedana using the Helm chart, ensure that the following are installed:
NVIDIA base drivers and CUDA drivers (version 12.1 to 12.4)
nvidia-smi is available
Follow the instructions in Cedana Cluster Installation, and ensure you have Cedana set up before proceeding further.
Verify that the Cedana helper pod logs indicate a valid CUDA version and display the message "GPU Enabled."
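A quick way to verify both of these; the namespace and label selector for the helper pod are assumptions, so adjust them to match your Cedana installation:

```bash
# Driver and CUDA version reported by the node
nvidia-smi

# Helper pod logs should show a valid CUDA version and the message "GPU Enabled"
# (namespace and label are illustrative, adjust for your install)
kubectl logs -n cedana-system -l app=cedana-helper | grep -Ei "cuda|gpu enabled"
```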
Running a Container with CUDA
Once everything is set up, you can run a container with CUDA support. Make sure to set the CEDANA_GPU
environment variable in the container spec:
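A minimal pod sketch; the image, names, and the assumption that CEDANA_GPU takes the value "true" are placeholders, so check the Cedana docs for the exact value your install expects:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cuda-workload        # placeholder name
spec:
  containers:
    - name: app
      image: nvidia/cuda:12.4.0-runtime-ubuntu22.04   # placeholder image
      command: ["python3", "train.py"]                # placeholder workload
      tty: true
      env:
        - name: CEDANA_GPU
          value: "true"      # assumed value, confirm with the Cedana docs
EOF
```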
Notes:
Using tty: true is recommended, as the container might take a little time to start, and logs may not appear immediately if the buffer isn't flushed correctly.
If your program expects newline buffering, ensure that it is compatible with the delayed log output during initial startup.
Performing a Save
There are two ways to perform a save: either use our CLI and run cedana dump runc --id <container-id> --path <checkpoint-path-on-local-filesystem>, or set up an ingress/port-forward to our manager to get access to a basic set of APIs for performing checkpoint/restore.
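For the CLI route, a sketch run from the node hosting the container; the container ID comes from your runtime (for example via crictl ps), and the checkpoint path is a placeholder:

```bash
# Find the container ID of the target pod's container on the node
sudo crictl ps

# Dump it to a local checkpoint directory (path is a placeholder)
sudo cedana dump runc --id <container-id> --path /tmp/checkpoints/cuda-workload

# Alternatively, port-forward the Cedana manager and use its checkpoint/restore APIs;
# the service name and port depend on your install.
```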
Performing a Resume
For restores, we first require creating a new container with the same image, but with its root PID in a sleep, so that it can be replaced by us. We plan to improve this workflow soon, but until then this is still a requirement.
After the restore pod is set up and running, you can attempt your restore, which should resume from a previously taken checkpoint:
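A sketch of that flow; the pod and image names are placeholders, and the restore command's subcommand and flags are an assumption mirroring the dump command above, so confirm them against the Cedana CLI help:

```bash
# Create the restore target: same image as the original workload, root process just sleeps
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cuda-workload-restore    # placeholder name
spec:
  containers:
    - name: app
      image: nvidia/cuda:12.4.0-runtime-ubuntu22.04   # same (placeholder) image as the checkpointed pod
      command: ["sleep", "infinity"]
EOF

# Once the pod is running, restore into it. Flags are assumed to mirror the dump
# command; verify with `cedana --help` on your install.
sudo cedana restore runc --id <restore-container-id> --path /tmp/checkpoints/cuda-workload
```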
With these steps completed, you should be able to leverage Cedana GPU support within your Kubernetes containers.
We recommend using this new service as the default only for experimental purposes. For other workloads, use the runtime: label in the pod spec configuration.