FAQ
General Compatibility
What version of the kernel is supported?
Any kernel above v6.1.x will work with Cedana.
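As a quick sanity check before installing, you can compare the running kernel version against that minimum. This is a minimal sketch using Python's standard library, not a Cedana tool:

```python
import platform

# Parse the running kernel release, e.g. "6.5.0-35-generic" -> (6, 5)
major, minor = (int(x) for x in platform.release().split(".")[:2])

# Cedana requires a kernel above v6.1.x
if (major, minor) < (6, 1):
    raise SystemExit(f"Kernel {platform.release()} is too old; v6.1+ is required")
print(f"Kernel {platform.release()} OK")
```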
What is the minimum glibc version supported?
We currently support glibc v2.35 or higher (which ships with Ubuntu 22.04). It is straightforward for us to support lower versions based on customer interest.
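A minimal sketch for verifying the glibc version on a target machine (again using Python's standard library, not a Cedana tool):

```python
import platform

# platform.libc_ver() reports the C library the interpreter was linked
# against, e.g. ("glibc", "2.35") on Ubuntu 22.04.
lib, version = platform.libc_ver()

required = (2, 35)
found = tuple(int(x) for x in version.split("."))
if lib != "glibc" or found < required:
    raise SystemExit(f"Found {lib} {version}; glibc 2.35+ is required")
print(f"{lib} {version} OK")
```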
What versions of CUDA are supported?
CUDA versions 11.8 through 12.4 are officially tested (PyTorch currently targets 12.4); however, we can support newer versions with very little lift. We also have a pathway for broader backwards compatibility if needed.
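One way to confirm which CUDA version your workload was built against is shown below. This sketch assumes the workload is PyTorch (as mentioned above); adjust for your stack:

```python
import torch

# torch.version.cuda is the CUDA version PyTorch was built against,
# e.g. "12.4"; it is None on CPU-only builds.
cuda = torch.version.cuda
if cuda is None:
    raise SystemExit("CPU-only PyTorch build; no CUDA runtime available")

major, minor = (int(x) for x in cuda.split("."))
# Officially tested range: 11.8 through 12.4
if not ((11, 8) <= (major, minor) <= (12, 4)):
    print(f"CUDA {cuda} is outside the officially tested 11.8-12.4 range")
else:
    print(f"CUDA {cuda} OK")
```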
Do you support installing without Kubernetes?
Yes! We can install on standalone nodes with no issues; you just need to download and run our daemon on your machines. Here’s documentation showing how to run GPU SMR on a single node: https://docs.cedana.ai/setup/gpu-checkpointing.
Do you support cgroups v1, v2, or both?
We only support cgroups v2 right now.
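You can check whether a host is running the unified (v2) cgroup hierarchy before installing. A minimal sketch relying on the standard kernel interface:

```python
from pathlib import Path

# On a cgroups v2 (unified hierarchy) host, the file
# /sys/fs/cgroup/cgroup.controllers exists at the mount root.
if Path("/sys/fs/cgroup/cgroup.controllers").exists():
    print("cgroups v2 (unified hierarchy) detected")
else:
    raise SystemExit("cgroups v2 not detected; Cedana requires cgroups v2")
```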
What would install and management look like from a local rather than managed API?
For a local rather than managed API, we expose gRPC endpoints that you can call from anywhere. You can find the API definitions here: https://github.com/cedana/cedana-api.
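As an illustration of talking to a local daemon directly over gRPC, here is a minimal sketch using the standard gRPC health-checking protocol. The address is an assumption, and whether the daemon exposes this particular service should be confirmed against the cedana-api definitions linked above:

```python
import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc

# Hypothetical daemon address; substitute the host/port your daemon
# actually listens on (see the cedana-api repo for the real services).
channel = grpc.insecure_channel("localhost:8080")

# The standard gRPC health-checking service; confirm the daemon
# implements it before relying on this in automation.
stub = health_pb2_grpc.HealthStub(channel)
response = stub.Check(health_pb2.HealthCheckRequest(service=""))
print(response.status)  # e.g. SERVING
```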
Do you support AMD GPUs?
We don’t currently support AMD GPUs. However, we’ve been exploring AMD support, and developing a solution would depend on customer volume and demand. An AMD solution would fit seamlessly and transparently into our architecture, and it aligns with our vision of being a unified compute platform for GPU Save, Migrate, and Resume (SMR).