Below are examples for using Ray Train with a variety of frameworks and use cases. Ray Train makes it easy to scale out each of these examples to a large cluster of GPUs.
| Framework | Example |
|---|---|
| PyTorch | Distributing your PyTorch Training Code with Ray Train and Ray Data |
| Lightning | Train an image classifier with Lightning |
| Accelerate, PyTorch, Hugging Face | Train a text classifier with Hugging Face Accelerate |
| TensorFlow | Train an image classifier with TensorFlow |
| Horovod | Train with Horovod and PyTorch |
| PyTorch | Train a ResNet model with Intel Gaudi |
| Transformers | Train a BERT model with Intel Gaudi |
| PyTorch | Profiling a Ray Train Workload with PyTorch Profiler |
| XGBoost | Train a tabular model with XGBoost |

| Framework | Example |
|---|---|
| PyTorch | Get started with PyTorch Fully Sharded Data Parallel (FSDP2) and Ray Train |
| PyTorch, DeepSpeed | Fine-tune an LLM with Ray Train and DeepSpeed |
| DeepSpeed, PyTorch | Train a text classifier with DeepSpeed |
| PyTorch | Fine-tune a personalized Stable Diffusion model |
| Accelerate, Transformers | Fine-tune Stable Diffusion and generate images with Intel Gaudi |
| Lightning | Train a text classifier with PyTorch Lightning and Ray Data |
| Transformers | Train a text classifier with Hugging Face Transformers |
| Accelerate, Transformers | Fine-tune Llama-2-7b and Llama-2-70b with Intel Gaudi |
| Accelerate, Transformers, DeepSpeed | Pre-train Llama-2 with Intel Gaudi |

| Framework | Example |
|---|---|
| PyTorch, AWS Neuron | Fine-tune Llama3.1 with AWS Trainium |
| Accelerate, DeepSpeed, Hugging Face | Fine-tune a Llama-2 text generation model with DeepSpeed and Hugging Face Accelerate |
| Hugging Face, DeepSpeed | Fine-tune a GPT-J-6B text generation model with DeepSpeed and Hugging Face Transformers |
| Lightning, DeepSpeed | Fine-tune a vicuna-13b text generation model with PyTorch Lightning and DeepSpeed |
| Lightning | Fine-tune a dolly-v2-7b text generation model with PyTorch Lightning and FSDP |