To whom it may concern:
The current training script uses GPUs from a single host node (i.e., all 4 GPUs are on the same machine):
python ./src/Train_scRef.py \
--ckpt_path ./Ckpts_scRefs/Heart_D2 \
--scRef ./Ckpts_scRefs/Heart_D2/Ref_Heart_sanger_D2.h5ad \
--cell_class_column cell_type \
--gpus 0,1,2,3
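On our cluster, however, the 4 GPUs may sit on two different nodes. To make the request concrete, below is a generic multi-node PyTorch DDP sketch launched with torchrun; it is purely illustrative, not SpatialScope code, and I don't know whether Train_scRef.py already uses torch.distributed internally:

# Hypothetical multi-node DDP sketch, NOT SpatialScope code.
# Run once per node (node_rank 0 and 1), e.g.:
#   torchrun --nnodes=2 --nproc_per_node=2 --node_rank=0 \
#            --master_addr=<node0-ip> --master_port=29500 ddp_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun exports RANK, LOCAL_RANK and WORLD_SIZE for every spawned process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model; a real training loop would wrap the actual model the same way
    model = torch.nn.Linear(16, 16).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... training loop with a DistributedSampler-backed DataLoader would go here ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

With this kind of setup, the launcher is started once on each node with the appropriate --node_rank, and the processes on both nodes join the same process group.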
Does SpatialScope support distributed training using GPUs from different compute nodes (e.g., 4 GPUs split across two nodes, 2 GPUs per node, as sketched above), which is common in a cluster environment (similar to https://pytorch.org/docs/stable/nn.html#module-torch.nn.parallel)? Thanks a lot!
Feng