
fix: correct boolean logic in zero_rank_print that prevented all output #436

Open

Mr-Neutr0n wants to merge 1 commit into guoyww:main from Mr-Neutr0n:fix/zero-rank-print-logic

Conversation

@Mr-Neutr0n

Summary

The zero_rank_print function in animatediff/utils/util.py contains a boolean logic error that causes it to never print anything, regardless of the distributed process state.

The Bug

The current condition is:

if (not dist.is_initialized()) and (dist.is_initialized() and dist.get_rank() == 0):

This follows the pattern (NOT A) AND (A AND B), which is a logical contradiction — it requires dist.is_initialized() to be both False and True simultaneously. The condition evaluates to False for every possible input, so the print statement is dead code.
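As a quick illustration (not part of the PR), enumerating both truth values of each clause shows the guarded print can never run:

```python
from itertools import product

# A = dist.is_initialized(), B = (dist.get_rank() == 0)
# The buggy guard (not A) and (A and B) is False for all four combinations,
# so the print it protects is unreachable.
for a, b in product([False, True], repeat=2):
    buggy = (not a) and (a and b)
    print(f"is_initialized={a!s:<5} rank_is_zero={b!s:<5} -> prints: {buggy}")
```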

The Fix

Change the and between the two clauses to or:

if (not dist.is_initialized()) or (dist.is_initialized() and dist.get_rank() == 0):

This correctly implements the intended behavior:

  • If distributed training is NOT initialized: print the message (single-process mode).
  • If distributed training IS initialized and this is rank 0: print the message (only the main process prints).

This is the standard pattern used across PyTorch distributed training codebases for rank-guarded logging.
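For context, a minimal sketch of the corrected function is shown below. Only the condition line is quoted in this PR, so the signature and the print prefix are assumptions about the surrounding code in animatediff/utils/util.py:

```python
import torch.distributed as dist

def zero_rank_print(s: str) -> None:
    # Print in single-process runs, or only on rank 0 once distributed
    # training has been initialized, so each message is emitted exactly once.
    if (not dist.is_initialized()) or (dist.is_initialized() and dist.get_rank() == 0):
        print("### " + s)
```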
