Python: 3.11
PyTorch: 2.8
CUDA: 12.8
Hello, I tried to run eval.py directly using diffusers, but it shows this:
```
Keyword arguments {'device': 'cuda'} are not expected by FluxPipeline and will be ignored.
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]`torch_dtype` is deprecated! Use `dtype` instead!
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████| 3/3 [00:00<00:00, 15.40it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████| 2/2 [00:00<00:00, 122.57it/s]
Loading pipeline components...: 100%|█████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.89it/s]
Fetching 3 files: 100%|█████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 6879.67it/s]
/workspace/FLUX-Makeup/diffusers/models/embeddings.py:2609: FutureWarning: `FluxPosEmbed` is deprecated and will be removed in version 1.0.0. Importing and using `FluxPosEmbed` from `diffusers.models.embeddings` is deprecated. Please import it from `diffusers.models.transformers.transformer_flux`.
  deprecate("FluxPosEmbed", "1.0.0", deprecation_message)
/workspace/FLUX-Makeup/diffusers/models/attention_processor.py:5510: FutureWarning: `FluxAttnProcessor2_0` is deprecated and will be removed in version 1.0.0. `FluxAttnProcessor2_0` is deprecated and this will be removed in a future version. Please use `FluxAttnProcessor`
  deprecate("FluxAttnProcessor2_0", "1.0.0", deprecation_message)
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████| 3/3 [00:00<00:00, 40.00it/s]
Processing src images: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/workspace/FLUX-Makeup/eval.py", line 245, in <module>
    main()
  File "/workspace/FLUX-Makeup/eval.py", line 227, in main
    result = wrappedPipe.generate(
             ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/eval.py", line 82, in generate
    ref_hidden_states = self.pipe.vae.encode(ref_img_tensor).latent_dist.sample()
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/diffusers/models/autoencoders/autoencoder_kl.py", line 278, in encode
    h = self._encode(x)
        ^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/diffusers/models/autoencoders/autoencoder_kl.py", line 252, in _encode
    enc = self.encoder(x)
          ^^^^^^^^^^^^^^^
  File "/venv/main/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/venv/main/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/diffusers/models/autoencoders/vae.py", line 168, in forward
    sample = down_block(sample)
             ^^^^^^^^^^^^^^^^^^
  File "/venv/main/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/venv/main/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/diffusers/models/unets/unet_2d_blocks.py", line 1442, in forward
    hidden_states = resnet(hidden_states, temb=None)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/venv/main/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/venv/main/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/FLUX-Makeup/diffusers/models/resnet.py", line 371, in forward
    output_tensor = (input_tensor + hidden_states) / self.output_scale_factor
                     ~~~~~~~~~~~~~^~~~~~~~~~~~~~~
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB. GPU 0 has a total capacity of 44.39 GiB of which 45.31 MiB is free. Process 310097 has 44.34 GiB memory in use. Of the allocated memory 43.57 GiB is allocated by PyTorch, and 278.71 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
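The OOM fires during the VAE encode, after the full pipeline is already resident on the GPU, so the two usual levers are the allocator setting the error message itself suggests and diffusers' built-in memory savers (`enable_model_cpu_offload`, `vae.enable_slicing`, `vae.enable_tiling`). A minimal sketch of how one might wire these up before running eval.py — note `enable_memory_savers` is a hypothetical helper, not part of the repo, and whether it fits depends on how `wrappedPipe` wraps the underlying `FluxPipeline`:

```python
import os

# Reduce allocator fragmentation, as the error message suggests.
# Must be set before CUDA is first initialized (i.e. before any
# CUDA tensor is created), ideally before importing torch.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")


def enable_memory_savers(pipe):
    """Apply diffusers' built-in memory savers to a loaded pipeline.

    Hypothetical helper: `pipe` is assumed to be a diffusers
    DiffusionPipeline (e.g. FluxPipeline) with a `.vae` attribute.
    """
    # Keep submodules on CPU and move them to the GPU only while they
    # run, instead of calling pipe.to("cuda") up front.
    pipe.enable_model_cpu_offload()
    # Encode/decode the VAE in slices and tiles to cap the peak
    # activation memory of exactly the step that OOMs here.
    pipe.vae.enable_slicing()
    pipe.vae.enable_tiling()
    return pipe
```

In eval.py this would replace the `device="cuda"` / `.to("cuda")` placement after `from_pretrained` (the log already shows `device='cuda'` being ignored by `FluxPipeline.from_pretrained` anyway), at the cost of some per-step transfer latency.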