Description
I am able to run the workflow once with some LoRAs, and the result is quite fine, but upon running it a second time, I encounter the following error.
Clearly, it's some minor issue, since it runs perfectly fine once. However, I don't know anything about the node or how it operates, so all I can do is provide the following.
Also, I'm using the modified version of this custom node found in the following link. Not sure if this is what's causing the issue, but I ran the particular workflow once WITHOUT the modification and it also spat out this error.
https://note.com/tori29umai/n/n3aac565f5ace?sub_rt=share_pb
I ran a simple pose-transfer LoRA from before with one reference image, and you can see the results in the link below. It works perfectly, but only once.
https://x.com/SlipperyGem/status/1959651582800544009
Here's the log.
```
got prompt
2025-08-25 00:53:44.012681 Resolution: 1024 x 1536
Requested to load CLIPVisionModelProjection
loaded completely 19216.42197113037 787.7150573730469 True
2025-08-25 00:53:44.774456 Created reference embeds list with 1 items
2025-08-25 00:53:45.152639 Created reference latent list with 1 items
2025-08-25 00:53:45.557286 === One-frame inference mode (fully musubi-tuner compatible) ===
2025-08-25 00:53:45.557286 Kisekaeichi (outfit-swap) mode enabled (auto-detected)
2025-08-25 00:53:45.558282 target_index: 4
2025-08-25 00:53:45.558282 control_index raw value: '0,7,8,9,10'
2025-08-25 00:53:45.558282 control_index type: <class 'str'>
2025-08-25 00:53:45.559280 parsed control_index: [0, 7, 8, 9, 10]
2025-08-25 00:53:45.559280 HiDream: ComfyUI is unloading all models, cleaning HiDream cache...
2025-08-25 00:53:45.559280 HiDream: Cleaning up all cached models...
2025-08-25 00:53:46.485340 HiDream: Cache cleared
2025-08-25 00:53:46.883009 Reference image latent 1: torch.Size([1, 16, 1, 192, 128])
2025-08-25 00:53:46.883009 start_latent torch.Size([1, 16, 1, 192, 128])
2025-08-25 00:53:46.885003 Set the CLIP embedding for reference image 1
2025-08-25 00:53:46.886997 === Kisekaeichi mode settings (full version) ===
2025-08-25 00:53:46.886997 target_index setting (musubi-tuner spec): 4
2025-08-25 00:53:46.890982 Add zero latents as clean latents post for one frame inference.
2025-08-25 00:53:46.891979 Starting dynamic application of control_indices: [0, 7, 8, 9, 10]
2025-08-25 00:53:46.892977 clean_latent_indices[:, 0] = 0
2025-08-25 00:53:46.892977 clean_latent_indices[:, 1] = 7
2025-08-25 00:53:46.892977 clean_latent_indices[:, 2] = 8
2025-08-25 00:53:46.893972 Final clean_latent_indices: tensor([[0, 7, 8]])
2025-08-25 00:53:46.893972 Kisekaeichi: disabled the 2x/4x indices
2025-08-25 00:53:46.893972 musubi-tuner compatible: using the first reference image embedding (reference image count: 1)
2025-08-25 00:53:46.894970 Kisekaeichi setup complete (fully musubi-tuner compatible):
2025-08-25 00:53:46.894970 - clean_latents.shape: torch.Size([1, 16, 3, 192, 128]) (input + 1 reference + zero latent)
2025-08-25 00:53:46.895967 - latent_indices: tensor([[4]]) (initial value: 9 -> target: 4)
2025-08-25 00:53:46.895967 - clean_latent_indices: tensor([[0, 7, 8]]) (control_indices applied)
2025-08-25 00:53:46.905938 - sample_num_frames: 1
2025-08-25 00:53:46.905938 - control_indices applied: [0, 7, 8, 9, 10]
2025-08-25 00:53:46.906934 - number of control_latents: 3
2025-08-25 00:53:46.906934 - mask application: after clean_latents generation
2025-08-25 00:53:46.906934 - 2x/4x disabled: True
2025-08-25 00:53:46.907932 Set the initial sample
model_type FLOW
2025-08-25 00:53:46.908927 Moving HunyuanVideoTransformer3DModel to cuda:0 with preserved memory: 6.0 GB
2025-08-25 00:53:51.223652 === Sampling started ===
2025-08-25 00:53:51.223652 sample_num_frames: 1
2025-08-25 00:53:51.224648 Number of frames used for clean_latents: 3
2025-08-25 00:53:51.224648 clean_latent_2x_indices: None
2025-08-25 00:53:51.225646 clean_latent_4x_indices: None
0%| | 0/10 [00:00<?, ?it/s]
!!! Exception during processing !!! Input type (struct c10::BFloat16) and bias type (float) should be the same
Traceback (most recent call last):
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\nodes_one.py", line 538, in process
generated_latents = sample_hunyuan(
^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\pipelines\k_diffusion_hunyuan.py", line 119, in sample_hunyuan
results = sample_unipc(k_model, latents, sigmas, extra_args=sampler_kwargs, disable=False, variant=variant, callback=callback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\k_diffusion\uni_pc_fm.py", line 149, in sample_unipc
return FlowMatchUniPC(model, extra_args=extra_args, variant=variant).sample(noise, sigmas=sigmas, callback=callback, disable_pbar=disable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\k_diffusion\uni_pc_fm.py", line 119, in sample
model_prev_list = [self.model_fn(x, vec_t)]
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\k_diffusion\uni_pc_fm.py", line 23, in model_fn
return self.model(x, t, **self.extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\k_diffusion\wrapper.py", line 37, in k_model
pred_positive = transformer(hidden_states=hidden_states, timestep=timestep, return_dict=False, **extra_args['positive'])[0].float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\models\hunyuan_video_packed.py", line 967, in forward
hidden_states, rope_freqs = self.process_input_hidden_states(hidden_states, latent_indices, clean_latents, clean_latent_indices, clean_latents_2x, clean_latent_2x_indices, clean_latents_4x, clean_latent_4x_indices)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\models\hunyuan_video_packed.py", line 865, in process_input_hidden_states
clean_latents = self.gradient_checkpointing_method(self.clean_x_embedder.proj, clean_latents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper_PlusOne\diffusers_helper\models\hunyuan_video_packed.py", line 817, in gradient_checkpointing_method
result = block(*args)
^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 725, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_20250319\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 720, in _conv_forward
return F.conv3d(
^^^^^^^^^
RuntimeError: Input type (struct c10::BFloat16) and bias type (float) should be the same
Prompt executed in 8.94 seconds
```
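For what it's worth, the traceback bottoms out in `F.conv3d` inside `self.clean_x_embedder.proj`, and the error says the incoming latents are bfloat16 while the conv's bias is still float32. My guess (and it's only a guess, since I don't know the node's internals) is that after the "HiDream: Cache cleared" step on the second run, the embedder's parameters end up in float32 while the latents stay bfloat16. Here's a minimal sketch with made-up shapes, not the node's actual module, that reproduces the same class of error and one possible workaround:

```python
# Hypothetical repro: a Conv3d whose parameters are float32 receiving
# bfloat16 input raises a RuntimeError like the one above
# (exact wording varies by platform/backend).
import torch
import torch.nn as nn

proj = nn.Conv3d(16, 32, kernel_size=2, stride=2)            # parameters default to float32
latents = torch.randn(1, 16, 2, 8, 8, dtype=torch.bfloat16)  # bfloat16 input, like the pipeline's

try:
    proj(latents)
except RuntimeError as e:
    print(e)  # "Input type ... and bias type ... should be the same"

# One possible workaround: align the module's dtype with the input before the call.
proj = proj.to(latents.dtype)
out = proj(latents)  # succeeds once weight, bias, and input dtypes all match
print(out.dtype)     # torch.bfloat16
```

If that diagnosis is right, a fix might be to cast `clean_x_embedder` (or the latents) to a matching dtype after the cache cleanup, but I'll leave that to someone who actually knows the code.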