Changes from all commits (47 commits)
bedc67c
[Docs] Add guide for AutoModel with custom code (#13099)
DN6 Feb 10, 2026
5bf248d
[SkyReelsV2] Fix ftfy import (#13113)
asomoza Feb 10, 2026
4d00980
[lora] fix non-diffusers lora key handling for flux2 (#13119)
sayakpaul Feb 11, 2026
c3a4cd1
[CI] Refactor Wan Model Tests (#13082)
DN6 Feb 11, 2026
64e2adf
docs: improve docstring scheduling_edm_dpmsolver_multistep.py (#13122)
delmalih Feb 11, 2026
d324839
[Fix]Allow `prompt` and `prior_token_ids` to be provided simultaneous…
JaredforReal Feb 11, 2026
06a0f98
docs: improve docstring scheduling_flow_match_euler_discrete.py (#13127)
delmalih Feb 12, 2026
a181616
Cosmos Transfer2.5 inference pipeline: general/{seg, depth, blur, edg…
miguelmartin75 Feb 12, 2026
ed77a24
[modular] add tests for robust model loading. (#13120)
sayakpaul Feb 12, 2026
985d83c
Fix LTX-2 Inference when `num_videos_per_prompt > 1` and CFG is Enabl…
dg845 Feb 12, 2026
427472e
[CI] Fix `setuptools` `pkg_resources` Errors (#13129)
dg845 Feb 12, 2026
5f3ea22
docs: improve docstring scheduling_flow_match_heun_discrete.py (#13130)
delmalih Feb 12, 2026
277e305
[CI] Fix `setuptools` `pkg_resources` Bug for PR GPU Tests (#13132)
dg845 Feb 13, 2026
76af013
fix cosmos transformer typing. (#13134)
sayakpaul Feb 13, 2026
2843b3d
Sunset Python 3.8 & get rid of explicit `typing` exports where possib…
sayakpaul Feb 13, 2026
8abcf35
feat: implement apply_lora_scale to remove boilerplate. (#12994)
sayakpaul Feb 13, 2026
3c1c62e
[docs] fix ltx2 i2v docstring. (#13135)
sayakpaul Feb 14, 2026
6141ae2
[Modular] add different pipeine blocks to init (#13145)
yiyixuxu Feb 14, 2026
5b00a18
fix MT5Tokenizer (#13146)
yiyixuxu Feb 14, 2026
19ab0ec
fix guider (#13147)
yiyixuxu Feb 14, 2026
3c7506b
[Modular] update doc for `ModularPipeline` (#13100)
yiyixuxu Feb 14, 2026
c919ec0
[Modular] add explicit workflow support (#13028)
yiyixuxu Feb 15, 2026
b0dc51d
[LTX2] Fix wrong lora mixin (#13144)
asomoza Feb 15, 2026
59e7a46
[Pipelines] Remove k-diffusion (#13152)
DN6 Feb 16, 2026
e390646
[tests] accept recompile_limit from the user in tests (#13150)
sayakpaul Feb 16, 2026
35086ac
[core] support device type device_maps to work with offloading. (#12811)
sayakpaul Feb 16, 2026
bcbbded
[Bug] Fix QwenImageEditPlus Series on NPU (#13017)
zhangtao0408 Feb 17, 2026
f81e653
[CI] Add ftfy as a test dependency (#13155)
DN6 Feb 18, 2026
64734b2
docs: improve docstring scheduling_flow_match_lcm.py (#13160)
delmalih Feb 18, 2026
6875490
[docs] add docs for qwenimagelayered (#13158)
stevhliu Feb 18, 2026
a577ec3
Flux2: Tensor tuples can cause issues for checkpointing (#12777)
dxqb Feb 19, 2026
53e1d0e
[CI] Revert `setuptools` CI Fix as the Failing Pipelines are Deprecat…
dg845 Feb 19, 2026
fe78a7b
Fix `ftfy` import for PRX Pipeline (#13154)
dg845 Feb 19, 2026
99daaa8
[core] Enable CP for kernels-based attention backends (#12812)
sayakpaul Feb 19, 2026
f8d3db9
remove deps related to test from ci (#13164)
sayakpaul Feb 20, 2026
db2d7e7
[CI] Fix new LoRAHotswap tests (#13163)
DN6 Feb 20, 2026
01de02e
[gguf][torch.compile time] Convert to plain tensor earlier in dequant…
anijain2305 Feb 20, 2026
a80b192
Support Flux Klein peft (fal) lora format (#13169)
asomoza Feb 21, 2026
f1e5914
Fix T5GemmaEncoder loading for transformers 5.x composite T5GemmaConf…
DavidBert Feb 23, 2026
4890e9b
Allow Automodel to use `from_config` with custom code. (#13123)
DN6 Feb 23, 2026
7ab2011
Fix AutoModel `typing` Import Error (#13178)
dg845 Feb 24, 2026
5e94d62
migrate to `transformers` v5 (#12976)
sayakpaul Feb 24, 2026
1f6ac1c
fix: graceful fallback when attention backends fail to import (#13060)
sym-bot Feb 24, 2026
aac94be
[docs] Fix torchrun command argument order in docs (#13181)
sayakpaul Feb 24, 2026
734f045
AR
miguelmartin75 Feb 7, 2026
03b666a
address comments
miguelmartin75 Feb 24, 2026
a66a12a
address comments 2
miguelmartin75 Feb 25, 2026
2 changes: 1 addition & 1 deletion .github/workflows/notify_slack_about_release.yml
@@ -15,7 +15,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.8'
python-version: '3.10'

- name: Notify Slack about the release
env:
2 changes: 1 addition & 1 deletion .github/workflows/pr_dependency_test.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"
- name: Install dependencies
run: |
pip install -e .
17 changes: 6 additions & 11 deletions .github/workflows/pr_tests.yml
@@ -35,7 +35,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
@@ -55,7 +55,7 @@
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
@@ -92,7 +92,6 @@ jobs:
runner: aws-general-8-plus
image: diffusers/diffusers-pytorch-cpu
report: torch_example_cpu

name: ${{ matrix.config.name }}

runs-on:
@@ -115,8 +114,7 @@
- name: Install dependencies
run: |
uv pip install -e ".[quality]"
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps

- name: Environment
@@ -218,8 +216,6 @@ jobs:

run_lora_tests:
needs: [check_code_quality, check_repository_consistency]
strategy:
fail-fast: false

name: LoRA tests with PEFT main

@@ -247,9 +243,8 @@ jobs:
uv pip install -U peft@git+https://github.com/huggingface/peft.git --no-deps
uv pip install -U tokenizers
uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1

uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git

- name: Environment
run: |
python utils/print_env.py
@@ -275,6 +270,6 @@ jobs:
if: ${{ always() }}
uses: actions/upload-artifact@v6
with:
name: pr_main_test_reports
name: pr_lora_test_reports
path: reports

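Several hunks in this workflow (and the GPU/push workflows below) replace the pinned `transformers==4.57.1` with an install from git main. A from-git pre-release typically carries a `.devN` suffix (e.g. `5.0.0.dev0`), so a tiny helper can distinguish the two install modes when reading CI logs. This helper is hypothetical and not part of the PR:

```python
# Hypothetical helper (not from this PR): detect whether an installed
# version string looks like a from-git pre-release such as "5.0.0.dev0".
def is_dev_build(version: str) -> bool:
    # the dev marker sits in the final dotted segment, e.g. "dev0"
    return "dev" in version.split(".")[-1]

assert is_dev_build("5.0.0.dev0")      # from-git main build
assert not is_dev_build("4.57.1")      # pinned release build
```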
13 changes: 5 additions & 8 deletions .github/workflows/pr_tests_gpu.yml
@@ -36,7 +36,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
@@ -56,7 +56,7 @@
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"
- name: Install dependencies
run: |
pip install --upgrade pip
@@ -131,8 +131,7 @@ jobs:
run: |
uv pip install -e ".[quality]"
uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git

- name: Environment
run: |
@@ -202,8 +201,7 @@ jobs:
uv pip install -e ".[quality]"
uv pip install peft@git+https://github.com/huggingface/peft.git
uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git

- name: Environment
run: |
@@ -264,8 +262,7 @@ jobs:
nvidia-smi
- name: Install dependencies
run: |
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip install -e ".[quality,training]"

- name: Environment
2 changes: 1 addition & 1 deletion .github/workflows/pr_torch_dependency_test.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"
- name: Install dependencies
run: |
pip install -e .
9 changes: 3 additions & 6 deletions .github/workflows/push_tests.yml
@@ -76,8 +76,7 @@ jobs:
run: |
uv pip install -e ".[quality]"
uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
- name: Environment
run: |
python utils/print_env.py
@@ -129,8 +128,7 @@ jobs:
uv pip install -e ".[quality]"
uv pip install peft@git+https://github.com/huggingface/peft.git
uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git

- name: Environment
run: |
@@ -182,8 +180,7 @@ jobs:
- name: Install dependencies
run: |
uv pip install -e ".[quality,training]"
#uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
- name: Environment
run: |
python utils/print_env.py
2 changes: 1 addition & 1 deletion .github/workflows/push_tests_mps.yml
@@ -41,7 +41,7 @@ jobs:
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pip install --upgrade pip uv
${CONDA_RUN} python -m uv pip install -e ".[quality,test]"
${CONDA_RUN} python -m uv pip install -e ".[quality]"
${CONDA_RUN} python -m uv pip install torch torchvision torchaudio
${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
${CONDA_RUN} python -m uv pip install transformers --upgrade
4 changes: 2 additions & 2 deletions .github/workflows/pypi_publish.yaml
@@ -20,7 +20,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: '3.8'
python-version: '3.10'

- name: Fetch latest branch
id: fetch_latest_branch
@@ -47,7 +47,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: "3.8"
python-version: "3.10"

- name: Install dependencies
run: |
2 changes: 1 addition & 1 deletion .github/workflows/stale.yml
@@ -20,7 +20,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: 3.8
python-version: 3.10

- name: Install requirements
run: |
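Note that this stale.yml hunk bumps an unquoted `python-version: 3.8` to `3.10`, while the other workflows quote the value. Quoting matters here: YAML resolves plain numeric scalars as floats, so an unquoted `3.10` can collapse to `3.1` before setup-python ever sees it. A minimal Python sketch of the numeric collapse (an illustration of the parsing behavior, not code from this PR):

```python
# YAML plain scalars such as `3.10` resolve as floats, mirroring Python's
# own float parsing, so the trailing zero is lost.
unquoted = float("3.10")   # roughly what a YAML loader yields for: python-version: 3.10
quoted = "3.10"            # what it yields for: python-version: "3.10"

assert unquoted == 3.1            # 3.10 collapses to 3.1
assert str(unquoted) != quoted    # "3.1" vs "3.10" -- a different Python version
print(quoted, str(unquoted))
```

This is why quoting the version string (as the other workflow hunks do) is the safer form.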
2 changes: 1 addition & 1 deletion LICENSE
@@ -144,7 +144,7 @@
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
implied, including, without limitation, Any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
14 changes: 7 additions & 7 deletions benchmarks/benchmarking_utils.py
@@ -6,7 +6,7 @@
import threading
from contextlib import nullcontext
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional, Union
from typing import Any, Callable

import pandas as pd
import torch
@@ -91,10 +91,10 @@ def model_init_fn(model_cls, group_offload_kwargs=None, layerwise_upcasting=Fals
class BenchmarkScenario:
name: str
model_cls: ModelMixin
model_init_kwargs: Dict[str, Any]
model_init_kwargs: dict[str, Any]
model_init_fn: Callable
get_model_input_dict: Callable
compile_kwargs: Optional[Dict[str, Any]] = None
compile_kwargs: dict[str, Any] | None = None


@require_torch_gpu
@@ -176,7 +176,7 @@ def run_benchmark(self, scenario: BenchmarkScenario):
result["fullgraph"], result["mode"] = None, None
return result

def run_bencmarks_and_collate(self, scenarios: Union[BenchmarkScenario, list[BenchmarkScenario]], filename: str):
def run_bencmarks_and_collate(self, scenarios: BenchmarkScenario | list[BenchmarkScenario], filename: str):
if not isinstance(scenarios, list):
scenarios = [scenarios]
record_queue = queue.Queue()
@@ -214,10 +214,10 @@ def _run_phase(
*,
model_cls: ModelMixin,
init_fn: Callable,
init_kwargs: Dict[str, Any],
init_kwargs: dict[str, Any],
get_input_fn: Callable,
compile_kwargs: Optional[Dict[str, Any]],
) -> Dict[str, float]:
compile_kwargs: dict[str, Any] | None = None,
) -> dict[str, float]:
# setup
self.pre_benchmark()

3 changes: 1 addition & 2 deletions docs/source/en/_toctree.yml
@@ -625,8 +625,7 @@
title: Image-to-image
- local: api/pipelines/stable_diffusion/inpaint
title: Inpainting
- local: api/pipelines/stable_diffusion/k_diffusion
title: K-Diffusion

- local: api/pipelines/stable_diffusion/latent_upscale
title: Latent upscaler
- local: api/pipelines/stable_diffusion/ldm3d_diffusion
20 changes: 14 additions & 6 deletions docs/source/en/api/pipelines/cosmos.md
@@ -46,6 +46,20 @@ output = pipe(
output.save("output.png")
```

## Cosmos2_5_TransferPipeline

[[autodoc]] Cosmos2_5_TransferPipeline
- all
- __call__


## Cosmos2_5_PredictBasePipeline

[[autodoc]] Cosmos2_5_PredictBasePipeline
- all
- __call__


## CosmosTextToWorldPipeline

[[autodoc]] CosmosTextToWorldPipeline
@@ -70,12 +84,6 @@ output.save("output.png")
- all
- __call__

## Cosmos2_5_PredictBasePipeline

[[autodoc]] Cosmos2_5_PredictBasePipeline
- all
- __call__

## CosmosPipelineOutput

[[autodoc]] pipelines.cosmos.pipeline_output.CosmosPipelineOutput
8 changes: 7 additions & 1 deletion docs/source/en/api/pipelines/qwenimage.md
@@ -29,7 +29,7 @@ Qwen-Image comes in the following variants:
| Qwen-Image-Edit Plus | [Qwen/Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509) |

> [!TIP]
> [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.
> See the [Caching](../../optimization/cache) guide to speed up inference by storing and reusing intermediate outputs.

## LoRA for faster inference

@@ -190,6 +190,12 @@ For detailed benchmark scripts and results, see [this gist](https://gist.github.
- all
- __call__

## QwenImageLayeredPipeline

[[autodoc]] QwenImageLayeredPipeline
- all
- __call__

## QwenImagePipelineOutput

[[autodoc]] pipelines.qwenimage.pipeline_output.QwenImagePipelineOutput
30 changes: 0 additions & 30 deletions docs/source/en/api/pipelines/stable_diffusion/k_diffusion.md

This file was deleted.

16 changes: 1 addition & 15 deletions docs/source/en/modular_diffusers/guiders.md
@@ -89,29 +89,15 @@ t2i_pipeline.guider

## Changing guider parameters

The guider parameters can be adjusted with either the [`~ComponentSpec.create`] method or with [`~ModularPipeline.update_components`]. The example below changes the `guidance_scale` value.
The guider parameters can be adjusted with the [`~ComponentSpec.create`] method and [`~ModularPipeline.update_components`]. The example below changes the `guidance_scale` value.

<hfoptions id="switch">
<hfoption id="create">

```py
guider_spec = t2i_pipeline.get_component_spec("guider")
guider = guider_spec.create(guidance_scale=10)
t2i_pipeline.update_components(guider=guider)
```

</hfoption>
<hfoption id="update_components">

```py
guider_spec = t2i_pipeline.get_component_spec("guider")
guider_spec.config["guidance_scale"] = 10
t2i_pipeline.update_components(guider=guider_spec)
```

</hfoption>
</hfoptions>

## Uploading custom guiders

Call the [`~utils.PushToHubMixin.push_to_hub`] method on a custom guider to share it to the Hub.