
Improving tv_tensors wrap API for extensibility.#9398

Draft
gabrielfruet wants to merge 2 commits into pytorch:main from gabrielfruet:feat/extend-wrap

Conversation

@gabrielfruet
Contributor

@gabrielfruet gabrielfruet commented Feb 20, 2026

Addresses #9333

Adopts method-based wrapping, enabling users to extend functionality in subclasses of TVTensor.

This is the Pythonic approach, since many built-ins rely on the same pattern (e.g. `len`, `iter`, `next`, ...).

This does not break backwards compatibility.
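The method-based pattern described above can be sketched in plain Python, without torchvision. The class names and the `wrap`/`_wrap` signatures below are illustrative only, not the actual torchvision API: the point is that a free `wrap` function delegates to a classmethod, so user-defined subclasses are picked up automatically, the same way `len()` delegates to `__len__`.

```python
# Minimal sketch of method-based wrapping. Names are illustrative and
# do not reproduce the real torchvision implementation.

class TVTensor:
    """Base class: subclasses override _wrap to attach their metadata."""

    def __init__(self, data):
        self.data = data

    @classmethod
    def _wrap(cls, data, **kwargs):
        return cls(data)


class BoundingBoxes(TVTensor):
    """Subclass extends wrapping with its own metadata."""

    def __init__(self, data, canvas_size=None):
        super().__init__(data)
        self.canvas_size = canvas_size

    @classmethod
    def _wrap(cls, data, *, canvas_size=None, **kwargs):
        return cls(data, canvas_size=canvas_size)


def wrap(data, *, like, **kwargs):
    """Delegate to the subclass's _wrap, so any TVTensor subclass
    (including user-defined ones) controls its own wrapping."""
    return type(like)._wrap(data, **kwargs)


boxes = BoundingBoxes([0, 0, 10, 10], canvas_size=(32, 32))
wrapped = wrap([1, 1, 5, 5], like=boxes, canvas_size=boxes.canvas_size)
print(type(wrapped).__name__)  # BoundingBoxes
```

Because dispatch goes through `type(like)`, a third-party subclass only has to override `_wrap`; the free function never needs to enumerate known tensor types, which is what makes the API extensible.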

@pytorch-bot

pytorch-bot bot commented Feb 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9398

Note: Links to docs will display an error until the docs builds have been completed.

❌ 10 New Failures, 1 Unrelated Failure

As of commit 5b0311d with merge base 6940e19 (image):

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was already present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the cla signed label Feb 20, 2026
@gabrielfruet gabrielfruet changed the title from "Improving tv_tensors wrap API for extensability." to "Improving tv_tensors wrap API for extensibility." Feb 22, 2026
Contributor

@zy1git zy1git left a comment


@gabrielfruet Thanks a lot for the PR. I left two comments.

It seems that there are some test failures that need to be addressed. If you are still working on this PR, feel free to convert it to a draft. When the PR is ready to be reviewed, just convert it back.

Comment on lines 10 to +11
from ._tv_tensor import TVTensor
from torchvision.tv_tensors._tv_tensor import TVTensor
Contributor


These two lines are duplicated.

Comment on lines +126 to +133
check_dims: bool | None = None,
) -> BoundingBoxes:
return BoundingBoxes._wrap(
tensor,
format=format if format is not None else self.format,
canvas_size=canvas_size if canvas_size is not None else self.canvas_size,
clamping_mode=clamping_mode if clamping_mode is not None else self.clamping_mode,
check_dims=False,
Contributor


The `check_dims` parameter is accepted but ignored: `False` is always passed to `_wrap`. It should either be removed from the signature or actually used.
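One way to address this review comment is to forward the caller's value and fall back to a default only when it was not specified. The sketch below is hypothetical: it mirrors the shape of the diff but uses simplified stand-in classes, not the real torchvision signatures.

```python
# Hypothetical sketch of the suggested fix: forward check_dims instead
# of hard-coding False. Classes are simplified stand-ins.

class BoundingBoxes:
    def __init__(self, data, *, format, check_dims):
        self.data = data
        self.format = format
        self.check_dims_used = check_dims  # record what was forwarded

    @classmethod
    def _wrap(cls, tensor, *, format, check_dims):
        return cls(tensor, format=format, check_dims=check_dims)

    def wrap(self, tensor, *, format=None, check_dims=None):
        return BoundingBoxes._wrap(
            tensor,
            format=format if format is not None else self.format,
            # Forward the caller's value; default only when unspecified,
            # rather than silently discarding the argument.
            check_dims=check_dims if check_dims is not None else False,
        )


src = BoundingBoxes([0, 0, 1, 1], format="XYXY", check_dims=False)
out = src.wrap([2, 2, 3, 3], check_dims=True)
```

With this change, `check_dims=True` reaches `_wrap` instead of being overwritten, which resolves the accepted-but-ignored inconsistency the reviewer flagged.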

@gabrielfruet gabrielfruet marked this pull request as draft February 25, 2026 11:27