Improving tv_tensors wrap API for extensibility #9398
gabrielfruet wants to merge 2 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9398
Note: Links to docs will display an error until the doc builds have been completed.

❌ 10 New Failures, 1 Unrelated Failure as of commit 5b0311d with merge base 6940e19.

NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following job failed but was also present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@gabrielfruet Thanks a lot for the PR. I left two comments.
It also seems that there are some test failures which need to be addressed. If you are still working on this PR, feel free to convert it to a draft; when it is ready for review, just convert it back.
```python
from ._tv_tensor import TVTensor
from torchvision.tv_tensors._tv_tensor import TVTensor
```
These two lines import the same name; only one of them should be kept.
```python
    check_dims: bool | None = None,
) -> BoundingBoxes:
    return BoundingBoxes._wrap(
        tensor,
        format=format if format is not None else self.format,
        canvas_size=canvas_size if canvas_size is not None else self.canvas_size,
        clamping_mode=clamping_mode if clamping_mode is not None else self.clamping_mode,
        check_dims=False,
```
The check_dims parameter is accepted but effectively ignored, since False is always passed. It should either be removed from the signature or actually forwarded.
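One way to address this, sketched below with a minimal stub in place of the real torchvision internals (the `BoundingBoxes._wrap` classmethod here is an assumption for illustration, not the actual library code), is to forward the caller-supplied value and only fall back to False when it was left unspecified:

```python
# Hypothetical sketch: forward check_dims instead of hardcoding False.
# BoundingBoxes and _wrap are minimal stand-ins, not torchvision's real classes.

class BoundingBoxes:
    def __init__(self, format, canvas_size, clamping_mode):
        self.format = format
        self.canvas_size = canvas_size
        self.clamping_mode = clamping_mode

    @classmethod
    def _wrap(cls, tensor, *, format, canvas_size, clamping_mode, check_dims):
        # Stub: record which check_dims value actually reached _wrap.
        return {"tensor": tensor, "check_dims": check_dims}

    def wrap(self, tensor, format=None, canvas_size=None,
             clamping_mode=None, check_dims=None):
        return BoundingBoxes._wrap(
            tensor,
            format=format if format is not None else self.format,
            canvas_size=canvas_size if canvas_size is not None else self.canvas_size,
            clamping_mode=clamping_mode if clamping_mode is not None else self.clamping_mode,
            # Forward the argument; default to False only when unspecified.
            check_dims=check_dims if check_dims is not None else False,
        )


boxes = BoundingBoxes(format="XYXY", canvas_size=(4, 4), clamping_mode=None)
print(boxes.wrap([[0, 0, 1, 1]], check_dims=True)["check_dims"])  # True
print(boxes.wrap([[0, 0, 1, 1]])["check_dims"])                   # False
```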
Addresses #9333

This adopts method-based wrapping, enabling users to extend the wrapping behavior in subclasses of TVTensor. This is the Pythonic approach, since many built-ins rely on this pattern (e.g. `len`, `iter`, `next`, ...). It does not break backwards compatibility.
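The idea can be illustrated with a minimal, self-contained sketch (plain Python stand-ins, not the actual torchvision classes or signatures): a base class exposes a `wrap` method with a default implementation, and a subclass overrides it to carry its own metadata onto newly produced data.

```python
# Hypothetical sketch of method-based wrapping; TVTensor and BoundingBoxes
# here are simplified stand-ins for the torchvision.tv_tensors classes.

class TVTensor:
    """Base class: default wrap just re-wraps data as the same subclass."""
    def __init__(self, data):
        self.data = data

    def wrap(self, data):
        return type(self)(data)


class BoundingBoxes(TVTensor):
    """Subclass with metadata that must survive re-wrapping."""
    def __init__(self, data, format="XYXY", canvas_size=(0, 0)):
        super().__init__(data)
        self.format = format
        self.canvas_size = canvas_size

    def wrap(self, data, format=None, canvas_size=None):
        # Override: carry metadata forward, allowing per-call overrides.
        return BoundingBoxes(
            data,
            format=format if format is not None else self.format,
            canvas_size=canvas_size if canvas_size is not None else self.canvas_size,
        )


boxes = BoundingBoxes([[0, 0, 2, 2]], format="XYXY", canvas_size=(4, 4))
shifted = boxes.wrap([[1, 1, 3, 3]])
print(shifted.format, shifted.canvas_size)  # XYXY (4, 4)
```

Because dispatch goes through the instance's own `wrap` method, user-defined subclasses can customize wrapping without the library needing to know about them, which is the extensibility this PR is after.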