Conversation
@SS-JIA SS-JIA commented Dec 27, 2025

Stack from ghstack (oldest at bottom):

## Context

With the introduction of block-packed memory layouts for quantized tensors, the metadata stored by `vTensor` to describe the data layout within a texture/buffer was no longer sufficient to form a complete description of the layout. This created an awkward pattern where the `GPUMemoryLayout` had to be estimated in order to compute storage descriptors such as image extents.

This diff addresses the problem by introducing the `PackedDimInfo` struct to `vTensor`, which provides a complete description of how data in a tensor may be organized in the GPU buffer/texture used to store the tensor data, and allows simplification of the functions used to compute buffer numel or texture extents.

## `PackedDimInfo`

Introduced the `PackedDimInfo` struct that encapsulates all information about packed dimensions in GPU tensors. This improves code organization and makes the relationship between related metadata fields explicit.

The `PackedDimInfo` struct contains (see the sketch after this list):

- `packed_dim`: which dimension is tightly packed (WHCN index), i.e. contiguous in memory
- `packed_dim_padded`: whether the packed dimension is padded to a multiple of 4; some layouts do this to accommodate vectorized loads/stores
- `outer_packed_dim`: the second-level packed dimension for block-packed layouts (4W4C, 4H4W); for layouts with only a single level of packing, this will be equal to `packed_dim`
- `outer_packed_dim_padded`: whether the outer packed dim is padded (tiled layouts only)
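
Here is a minimal sketch of what the struct might look like; the field types are assumptions for illustration, not the actual definitions from this diff:

```cpp
#include <cstdint>

// Hypothetical sketch of PackedDimInfo; field types are assumed.
struct PackedDimInfo {
  // WHCN index of the dimension that is tightly packed, i.e. contiguous
  // in memory.
  int32_t packed_dim;
  // True if the packed dim is padded to a multiple of 4 to accommodate
  // vectorized loads/stores.
  bool packed_dim_padded;
  // Second-level packed dim for block-packed layouts (e.g. 4W4C, 4H4W);
  // equal to packed_dim when there is only a single level of packing.
  int32_t outer_packed_dim;
  // True if the outer packed dim is padded (tiled layouts only).
  bool outer_packed_dim_padded;
};
```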

## Changes

- Added the `PackedDimInfo` struct with the helper function `calculate_packed_dim_info()`
- Replaced the `packed_dim_` member with `packed_dim_info_` in the `vTensor` class
- Updated function signatures to accept a `PackedDimInfo&` instead of `packed_dim_`:
  * `create_hashed_layout`
  * `calculate_dim_order`
  * `calculate_padded_sizes`
  * `calculate_logical_limits`
  * `TextureMetadata` constructor/update
  * `vTensorStorage` constructor
- Added a `packed_dim_info()` accessor to the `vTensor` and `ComputeGraph` classes
- Stored an additional `padded_sizes_` member in `vTensor`, which is now used for strides/image extents/GPU buffer numel computation instead of using `sizes_` directly (a padding sketch follows this list)
- Introduced memory layouts for the `kInt8x4` type that only use a single level of packing
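
To illustrate how padding might be applied when deriving `padded_sizes_`, here is a hedged sketch built on the `PackedDimInfo` sketch above. While the name `calculate_padded_sizes` appears in the list of updated functions, the signature and body below are assumptions, as is the indexing convention that `sizes[info.packed_dim]` is the packed dimension's extent:

```cpp
#include <cstdint>
#include <vector>

// Round n up to the nearest multiple of 4.
inline int64_t align_up_4(int64_t n) {
  return (n + 3) & ~int64_t(3);
}

// Hypothetical padding helper: pads the packed dim, and the outer packed
// dim for block-packed layouts, to multiples of 4.
std::vector<int64_t> calculate_padded_sizes(
    const std::vector<int64_t>& sizes,
    const PackedDimInfo& info) {
  std::vector<int64_t> padded = sizes;
  if (info.packed_dim_padded) {
    padded[info.packed_dim] = align_up_4(padded[info.packed_dim]);
  }
  // For single-level packing, outer_packed_dim == packed_dim, so no
  // second round of padding is applied.
  if (info.outer_packed_dim_padded &&
      info.outer_packed_dim != info.packed_dim) {
    padded[info.outer_packed_dim] = align_up_4(padded[info.outer_packed_dim]);
  }
  return padded;
}
```

Strides, image extents, and GPU buffer numel would then be computed from the padded sizes rather than from `sizes_` directly.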

Differential Revision: [D89832382](https://our.internmc.facebook.com/intern/diff/D89832382/)

@pytorch-bot pytorch-bot bot commented Dec 27, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16403

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit 8aeb5f4 with merge base 9a30bd3:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA pushed a commit that referenced this pull request Dec 27, 2025
ghstack-source-id: 331218513
Pull Request resolved: #16403
@meta-cla meta-cla bot added the CLA Signed label Dec 27, 2025
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.
