
[PyTorch][torch.compile] Add value object support for quantizers #2792

Draft
pggPL wants to merge 3 commits into NVIDIA:main from pggPL:quantizers_as_value_objects

Conversation

@pggPL (Collaborator) commented Mar 23, 2026

Declare Float8CurrentScalingQuantizer, MXFP8Quantizer, Float8BlockQuantizer and NVFP4Quantizer as torch.compile value-typed opaque objects by adding `__eq__`, `__hash__` and `__fx_repr__` methods.
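The "value object" idea can be sketched in plain Python: equality and hashing are derived purely from configuration, and a repr-style method emits source code that reconstructs an equal object. This is a minimal illustration, not the actual Transformer Engine classes; `SketchQuantizer` and its fields are hypothetical names, and the exact contract `torch.compile` expects for opaque value types may differ.

```python
class SketchQuantizer:
    """Hypothetical quantizer configured entirely by plain values."""

    def __init__(self, dtype: str, rowwise: bool = True):
        self.dtype = dtype
        self.rowwise = rowwise

    def _key(self):
        # Identity is defined by configuration only, never by any
        # workspace tensors, so equal configs compare equal.
        return (type(self), self.dtype, self.rowwise)

    def __eq__(self, other):
        return isinstance(other, SketchQuantizer) and self._key() == other._key()

    def __hash__(self):
        return hash(self._key())

    def __fx_repr__(self):
        # A string of Python source that rebuilds this object; a tracer
        # could embed it in a generated graph as a constant and guard on
        # equality instead of treating the object as a graph input.
        return f"SketchQuantizer({self.dtype!r}, rowwise={self.rowwise})"
```

With this in place, two identically configured instances are interchangeable, and `eval(q.__fx_repr__())` round-trips to an equal object.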

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

pggPL and others added 3 commits March 23, 2026 16:39
Declare Float8CurrentScalingQuantizer, MXFP8Quantizer,
Float8BlockQuantizer and NVFP4Quantizer as torch.compile
value-typed opaque objects by adding __eq__, __hash__ and
__fx_repr__ methods. This lets Dynamo bake them as constants
and guard on equality instead of treating them as graph inputs.

Key changes:
- Add _TEQuantizerMeta combining OpaqueBaseMeta + ABCMeta
  (graceful fallback on PyTorch < 2.14)
- Float8CurrentScalingQuantizer: lazy workspace allocation
  for scale/amax via __getattr__ so __fx_repr__ produces
  a tensor-free reconstruction
- NVFP4Quantizer: store with_random_sign_mask for __fx_repr__
- New dynamo.py with register_opaque_type calls (no-op on
  older PyTorch)
- Test: test_torch_compile.py with roundtrip value object test

Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
Made-with: Cursor
…_objects

Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
Made-with: Cursor

# Conflicts:
#	transformer_engine/pytorch/quantized_tensor.py
#	transformer_engine/pytorch/tensor/float8_tensor.py
#	transformer_engine/pytorch/tensor/nvfp4_tensor.py
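The lazy-workspace point in the commit message can be sketched as follows: scale/amax buffers are materialized on first access via `__getattr__`, so a freshly reconstructed object (e.g. one rebuilt from `__fx_repr__`) carries no tensors until it is actually used. `LazyScalingQuantizer` and its attributes are hypothetical stand-ins, not the real Float8CurrentScalingQuantizer code, and a plain list stands in for a torch tensor workspace.

```python
class LazyScalingQuantizer:
    """Hypothetical quantizer whose workspace buffers are allocated lazily."""

    _LAZY = ("scale", "amax")  # workspace attributes created on demand

    def __init__(self, dtype: str):
        self.dtype = dtype  # configuration only; no tensors yet

    def __getattr__(self, name):
        # __getattr__ runs only when normal lookup fails, i.e. before
        # the workspace attribute has been materialized.
        if name in self._LAZY:
            value = [0.0]  # stand-in for a real tensor workspace
            setattr(self, name, value)  # cache; later lookups skip __getattr__
            return value
        raise AttributeError(name)

    def __fx_repr__(self):
        # Tensor-free reconstruction: only configuration is embedded,
        # so the emitted source never references a live tensor.
        return f"LazyScalingQuantizer({self.dtype!r})"
```

The design choice here is that laziness keeps `__fx_repr__` honest: since workspaces are not part of the constructor, the reconstruction string cannot depend on them.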
