
🐛 [export][llama2] AssertionError: Currently we don't support unsqueeze with more than one dynamic dims #2916

@peri044

Bug Description

Configuration: llm_examples_main branch, torch version 2.4, transformers==4.41.2

Error:

    return impl.unsqueeze.unsqueeze(
  File "/work/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/unsqueeze.py", line 36, in unsqueeze
    len(get_dynamic_dims(input_val.shape)) <= 1
AssertionError: Currently we don't support unsqueeze with more than one dynamic dims.

While executing %unsqueeze_8 : [num_users=1] = call_function[target=torch.ops.aten.unsqueeze.default](args = (%mul, 0), kwargs = {_itensor_to_tensor_meta: {<tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd34b070>: ((s0, s0 + 1), torch.bool, False, (s0 + 1, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd34b1b0>: ((s0, s0 + 1), torch.float32, False, (s0 + 1, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bd714a570>: ((s0, s0 + 1), torch.float32, False, (s0 + 1, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd1f7270>: ((s0, s0 + 1), torch.bool, False, (s0 + 1, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd1f5330>: ((s0, s0 + 1), torch.float32, False, (s0 + 1, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd1f7070>: ((1, s0), torch.int64, False, (s0, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd1f4bb0>: ((1, 1, s0), torch.int64, False, (s0, s0, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd1f52f0>: ((1, 32, s0, 64), torch.float32, False, (2048*s0, 64, 2048, 1), None, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd1f5bb0>: ((1, 32, s0, 64), torch.float32, False, (4096*s0, 128, 4096, 1), None, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd6eb830>: ((1, 32, s0, 128), torch.float32, False, (4096*s0, 128*s0, 128, 1), torch.contiguous_format, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd2faab0>: ((1, 32, s0, 64), torch.float32, False, (2048*s0, 64, 2048, 1), None, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd2d7ab0>: ((1, 32, s0, 64), torch.float32, False, (4096*s0, 128, 4096, 1), None, False, {}), <tensorrt_bindings.tensorrt.ITensor object at 0x7f0bdd367ab0>: ((1, 32, s0, 128), torch.float32, False, (4096*s0, 128*s0, 128, 1), torch.contiguous_format, False, {})}})
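For reference, the assertion that fires is the guard at line 36 of torch_tensorrt/dynamo/conversion/impl/unsqueeze.py shown in the traceback. A minimal paraphrase of that check (the helper is reimplemented locally for illustration, not imported from the actual torch_tensorrt source) shows why the mask with shape (s0, s0 + 1), i.e. two dynamic dimensions, trips it:

```python
# Paraphrase of the guard raised in impl/unsqueeze.py (see traceback above);
# get_dynamic_dims is reimplemented here for illustration only.

def get_dynamic_dims(shape):
    # TensorRT ITensors report dynamic dimensions as -1.
    return [i for i, s in enumerate(shape) if s == -1]

def check_unsqueeze_input(input_shape):
    assert (
        len(get_dynamic_dims(input_shape)) <= 1
    ), "Currently we don't support unsqueeze with more than one dynamic dims."

# The tensor fed to aten.unsqueeze has shape (s0, s0 + 1): both dims are dynamic.
try:
    check_unsqueeze_input((-1, -1))
except AssertionError as e:
    print(e)  # Currently we don't support unsqueeze with more than one dynamic dims.
```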

To Reproduce
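A minimal sketch of a standalone reproducer, assuming the same failure mode as the llama2 export (this is a hypothetical toy model, not the script from the llm_examples_main branch; the exact assertion depends on which input dimensions end up dynamic):

```python
import torch
import torch_tensorrt

class Unsqueeze2Dyn(torch.nn.Module):
    def forward(self, x):
        # unsqueeze applied to a tensor whose two dims are both dynamic
        return torch.unsqueeze(x * 2.0, 0)

model = Unsqueeze2Dyn().eval().cuda()

# Mark both input dimensions as dynamic via min/opt/max shapes, mirroring
# the (s0, s0 + 1) mask in the llama2 graph.
inputs = [
    torch_tensorrt.Input(
        min_shape=(1, 2),
        opt_shape=(8, 9),
        max_shape=(16, 17),
        dtype=torch.float32,
    )
]

trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
```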

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

Labels

bug (Something isn't working)