Use of dynamic flow control in blurpool prevents quantisation. #1985

@BrettRyland

Description

Environment

$ composer_collect_env
Collecting system information...
---------------------------------
System Environment Report        
Created: 2023-02-21 11:21:52 CET
---------------------------------

PyTorch information
-------------------
PyTorch version: 2.0.0.dev20230127+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35

Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.4
[pip3] numpy-quaternion==2022.4.2
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==2.0.0+0d7e753227
[pip3] torch==2.0.0.dev20230127+cu118
[pip3] torch-optimizer==0.3.0
[pip3] torch-tensorrt==1.4.0.dev0+18ba2cb0
[pip3] torchaudio==2.0.0.dev20230127+cu118
[pip3] torchmetrics==0.9.3
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.0.dev20230127+cu118
[conda] Could not collect


Composer information
--------------------
Composer version: 0.12.0
Composer commit hash: None
Host processor model name: Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
Host processor core count: 4
Number of nodes: 1
Accelerator model name: NVIDIA GeForce GTX 1080 Ti
Accelerators per node: 1
CUDA Device Count: 1

To reproduce

Attempting to quantise a model that has had blurpool applied to it, using Torch FX graph mode post-training dynamic quantisation (https://pytorch.org/docs/stable/quantization.html#prototype-fx-graph-mode-quantization), raises the exception

symbolically traced variables cannot be used as inputs to control flow

(see https://pytorch.org/docs/stable/fx.html#dynamic-control-flow).
This happens because, in blur_2d, the shape of the blur filter depends on the input shape (n_in_channels, h and w), so the forward pass contains control flow that torch.fx cannot trace symbolically.
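The failure mode can be reproduced in isolation with a minimal sketch (the module below is hypothetical, not Composer's actual blurpool implementation): any branch on a traced tensor's shape triggers the same error, because under symbolic tracing shapes are Proxy objects rather than concrete integers.

```python
import torch
import torch.fx


class ShapeDependentModule(torch.nn.Module):
    """Hypothetical module whose forward branches on the input shape."""

    def forward(self, x):
        # Under torch.fx symbolic tracing, x.shape[1] is a Proxy, and
        # using a Proxy as a branch condition raises a TraceError.
        if x.shape[1] > 1:
            return x * 2
        return x


try:
    torch.fx.symbolic_trace(ShapeDependentModule())
except torch.fx.proxy.TraceError as e:
    print(e)  # symbolically traced variables cannot be used as inputs to control flow
```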

Here is a small script for reproducing the bug: blurpool_quantisation_bug.py.txt

Expected behavior

Applying the blurpool operator to a model should not break symbolic tracing due to dynamic control flow.
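One way the operator could avoid the error (a hedged sketch, assuming the channel count is known at construction time; StaticBlurPool and its internals are hypothetical, not Composer's API) is to build the blur filter once in __init__, so the forward pass contains no shape-dependent branches and traces cleanly:

```python
import torch
import torch.fx
import torch.nn.functional as F


class StaticBlurPool(torch.nn.Module):
    """Hypothetical blur-pool whose filter is fixed at construction time."""

    def __init__(self, channels: int):
        super().__init__()
        # 3x3 binomial blur kernel, one copy per channel (depthwise conv).
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x):
        # No branching on x.shape here, so torch.fx can trace this.
        return F.conv2d(x, self.kernel, stride=2, padding=1, groups=self.channels)


m = StaticBlurPool(8)
traced = torch.fx.symbolic_trace(m)  # traces without error
```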
