How do we specify fused operator patterns such as Conv+ReLU in the quantization config? I see such options in PyTorch, but not in ONNX `static_quantize`.
Right now I see different scales at the outputs of Conv and ReLU, which is not suitable for us because it requires an additional requantize step.
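To illustrate the cost: here is a minimal sketch of why separate output scales for Conv and ReLU force a requantize, while a shared scale reduces ReLU to a clamp. All scale/zero-point values below are hypothetical, and the helpers are simplified stand-ins for a runtime's uint8 affine quantization (q = round(x / scale) + zp), not ONNX Runtime's actual kernels.

```python
def quantize(x, scale, zp):
    """Affine uint8 quantization: q = round(x / scale) + zp, clamped to [0, 255]."""
    return max(0, min(255, round(x / scale) + zp))

def dequantize(q, scale, zp):
    return (q - zp) * scale

# Hypothetical calibration results: Conv and ReLU got different scales.
conv_scale, conv_zp = 0.05, 0
relu_scale, relu_zp = 0.04, 0

x = 3.2                                   # a positive Conv output value
q_conv = quantize(x, conv_scale, conv_zp)  # integer code under Conv's params

# With separate scales, the runtime must requantize across the ReLU:
# dequantize with Conv's params, apply ReLU, re-quantize with ReLU's params.
y = max(0.0, dequantize(q_conv, conv_scale, conv_zp))
q_relu = quantize(y, relu_scale, relu_zp)
print(q_conv, q_relu)   # 64 80 — different integer codes for the same value

# With a shared scale/zero-point (the fused Conv+ReLU pattern), ReLU on the
# quantized tensor is just a clamp at the zero point — no requantize at all.
q_fused = max(conv_zp, q_conv)
print(q_fused == q_conv)  # True
```

This is why fusing the pattern (or forcing the two tensors to share quantization parameters) eliminates the extra requantize node.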
Thanks!