Results 1 - 2 of 2 for Motivation (0.09 sec)
tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization.td
there's any, and set it to True. The reason behind this decision is that activations of these ops generally show better accuracy with asymmetric input quantization, so we want to eventually deprecate symmetric activation quantization for those ops. - Unlike the old quantizer, per-channel quantization is supported for weight-only TransposeConvOp. }]; let methods = [ InterfaceMethod<
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 05 07:39:40 UTC 2024 - 8.3K bytes - Viewed (0) -
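The preference for asymmetric activation quantization noted in the snippet above can be illustrated with a minimal numpy sketch (not the TFLite implementation): for one-sided activations such as ReLU outputs, a symmetric scheme pins the zero point at 0 and wastes the negative half of the int8 range, while an asymmetric zero point uses all 256 levels.

```python
import numpy as np

def quantize(x, qmin, qmax, symmetric):
    """Affine-quantize x into [qmin, qmax]; symmetric forces zero_point = 0."""
    if symmetric:
        scale = max(abs(x.min()), abs(x.max())) / max(abs(qmin), abs(qmax))
        zero_point = 0
    else:
        scale = (x.max() - x.min()) / (qmax - qmin)
        zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# ReLU-like activations: all non-negative, so a symmetric scheme
# effectively uses only 128 of the 256 int8 levels.
x = np.random.RandomState(0).uniform(0.0, 6.0, 1024).astype(np.float32)

errors = {}
for symmetric in (True, False):
    q, s, zp = quantize(x, -128, 127, symmetric)
    errors[symmetric] = np.abs(dequantize(q, s, zp) - x).mean()
    print(f"symmetric={symmetric}: mean abs error {errors[symmetric]:.5f}")
```

On this data the asymmetric variant roughly halves the mean reconstruction error, which matches the accuracy motivation stated in the interface description.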
tensorflow/compiler/mlir/quantization/tensorflow/quantization_options.proto
// optimizations in the pipeline.
METHOD_NO_QUANTIZE = 1;

// Static range quantization. Quantized tensor values' ranges are statically
// determined. The activation and weight are quantized to INT8 while bias is
// quantized to INT32.
METHOD_STATIC_RANGE_INT8 = 2;

// Dynamic range quantization. Quantized tensor values' ranges are
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 19 06:31:19 UTC 2024 - 9.2K bytes - Viewed (0)
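The static-range scheme in the proto comment above (INT8 activations and weights, INT32 bias) can be sketched with numpy. This is an illustrative toy, not TensorFlow's implementation; the key point is that the int8 matmul accumulates into int32, so the bias is pre-quantized with scale `s_x * s_w` and added directly to the accumulator.

```python
import numpy as np

rng = np.random.RandomState(42)
x = rng.uniform(-1, 1, (1, 16)).astype(np.float32)  # activation
w = rng.uniform(-1, 1, (16, 4)).astype(np.float32)  # weight
b = rng.uniform(-1, 1, 4).astype(np.float32)        # bias

# Symmetric per-tensor scales (zero points of 0, for simplicity).
s_x = np.abs(x).max() / 127.0
s_w = np.abs(w).max() / 127.0
qx = np.round(x / s_x).astype(np.int8)
qw = np.round(w / s_w).astype(np.int8)

# Bias is quantized to INT32 at scale s_x * s_w, the scale of the
# int32 accumulator, so it can be added without any rescaling.
qb = np.round(b / (s_x * s_w)).astype(np.int32)

acc = qx.astype(np.int32) @ qw.astype(np.int32) + qb  # int32 accumulation
y = acc.astype(np.float32) * (s_x * s_w)              # dequantize the result

max_err = np.abs(y - (x @ w + b)).max()
print(f"max abs error vs float reference: {max_err:.5f}")
```

Dynamic range quantization differs in that activation ranges are not fixed ahead of time but computed at runtime, which the truncated comment at the end of the snippet begins to describe.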