Results 1 - 1 of 1 for to_xla_per_channel (0.15 sec)

  1. tensorflow/compiler/mlir/quantization/tensorflow/python/integration_test/quantize_model_test.py

          # only Quantization
          # Enable this back once new weight-only quantizer is supported for
          # per-channel quantization.
          # ('to_xla_per_channel', quant_opts_pb2.XLA, True),
      )
      @test_util.run_in_graph_and_eager_modes
      def test_conv_model(
          self,
          target_opset: quant_opts_pb2.OpSet,
          enable_per_channel_quantization: bool,
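
    The snippet is a parameterized test: each tuple passed to absl's `parameterized.named_parameters` (e.g. the disabled `('to_xla_per_channel', quant_opts_pb2.XLA, True)`) supplies a test-name suffix as its first element, and the remaining elements become the test method's arguments (`target_opset`, `enable_per_channel_quantization`). A minimal sketch of that expansion, in plain Python with no absl dependency (the case values here are illustrative stand-ins for the proto enum):

    ```python
    # Sketch of how named_parameters-style tuples expand into test cases.
    # Each tuple is (name_suffix, *args); absl generates one test method per
    # tuple, named <base_test_name>_<name_suffix>, invoked with *args.
    CASES = [
        ("to_tf", "TF", False),
        ("to_xla", "XLA", False),
        # ("to_xla_per_channel", "XLA", True),  # the disabled case above
    ]

    def expand(base_name, cases):
        """Map each case tuple to its generated test name and argument list."""
        return {f"{base_name}_{suffix}": args for (suffix, *args) in cases}

    generated = expand("test_conv_model", CASES)
    # generated["test_conv_model_to_xla"] == ["XLA", False]
    ```

    Re-enabling the commented-out tuple would thus generate a `test_conv_model_to_xla_per_channel` case running with the XLA opset and per-channel quantization enabled.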
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri May 17 03:36:50 UTC 2024
    - 235.6K bytes