Results 1 - 10 of 17 for quantized (0.39 sec)

  1. tensorflow/compiler/mlir/quantization/tensorflow/passes/quantize_composite_functions.cc

        }
        lines.push_back("");
        lines.push_back(absl::StrFormat(
            "Number of quantized layers with quantized outputs: %d/%d",
            total_quantized_func_count - float_output_func_count,
            total_quantized_func_count));
        lines.push_back(absl::StrFormat("Number of quantize layers added: %d",
                                        quantize_func_count));
    - Last Modified: Thu Apr 25 16:01:03 UTC 2024
    - 54.5K bytes
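
    The counters in this excerpt relate as: layers with quantized outputs = total quantized layers minus those still emitting float outputs. A minimal Python sketch of the same report assembly, with hypothetical counts (the variable names mirror the C++):

        # Hypothetical counts; only the formatting mirrors the C++ above.
        total_quantized_func_count = 5
        float_output_func_count = 2
        quantize_func_count = 7

        lines = [""]
        lines.append(
            "Number of quantized layers with quantized outputs: %d/%d"
            % (total_quantized_func_count - float_output_func_count,
               total_quantized_func_count))
        lines.append("Number of quantize layers added: %d" % quantize_func_count)
        print("\n".join(lines))
        # Number of quantized layers with quantized outputs: 3/5
        # Number of quantize layers added: 7
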
  2. tensorflow/compiler/mlir/lite/stablehlo/transforms/compose_uniform_quantized_type_pass.cc

    };
    
    // Matches the pattern for quantized convolution op and rewrites it to use
    // uniform quantized types.
    //
    // Currently assumes asymmetric per-tensor quantization for activations and
    // symmetric per-channel quantization for filters.
    //
    // This pattern represents the following derived equation, where:
    // * rn = real (expressed) value for tensor n
    // * qn = quantized value for tensor n
    // * sn = scale for tensor n
    - Last Modified: Thu Apr 25 16:01:03 UTC 2024
    - 64.6K bytes
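
    The derived equation this comment introduces builds on the standard affine mapping rn = sn * (qn - zn), with zn the zero point (the comment is truncated before it defines zn). A minimal NumPy sketch of the two schemes the pass assumes, asymmetric per-tensor for activations and symmetric per-channel for filters; all scales and zero points below are illustrative:

        import numpy as np

        def quantize_per_tensor(r, scale, zero_point, qmin=-128, qmax=127):
            # Asymmetric per-tensor (activations): q = round(r / s) + z.
            q = np.round(r / scale) + zero_point
            return np.clip(q, qmin, qmax).astype(np.int8)

        def quantize_per_channel(r, scales, axis=0, qmin=-127, qmax=127):
            # Symmetric per-channel (filters): zero point 0, one scale per channel.
            shape = [1] * r.ndim
            shape[axis] = -1
            q = np.round(r / np.asarray(scales).reshape(shape))
            return np.clip(q, qmin, qmax).astype(np.int8)

        def dequantize(q, scale, zero_point=0):
            # rn = sn * (qn - zn): recovers the real (expressed) value.
            return scale * (q.astype(np.float32) - zero_point)
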
  3. tensorflow/compiler/mlir/quantization/stablehlo/tests/passes/quantize_composite_functions.mlir

        return %0 : tensor<1x3xf32>
      }
    // Checks that the entry function is quantized for dot_general. Quantized
    // dot_general outputs an i32 quantized tensor, followed by requantization to
    // an i8 quantized tensor.
    
    - Last Modified: Thu May 09 05:56:10 UTC 2024
    - 91.6K bytes
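
    The requantization this comment describes rescales the i32 accumulator of the quantized dot_general into the output's i8 parameters. A hedged NumPy sketch of that step (the scale and zero-point values are illustrative, not taken from the test):

        import numpy as np

        def requantize_i32_to_i8(acc_i32, in_scale, w_scale, out_scale, out_zp):
            # The i32 accumulator carries the combined scale in_scale * w_scale;
            # rescale it into the output parameters and clamp to the i8 range.
            multiplier = (in_scale * w_scale) / out_scale
            q = np.round(acc_i32.astype(np.float64) * multiplier) + out_zp
            return np.clip(q, -128, 127).astype(np.int8)

        acc = np.array([15000, -32000], dtype=np.int32)
        print(requantize_i32_to_i8(acc, in_scale=0.02, w_scale=0.005,
                                   out_scale=0.05, out_zp=-1))  # [ 29 -65]
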
  4. tensorflow/compiler/mlir/quantization/stablehlo/python/integration_test/quantize_model_test.py

        if bias_fn:
          self.assertTrue(re.search('stablehlo.add.*xi32>', module_str))
        # Consider if there is a way to check whether activation fusion is
        # properly done at the MLIR level.
        # Tests that the quantized graph outputs similar values. The rtol and atol
        # values are arbitrary.
        self.assertAllClose(new_outputs, expected_outputs, rtol=0.3, atol=0.2)
    
        # Due to other metadata, the compression ratio is not exactly 1/4.
    - Last Modified: Tue May 14 06:31:57 UTC 2024
    - 51.4K bytes
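
    A self-contained sketch of the two checks closing this excerpt, with hypothetical tensors and model sizes (np.testing.assert_allclose stands in for the test's assertAllClose):

        import numpy as np

        # Hypothetical stand-ins for the outputs the test compares.
        expected_outputs = np.array([0.10, 0.52, 0.98], dtype=np.float32)
        new_outputs = np.array([0.12, 0.49, 1.05], dtype=np.float32)

        # Closeness check; per the comment above, rtol/atol are arbitrary.
        np.testing.assert_allclose(new_outputs, expected_outputs, rtol=0.3, atol=0.2)

        # Size check: int8 weights are nominally 1/4 of float32, but serialized
        # metadata keeps the ratio from being exactly 0.25 (sizes hypothetical).
        float_model_size, quantized_model_size = 400_000, 110_000
        assert quantized_model_size / float_model_size < 0.3
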
  5. tensorflow/compiler/mlir/lite/quantization/lite/quantize_model_test.cc

                  Eq(TensorType_INT8));
    
      // Verify that the FC bias is int32 quantized.
      ASSERT_THAT(float_graph->tensors()->Get(float_op->inputs()->Get(2))->type(),
                  Eq(TensorType_FLOAT32));
      EXPECT_THAT(subgraph->tensors[op->inputs[2]].get()->type,
                  Eq(TensorType_INT32));
    
      // The output tensor of FC should be int8 quantized.
      ASSERT_THAT(float_graph->tensors()->Get(float_op->outputs()->Get(0))->type(),
    - Last Modified: Wed Jun 12 23:15:24 UTC 2024
    - 73.9K bytes
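
    The int32 bias the test expects follows the usual TFLite convention: the bias is quantized with scale input_scale * weight_scale and zero point 0, so it adds directly onto the i32 matmul accumulator. A minimal sketch with illustrative scales:

        import numpy as np

        def quantize_fc_bias(bias_f32, input_scale, weight_scale):
            # Bias scale = input_scale * weight_scale, zero point 0, per the
            # TFLite quantization spec.
            bias_scale = input_scale * weight_scale
            q = np.round(bias_f32 / bias_scale)
            i32 = np.iinfo(np.int32)
            return np.clip(q, i32.min, i32.max).astype(np.int32)

        print(quantize_fc_bias(np.array([0.25, -0.5]),
                               input_scale=0.02, weight_scale=0.005))
        # [ 2500 -5000]
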
  6. tensorflow/compiler/mlir/tensorflow/transforms/lower_tf.cc

    };
    
    // This pass performs a manual conversion with FakeQuant, converting between
    // floating point and quantized space. It is designed to reproduce TF's
    // implementation, mirroring the previous XLA implementation.
    //
    // 1. Computing proper quantized bounds. This involves nudging the input bounds.
    // 2. Converting the input bounds to quantized space, rounding values.
    // 3. Converting back into floating point space.
    - Last Modified: Thu Apr 25 16:01:03 UTC 2024
    - 74.9K bytes
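
    A minimal NumPy sketch of the three steps listed in the comment, following the standard FakeQuant nudging scheme (unsigned 8-bit here for concreteness):

        import numpy as np

        def fake_quant(x, input_min, input_max, num_bits=8):
            qmin, qmax = 0.0, 2.0**num_bits - 1.0
            # 1. Compute proper quantized bounds, nudging the input bounds so
            #    that 0.0 lands exactly on a quantized step.
            scale = (input_max - input_min) / (qmax - qmin)
            zero_point = np.clip(np.round(qmin - input_min / scale), qmin, qmax)
            nudged_min = (qmin - zero_point) * scale
            nudged_max = (qmax - zero_point) * scale
            # 2. Convert the clamped input to quantized space, rounding values.
            q = np.round((np.clip(x, nudged_min, nudged_max) - nudged_min) / scale)
            # 3. Convert back into floating point space.
            return q * scale + nudged_min

        print(fake_quant(np.linspace(-1.0, 1.0, 5), -1.0, 1.0))
        # After nudging, 0.0 is exactly representable.
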
  7. tensorflow/compiler/mlir/quantization/tensorflow/python/integration_test/quantize_model_test_base.py

        )
    
      def _is_quantized_function(self, func: function_pb2.FunctionDef) -> bool:
        """Determine whether a FunctionDef is quantized.
    
        Args:
          func: A FunctionDef object.
    
        Returns:
          True iff `func` is quantized.
        """
        return func.signature.name.startswith('quantized_')
    
      def _is_composite_function(self, func: function_pb2.FunctionDef) -> bool:
    - Last Modified: Thu Mar 21 08:51:46 UTC 2024
    - 51.2K bytes
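
    A short usage sketch of the prefix check above; count_quantized_functions is a hypothetical helper, not part of the test base:

        from tensorflow.core.framework import function_pb2, graph_pb2

        def is_quantized_function(func: function_pb2.FunctionDef) -> bool:
            # Same check as in the test base: quantized functions are named
            # 'quantized_*'.
            return func.signature.name.startswith('quantized_')

        def count_quantized_functions(graph_def: graph_pb2.GraphDef) -> int:
            # Counts quantized FunctionDefs in the graph's function library.
            return sum(is_quantized_function(f)
                       for f in graph_def.library.function)
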
  8. tensorflow/compiler/mlir/quantization/tensorflow/tests/quantize_composit_functions_debugging.mlir

    // RUN: tf-quant-opt %s -split-input-file -quant-insert-quantized-functions -quant-quantize-composite-functions | FileCheck --check-prefix=TF %s
    // RUN: tf-quant-opt %s -split-input-file -quant-insert-quantized-functions -quant-quantize-composite-functions='target-opset=XLA' | FileCheck --check-prefix=XLA %s
    - Last Modified: Mon Nov 06 01:23:21 UTC 2023
    - 80.5K bytes
  9. tensorflow/compiler/mlir/lite/flatbuffer_import.cc

    // If the input `tensor` has scale/zero_point, `res` should have a quantized
    // type; thus no stats op is required and nullptr is returned.
    // If the min/max information is invalid, nullptr is returned.
    mlir::Operation* ConvertMinMaxToStatsOp(const TensorT& tensor, OpBuilder b,
                                            Value res) {
      // If the `tensor` has scale/zero_point, it must have been quantized, then the
    - Last Modified: Tue May 21 18:21:50 UTC 2024
    - 66.8K bytes
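
    The comment describes a three-way decision: an already-quantized tensor and invalid min/max information both yield no stats op. A hedged Python sketch of that control flow (the QuantParams container is illustrative, not the importer's API):

        from dataclasses import dataclass, field
        from typing import List, Optional, Tuple

        @dataclass
        class QuantParams:
            # Illustrative stand-in for a flatbuffer tensor's quantization.
            scale: List[float] = field(default_factory=list)
            min: List[float] = field(default_factory=list)
            max: List[float] = field(default_factory=list)

        def min_max_to_stats(q: Optional[QuantParams]) -> Optional[Tuple[float, float]]:
            if q and q.scale:
                # Already quantized: the result type carries scale/zero_point,
                # so no stats op is needed (the C++ returns nullptr).
                return None
            if not q or not q.min or not q.max or q.min[0] > q.max[0]:
                # Min/max information missing or invalid: nothing to attach.
                return None
            # Otherwise the importer materializes a stats op carrying (min, max).
            return (q.min[0], q.max[0])
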
  10. tensorflow/compiler/mlir/g3doc/_includes/tf_passes.md

    pruned using DCE.
    ### `-tf-lower-quantized`
    
    _Lowers ops that require quantized input or output._
    
    This pass rewrites all ops that have at least one input or output that must
    be a quantized type to ops whose inputs and outputs allow non-quantized
    types. Examples of quantized types are TF_Qint8 and TF_Quint8.
    
    An example is TF_DequantizeOp, which converts a quantized type to a float.
    - Last Modified: Wed Aug 02 02:26:39 UTC 2023
    - 96.4K bytes
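
    As a concrete instance of the lowering this pass description mentions, Dequantize maps a quantized type back to float. A minimal sketch using the public TF Python API (ranges illustrative):

        import tensorflow as tf

        # Quantize a float tensor to qint8, then lower it back to float with
        # Dequantize, the example op named in the pass description.
        f32 = tf.constant([-1.0, 0.0, 1.0])
        q = tf.quantization.quantize(f32, min_range=-1.0, max_range=1.0, T=tf.qint8)
        deq = tf.quantization.dequantize(q.output, min_range=-1.0, max_range=1.0)
        print(deq.numpy())  # approximately [-1.0, 0.0, 1.0]
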