Results 1 - 2 of 2 for quantize (0.1 sec)

  1. tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td

      let summary = [{
    Perform quantized dot of quantized Tensor `lhs` and quantized Tensor `rhs` to make quantized `output`.
      }];
    
      let description = [{
    Given quantized `lhs` and quantized `rhs`, performs quantized dot on `lhs` and `rhs` to make quantized `output`.
    `lhs` and `rhs` must be 2D Tensors and the lhs.dim_size(1) must match rhs.dim_size(0).
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue Jun 11 23:24:08 UTC 2024
    - 793K bytes
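The op summary above can be illustrated with a minimal NumPy sketch of a uniform quantized dot, assuming per-tensor scales and zero points with int8 storage and int32 accumulation. The function names and signatures here are illustrative only, not the TensorFlow op's API:

```python
import numpy as np

def quantize(x, scale, zero_point, dtype=np.int8):
    """Uniformly quantize a float array onto an integer grid (illustrative)."""
    q = np.round(x / scale) + zero_point
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

def quantized_dot(lhs_q, lhs_scale, lhs_zp, rhs_q, rhs_scale, rhs_zp,
                  out_scale, out_zp, out_dtype=np.int8):
    """Dot of two quantized 2-D tensors producing a quantized output.

    As in the op description: both inputs must be 2-D, and
    lhs.shape[1] must match rhs.shape[0].
    """
    assert lhs_q.ndim == 2 and rhs_q.ndim == 2
    assert lhs_q.shape[1] == rhs_q.shape[0]
    # Remove zero points and accumulate in int32 to avoid overflow.
    acc = (lhs_q.astype(np.int32) - lhs_zp) @ (rhs_q.astype(np.int32) - rhs_zp)
    # Rescale the int32 accumulator into the output's quantized grid.
    out = np.round(acc * (lhs_scale * rhs_scale) / out_scale) + out_zp
    info = np.iinfo(out_dtype)
    return np.clip(out, info.min, info.max).astype(out_dtype)
```

For example, with scale 0.1 and zero point 0 on all tensors, the float product `[[1.0, 2.0]] @ [[1.0], [1.0]] = [[3.0]]` comes out as the quantized value 30, which dequantizes back to 3.0.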
  2. RELEASE.md

            `inference_output_type` for full integer quantized models. This allows
            users to modify the model input and output type to integer types
            (`tf.int8`, `tf.uint8`) instead of defaulting to float type
            (`tf.float32`).
    *   NNAPI
        *   Adds NNAPI Delegation support for requantization use cases by converting
            the operation into a dequantize-quantize pair.
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue Jun 11 23:24:08 UTC 2024
    - 730.3K bytes
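The dequantize-quantize pair mentioned in the NNAPI note folds into a single rescale between the two quantization grids. A minimal NumPy sketch, with a hypothetical function name (not the NNAPI or TFLite API):

```python
import numpy as np

def requantize(q, in_scale, in_zp, out_scale, out_zp, dtype=np.int8):
    """Requantize by composing a dequantize-quantize pair (illustrative).

    dequantize: x  = in_scale * (q - in_zp)
    quantize:   q' = round(x / out_scale) + out_zp
    """
    x = in_scale * (q.astype(np.float32) - in_zp)   # dequantize to float
    q_out = np.round(x / out_scale) + out_zp        # quantize to new grid
    info = np.iinfo(dtype)
    return np.clip(q_out, info.min, info.max).astype(dtype)
```

For example, the value 10 on a (scale=0.1, zero_point=0) grid represents 1.0; requantizing it to a (scale=0.05, zero_point=5) grid yields 25, which represents the same 1.0.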