
RELEASE.md

        *   Adds `inference_output_type` support for full integer quantized
            models. This allows users to modify the model input and output
            type to integer types (`tf.int8`, `tf.uint8`) instead of
            defaulting to float type (`tf.float32`).
    *   NNAPI
        *   Adds NNAPI Delegation support for requantization use cases by converting
            the operation into a dequantize-quantize pair.
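With an integer input/output type set, the application quantizes input values and dequantizes output values itself using each tensor's affine parameters. A minimal plain-Python sketch of that mapping (the scale and zero-point values used below are illustrative assumptions, not taken from any real model):

```python
def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Affine quantization: real value -> integer in the tf.int8 range."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the integer type's range

def dequantize(q, scale, zero_point):
    """Inverse mapping: quantized integer -> approximate real value."""
    return scale * (q - zero_point)

if __name__ == "__main__":
    scale, zp = 0.05, 3              # hypothetical tensor parameters
    q = quantize(0.5, scale, zp)     # caller feeds this int8 value in
    x = dequantize(q, scale, zp)     # caller recovers real outputs the same way
    print(q, x)
```

For `tf.uint8` the same formulas apply with `qmin=0, qmax=255`.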
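The dequantize-quantize pair described for the NNAPI requantization case can be sketched the same way: map the integer value back to a real number using the input tensor's parameters, then quantize it with the output tensor's parameters (all parameter values below are illustrative assumptions):

```python
def requantize(q, in_scale, in_zp, out_scale, out_zp, qmin=-128, qmax=127):
    """Re-express an int8 value quantized with (in_scale, in_zp) in terms
    of (out_scale, out_zp), as a dequantize followed by a quantize."""
    real = in_scale * (q - in_zp)              # dequantize to a real value
    q_out = round(real / out_scale) + out_zp   # quantize with new parameters
    return max(qmin, min(qmax, q_out))         # clamp to the int8 range

if __name__ == "__main__":
    # The real value 0.5 under two hypothetical parameterizations:
    # in: scale=0.05, zp=3 encodes 0.5 as 13; out: scale=0.02, zp=-10.
    print(requantize(13, 0.05, 3, 0.02, -10))
```

Fixed-point implementations typically fuse the two steps into one integer multiply-shift, but the observable behavior matches this two-step form.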
    - Last Modified: Mon Aug 18 20:54:38 UTC 2025