Results 1 - 2 of 2 for Quantize (0.04 sec)

  1. RELEASE.md

            `inference_output_type` for full integer quantized models. This allows
            users to modify the model input and output type to integer types
            (`tf.int8`, `tf.uint8`) instead of defaulting to float type
            (`tf.float32`).
    *   NNAPI
        *   Adds NNAPI Delegation support for requantization use cases by converting
            the operation into a dequantize-quantize pair.
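The dequantize-quantize pair mentioned above can be illustrated with a minimal sketch. This is not the actual NNAPI delegate code; the function names and parameter conventions (`scale`, `zero_point`, int8 range) are common quantization conventions assumed for illustration.

```python
def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Map an int8 quantized value back to a real number."""
    return scale * (q - zero_point)


def quantize(x: float, scale: float, zero_point: int) -> int:
    """Map a real number to int8, rounding and clamping to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))


def requantize(q: int, in_scale: float, in_zp: int,
               out_scale: float, out_zp: int) -> int:
    """Requantization expressed as a dequantize step followed by a quantize step,
    converting a value from one (scale, zero_point) pair to another."""
    return quantize(dequantize(q, in_scale, in_zp), out_scale, out_zp)
```

For example, a value quantized as 10 with scale 0.5 represents the real number 5.0; requantizing it to scale 0.25 yields 20, since both encodings denote the same real value.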
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Mon Aug 18 20:54:38 UTC 2025
    - 740K bytes
    - Viewed (1)
  2. docs/fr/docs/async.md

    It would take the same amount of time to finish with or without sections (concurrency), and you would have done the same amount of work.
    
    Registered: Sun Sep 07 07:19:17 UTC 2025
    - Last Modified: Sun Aug 31 09:56:21 UTC 2025
    - 25.4K bytes
    - Viewed (0)