RELEASE.md

### `tf.lite`:

*   `TFLiteConverter`:
    *   Support optional flags `inference_input_type` and
        `inference_output_type` for full integer quantized models. This allows
        users to modify the model input and output type to integer types
        (`tf.int8`, `tf.uint8`) instead of defaulting to float type
        (`tf.float32`). A usage sketch follows this list.
*   NNAPI
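
A minimal sketch of how these two flags can be combined with full integer
post-training quantization. The toy Keras model and the
`representative_dataset` generator below are illustrative placeholders, not
part of the release notes; only the converter attributes reflect the change
described above.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in practice this is the model being deployed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Representative dataset used to calibrate the quantization ranges.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require integer-only builtin ops for full integer quantization.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# The new optional flags: make the model's input and output tensors
# integer typed (tf.int8 or tf.uint8) instead of the default tf.float32.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quantized_model = converter.convert()
```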