Results 1 - 2 of 2 for quantized_type (0.18 sec)

  1. RELEASE.md

            `saved_model.load` and `saved_model.main_op`, which will be replaced by
            `saved_model.main_op` in V2.
        *   Deprecate tf.QUANTIZED_DTYPES. The official new symbol is
            tf.dtypes.QUANTIZED_DTYPES.
        *   Update sklearn imports for deprecated packages.
        *   Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of
            `Dataset.range`.
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue Jun 11 23:24:08 UTC 2024
    - 730.3K bytes
    - Viewed (0)
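
The RELEASE.md entries above are straightforward symbol renames. As a minimal sketch of what the replacements look like, assuming a TensorFlow 2.x installation with eager execution (the release notes themselves only name the symbols):

```python
import tensorflow as tf

# New canonical location for the set of quantized dtypes; the old top-level
# tf.QUANTIZED_DTYPES alias is the symbol being deprecated above.
print(tf.dtypes.QUANTIZED_DTYPES)

# Dataset.range replaces the deprecated Variable.count_up_to / tf.count_up_to
# pattern for producing an increasing counter: it yields 0, 1, ..., limit - 1.
for step in tf.data.Dataset.range(3):
    print(int(step))
```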
  2. tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td

    0-6.  The min_range and max_range values are therefore 0.0 and 6.0.
    Dequantize on quint8 will take each value, cast to float, and multiply
    by 6 / 255.
    Note that if quantized_type is qint8, the operation will additionally add
    128 to each value prior to casting.
    
    If the mode is 'MIN_FIRST', then this approach is used:
    
    ```c++
    num_discrete_values = 1 << (# of bits in T)
    ```
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue Jun 11 23:24:08 UTC 2024
    - 793K bytes
    - Viewed (0)
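
The MIN_COMBINED behaviour described in this snippet (cast the code to float, scale by (max_range - min_range) / 255, and shift qint8 codes by 128 first) can be sketched in NumPy. The function below is a hypothetical illustration inferred from the prose, not the actual TensorFlow kernel, and it only covers the 8-bit case discussed above:

```python
import numpy as np

def dequantize_min_combined(codes, min_range, max_range, signed=False):
    # Hypothetical helper mirroring the MIN_COMBINED description above.
    vals = codes.astype(np.float64)
    if signed:
        # qint8 input: add 128 so the codes span 0..255 before scaling,
        # as the snippet describes for the qint8 case.
        vals = vals + 128.0
    scale = (max_range - min_range) / 255.0  # 6 / 255 for the 0..6 example
    return min_range + vals * scale

# QuantizedRelu6 example from the text: quint8 codes, range [0.0, 6.0].
codes = np.array([0, 128, 255], dtype=np.uint8)
print(dequantize_min_combined(codes, 0.0, 6.0))  # ~[0.0, 3.01, 6.0]
```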