Results 1 - 2 of 2 for quantize_qat_model (0.17 sec)

  1. tensorflow/compiler/mlir/quantization/tensorflow/python/pywrap_quantize_model.pyi

    from tensorflow.compiler.mlir.quantization.tensorflow.python import py_function_lib
    from tensorflow.compiler.mlir.quantization.tensorflow.python import representative_dataset as rd
    
    # LINT.IfChange(quantize_qat_model)
    def quantize_qat_model(
        src_saved_model_path: str,
        dst_saved_model_path: str,
        quantization_options_serialized: bytes,
        *,
        signature_keys: list[str],
        signature_def_map_serialized: dict[str, bytes],
    - Last Modified: Thu Mar 07 03:47:17 UTC 2024
    - 2.5K bytes
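    The trailing `_serialized: bytes` parameters indicate that the pywrap stub expects
    protocol buffers already serialized to wire format rather than Python proto objects.
    As a minimal sketch of how the options argument might be prepared (the generated
    proto module path is an assumption inferred from the directory layout above, not
    shown in this excerpt):

    # Sketch only: module path assumed from the quantization directory layout above.
    from tensorflow.compiler.mlir.quantization.tensorflow import quantization_options_pb2

    options = quantization_options_pb2.QuantizationOptions()
    # ... populate the quantization method, op set, etc. on `options` ...
    quantization_options_serialized = options.SerializeToString()  # bytes, as declared above
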
  2. tensorflow/compiler/mlir/quantization/tensorflow/python/quantize_model.h

    inline constexpr absl::string_view kTfQuantPtqDynamicRangeStepName =
        "tf_quant_ptq_dynamic_range";
    inline constexpr absl::string_view kTfQuantWeightOnlyStepName =
        "tf_quant_weight_only";
    
    absl::StatusOr<ExportedModel> QuantizeQatModel(
        absl::string_view saved_model_path,
        const std::vector<std::string>& signature_keys,
        const std::unordered_set<std::string>& tags,
        const QuantizationOptions& quantization_options);
    
    - Last Modified: Thu Mar 28 15:31:08 UTC 2024
    - 3.3K bytes
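    `QuantizeQatModel` identifies the function to quantize by the SavedModel's tags and
    signature keys. For orientation, those values can be inspected from Python with the
    public SavedModel loader (the model path below is a placeholder):

    import tensorflow as tf

    # Load the QAT SavedModel under its serving tag and list its signature keys,
    # i.e. the values that would be passed as `tags` and `signature_keys` above.
    loaded = tf.saved_model.load("/tmp/qat_saved_model", tags=["serve"])
    print(list(loaded.signatures.keys()))  # e.g. ["serving_default"]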