Results 1 - 1 of 1 for QuantizeModelAllOperators (0.76 sec)

  1. tensorflow/compiler/mlir/lite/quantization/lite/quantize_model_test.cc

    TfLiteStatus QuantizeModel(ModelT* model, std::string& output_buffer) {
      return QuantizeModel(model, TensorType_FLOAT32, TensorType_FLOAT32,
                           /*allow_float=*/true, output_buffer);
    }
    
    TfLiteStatus QuantizeModelAllOperators(
        ModelT* model, const TensorType& input_type, const TensorType& output_type,
        bool allow_float, const TensorType& activations_type,
        bool disable_per_channel, std::string& output_buffer) {
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jun 12 23:15:24 UTC 2024
    - 73.9K bytes
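The snippet above shows a common C++ idiom: a convenience overload of `QuantizeModel` that fixes the usual defaults (`TensorType_FLOAT32` input/output, `allow_float=true`) and forwards to the fuller `QuantizeModelAllOperators` signature. A minimal self-contained sketch of that forwarding pattern follows; the `Quantize` functions and `Status` enum here are hypothetical stand-ins, not the real TensorFlow Lite API, which additionally takes a `ModelT*` and `TensorType` arguments.

```cpp
#include <cassert>
#include <string>

// Stand-in for TfLiteStatus (the real enum lives in TensorFlow Lite).
enum Status { kOk = 0, kError = 1 };

// Full-parameter "worker" function, analogous to QuantizeModelAllOperators.
// A real implementation would rewrite a flatbuffer model; here we just
// record the chosen options so the forwarding can be observed.
Status Quantize(const std::string& input, bool allow_float,
                bool disable_per_channel, std::string& output_buffer) {
  output_buffer = input + (allow_float ? "+float" : "") +
                  (disable_per_channel ? "+per_tensor" : "");
  return kOk;
}

// Convenience overload mirroring the snippet's pattern: pin the common
// defaults with /*name=*/ comments and forward to the full signature.
Status Quantize(const std::string& input, std::string& output_buffer) {
  return Quantize(input, /*allow_float=*/true,
                  /*disable_per_channel=*/false, output_buffer);
}
```

The `/*allow_float=*/true` comment style seen in both the real snippet and this sketch is a widely used convention for labeling boolean and positional arguments at call sites.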