Results 1 - 2 of 2 for quant_type (0.09 sec)
tensorflow/compiler/mlir/lite/quantization/lite/quantize_weights.cc
TfLiteStatus QuantizeWeights(flatbuffers::FlatBufferBuilder* builder,
                             const tflite::Model* input_model,
                             BufferType quant_type,
                             bool use_updated_hybrid_scheme) {
  tflite::TensorType inference_type;
  switch (quant_type) {
    case BufferType::QUANTIZED_FLOAT16:
      inference_type = tflite::TensorType_FLOAT16;
      break;
    default:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 23:15:24 UTC 2024 - 9.5K bytes
tensorflow/compiler/mlir/lite/quantization/lite/quantize_weights.h
TfLiteStatus QuantizeWeights(flatbuffers::FlatBufferBuilder* builder,
                             const tflite::Model* input_model,
                             BufferType quant_type = BufferType::QUANTIZED_INT8,
                             bool use_updated_hybrid_scheme = true);
TfLiteStatus QuantizeWeights(flatbuffers::FlatBufferBuilder* builder,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 23:15:24 UTC 2024 - 4.2K bytes