Results 1 - 3 of 3 for enable_two_input_tensors (0.43 sec)
tensorflow/compiler/mlir/quantization/tensorflow/passes/lift_quantizable_spots_as_functions.cc
              " except matmul and einsum.");
        } else if (!quant_options_.enable_two_input_tensors() &&
                   !is_unitwise_quantization_enabled) {
          return absl::InternalError(
              "Quantization is disabled for this op due to the non-constant "
              "weight. You can enable it by setting `enable_two_input_tensors` "
              "to true or using unit-wise quantization config.");
        }
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 10 04:07:09 UTC 2024 - 16.4K bytes - Viewed (0)
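The C++ snippet above rejects quantization when the op has a non-constant weight, unless either `enable_two_input_tensors` or unit-wise quantization is enabled. A minimal sketch of that decision logic, re-expressed in Python for illustration (the function name and plain-boolean parameters are hypothetical stand-ins, not the TensorFlow API):

```python
from typing import Optional


def check_two_input_tensors_allowed(
    enable_two_input_tensors: bool,
    is_unitwise_quantization_enabled: bool,
) -> Optional[str]:
    """Mirrors the gate in lift_quantizable_spots_as_functions.cc.

    Returns an error message if quantization must be disabled for the op,
    or None if both-tensor inputs are permitted.
    """
    if not enable_two_input_tensors and not is_unitwise_quantization_enabled:
        # Corresponds to the absl::InternalError branch in the C++ source.
        return (
            "Quantization is disabled for this op due to the non-constant "
            "weight. You can enable it by setting `enable_two_input_tensors` "
            "to true or using unit-wise quantization config."
        )
    return None
```

Either flag alone is enough to pass the gate; only when both are off is the op rejected.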
tensorflow/compiler/mlir/quantization/tensorflow/quantization_options.proto
  // Enables two inputs of an operation to be both tensors.
  // Currently supports MatMul and BatchMatMul ops for XLA.
  // TODO(b/263528090): Check the condition when this feature is beneficial.
  bool enable_two_input_tensors = 11;

  // Supports TPU model quantization. If the target model for the quantization
  // is already converted for TPU, this flag may be helpful. Note that this
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 19 06:31:19 UTC 2024 - 9.2K bytes - Viewed (0)
tensorflow/compiler/mlir/quantization/tensorflow/python/integration_test/quantize_model_test.py
        preset_method=_PresetMethod.METHOD_STATIC_RANGE_INT8
    ),
    tags=tags,
    signature_keys=[signature_key],
    op_set=quant_opts_pb2.XLA,
    enable_two_input_tensors=not use_kernel,
)
converted_model = quantize_model.quantize(
    self._input_saved_model_path,
    self._output_saved_model_path_2,
    quantization_options,
)
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 03:36:50 UTC 2024 - 235.6K bytes - Viewed (0)