Results 1 - 9 of 9 for quantized_type (0.2 sec)
tensorflow/compiler/mlir/quantization/stablehlo/quantization_config.proto
message StaticRangePtq {
  // Operand index -> QuantizedType mapping. Operands that are not specified
  // here will be quantized with best effort.
  map<int32, QuantizedType> input_quantized_types = 1;
}

message WeightOnlyPtq {
  // Operand index -> QuantizedType mapping. Operands that are not specified
  // here will be quantized with best effort.
  map<int32, QuantizedType> input_quantized_types = 1;
}
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 03:36:50 UTC 2024 - 14.3K bytes
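The two messages above each map an operand index to a QuantizedType override, with unlisted operands quantized best-effort. A hypothetical textproto sketch of such a config — only `input_quantized_types` and its map shape come from the snippet; the enclosing field name and the empty `value` are illustrative guesses:

```textproto
# Hypothetical: override operand 1 with an explicit QuantizedType;
# operands not listed here fall back to best-effort quantization.
static_range_ptq {
  input_quantized_types {
    key: 1
    value {
      # QuantizedType contents are not shown in this snippet.
    }
  }
}
```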
tensorflow/compiler/mlir/quantization/stablehlo/passes/bridge/convert_tf_quant_ops_to_mhlo.cc
        op->getLoc(), *output_type, op.getInput());
    rewriter.replaceOpWithNewOp<mhlo::BitcastConvertOp>(
        op,
        output_type->clone(
            mlir::dyn_cast<quant::QuantizedType>(output_type->getElementType())
                .getStorageType()),
        result);
    return success();
  }
};

// UniformDequantizeOp takes TF quantized types as input which would have been
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 17:58:54 UTC 2024 - 30.9K bytes
tensorflow/compiler/mlir/quantization/stablehlo/cc/config.cc
  // Matches all convolution quantizable unit family.
  spec.mutable_matcher()->mutable_function_name()->set_regex(
      "composite_conv.*");

  // Enable per-channel quantization for convolution weights.
  QuantizedType conv_weight_quantized_type{};

  // Assumes NHWC format, specifying the channel dimension (3) as the
  // quantized axis.
  conv_weight_quantized_type.mutable_dimension_specs()->set_dimension(3);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 03:36:50 UTC 2024 - 8.3K bytes
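The config.cc snippet above picks the channel dimension (3, NHWC) as the quantized axis for convolution weights. A minimal sketch of what per-channel quantization along that axis means — one symmetric int8 scale per output channel. Function names are illustrative, not the config.cc API:

```python
def per_channel_scales(weights, num_channels):
    """weights: flat floats with the channel axis innermost (NHWC-style layout).
    Returns one symmetric int8 scale per channel (storage range [-127, 127])."""
    max_abs = [0.0] * num_channels
    for i, w in enumerate(weights):
        c = i % num_channels  # channel index is the innermost axis
        max_abs[c] = max(max_abs[c], abs(w))
    return [m / 127.0 if m else 1.0 for m in max_abs]

def quantize_per_channel(weights, scales):
    """Quantize each weight with the scale of its own channel."""
    n = len(scales)
    return [round(w / scales[i % n]) for i, w in enumerate(weights)]
```

Per-channel scales track each filter's own range, so a channel with small weights is not crushed by another channel's outliers — the usual motivation for per-channel conv-weight quantization.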
tensorflow/compiler/mlir/quantization/tensorflow/passes/prepare_lifting.cc
  ArrayRef<double> multiplier_array(multiplier_values.data(),
                                    multiplier_values.size());

  // Multiply the quantization parameters by the multiplier.
  QuantizedType new_qtype;
  auto element_type = mlir::cast<TensorType>(q_op.getType()).getElementType();
  if (auto uniform_type = llvm::dyn_cast<UniformQuantizedType>(element_type)) {
    if (multiplier_attr.isSplat()) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 17 17:58:54 UTC 2024 - 13.3K bytes
tensorflow/compiler/mlir/lite/transforms/legalize_patterns.td
    "].cast<IntegerAttr>().getInt())">;

// Use the tensor type information from $0 and convert min $1, max $2 and
// numBits $3 and narrowRange $4 to a QuantizedType.
def ConvertToQuantTypeFromAttrs : NativeCodeCall<
    "quant::GetQuantizedTypeAttr($_builder, $0.getType(), $1, $2, -1, $3, $4, /*is_signed=*/false)">;

// Converts an integer attribute $0 to 32-bit with builder.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 04 13:30:42 UTC 2024 - 28.5K bytes
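The pattern above converts min $1, max $2, numBits $3 and narrowRange $4 into a QuantizedType with unsigned storage (is_signed=false). A hedged sketch of the arithmetic such a conversion typically performs — illustrative only, not the `quant::GetQuantizedTypeAttr` implementation:

```python
def quant_params_from_min_max(rmin, rmax, num_bits, narrow_range=False):
    """Derive (scale, zero_point) for an unsigned num_bits quantized type."""
    qmin = 1 if narrow_range else 0           # narrow range drops the lowest bucket
    qmax = (1 << num_bits) - 1
    # The representable range must contain zero so it is exactly encodable.
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point
```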
tensorflow/compiler/mlir/lite/ir/tfl_ops.td
      And<[
        SubstLeaves<"$_self", "getElementTypeOrSelf($_op.getOperand(" # j # "))",
                    quant_QuantizedType.predicate>,
        CPred<"quant::QuantizedType::castToStorageType("
              "getElementTypeOrSelf($_op.getResult(" # i # "))) == "
              "quant::QuantizedType::castToStorageType("
              "getElementTypeOrSelf($_op.getOperand(" # j # ")))">]>]>]>;

def TFL_SameFirstOperandAndFirstResultElementType :
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Jun 06 19:09:08 UTC 2024 - 186K bytes
tensorflow/compiler/mlir/lite/flatbuffer_import.cc
    TF_ASSIGN_OR_RETURN(value,
                        tfl::ConvertIntBuffer(shaped_type, buffer, truncate));
    TF_ASSIGN_OR_RETURN(
        mlir::quant::QuantizedType type,
        tfl::GetQuantizedType(tensor, builder, /*is_constant=*/true,
                              /*storage_type=*/value.getElementType()));
    shaped_type = shaped_type.clone(type);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 21 18:21:50 UTC 2024 - 66.8K bytes
RELEASE.md
  `saved_model.load` and `saved_model.main_op`, which will be replaced by
  `saved_model.main_op` in V2.
* Deprecate tf.QUANTIZED_DTYPES. The official new symbol is
  tf.dtypes.QUANTIZED_DTYPES.
* Update sklearn imports for deprecated packages.
* Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of
  `Dataset.range`.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
0-6. The min_range and max_range values are therefore 0.0 and 6.0. Dequantize on
quint8 will take each value, cast to float, and multiply by 6 / 255. Note that if
quantizedtype is qint8, the operation will additionally add each value by 128
prior to casting.

If the mode is 'MIN_FIRST', then this approach is used:

```c++
num_discrete_values = 1 << (# of bits in T)
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes
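The snippet above describes the default dequantization: quint8 values scale by (max_range - min_range) / 255, and qint8 values are shifted by +128 before the same scaling. A minimal sketch of that arithmetic — illustrative only, not the TensorFlow kernel:

```python
def dequantize_min_combined(values, min_range, max_range, signed=False):
    """values: raw 8-bit ints (quint8 in [0, 255] or qint8 in [-128, 127])."""
    scale = (max_range - min_range) / 255.0
    shift = 128 if signed else 0   # qint8 is re-centered into [0, 255] first
    return [min_range + (v + shift) * scale for v in values]
```

With min_range 0.0 and max_range 6.0 as in the example above, the quint8 endpoints 0 and 255 map back to 0.0 and 6.0, and the qint8 endpoints -128 and 127 map to the same range after the +128 shift.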