Results 71 - 80 of 200 for requantize
tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization.td
left as is for weight-only which means the weight is dequantized at runtime. For example, if the kernel does not support dynamic range quantization the graph will be converted into the following IR:
%q_w = "tfl.pseudo_qconst"() { qtype = tensor<64x3x3x3x!quant.uniform<i8<-127:127>:f32, 1.000000e+00>>
%w = "tfl.dequantize"(%q_w) :
Last Modified: Tue Mar 05 07:39:40 UTC 2024 - 8.3K bytes
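The quantization.td excerpt above describes the weight-only fallback: the weight stays quantized in the model and a tfl.dequantize is inserted so a kernel without dynamic-range support still receives float data. As a rough sketch of how that mode is usually requested through the Python TFLite converter (not taken from this file; the SavedModel path is a placeholder):

    import tensorflow as tf

    # Placeholder path to any float SavedModel.
    converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")

    # Dynamic-range quantization: weights are stored as int8; kernels that lack
    # dynamic-range support fall back to the pseudo_qconst + dequantize pattern
    # shown in the snippet above and compute in float.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    tflite_model = converter.convert()
    with open("/tmp/model_dynamic_range.tflite", "wb") as f:
        f.write(tflite_model)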
tensorflow/compiler/mlir/quantization/tensorflow/tests/propagate_quantize_type.mlir
// CHECK: %[[IDENTITY:.*]] = "tf.Identity"(%cst_0) : (tensor<200x100x300xi8>) -> tensor<200x100x300xi8>
// CHECK: %[[DEQUANTIZED:.*]] = "tf.PartitionedCall"(%[[IDENTITY]]) <{config = "", config_proto = "", executor_type = "", f = @composite_dequantize_uniform}> : (tensor<200x100x300xi8>) -> tensor<200x100x300xf32>
Last Modified: Mon Oct 30 06:52:55 UTC 2023 - 6.6K bytes
tensorflow/compiler/aot/quantize.h
Jake Harmon <******@****.***> 1694027275 -0700
Last Modified: Wed Sep 06 19:12:29 UTC 2023 - 1.4K bytes
tensorflow/compiler/mlir/lite/tf_tfl_translate_cl.cc
// going forward.
// NOLINTNEXTLINE
llvm::cl::list<std::string> custom_opdefs(
    "tf-custom-opdefs", llvm::cl::desc("List of custom opdefs when importing "
                                       "graphdef"));
// Quantize and Dequantize ops pair can be optionally emitted before and after
// the quantized model as the adaptors to receive and produce floating point
// type data with the quantized model. Set this to `false` if the model input is
Last Modified: Tue Mar 05 20:53:17 UTC 2024 - 7.9K bytes
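The tf_tfl_translate_cl.cc comment above describes an option to wrap the quantized model with a quantize/dequantize pair so it can still accept and produce float data. A hedged sketch of the corresponding choice in the Python converter API (the calibration generator and model path are placeholders):

    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Placeholder calibration inputs; shape must match the real model.
        for _ in range(10):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset

    # Keeping float32 I/O (the default) leaves quantize/dequantize adaptor ops
    # at the model boundary; switching to tf.int8 drops them, analogous to
    # disabling the flag when the model input is already quantized.
    converter.inference_input_type = tf.float32
    converter.inference_output_type = tf.float32

    tflite_model = converter.convert()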
tensorflow/compiler/mlir/lite/experimental/tac/tests/get-alternative-subgraph.mlir
// CHECK-DAG: %[[VAL_8:.*]] = "tfl.pseudo_const"(){{.*}}dense<[384, 128]> : tensor<2xi32>
// CHECK: %[[VAL_9:.*]] = "tfl.dequantize"(%[[VAL_0]]) {tac.device = "GPU", tac.inference_type = "FLOAT"} : (tensor<384x512x!quant.uniform<i8:f32, 1.000000e-01>>) -> tensor<384x512xf32>
Last Modified: Thu May 02 09:41:17 UTC 2024 - 20.1K bytes
tensorflow/compiler/mlir/lite/stablehlo/transforms/passes.td
* A tensor is dequantized using a `func::FuncOp` whose name contains "uniform_dequantize". The first argument is the tensor to be quantized, the second argument is the zero point constant (element type: int), and the third argument is the inverse scale constant (element type: float).
* Inputs to the target quantized op are quantized and the outputs are dequantized.
Last Modified: Thu Apr 25 21:59:06 UTC 2024 - 5.6K bytes
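The passes.td excerpt above fixes the calling convention of the "uniform_dequantize" function: the tensor, then a zero point constant (int), then an inverse scale constant (float). The arithmetic that convention implies is plain uniform dequantization; a small NumPy sketch with made-up values:

    import numpy as np

    def uniform_dequantize(q, zero_point, inverse_scale):
        # float = (quantized - zero_point) * scale, where scale = 1 / inverse_scale.
        return (q.astype(np.float32) - zero_point) / inverse_scale

    q = np.array([-128, 0, 127], dtype=np.int8)
    print(uniform_dequantize(q, zero_point=-1, inverse_scale=10.0))
    # -> [-12.7   0.1  12.8]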
tensorflow/compiler/mlir/lite/transforms/quantize_variables.cc
         llvm::make_early_inc_range(var_handle_op.getResult().getUsers())) {
      auto read_variable_op = dyn_cast_or_null<ReadVariableOp>(var_handle_user);
      if (!read_variable_op) continue;
      // Add dequantize.
      builder.setInsertionPointAfter(read_variable_op);
      auto new_read_variable_op =
          builder.create<ReadVariableOp>(read_variable_op.getLoc(), ref_qtype,
Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 8.5K bytes
tensorflow/compiler/mlir/lite/tests/prepare-tf.mlir
^bb0(%arg0: tensor<1x2xf32>):
  %cst_0 = arith.constant dense<[1, 0]> : tensor<2xi32>
  %0 = "tfl.quantize"(%arg0){qtype = tensor<1x2x!quant.uniform<u8:f32, 1.0>>}: (tensor<1x2xf32>) -> (tensor<1x2x!quant.uniform<u8:f32, 1.0>>)
  %1 = "tfl.dequantize"(%0): (tensor<1x2x!quant.uniform<u8:f32, 1.0>>) -> (tensor<1x2xf32>)
  %2 = "tf.Transpose"(%1, %cst_0): (tensor<1x2xf32>, tensor<2xi32>) -> tensor<2x1xf32>
Last Modified: Wed May 29 07:26:59 UTC 2024 - 59.8K bytes
tensorflow/compiler/mlir/lite/stablehlo/transforms/compose_uniform_quantized_type_pass.cc
if (!combined_scale_constant_op) {
  LLVM_DEBUG(llvm::dbgs() << "Failed to match combined_scale_constant_op.\n");
  return failure();
}
// Quantize -> Dequantize following r3.
auto output_uniform_quantize_call_op = dyn_cast_or_null<func::CallOp>(
    *combined_scale_multiply_op.getResult().user_begin());
if (!output_uniform_quantize_call_op->hasOneUse()) {
Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 64.6K bytes
tensorflow/compiler/mlir/quantization/tensorflow/passes/prepare_lifting.cc
        per_axis_type.getStorageTypeMin(), per_axis_type.getStorageTypeMax());
  }
  auto quantize = builder.create<quantfork::QuantizeCastOp>(
      q_op.getLoc(), new_value_type.clone(new_qtype), new_value);
  auto dequantize = builder.create<quantfork::DequantizeCastOp>(
      dq_op.getLoc(), new_value_type, quantize.getResult());
  return dequantize.getResult();
}
Last Modified: Fri May 17 17:58:54 UTC 2024 - 13.3K bytes