Results 71 - 80 of 323 for quantized (0.15 sec)
tensorflow/compiler/mlir/lite/stablehlo/transforms/passes.h
std::unique_ptr<Pass> createOptimizePass();

// Creates a pass that finds quantization patterns and compose them to uniform
// quantized types.
std::unique_ptr<OperationPass<ModuleOp>> CreateComposeUniformQuantizedTypePass();

// Creates a pass that finds stablehlo ops that accept or produce uniform
// quantized typed tensors and converts them to equivalent ops in the TFLite
// dialect.
std::unique_ptr<OperationPass<func::FuncOp>>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 21:59:06 UTC 2024 - 3.2K bytes
tensorflow/compiler/mlir/lite/common/tfl_pass_config.h
bool reduce_type_precision = false;

// Whether to consider this model a quantized model with quantize/dequantize
// ops and to convert kernels to quantized kernels wherever appropriate.
quant::QDQConversionMode qdq_conversion_mode = quant::QDQConversionMode::kQDQNone;

// When set to true, StableHLO Quantizer is run. The full configuration for
// the quantizer is at `TocoFlags::quantization_config`.
Last Modified: Wed May 08 19:05:30 UTC 2024 - 6.5K bytes
tensorflow/compiler/mlir/quantization/tensorflow/passes/quantized_function_library_uniform_quantized_drq.mlir
// limitations under the License.

// Quantization as a function library with Uniform Quantized Ops for Dynamic
// PTQ
//
// Internal functions should be marked as private. They will be inlined and
// deleted in `InsertQuantizedFunctionsPass`.
//
// For Uniform Quantized op case, attributes are generated during quantize
// composite pass. Therefore, attr_map is set to an empty string.
module {
Last Modified: Thu Dec 01 12:06:54 UTC 2022 - 3.9K bytes
tensorflow/compiler/mlir/lite/transforms/quantize.cc
};

class QuantizeConstPattern : public OpRewritePattern<QuantizeOp> {
 public:
  explicit QuantizeConstPattern(MLIRContext* context, bool legacy_float_scale)
      : OpRewritePattern<QuantizeOp>(context),
        legacy_float_scale_(legacy_float_scale) {}

  LogicalResult matchAndRewrite(QuantizeOp op,
                                PatternRewriter& rewriter) const override {
Last Modified: Wed Apr 24 20:30:06 UTC 2024 - 13.3K bytes
tensorflow/compiler/mlir/quantization/tensorflow/utils/tf_to_uniform_attribute_utils.cc
      attrs.push_back(rewriter.getNamedAttr(
          attr_minmax, rewriter.getI64IntegerAttr(quant_val)));
    }
  }
  return success();
}

// This LogicalResult covers both the hybrid and fully quantized op cases.
LogicalResult FillAttributesForUniformQuantizedDotOp(
    PatternRewriter& rewriter, Operation* op,
    llvm::StringMap<Attribute>& identifier_to_attr,
Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 18.7K bytes
tensorflow/compiler/mlir/quantization/tensorflow/calibrator/calibration_algorithm.py
this, we quantize hist_mids using quant_min and quant_max and dequantize
them again. Then the difference between hist_mids and dequantized hist_mids
equates to quantization error when using quant_min and quant_max.

Args:
  quant_min: The minimum real value that can be represented by a quantized
    value.
Last Modified: Mon Mar 11 19:29:56 UTC 2024 - 14.7K bytes
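The docstring above describes measuring quantization error via a quantize/dequantize round trip of the histogram midpoints. A minimal NumPy sketch of that idea follows; the function name and the affine 8-bit scheme here are illustrative assumptions, not the actual implementation in `calibration_algorithm.py`:

```python
import numpy as np

def quantization_error(hist_mids, quant_min, quant_max, num_bits=8):
    # Affine quantization: map [quant_min, quant_max] onto num_bits integer levels.
    num_steps = 2**num_bits - 1
    scale = (quant_max - quant_min) / num_steps
    # Quantize: snap each histogram midpoint to its nearest representable level.
    q = np.clip(np.round((hist_mids - quant_min) / scale), 0, num_steps)
    # Dequantize: map the integer levels back to real values.
    dequantized = q * scale + quant_min
    # Per the docstring: the difference between hist_mids and their dequantized
    # counterparts is the quantization error for this (quant_min, quant_max) pair.
    return hist_mids - dequantized
```

For midpoints inside the range, the error is bounded by half a quantization step (`scale / 2`), which is why the calibration search can compare candidate `(quant_min, quant_max)` pairs by aggregating this error over the histogram.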
tensorflow/compiler/mlir/quantization/stablehlo/cc/pass_pipeline.h
// exported as a TF SavedModel.
void AddCallModuleSerializationPasses(OpPassManager& pm);

// Passes for unpacking quantized ops to int valued StableHLO ops. This is
// useful when uniform quantized types are suboptimal for the hardware. It goes
// through a StableHLO <-> MHLO roundtrip to utilize the MHLOQuantToInt pass.
void AddStablehloQuantToIntPasses(OpPassManager& pm);
Last Modified: Mon Apr 15 12:53:33 UTC 2024 - 3.6K bytes
tensorflow/compiler/mlir/lite/quantization/ir/Passes.h
std::unique_ptr<OperationPass<func::FuncOp>> createConvertSimulatedQuantPass();

/// Creates a pass that converts constants followed by a qbarrier to a
/// constant whose value is quantized. This is typically one of the last
/// passes done when lowering to express actual quantized arithmetic in a
/// low level representation. Because it modifies the constant, it is
/// destructive and cannot be undone.
Last Modified: Fri Jul 29 18:55:28 UTC 2022 - 2.3K bytes
tensorflow/compiler/mlir/tensorflow/translate/tf_mlir_translate_cl.cc
        "float and quantized types"),
    llvm::cl::init(""));

// NOLINTNEXTLINE
opt<std::string> min_values(
    "tf-input-min-values",
    llvm::cl::desc(
        "Sets the lower bound of the input data. Separated by ','; Each entry "
        "in the list should match an entry in -tf-input-arrays. This is "
        "used when -tf-inference-type is a quantized type."),
    llvm::cl::Optional, llvm::cl::init(""));
Last Modified: Thu Aug 10 20:59:50 UTC 2023 - 5.5K bytes
tensorflow/compiler/mlir/tensorflow/transforms/lower_tf.h
// Populates TensorFlow lowering patterns to lower some of the TensorFlow
// operations that can be represented using other TensorFlow operations.
// Patterns are from ops with some inputs or outputs that are quantized types
// only to ops that allow non-quantized types on all inputs and outputs.
void PopulateLoweringQuantizedPatterns(MLIRContext *context,
                                       RewritePatternSet *patterns);

}  // namespace TF
Last Modified: Thu Jan 27 15:05:02 UTC 2022 - 2.4K bytes