Results 11 - 13 of 13 for debugger_config (0.19 sec)
tensorflow/compiler/mlir/quantization/tensorflow/passes/add_dump_tensor_op.cc
*this, "debugger_type",
llvm::cl::init(DebuggerConfig::DEBUGGER_TYPE_UNSPECIFIED),
llvm::cl::values(
    clEnumValN(DebuggerConfig::DEBUGGER_TYPE_WHOLE_MODEL, "whole_model",
               "Whole model verify"),
    clEnumValN(DebuggerConfig::DEBUGGER_TYPE_INT_PER_LAYER, "int_per_layer",
               "Int Per-layer verify"),
    clEnumValN(DebuggerConfig::DEBUGGER_TYPE_FLOAT_PER_LAYER,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Mar 22 22:55:22 UTC 2024 - 13K bytes - Viewed (0) -
tensorflow/compiler/mlir/quantization/tensorflow/passes/passes.h
// Creates a pass that inserts a dump tensor op at each quantizable layer's output.
std::unique_ptr<OperationPass<ModuleOp>> CreateAddDumpTensorOpPass(
    ::stablehlo::quantization::DebuggerConfig::DebuggerType debugger_type,
    std::string log_dir_path);

// Creates a pass that adds QuantizationUnitLoc to quantizable layers.
std::unique_ptr<OperationPass<func::FuncOp>> CreateAddQuantizationUnitLocPass();
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri May 10 04:07:09 UTC 2024 - 12.3K bytes - Viewed (0) -
RELEASE.md
* GPU
    * Support for NVIDIA GPUs with compute capability 8.9 (e.g. L4 & L40) has been added to TF binary distributions (Python wheels).
* Replace `DebuggerOptions` of TensorFlow Quantizer, and migrate to `DebuggerConfig` of StableHLO Quantizer.
* Add TensorFlow to StableHLO converter to TensorFlow pip package.
* TensorRT support: this is the last release supporting TensorRT. It will be removed in the next release.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)