Results 1 - 10 of 8,920 for FusedN (0.34 sec)
tensorflow/compiler/mlir/tfr/tests/decompose.mlir
// expected-error@+1 {{Denied unregistered attribute was found: denied_attr}}
%0:2 = "tf.FusedN"(%arg0, %arg1, %arg2) {A=0:index, denied_attr} : (tensor<1x2x3x4x!tf_type.string>, tensor<f32>, tensor<f32>) -> (tensor<1x2x3x4x!tf_type.string>, tensor<f32>)
func.return %0#1 : tensor<f32>
// CHECK-NEXT: "tf.FusedN"(%arg0, %arg1, %arg2) {A = 0 : index, denied_attr}
}
// CHECK-LABEL: quantized_tensor
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Oct 30 06:52:55 UTC 2023 - 16.7K bytes - Viewed (0) -
test/used.go
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Dec 28 08:39:17 UTC 2020 - 6K bytes - Viewed (0) -
tensorflow/compiler/mlir/tfr/README.md
this kernel in the inference graph.
* *Solution*: The user should define a new TF op, which corresponds to the fused kernel, with composition, and use this op to build the model for both training and inference. For the platforms where a fused kernel is not available, the execution will use the composition instead.
## Gradient (TODO)
## Authoring Op Composition in Python
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 29 18:32:13 UTC 2022 - 6.2K bytes - Viewed (0) -
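The README excerpt above describes the fused-kernel/composition pattern. A minimal sketch of that dispatch idea, in plain Python rather than the TFR API (all names here — `FUSED_KERNELS`, `run_gelu` — are hypothetical, and GELU stands in for an arbitrary fusible op):

```python
import math

def gelu_composition(x):
    """Reference composition: tanh-approximated GELU built from basic ops."""
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

# Hypothetical per-platform registry: op name -> fused implementation.
FUSED_KERNELS = {}

def run_gelu(x):
    # Prefer the fused kernel when the platform registered one; otherwise the
    # composition runs with identical semantics, as the README describes.
    impl = FUSED_KERNELS.get("Gelu", gelu_composition)
    return impl(x)
```

The point is that the composition is the single source of truth for semantics; a fused kernel is only an optimization that may or may not be present.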
tensorflow/compiler/mlir/lite/quantization/quantization_info.proto
// tensor_name := op_name | op_name ':' port_number.
// If the op has only one port, op_name can be used.
// If the op has internal states, such as fused LSTM, the port_number should
// follow a predefined convention.
oneof name_oneof {
  string name = 1;
  // A regex can be used to match multiple tensors.
  string name_regex = 2;
}
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Oct 08 03:45:04 UTC 2019 - 2.3K bytes - Viewed (0) -
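A small sketch of the tensor-name convention the proto comment describes — exact match by `name` or pattern match by `name_regex` — using Python's `re` module (the helper names are illustrative, not part of the proto):

```python
import re

def split_tensor_name(tensor_name):
    """Split 'op_name:port_number' into its parts; port defaults to 0."""
    op_name, _, port = tensor_name.partition(":")
    return op_name, int(port) if port else 0

def matches(tensor_name, name=None, name_regex=None):
    """Mirror of the oneof: match exactly by name, or fully by regex."""
    if name is not None:
        return tensor_name == name
    if name_regex is not None:
        return re.fullmatch(name_regex, tensor_name) is not None
    return False
```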
tensorflow/compiler/mlir/tensorflow/transforms/fused_kernel_matcher.cc
        contraction,
        "fused operation must be nested inside a function, If or While");
  }

  // If the contraction is used in multiple places, fusing it will only create
  // more contraction nodes, which is slower.
  if (!contraction.getResult().hasOneUse())
    return rewriter.notifyMatchFailure(contraction,
                                       "result is used by multiple ops");
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 14.9K bytes - Viewed (0) -
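The `hasOneUse()` guard above exists because fusing a contraction whose result feeds several consumers would duplicate the contraction into every fused op. A toy illustration of the same check, using a hypothetical adjacency map (op name to list of input op names) rather than the MLIR API:

```python
def use_count(graph, node):
    """Number of ops in the graph that consume node's result."""
    return sum(node in inputs for inputs in graph.values())

def can_fuse(graph, contraction):
    # Fuse only when exactly one consumer exists; otherwise fusion would
    # replicate the contraction, making the graph slower, not faster.
    return use_count(graph, contraction) == 1
```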
tensorflow/compiler/mlir/lite/utils/lstm_utils.h
constexpr char kCoupleInputForgetGates[] = "CoupleInputForgetGates";

// A utility class that enables the conversion of the LSTMCellSimple composite
// op into a fused TFL LSTM op. The fused op is contained within a FuncOp
// that also contains other supporting ops needed to construct the operands for
// the fused op. The caller provides the containing FuncOp as input with
// arguments specifying the input, weight, projection and bias.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Jun 03 00:14:05 UTC 2023 - 7.3K bytes - Viewed (0) -
.github/ISSUE_TEMPLATE/tflite-converter-issue.md
- Model produces wrong results and/or has lesser accuracy.
- Model produces correct results, but it is slower than expected.

### 4. (optional) RNN conversion support
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

### 5. (optional) Any other info / logs
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 15 03:35:58 UTC 2022 - 2.1K bytes - Viewed (0) -
tensorflow/compiler/mlir/quantization/common/quantization_lib/quantization_config.h
// DT_FLOAT, DT_HALF, DT_QINT8, and DT_QUINT8. When DT_HALF is used, the
// `weight_quantization` flag needs to be set to true. When DT_QUINT8 is used,
// the `weight_quantization` flag needs to be set to false.
tensorflow::DataType inference_type = tensorflow::DT_FLOAT;

// The input and output data type during inference. This flag is only used
// when `inference_type` is different from DT_FLOAT. This flag can only be set
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Mar 13 10:16:19 UTC 2024 - 10.8K bytes - Viewed (0) -
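The comment above states two coupled constraints: DT_HALF requires `weight_quantization` to be true, and DT_QUINT8 requires it to be false. A minimal sketch of that validation, with data types as strings rather than the `tensorflow::DataType` enum (the function name is illustrative):

```python
def validate_quant_config(inference_type, weight_quantization):
    """Check the inference_type / weight_quantization coupling described
    in quantization_config.h."""
    if inference_type == "DT_HALF" and not weight_quantization:
        return False
    if inference_type == "DT_QUINT8" and weight_quantization:
        return False
    return True
```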
src/vendor/golang.org/x/sys/cpu/cpu.go
HasAVX512IFMA   bool // Advanced vector extension 512 Integer Fused Multiply Add
HasAVX512VBMI   bool // Advanced vector extension 512 Vector Byte Manipulation Instructions
HasAVX5124VNNIW bool // Advanced vector extension 512 Vector Neural Network Instructions Word variable precision
HasAVX5124FMAPS bool // Advanced vector extension 512 Fused Multiply Accumulation Packed Single precision
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 08 16:12:58 UTC 2024 - 12.1K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc
    // TFLite fused embedding_lookup op.
    ConvertEmbeddedLookupFunc convert_embedded_lookup(func);
    if (failed(convert_embedded_lookup.VerifySignature())) return;
    func.eraseBody();
    func.addEntryBlock();
    convert_embedded_lookup.RewriteFunc();
  } else if (attr.getValue() == mlir::TFL::kLstmCellSimple) {
    // Check if the lstm cell simple can be fused, if not, we just don't do
    // anything.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 17.6K bytes - Viewed (0)