Results 71 - 79 of 79 for RELU (0.11 sec)
tensorflow/compiler/jit/mark_for_compilation_pass.cc
"TanhGrad", "Pow", "SquaredDifference", "ApproximateEqual", // Others "AddN", "Bitcast", "Cast", "ClipByValue", "Const", "Empty", "Identity", "IdentityN", "Relu", "Relu6", "ReluGrad", "Relu6Grad", "LeakyReluGrad", "Elu", "EluGrad", "Selu", "SeluGrad", "Select", "SelectV2", "Transpose", "ConjugateTranspose",
Last Modified: Wed Feb 21 12:19:41 UTC 2024 - 85.3K bytes
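The list above is part of the set of ops the auto-clustering pass treats as safe to compile with XLA; the whole ReLU family (Relu, Relu6, their gradients, LeakyReluGrad, Elu, Selu) is included. Below is a minimal Python-level sketch of a graph built only from such whitelisted ops; whether it is actually clustered depends on how auto-clustering is enabled (e.g. the TF_XLA_FLAGS=--tf_xla_auto_jit=2 environment variable) and on the pass's other heuristics, so treat this as an illustration, not a guarantee.

    import tensorflow as tf

    @tf.function  # with auto-clustering enabled, this graph is a clustering candidate
    def relu_chain(x):
        # Relu, Relu6, Elu, Selu and AddN all appear in the whitelist above.
        y = tf.nn.relu(x)
        z = tf.nn.relu6(y)
        return tf.math.add_n([tf.nn.elu(z), tf.nn.selu(z)])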
tensorflow/compiler/mlir/tensorflow/tests/tf-ops.mlir
"tf.Yield"(%t0, %t1, %t2) : (tensor<2xf32>, tensor<2xf32>, tensor<2xf32>) -> () }, { %e0 = "tf.Neg"(%arg1) : (tensor<2xf32>) -> tensor<2xf32> %e1 = "tf.Relu"(%arg1) : (tensor<2xf32>) -> tensor<2xf32> %e2 = "tf.Sin"(%arg1) : (tensor<2xf32>) -> tensor<2xf32> "tf.Yield"(%e0, %e1, %e2) : (tensor<2xf32>, tensor<2xf32>, tensor<2xf32>) -> ()
Last Modified: Mon Oct 23 14:40:35 UTC 2023 - 236.4K bytes
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
let summary = "Computes rectified linear gradients for a Relu operation."; let arguments = (ins Arg<TF_IntOrFpTensor, [{The backpropagated gradients to the corresponding Relu operation.}]>:$gradients, Arg<TF_IntOrFpTensor, [{The features passed as input to the corresponding Relu operation, OR the outputs of that operation (both work equivalently).}]>:$features );
Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes
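The op definition says $features may be either the input to the Relu or its output; both work because relu(x) > 0 exactly when x > 0, so either tensor produces the same gradient mask. A minimal NumPy sketch of that rule (the function name is mine, not TensorFlow's):

    import numpy as np

    def relu_grad(gradients, features):
        # Pass the incoming gradient through where the Relu was active.
        # "features" can be the Relu input or its output: both are > 0 in
        # exactly the same positions, so the mask is identical.
        return np.where(features > 0, gradients, 0.0)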
tensorflow/compiler/mlir/lite/transforms/optimize.cc
    // The actual Optimize Pass.
    namespace {
    #define GEN_PASS_DEF_OPTIMIZEPASS
    #include "tensorflow/compiler/mlir/lite/transforms/passes.h.inc"

    constexpr char kRelu[] = "RELU";
    constexpr char kRelu6[] = "RELU6";
    constexpr char kRelu1[] = "RELU_N1_TO_1";

    ElementsAttr FlattenTo1D(Attribute a) {
      auto elements = mlir::cast<DenseElementsAttr>(a);
Last Modified: Tue Apr 30 00:40:15 UTC 2024 - 102.3K bytes
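The three string constants name TFLite's fused activation variants; the suffix encodes the clamp range. A hedged NumPy sketch of what each string corresponds to (the function names are mine):

    import numpy as np

    def relu(x):           # "RELU": clamp below at 0
        return np.maximum(x, 0.0)

    def relu6(x):          # "RELU6": clamp to [0, 6]
        return np.clip(x, 0.0, 6.0)

    def relu_n1_to_1(x):   # "RELU_N1_TO_1": clamp to [-1, 1]
        return np.clip(x, -1.0, 1.0)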
tensorflow/compiler/mlir/quantization/stablehlo/tests/passes/quantize_composite_functions.mlir
    // CHECK-PER-TENSOR: return %[[UNIFORM_QUANTIZE_0]] : tensor<?x3x4x2x!quant.uniform<i8:f32, {{.*}}>>

    // -----

    // Tests that fused pattern for convolution + bias + relu with
    // dynamic batch dimension is properly quantized.
    // Note that this checks for identical condition as
    // quantize_conv_with_bias_dynamic_fn, omitting stablehlo.maximum.
Last Modified: Thu May 09 05:56:10 UTC 2024 - 91.6K bytes
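The test comment explains why the quantized function can omit stablehlo.maximum: in an asymmetric int8 scheme the relu can be absorbed into the output clamp by raising the lower bound to the zero point. A rough NumPy sketch of that idea; the scale and zero point are made-up values, and the real pass rewrites MLIR ops rather than arrays:

    import numpy as np

    def quantize_with_fused_relu(x, scale=0.05, zero_point=-10):
        # Quantize and clamp in one step: raising the lower clamp bound to the
        # zero point (the quantized value of real 0) has the same effect as
        # applying max(x, 0) before quantizing, so no separate maximum op is needed.
        q = np.round(x / scale) + zero_point
        return np.clip(q, max(-128, zero_point), 127).astype(np.int8)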
tensorflow/compiler/mlir/tensorflow/ir/tf_ops_a_m.cc
    //===----------------------------------------------------------------------===//

    OpFoldResult LeakyReluOp::fold(FoldAdaptor adaptor) {
      auto operands = adaptor.getOperands();
      assert(operands.size() == 1 && "leaky relu has one operand");

      // leaky_relu(x, alpha: 1) -> x
      if (getAlpha().convertToFloat() == 1.0f &&
          getOperand().getType() == getType())
        return getOperand();
Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 146.7K bytes
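The folder's comment captures the whole rule: with alpha == 1 the negative branch is multiplied by 1, so the op is the identity and can be replaced by its operand. A minimal NumPy sketch of the identity being exploited:

    import numpy as np

    def leaky_relu(x, alpha):
        return np.where(x > 0, x, alpha * x)

    x = np.array([-2.0, -0.5, 0.0, 3.0])
    assert np.array_equal(leaky_relu(x, alpha=1.0), x)  # alpha == 1 -> identity, hence the fold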
tensorflow/compiler/mlir/lite/stablehlo/tests/legalize_hlo.mlir
    // CHECK: }
    func.func @const() -> tensor<2xi32> {
      %0 = mhlo.constant dense<0> : tensor<2xi32>
      func.return %0 : tensor<2xi32>
    }

    // CHECK-LABEL: func @relu(
    // CHECK-SAME: %[[VAL_0:.*]]: tensor<1xi32>) -> tensor<1xi32> {
    // CHECK: %[[VAL_1:.*]] = "tf.Const"() <{value = dense<0> : tensor<i32>}> : () -> tensor<i32>
Last Modified: Wed May 29 07:26:59 UTC 2024 - 340.2K bytes
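The @relu test checks that the MHLO pattern for relu, an elementwise maximum against a broadcast zero constant, is recognized and rewritten in terms of TF ops (the CHECK lines show the zero constant being materialized as tf.Const). The identity being matched, sketched in NumPy:

    import numpy as np

    x = np.array([-3, -1, 0, 2, 5], dtype=np.int32)
    # relu is just an elementwise maximum with zero, which is why the
    # legalization pattern only has to look for maximum(x, broadcast(0)).
    assert np.array_equal(np.maximum(x, 0), np.array([0, 0, 0, 2, 5], dtype=np.int32))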
tensorflow/compiler/mlir/lite/schema/schema_generated.h
"HASHTABLE_LOOKUP", "L2_NORMALIZATION", "L2_POOL_2D", "LOCAL_RESPONSE_NORMALIZATION", "LOGISTIC", "LSH_PROJECTION", "LSTM", "MAX_POOL_2D", "MUL", "RELU", "RELU_N1_TO_1", "RELU6", "RESHAPE", "RESIZE_BILINEAR", "RNN", "SOFTMAX", "SPACE_TO_DEPTH", "SVDF", "TANH", "CONCAT_EMBEDDINGS",
Last Modified: Tue May 21 18:21:50 UTC 2024 - 1M bytes
RELEASE.md
  to matrix multiplication and convolution, these building blocks include:
  * Direct batched convolution
  * Pooling: maximum, minimum, average
  * Normalization: LRN, batch normalization
  * Activation: rectified linear unit (ReLU)
  * Data manipulation: multi-dimensional transposition (conversion), split, concat, sum and scale.
* TensorForest Estimator now supports SavedModel export for serving.
Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes