Results 41 - 50 of 111 for RELU (0.03 sec)
tensorflow/compiler/jit/tests/keras_imagenet_main_graph_mode.golden_summary
Conv2DBackpropInput 52 DivNoNan 1 Equal 1 FusedBatchNorm 53 FusedBatchNormGrad 53 Identity 2 MatMul 3 MaxPool 1 MaxPoolGrad 1 Mean 1 Mul 164 Pad 1 ReadVariableOp 646 Relu 49 ReluGrad 49 Reshape 2 ResourceApplyKerasMomentum 161 ShapeN 50 Softmax 1 SparseSoftmaxCrossEntropyWithLogits 1 Square 55 Squeeze 1 Sub 106 Sum 57 Tile 1
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Jan 06 10:38:14 UTC 2023 - 740 bytes - Viewed (0)
tensorflow/compiler/mlir/tfr/README.md
    attrs=['act: {"", "RELU", "RELU6", "TANH"} = ""'],
    derived_attrs=['T: {float, int8}'],
    outputs=['o: T'])
def _composite_fully_connected(input_, filter_, bias, act):
  res = tf.raw_ops.MatMul(
      a=input_, b=filter_, transpose_a=False, transpose_b=True)
  res = tf.raw_ops.Add(x=res, y=bias)
  if act == 'RELU':
    return tf.raw_ops.Relu(features=res)
  elif act == 'RELU6':
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 29 18:32:13 UTC 2022 - 6.2K bytes - Viewed (0)
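The README excerpt above defines a TFR composite fully-connected op: a MatMul with `transpose_b=True`, a bias add, and an optional fused activation selected by the `act` attribute. As a minimal sketch of the same computation in plain NumPy (the function name and signature here are illustrative, not part of the TFR API):

```python
import numpy as np

def fully_connected(inp, filt, bias, act=""):
    # MatMul with transpose_b=True, then bias add, mirroring the composite.
    res = inp @ filt.T + bias
    # Optional fused activation, keyed by the same strings as the attr.
    if act == "RELU":
        return np.maximum(res, 0.0)
    elif act == "RELU6":
        return np.minimum(np.maximum(res, 0.0), 6.0)
    elif act == "TANH":
        return np.tanh(res)
    return res

x = np.array([[1.0, -2.0]])
w = np.array([[3.0, 1.0]])   # one output unit
b = np.array([7.0])
# x @ w.T + b = 1*3 + (-2)*1 + 7 = 8, and RELU leaves 8 unchanged
print(fully_connected(x, w, b, act="RELU"))
```

The point of the composite mechanism is that this whole computation is expressed once and decomposed into `tf.raw_ops` primitives, with the activation chosen by a string attribute rather than separate op definitions.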
tensorflow/compiler/jit/tests/keras_imagenet_main.golden_summary
Conv2D 53 Conv2DBackpropFilter 53 Conv2DBackpropInput 52 Equal 1 FusedBatchNormGradV2 53 FusedBatchNormV2 53 MatMul 3 MaxPool 1 MaxPoolGrad 1 Mean 1 Mul 218 Pad 2 ReadVariableOp 538 Relu 49 ReluGrad 49 Reshape 2 ResourceApplyKerasMomentum 161 Slice 1 Softmax 1 SparseSoftmaxCrossEntropyWithLogits 1 Squeeze 1 Sum 1 Tile 1 Transpose 1 cluster 1 size 815 AddN 1
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Jan 06 10:38:14 UTC 2023 - 874 bytes - Viewed (0)
tensorflow/compiler/mlir/tfrt/tests/ir/fallback_opt.mlir
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Mar 25 11:03:04 UTC 2022 - 4.8K bytes - Viewed (0)
tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer/basic_lstm.mlir
// CHECK-NEXT: outputs: [ 5, 6, 7, 8 ],
// CHECK-NEXT: builtin_options_type: LSTMOptions,
// CHECK-NEXT: builtin_options: {
// CHECK-NEXT:   fused_activation_function: RELU,
// CHECK-NEXT:   cell_clip: 1.0,
// CHECK-NEXT:   proj_clip: 2.0,
// CHECK-NEXT:   kernel_type: BASIC
// CHECK-NEXT: },
// CHECK-NEXT: intermediates: [ ]
// CHECK-NEXT: } ],
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Jul 14 16:41:28 UTC 2022 - 4.4K bytes - Viewed (0)
tensorflow/compiler/mlir/tensorflow/transforms/canonicalize.td
// Canonicalize tf.Maximum of zero to tf.Relu
//===----------------------------------------------------------------------===//
def IsInteger32Pred: CPred<
  "getElementTypeOrSelf($0.getType()).isInteger(32)">;

// Whether the transformation is compatible with the device if given.
// Currently, Relu with int32 is not supported on GPU.
def IsDeviceCompatible: Constraint<
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Dec 06 18:42:28 UTC 2023 - 17K bytes - Viewed (0)
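The canonicalization this pattern implements rests on a simple elementwise identity: `tf.Maximum(x, 0)` computes exactly what `tf.Relu(x)` computes, so the rewrite is always semantically safe (the extra predicates only guard the int32-on-GPU case, where Relu is not supported). A NumPy sketch of the identity, with illustrative names:

```python
import numpy as np

def relu(x):
    # tf.Relu is elementwise max(x, 0); the canonicalizer rewrites
    # tf.Maximum(x, 0) into this single op.
    return np.maximum(x, 0.0)

x = np.array([-3.0, -0.5, 0.0, 2.0])
# The two forms agree on every element, which is what licenses the rewrite.
print(np.array_equal(np.maximum(x, 0.0), relu(x)))  # True
```

Rewriting to the dedicated Relu op matters downstream: later passes (e.g. the fused-kernel matcher shown below in these results) look for `Relu` by name when fusing activations into convolutions.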
tensorflow/compiler/mlir/tensorflow/tests/fused_kernel_matcher.mlir
// CHECK: %[[VAL_0:.*]] = "tf._FusedConv2D"(%arg2, %arg1, %arg0) <{data_format = "NHWC", dilations = [1, 1, 1, 1], epsilon = 0.000000e+00 : f32, explicit_paddings = [], fused_ops = ["BiasAdd", "Relu"], num_args = 1 : i64, operandSegmentSizes = array<i32: 1, 1, 1, 0>, padding = "SAME", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}> {TArgs = [f32]} : (tensor<8x32x32x3xf32>, tensor<1x1x3x128xf32>, tensor<128xf32>) -> tensor<*xf32>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Oct 30 06:52:55 UTC 2023 - 13.2K bytes - Viewed (0)
tensorflow/compiler/mlir/tfr/resources/composite_ops.cc
    .SetIsAggregate();

REGISTER_OP("MyBiasedDense")
    .Input("input: T")
    .Input("weight: T")
    .Input("bias: T")
    .Output("out: T")
    .Attr("T: {float, int8}")
    .Attr("act: {'', 'relu', 'relu6'} = ''");
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Sep 23 21:28:23 UTC 2020 - 1.3K bytes - Viewed (0)
tensorflow/compiler/mlir/quantization/tensorflow/passes/quantized_function_library.mlir
{"quantized_ops": ["${main_op}", "Relu"], "act_func": "internal_requantize_and_relu_fn", "output_type": "i8"},
{"quantized_ops": ["${main_op}", "Relu6"], "act_func": "internal_requantize_and_relu6_fn", "output_type": "i8"},
{"quantized_ops": ["${main_op}"], "act_func": "internal_dequantize_no_activation_fn", "output_type": "f32"},
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Jan 08 01:16:10 UTC 2024 - 30.6K bytes - Viewed (0)
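The `internal_requantize_and_relu_fn` entry above fuses requantization with the ReLU activation. A rough NumPy sketch of the idea (the function name, signature, and exact rounding here are illustrative assumptions, not the library's implementation): after rescaling an int32 accumulator into the int8 output scale, the fused ReLU amounts to clamping the lower bound at the output zero point rather than at -128, because quantized values below the zero point represent negative reals.

```python
import numpy as np

def requantize_and_relu(acc_i32, rescale, out_zero_point):
    # Rescale the int32 accumulator into the int8 output scale ...
    q = np.round(acc_i32 * rescale) + out_zero_point
    # ... then clamp. The lower bound is the zero point rather than -128:
    # that clamp *is* the fused ReLU, since anything below the zero point
    # would dequantize to a negative real value.
    return np.clip(q, out_zero_point, 127).astype(np.int8)

acc = np.array([-1000, 0, 1000], dtype=np.int64)
# -1000 rescales to a negative real and clamps up to the zero point (10);
# 0 maps to the zero point; 1000 rescales to 50 + 10 = 60.
print(requantize_and_relu(acc, 0.05, 10))
```

Fusing this way avoids materializing the pre-activation int8 tensor: one clip serves as both the int8 saturation and the activation.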
tensorflow/compiler/mlir/tfr/passes/decompose_patterns.td
(TFR_ConstantTensorOp (Arith_ConstantOp ConstantAttr<I32Attr, "127">))]>;

def QuantActRangeReluPattern : Pattern<
  (TFR_TFRQuantActRangeOp
     (TFR_ConstOp HasStringAttr<"RELU">:$act),
     (ConstantLikeMatcher F32Attr:$scale),
     (ConstantLikeMatcher I64Attr:$zp)),
  [(TFR_ConstantTensorOp (Arith_ConstantOp (Quantize<"0.0f"> $scale, $zp))),
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Sep 29 21:02:21 UTC 2022 - 2.4K bytes - Viewed (0)