Results 1 - 5 of 5 for pad_delta (0.16 sec)
tensorflow/compiler/mlir/lite/stablehlo/transforms/legalize_hlo.cc
for (size_t i = 0; i < num_spatial_dims; ++i) {
  // In some cases the total padding is odd, so we have 1 leftover, which is
  // why below we check pad_delta > 1.
  int64_t pad_delta = std::abs(padding[2 * i] - padding[2 * i + 1]);
  if (pad_delta > 1) {
    return false;
  }
  int64_t stride = strides[i + 1];
  int64_t input_size = mlir::cast<ShapedType>(conv_op.getLhs().getType())
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 154.9K bytes - Viewed (0) -
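The snippet above tolerates a left/right padding difference of exactly 1 because SAME-style convolution padding can be odd in total, leaving one leftover element on one side. A minimal Python sketch of that splitting logic (the function name and the SAME-padding formula are assumptions for illustration, not code from legalize_hlo.cc):

```python
def same_padding_split(input_size, kernel_size, stride):
    # SAME padding: output_size == ceil(input_size / stride).
    output_size = -(-input_size // stride)  # ceiling division
    total_pad = max((output_size - 1) * stride + kernel_size - input_size, 0)
    # Split the total padding across the two sides. When total_pad is odd,
    # the two sides differ by exactly 1 -- hence the `pad_delta > 1` check
    # above rather than requiring perfectly symmetric padding.
    pad_low = total_pad // 2
    pad_high = total_pad - pad_low
    return pad_low, pad_high
```

For example, `same_padding_split(5, 2, 2)` yields `(0, 1)`, an asymmetric split with `pad_delta == 1`, which the legalization still accepts.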
src/cmd/link/internal/loong64/asm.go
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Feb 27 17:26:07 UTC 2024 - 7.5K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/tests/decompose_resource_ops.mlir
// CHECK: [[SQRT_NEW_V_EPSILON:%.*]] = "tf.AddV2"([[SQRT_NEW_V]], [[EPSILON]])
// CHECK: [[VAR_DELTA:%.*]] = "tf.Div"([[ALPHA_NEW_M]], [[SQRT_NEW_V_EPSILON]])
// CHECK: [[OLD_VAR:%.*]] = "tf.ReadVariableOp"([[VAR_HANDLE]]) : (tensor<*x!tf_type.resource<tensor<*xf32>>>) -> tensor<*xf32>
// CHECK: [[NEW_VAR:%.*]] = "tf.Sub"([[OLD_VAR]], [[VAR_DELTA]])
// CHECK: "tf.AssignVariableOp"([[VAR_HANDLE]], [[NEW_VAR]])
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 22 19:47:48 UTC 2024 - 51.3K bytes - Viewed (0) -
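The CHECK lines above verify an Adam-style variable update decomposed into primitive ops: divide a scaled first moment by the smoothed root of the second moment, then subtract the result from the variable. A hedged Python sketch of what those ops compute (the function name is hypothetical; `alpha_new_m` and the sqrt of `new_v` are assumed, per the capture labels, to be computed before the shown lines):

```python
import math

def decomposed_update(old_var, alpha_new_m, new_v, epsilon):
    sqrt_new_v = math.sqrt(new_v)                  # assumed computed earlier
    sqrt_new_v_epsilon = sqrt_new_v + epsilon      # tf.AddV2
    var_delta = alpha_new_m / sqrt_new_v_epsilon   # tf.Div
    new_var = old_var - var_delta                  # tf.Sub
    # In the decomposed IR, new_var is written back via tf.AssignVariableOp.
    return new_var
```

With `old_var=1.0`, `alpha_new_m=0.05`, `new_v=0.25`, `epsilon=0.0`, the update is `1.0 - 0.05/0.5 = 0.9`.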
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
executed. }];
let arguments = (ins
  Arg<TF_Float32Tensor, [{Value of parameters used in the Adadelta optimization algorithm.}]>:$parameters,
  Arg<TF_Float32Tensor, [{Value of accumulators used in the Adadelta optimization algorithm.}]>:$accumulators,
  Arg<TF_Float32Tensor, [{Value of updates used in the Adadelta optimization algorithm.}]>:$updates,
  DefaultValuedOptionalAttr<I64Attr, "-1">:$table_id,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes - Viewed (0) -
RELEASE.md
* Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally the new optimizers work in the same way as the old ones, but support new
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)