Results 1 - 6 of 6 for LogSoftmax (0.41 sec)
tensorflow/compiler/mlir/tensorflow/transforms/lower_tf.td
// Computes loss and backprop of the loss with respect to 'features'.
//
// Softmax cross entropy loss is defined as follows:
//
//   loss = Sum(-labels * Log(Exp(features) / Sum(Exp(features))))
//   loss = Sum(-labels * LogSoftmax(features))
//
// Computing the gradient of the loss with respect to features gives us:
//
//   backprop = (Exp(features) / Sum(Exp(features))) - labels
//   backprop = Softmax(features) - labels
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 04 13:30:42 UTC 2024 - 24.7K bytes - Viewed (0) -
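The loss and gradient identities in the snippet above can be checked numerically. A minimal NumPy sketch (the helper names `softmax` and `log_softmax` are illustrative, not TensorFlow APIs) that verifies `backprop = Softmax(features) - labels` against a finite-difference estimate of the loss gradient:

```python
import numpy as np

def softmax(x):
    # Shift by the row max for numerical stability; the result is unchanged.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def log_softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

features = np.array([[1.0, 2.0, 3.0]])
labels = np.array([[0.0, 0.0, 1.0]])  # one-hot

# loss = Sum(-labels * LogSoftmax(features))
loss = np.sum(-labels * log_softmax(features))

# backprop = Softmax(features) - labels
backprop = softmax(features) - labels

# Check the analytic gradient against a central finite difference.
eps = 1e-6
num_grad = np.zeros_like(features)
for j in range(features.shape[1]):
    bump = np.zeros_like(features)
    bump[0, j] = eps
    num_grad[0, j] = (np.sum(-labels * log_softmax(features + bump)) -
                      np.sum(-labels * log_softmax(features - bump))) / (2 * eps)

assert np.allclose(backprop, num_grad, atol=1e-5)
```

The agreement between the closed-form gradient and the numerical one is exactly why the lowering in `lower_tf.td` can emit `Softmax(features) - labels` directly instead of differentiating through the log.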
tensorflow/compiler/mlir/tensorflow/transforms/lower_tf.cc
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 74.9K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/tests/canonicalize.mlir
%1 = "tf.Log"(%0) {device = "/job:localhost/replica:0/task:0/device:GPU:0"} : (tensor<8x16xf32>) -> tensor<8x16xf32>
func.return %1 : tensor<8x16xf32>
// CHECK: %0 = "tf.LogSoftmax"(%arg0) {device = "/job:localhost/replica:0/task:0/device:GPU:0"} : (tensor<8x16xf32>) -> tensor<8x16xf32>
// CHECK: return %0
}
// CHECK-LABEL: testLogToLog1p
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 09 22:07:10 UTC 2024 - 132.1K bytes - Viewed (0) -
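The canonicalization test above rewrites `tf.Log(tf.Softmax(x))` into the single fused `tf.LogSoftmax(x)` op. A small NumPy sketch (names here are illustrative) confirming the two forms compute the same values, which is what justifies the rewrite:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16)).astype(np.float32)  # same shape as the test's tensor<8x16xf32>

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Unfused form: take the softmax, then the log.
log_of_softmax = np.log(softmax(x))

# Fused form: the log distributes, so exp/log cancel analytically.
m = x.max(axis=-1, keepdims=True)
fused = x - m - np.log(np.exp(x - m).sum(axis=-1, keepdims=True))

assert np.allclose(log_of_softmax, fused, atol=1e-5)
```

Beyond saving an op, the fused form is better conditioned: it never computes small softmax probabilities only to take their log afterward.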
tensorflow/compiler/mlir/lite/tests/legalize-tf.mlir
func.return %0 : tensor<8x16xf32>
// CHECK-LABEL: log
// CHECK: "tfl.log"(%arg0) : (tensor<8x16xf32>) -> tensor<8x16xf32>
}

func.func @log_softmax(%arg0: tensor<8x16xf32>) -> tensor<8x16xf32> {
  %0 = "tf.LogSoftmax"(%arg0) : (tensor<8x16xf32>) -> tensor<8x16xf32>
  func.return %0 : tensor<8x16xf32>
  // CHECK-LABEL: log_softmax
  // CHECK: "tfl.log_softmax"(%arg0) : (tensor<8x16xf32>) -> tensor<8x16xf32>
}
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 05 01:54:33 UTC 2024 - 153.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
return ArraysAreCastCompatible(inferred, actual); } }];
}

def TF_LogSoftmaxOp : TF_Op<"LogSoftmax", [Pure, TF_SameOperandsAndResultTypeResolveRef]> {
  let summary = "Computes log softmax activations.";
  let description = [{
    For each batch `i` and class `j` we have

        logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
  }];
  let arguments = (ins
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes - Viewed (0) -
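The op description gives the defining formula `logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))`. Evaluated literally, `exp` overflows for large logits, so practical implementations subtract the row max first. A minimal sketch of that standard trick (the `log_softmax` helper is illustrative, not the TensorFlow kernel):

```python
import numpy as np

def log_softmax(logits):
    # logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
    # Subtracting the row max keeps exp() from overflowing; the shift
    # cancels algebraically, so the result is mathematically identical.
    m = logits.max(axis=-1, keepdims=True)
    return logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))

# A naive logits - log(sum(exp(logits))) would overflow to inf here.
logits = np.array([[1000.0, 1001.0, 1002.0]])
out = log_softmax(logits)

assert np.isfinite(out).all()
# Each row of exp(log_softmax) is a valid probability distribution.
assert np.allclose(np.exp(out).sum(axis=-1), 1.0)
```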
RELEASE.md
stateless and do not touch any resources.
* Refactors code in Quant8 LSTM support to reduce TFLite binary size.
* Add support of local soft device placement for eager op.
* Add HW acceleration support for `LogSoftMax`.
* Added a function `nested_value_rowids` for ragged tensors.
* Add guard to avoid acceleration of L2 Normalization with input rank != 4.
* Add `tf.math.cumulative_logsumexp` operation.
* Add `tf.ragged.stack`.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)