Results 1 - 4 of 4 for trainable_variables (0.42 sec)
tensorflow/compiler/mlir/tfr/examples/mnist/mnist_train.py
tf.nn.softmax_cross_entropy_with_logits(labels, logits))
grads = tape.gradient(loss_value, model.trainable_variables)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return accuracy, loss_value

@tf.function
def distributed_train_step(dist_inputs):
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Oct 20 03:05:18 UTC 2021 - 6.5K bytes
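The snippet above is the core of a custom training step: differentiate the loss with respect to `model.trainable_variables`, then apply the gradients. A minimal, self-contained sketch of that pattern (the model, optimizer, and random data here are stand-ins, not the actual contents of mnist_train.py):

```python
import tensorflow as tf

# Hypothetical stand-in model and optimizer (the real file trains on MNIST).
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model(tf.zeros([1, 784]))  # build the model so its variables exist
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        logits = model(images, training=True)
        loss_value = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    # Differentiate the loss w.r.t. every trainable variable, then update them.
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return accuracy, loss_value

images = tf.random.normal([8, 784])
labels = tf.one_hot(tf.random.uniform([8], maxval=10, dtype=tf.int32), depth=10)
acc, loss = train_step(images, labels)
```

In the original file this step is additionally wrapped by `distributed_train_step`, which runs it under a distribution strategy.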
tensorflow/cc/saved_model/testdata/half_plus_two_pbtxt/00000123/saved_model.pbtxt
value {
  type_url: "type.googleapis.com/tensorflow.AssetFileDef"
  value: "\n\t\n\007Const:0\022\007foo.txt"
} } } }
collection_def {
  key: "trainable_variables"
  value {
    bytes_list {
      value: "\n\003a:0\022\010a/Assign\032\010a/read:0"
      value: "\n\003b:0\022\010b/Assign\032\010b/read:0"
      value: "\n\003c:0\022\010c/Assign\032\010c/read:0"
Last Modified: Fri May 26 01:10:27 UTC 2017 - 46.9K bytes
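The `collection_def` with key `"trainable_variables"` in this pbtxt serializes the TF1 graph collection of that name. A small sketch (not the test file's actual code) of how variables created in a TF1-style graph end up in that collection:

```python
import tensorflow as tf

# Build a TF1-style graph; get_variable registers each trainable variable
# in the GraphKeys.TRAINABLE_VARIABLES collection, which SavedModel
# serializes as the "trainable_variables" collection_def.
g = tf.Graph()
with g.as_default():
    a = tf.compat.v1.get_variable(
        "a", shape=[1], initializer=tf.compat.v1.zeros_initializer())
    b = tf.compat.v1.get_variable(
        "b", shape=[1], initializer=tf.compat.v1.zeros_initializer())

names = [v.name for v in
         g.get_collection(tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES)]
```

Each serialized entry above also records the variable's `Assign` and `read` ops alongside its name (e.g. `a:0`, `a/Assign`, `a/read:0`).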
tensorflow/compiler/mlir/tensorflow/translate/import_model.cc
Last Modified: Wed May 01 11:17:36 UTC 2024 - 183.2K bytes
RELEASE.md
`apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop. * **Performance regression on `ParameterServerStrategy`.** This could be significant if you have many PS servers. We are aware of this issue and
Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes
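The release note's recommendation can be sketched as follows; the model here is a hypothetical placeholder, and the point is only the `optimizer.build(...)` call before any staged updates:

```python
import tensorflow as tf

# Hypothetical model; the note applies to any Keras optimizer whose state
# would otherwise be created lazily on the first apply_gradients() call.
model = tf.keras.Sequential([tf.keras.layers.Dense(4)])
model(tf.zeros([1, 8]))  # build the model so its variables exist

optimizer = tf.keras.optimizers.Adam()
# Create all optimizer state up front, as the release note recommends for
# workflows that update different parts of the model in multiple stages.
optimizer.build(model.trainable_variables)

# Later stages can now call apply_gradients() without lazy initialization.
grads = [tf.zeros_like(v) for v in model.trainable_variables]
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```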