Results 1 - 3 of 3 for train_step (0.17 sec)
tensorflow/compiler/mlir/tfr/examples/mnist/mnist_train.py
with strategy.scope():
  # Create an mnist float model with the specified float state.
  model = FloatModel()
  optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

  def train_step(features):
    inputs = tf.image.convert_image_dtype(
        features['image'], dtype=tf.float32, saturate=False)
    labels = tf.one_hot(features['label'], num_classes)
    with tf.GradientTape() as tape:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Oct 20 03:05:18 UTC 2021 - 6.5K bytes - Viewed (0)
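The snippet above is cut off inside the `tf.GradientTape` block. A minimal sketch of how such a `train_step` typically completes — the stand-in model, the softmax cross-entropy loss, and the gradient application are illustrative assumptions, not taken from the source file:

```python
import tensorflow as tf

num_classes = 10

# Stand-in for the FloatModel referenced in the snippet (assumption).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

def train_step(features):
  # Normalize uint8 images to float32 in [0, 1].
  inputs = tf.image.convert_image_dtype(
      features['image'], dtype=tf.float32, saturate=False)
  labels = tf.one_hot(features['label'], num_classes)
  with tf.GradientTape() as tape:
    logits = model(inputs, training=True)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
  # Backpropagate and apply one optimizer update.
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss
```

Called with a dict of `image`/`label` tensors (the feature keys shown in the snippet), it returns the scalar batch loss.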
tensorflow/compiler/jit/tests/auto_clustering_test.cc
// bazel run -c opt --config=cuda ${TARGET_PATH}:resnet_imagenet_main \
//   -- --skip_eval --num_gpus=1 --dtype=fp16 --batch_size=192 \
//   --train_steps=210 --enable_xla --enable_eager=true
//
// At CL 245846452
  TF_ASSERT_OK(RunAutoClusteringTestWithPbtxt("keras_imagenet_main"));
}

TEST_F(AutoClusteringTestImpl, KerasImagenetMainGraphMode) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Jan 13 20:13:03 UTC 2022 - 3.6K bytes - Viewed (0)
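The comment above records the bazel command (note `--enable_xla`) that produced the graph the auto-clustering test replays. As a hedged aside, the standard user-facing way to opt a function into XLA compilation is the `jit_compile` flag on `tf.function` — this is a generic illustration of that API, not a claim about what the test itself runs:

```python
import tensorflow as tf

# Ask TensorFlow to compile the whole function with XLA rather than
# relying on auto-clustering of individual ops.
@tf.function(jit_compile=True)
def scaled_sum(x, y):
  return tf.reduce_sum(x * 2.0 + y)

out = scaled_sum(tf.ones([3]), tf.ones([3]))  # 3 elements of (1*2 + 1) = 9.0
```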
RELEASE.md
  deprecated and will be removed in a future release.
* Metrics update and collection logic in default `Model.train_step()` is now
  customizable via overriding `Model.compute_metrics()`.
* Losses computation logic in default `Model.train_step()` is now customizable
  via overriding `Model.compute_loss()`.
* `jit_compile` added to `Model.compile()` on an opt-in basis to compile
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)