Results 1 - 3 of 3 for "had" (0.16 sec)

  1. tensorflow/c/experimental/filesystem/plugins/gcs/ram_file_block_cache_test.cc

      TF_EXPECT_OK(ReadCache(&cache, "", block_size, block_size, &out));
      EXPECT_EQ(out.size(), 1);
      // Now read the first block; this should yield an INTERNAL error because we
      // had already cached a partial block at a later position.
      Status status = ReadCache(&cache, "", 0, block_size, &out);
      EXPECT_EQ(status.code(), error::INTERNAL);
    }
    
    TEST(RamFileBlockCacheTest, LRU) {
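    The comment in this excerpt states the cache invariant the test exercises: a block shorter than block_size marks end-of-file, so fetching a partial block at an earlier offset than an already-cached block means the cache contents are inconsistent. Below is a minimal toy sketch of that rule, written in Python rather than the C++ of the excerpt; the class and its names are illustrative assumptions, not TensorFlow's RamFileBlockCache.

        class ToyBlockCache:
            """Toy block cache, not TensorFlow's implementation."""

            def __init__(self, block_size, fetch):
                self.block_size = block_size
                self.fetch = fetch    # fetch(offset, n) -> bytes
                self.blocks = {}      # block offset -> cached bytes

            def read(self, offset, n):
                out = b""
                pos = offset - offset % self.block_size
                while pos < offset + n:
                    block = self.blocks.get(pos)
                    if block is None:
                        block = self.fetch(pos, self.block_size)
                        if (len(block) < self.block_size
                                and any(o > pos for o in self.blocks)):
                            # A partial block implies end-of-file here, yet a
                            # block is already cached at a later position.
                            raise RuntimeError("cache contents are inconsistent")
                        self.blocks[pos] = block
                    out += block
                    if len(block) < self.block_size:
                        break  # partial block: end of file
                    pos += self.block_size
                start = offset % self.block_size
                return out[start:start + n]

    Mirroring the test: with a fetcher that always returns one byte, read(block_size, block_size) caches a partial block at a later position, and a subsequent read(0, block_size) raises the inconsistency error.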
  2. tensorflow/c/eager/parallel_device/parallel_device_lib.cc

                "Computing the shape of a ParallelTensor when the components do "
                "not all have the same rank is not supported. One tensor had "
                "shape ",
                first_shape.DebugString(), " and another had shape ",
                component_shape.DebugString()));
          } else {
            // Generalize differing axis lengths to "variable"/"unknown".
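    The comment and error message in this excerpt describe the merge rule for ParallelTensor shapes: all components must have the same rank, and any axis whose length differs across components is generalized to "variable"/"unknown". A short sketch of that rule, in Python for illustration (the helper name is an assumption, not a TensorFlow API):

        from typing import List, Optional

        def merge_component_shapes(shapes: List[List[int]]) -> List[Optional[int]]:
            first = shapes[0]
            merged: List[Optional[int]] = list(first)
            for shape in shapes[1:]:
                if len(shape) != len(first):
                    # Mirrors the error path in the excerpt: components with
                    # differing ranks are not supported.
                    raise ValueError(f"One tensor had shape {first} and "
                                     f"another had shape {shape}")
                for axis, length in enumerate(shape):
                    if merged[axis] != length:
                        merged[axis] = None  # generalize to "unknown"
            return merged

    For example, merge_component_shapes([[2, 3], [2, 4]]) returns [2, None], while [[2, 3], [2, 3, 4]] raises the rank error.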
  3. RELEASE.md

        * Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
    
    *  `tf.experimental.dtensor`
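    A brief usage sketch of the change described in the note. The call is the one named in the release entry; the tensor shapes are illustrative assumptions:

        import tensorflow as tf

        # Disable TensorFloat-32. Per the release note, on TPUs this now also
        # forces float32 matmuls to run in full float32 precision rather than
        # bfloat16.
        tf.config.experimental.enable_tensor_float_32_execution(False)

        x = tf.random.normal([1024, 1024], dtype=tf.float32)
        y = tf.random.normal([1024, 1024], dtype=tf.float32)
        z = tf.matmul(x, y)  # computed in float32 precision with TF32 disabled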