Results 1 - 4 of 4 for GPUs (0.02 sec)

  1. WORKSPACE

    load(
        "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_json_init_repository.bzl",
        "cuda_json_init_repository",
    )
    
    cuda_json_init_repository()
    
    load(
        "@cuda_redist_json//:distributions.bzl",
        "CUDA_REDISTRIBUTIONS",
        "CUDNN_REDISTRIBUTIONS",
    )
    load(
        "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Dec 26 23:20:26 UTC 2025
    - 5.1K bytes
    - Viewed (0)
  2. .bazelrc

    # See https://developer.nvidia.com/cuda-gpus#compute
    # `compute_XY` enables PTX embedding in addition to SASS. PTX
    # is forward compatible beyond the current compute capability major
    # release while SASS is only forward compatible inside the current
    # major release. Example: sm_80 kernels can run on sm_89 GPUs but
    # not on sm_90 GPUs. compute_80 kernels though can also run on sm_90 GPUs.
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Dec 26 23:20:26 UTC 2025
    - 56.8K bytes
    - Viewed (0)
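
    The comment above distinguishes sm_XY (SASS) from compute_XY (PTX) targets by compute capability. As a point of reference, the sketch below is not part of the repository; it uses the plain CUDA runtime API to print the compute capability of each visible GPU, i.e. the X.Y value that sm_XY / compute_XY refer to.

      // Minimal sketch: list each device's compute capability, the X.Y value
      // behind the sm_XY / compute_XY targets discussed above.
      #include <cstdio>
      #include <cuda_runtime.h>

      int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
          std::printf("No CUDA devices found.\n");
          return 1;
        }
        for (int i = 0; i < count; ++i) {
          cudaDeviceProp prop{};
          cudaGetDeviceProperties(&prop, i);
          // SASS built for sm_80 runs on sm_89 (same major release) but not on
          // sm_90; PTX built for compute_80 can still be JIT-compiled for sm_90.
          std::printf("Device %d: %s, compute capability %d.%d (sm_%d%d)\n",
                      i, prop.name, prop.major, prop.minor, prop.major, prop.minor);
        }
        return 0;
      }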
  3. tensorflow/c/c_test_util.cc

      TF_SetAttrType(desc, "T", TF_INT32);
      // Set device to CPU since there is no version of split for int32 on GPU
      // TODO(iga): Convert all these helpers and tests to use floats because
      // they are usually available on GPUs. After doing this, remove TF_SetDevice
      // call in c_api_function_test.cc
      TF_SetDevice(desc, "/cpu:0");
      *op = TF_FinishOperation(desc, s);
      ASSERT_EQ(TF_OK, TF_GetCode(s)) << TF_Message(s);
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Sat Oct 04 05:55:32 UTC 2025
    - 17.8K bytes
    - Viewed (1)
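
    The TODO in the snippet above suggests switching these helpers to floats so the CPU pin becomes unnecessary. A minimal sketch of that variant follows; it is not code from the repository, and the FloatSplit name and the surrounding graph, status, and inputs are assumed to come from the test fixture.

      // Sketch of the TODO above: build Split with T = TF_FLOAT, for which GPU
      // kernels exist, so no TF_SetDevice(desc, "/cpu:0") pin is required.
      #include "tensorflow/c/c_api.h"

      // Hypothetical helper; `graph`, `s`, `split_dim`, and `input` are created
      // elsewhere (e.g. by the test fixture), as in c_test_util.cc.
      TF_Operation* FloatSplit(TF_Output split_dim, TF_Output input,
                               TF_Graph* graph, TF_Status* s,
                               const char* name = "split") {
        TF_OperationDescription* desc = TF_NewOperation(graph, "Split", name);
        TF_AddInput(desc, split_dim);          // scalar int32 axis to split along
        TF_AddInput(desc, input);              // float tensor to split
        TF_SetAttrInt(desc, "num_split", 2);
        TF_SetAttrType(desc, "T", TF_FLOAT);   // float kernels are available on GPU
        return TF_FinishOperation(desc, s);    // caller checks TF_GetCode(s)
      }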
  4. RELEASE.md

            on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode
            for NVIDIA Ampere based GPUs which causes certain float32 ops, such as
            matrix multiplications and convolutions, to run much faster on Ampere
            GPUs but with reduced precision. This reduced precision has not been
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Tue Oct 28 22:27:41 UTC 2025
    - 740.4K bytes
    - Viewed (3)
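
    The RELEASE.md excerpt above describes TF32 as a math mode; in TensorFlow it is controlled with tf.config.experimental.enable_tensor_float_32_execution(). The sketch below is not TensorFlow code; it shows roughly what such a math mode corresponds to at the cuBLAS level for FP32 matrix multiplications on Ampere GPUs.

      // Sketch (not TensorFlow code): enable/disable the TF32 math mode for FP32
      // GEMMs via cuBLAS, the reduced-precision fast path the release note describes.
      #include <cublas_v2.h>

      int main() {
        cublasHandle_t handle;
        if (cublasCreate(&handle) != CUBLAS_STATUS_SUCCESS) return 1;

        // Allow FP32 routines (e.g. cublasSgemm) to use TF32 tensor cores on
        // Ampere and newer GPUs: faster, but with a reduced-precision mantissa.
        cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

        // ... run cublasSgemm / other FP32 work here ...

        // Restore full-precision FP32 arithmetic.
        cublasSetMathMode(handle, CUBLAS_DEFAULT_MATH);
        cublasDestroy(handle);
        return 0;
      }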