Results 1 - 4 of 4 for GPUs (0.05 sec)
WORKSPACE
load( "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_json_init_repository.bzl", "cuda_json_init_repository", ) cuda_json_init_repository() load( "@cuda_redist_json//:distributions.bzl", "CUDA_REDISTRIBUTIONS", "CUDNN_REDISTRIBUTIONS", ) load( "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",
configure.py
Args:
  environ_cp: copy of the os.environ.
  var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".
  query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs".
  enabled_by_default: boolean for default behavior.
  question: optional string for how to ask for user input.
  yes_reply: optional string for reply when feature is enabled.
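Only the argument documentation is shown, not the function it belongs to. As a rough illustration of how a helper could use these arguments, here is a minimal Python sketch; the name prompt_for_var and its control flow are assumptions, not configure.py's actual implementation.

    def prompt_for_var(environ_cp, var_name, query_item, enabled_by_default,
                       question=None, yes_reply=None):
        """Hypothetical sketch: ask whether a feature (e.g. CUDA) should be enabled."""
        default = "y" if enabled_by_default else "n"
        # Respect a value already present in the copied environment.
        existing = environ_cp.get(var_name)
        if existing is not None:
            return existing == "1"
        prompt = question or "Do you wish to build with {} support? [{}]: ".format(
            query_item, default.upper())
        answer = input(prompt).strip().lower() or default
        enabled = answer.startswith("y")
        if enabled and yes_reply:
            print(yes_reply)
        environ_cp[var_name] = "1" if enabled else "0"
        return enabled

    # Example (hypothetical): prompt_for_var(dict(os.environ), "TF_NEED_CUDA",
    #                                        "CUDA for Nvidia GPUs", False)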
.bazelrc
# See https://developer.nvidia.com/cuda-gpus#compute
# `compute_XY` enables PTX embedding in addition to SASS. PTX is forward
# compatible beyond the current compute capability major release, while SASS
# is only forward compatible inside the current major release. Example: sm_80
# kernels can run on sm_89 GPUs but not on sm_90 GPUs; compute_80 kernels,
# however, can also run on sm_90 GPUs.
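In practice, comments like these accompany a compute-capability setting; a line of the following shape is typical, with the config name and the capability list below being illustrative assumptions rather than values quoted from this file.

    # Illustrative only: emit SASS for sm_80 and sm_89, plus compute_90 PTX
    # that sm_90 and newer GPUs can still JIT-compile.
    build:cuda --repo_env TF_CUDA_COMPUTE_CAPABILITIES="sm_80,sm_89,compute_90"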
RELEASE.md
... on Ampere-based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere-based GPUs which causes certain float32 ops, such as matrix multiplications and convolutions, to run much faster on Ampere GPUs but with reduced precision. This reduced precision has not been ...
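The TF32 behavior described here is controllable from the Python API; as a brief illustration (these tf.config.experimental calls are an addition for context, not part of this release-note excerpt):

    import tensorflow as tf

    # TF32 is enabled by default on Ampere GPUs in recent TF releases; query the current state.
    print(tf.config.experimental.tensor_float_32_execution_enabled())

    # Opt out when full float32 precision is needed for matmuls and convolutions.
    tf.config.experimental.enable_tensor_float_32_execution(False)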