Results 11 - 20 of 62 for muda (0.03 sec)
ci/official/containers/ml_build/rbe_nvidia.packages.txt
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Thu Sep 18 00:19:40 UTC 2025 - 307 bytes - Viewed (0) -
.github/bot_config.yml
**1. Installing TensorFlow-GPU (TF) prebuilt binaries** Make sure you are using compatible TF and CUDA versions. Please refer to the following TF version and CUDA version compatibility table.

| TF | CUDA |
| :-------------: | :-------------: |
| 2.5.0 | 11.2 |
| 2.4.0 | 11.0 |
| 2.1.0 - 2.3.0 | 10.1 |
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Mon Jun 30 16:38:59 UTC 2025 - 4K bytes - Viewed (1) -
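To cross-check the table above against an installed wheel, a minimal sketch using TensorFlow's public `tf.sysconfig.get_build_info()` API; the comparison itself is illustrative and not part of bot_config.yml:

```python
# Sketch: query the CUDA/cuDNN versions a prebuilt TensorFlow wheel was compiled against.
import tensorflow as tf

build = tf.sysconfig.get_build_info()
print("Built with CUDA:", build.get("is_cuda_build", False))
print("CUDA version:", build.get("cuda_version"))    # e.g. "11.2" for TF 2.5.0 per the table
print("cuDNN version:", build.get("cudnn_version"))
```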
ci/official/requirements_updater/nvidia-requirements.txt
nvidia-cublas-cu12>=12.5.3.2,<13.0
nvidia-cuda-cupti-cu12>=12.5.82,<13.0
nvidia-cuda-nvcc-cu12>=12.5.82,<13.0
nvidia-cuda-nvrtc-cu12>=12.5.82,<13.0
nvidia-cuda-runtime-cu12>=12.5.82,<13.0
# The upper bound is set for the CUDNN API compatibility.
# See
# https://docs.nvidia.com/deeplearning/cudnn/backend/latest/developer/forward-compatibility.html#cudnn-api-compatibility
nvidia-cudnn-cu12>=9.3.0.75,<10.0
nvidia-cufft-cu12>=11.2.3.61,<12.0
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Wed Sep 03 23:57:17 UTC 2025 - 646 bytes - Viewed (0) -
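A hedged sketch of checking installed NVIDIA wheels against pins like those above, using `importlib.metadata` and the third-party `packaging` library; the two pins shown are copied from the file, the checking loop is illustrative:

```python
from importlib.metadata import version, PackageNotFoundError
from packaging.specifiers import SpecifierSet  # third-party 'packaging' package

# Pins taken from nvidia-requirements.txt above.
pins = {
    "nvidia-cudnn-cu12": SpecifierSet(">=9.3.0.75,<10.0"),
    "nvidia-cuda-runtime-cu12": SpecifierSet(">=12.5.82,<13.0"),
}
for pkg, spec in pins.items():
    try:
        installed = version(pkg)
        status = "OK" if installed in spec else f"outside {spec}"
        print(f"{pkg} {installed}: {status}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```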
WORKSPACE
load( "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_json_init_repository.bzl", "cuda_json_init_repository", ) cuda_json_init_repository() load( "@cuda_redist_json//:distributions.bzl", "CUDA_REDISTRIBUTIONS", "CUDNN_REDISTRIBUTIONS", ) load( "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Fri Dec 26 23:20:26 UTC 2025 - 5.1K bytes - Viewed (0) -
.bazelrc
# release_cpu_linux: Toolchain and CUDA options for Linux CPU builds.
# release_gpu_linux: Toolchain and CUDA options for Linux GPU builds.
# release_cpu_macos: Toolchain and CUDA options for MacOS CPU builds.
# release_cpu_windows: Toolchain and CUDA options for Windows CPU builds.
# LINT.IfChange
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Fri Dec 26 23:20:26 UTC 2025 - 56.8K bytes - Viewed (0) -
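The release config names above map one-to-one to host platforms. A hedged Python sketch of how a CI wrapper might pick one and pass it to bazel; the helper, the subprocess invocation, and the build target are illustrative, not part of .bazelrc:

```python
import platform
import subprocess

def release_config(gpu: bool) -> str:
    """Pick one of the .bazelrc release config names listed above."""
    system = platform.system()
    if system == "Linux":
        return "release_gpu_linux" if gpu else "release_cpu_linux"
    if system == "Darwin":
        return "release_cpu_macos"
    return "release_cpu_windows"

# Illustrative: build with the selected config (target name is an assumption).
subprocess.run(
    ["bazel", "build", f"--config={release_config(gpu=True)}",
     "//tensorflow/tools/pip_package:wheel"],
    check=True,
)
```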
configure.py
write_repo_env_to_bazelrc('cuda', env_var, local_path)

def set_other_cuda_vars(environ_cp):
  """Set other CUDA related variables."""
  # If CUDA is enabled, always use GPU during build and test.
  if environ_cp.get('TF_CUDA_CLANG') == '1':
    write_to_bazelrc('build --config=cuda_clang')
  else:
    write_to_bazelrc('build --config=cuda')
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Wed Apr 30 15:18:54 UTC 2025 - 48.3K bytes - Viewed (0) -
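For context, a minimal sketch of what a `write_to_bazelrc`-style helper does; the `.tf_configure.bazelrc` file name and the helper body are assumptions about how configure.py records options, not copied from the file:

```python
import os

_TF_BAZELRC = ".tf_configure.bazelrc"  # assumption: the generated rc file configure.py appends to

def write_to_bazelrc(line: str) -> None:
    # Append one bazel option per line to the generated rc file.
    with open(_TF_BAZELRC, "a") as f:
        f.write(line + "\n")

# Mirrors the logic in set_other_cuda_vars(): choose the CUDA config via TF_CUDA_CLANG.
if os.environ.get("TF_CUDA_CLANG") == "1":
    write_to_bazelrc("build --config=cuda_clang")
else:
    write_to_bazelrc("build --config=cuda")
```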
ci/official/utilities/code_check_full.bats
done < $BATS_TEST_TMPDIR/missing_deps exit 1 fi } # The Python package is not allowed to depend on any CUDA packages. @test "Pip package doesn't depend on CUDA" { bazel cquery \ --experimental_cc_shared_library \ --@local_config_cuda//:enable_cuda \ --@local_config_cuda//cuda:include_cuda_libs=false \
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Fri Dec 19 18:47:57 UTC 2025 - 13.5K bytes - Viewed (0) -
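A hedged sketch of the same invariant checked at the wheel level instead of with `bazel cquery`: inspect the installed tensorflow distribution's declared requirements and flag any unconditional NVIDIA CUDA dependency. The check is illustrative and not part of code_check_full.bats:

```python
from importlib.metadata import requires

# Fail if the plain 'tensorflow' wheel declares unconditional deps on nvidia-* CUDA wheels.
# (Deps guarded by the [and-cuda] extra are allowed; unconditional ones are not.)
cuda_deps = [
    r for r in (requires("tensorflow") or [])
    if r.lower().startswith("nvidia-") and "extra ==" not in r
]
if cuda_deps:
    raise SystemExit(f"tensorflow wheel depends on CUDA packages: {cuda_deps}")
print("OK: no unconditional CUDA dependencies")
```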
CONTRIBUTING.md
flag.
```bash
export flags="--config=linux --config=cuda -k"
```
* For TensorFlow versions prior to v2.18.0: Add CUDA paths to LD_LIBRARY_PATH and add the `cuda` option flag.
```bash
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
```
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Sat Jan 11 04:47:59 UTC 2025 - 15.9K bytes - Viewed (0) -
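The same environment setup expressed as a hedged Python sketch, for cases where the tests are launched from a script rather than an interactive shell; the library paths follow the snippet above, while the subprocess call and test target are illustrative:

```python
import os
import subprocess

# Prepend the CUDA and CUPTI library directories, mirroring the export above
# (only needed for TensorFlow versions prior to v2.18.0).
cuda_libs = ["/usr/local/cuda/lib64", "/usr/local/cuda/extras/CUPTI/lib64"]
env = dict(os.environ)
env["LD_LIBRARY_PATH"] = ":".join(cuda_libs + [env.get("LD_LIBRARY_PATH", "")])

# Illustrative test invocation using the flags from the snippet.
subprocess.run(
    ["bazel", "test", "--config=linux", "--config=cuda", "-k", "//tensorflow/..."],
    env=env,
    check=True,
)
```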
ci/official/containers/ml_build/setup.sources.cudnn.sh
export DEBIAN_FRONTEND=noninteractive

# Fetch the NVIDIA key.
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub;

# Set up sources for NVIDIA CUDNN.
cat >/etc/apt/sources.list.d/nvidia.list <<SOURCES
# NVIDIA
deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /
Registered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Tue Feb 18 20:42:21 UTC 2025 - 1.2K bytes - Viewed (0) -
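A hedged Python sketch of generating that apt sources entry for a different Ubuntu release or architecture; the `ubuntu_release` and `arch` parameters are assumptions for illustration, the container script itself hard-codes ubuntu2204/x86_64:

```python
def nvidia_apt_source(ubuntu_release: str = "ubuntu2204", arch: str = "x86_64") -> str:
    # Builds the same deb line the setup script writes, parameterized by release/arch.
    repo = f"https://developer.download.nvidia.com/compute/cuda/repos/{ubuntu_release}/{arch}/"
    return f"# NVIDIA\ndeb {repo} /\n"

with open("/etc/apt/sources.list.d/nvidia.list", "w") as f:
    f.write(nvidia_apt_source())
```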
ci/official/utilities/rename_and_verify_wheels.sh
"$python" -m pip install numpy==1.26.4 else "$python" -m pip install numpy==1.26.0 fi fi if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then echo "Checking to make sure tensorflow[and-cuda] is installable..." "$python" -m pip install "$(echo *.whl)[and-cuda]" $TFCI_PYTHON_VERIFY_PIP_INSTALL_ARGS else "$python" -m pip install *.whl $TFCI_PYTHON_VERIFY_PIP_INSTALL_ARGS fiRegistered: Tue Dec 30 12:39:10 UTC 2025 - Last Modified: Mon Sep 22 21:39:32 UTC 2025 - 4.4K bytes - Viewed (0)