Results 11 - 20 of 61 for muda (0.02 sec)

  1. ci/official/requirements_updater/nvidia-requirements.txt

    nvidia-cublas-cu12>=12.5.3.2,<13.0
    nvidia-cuda-cupti-cu12>=12.5.82,<13.0
    nvidia-cuda-nvcc-cu12>=12.5.82,<13.0
    nvidia-cuda-nvrtc-cu12>=12.5.82,<13.0
    nvidia-cuda-runtime-cu12>=12.5.82,<13.0
    # The upper bound is set for the CUDNN API compatibility.
    # See
    # https://docs.nvidia.com/deeplearning/cudnn/backend/latest/developer/forward-compatibility.html#cudnn-api-compatibility
    nvidia-cudnn-cu12>=9.3.0.75,<10.0
    nvidia-cufft-cu12>=11.2.3.61,<12.0
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Wed Sep 03 23:57:17 UTC 2025
    - 646 bytes
    - Viewed (0)
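    These pins are meant to be consumed as a pip requirements/constraints file. A minimal sketch of installing them, assuming a TensorFlow checkout that contains the file above (the virtual-environment name is illustrative):

    ```bash
    # Create an isolated environment and install the pinned NVIDIA CUDA wheels.
    python3 -m venv tf-cuda-env        # environment name is illustrative
    source tf-cuda-env/bin/activate
    pip install -r ci/official/requirements_updater/nvidia-requirements.txt
    ```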
  2. .bazelrc

    #     release_cpu_linux:               Toolchain and CUDA options for Linux CPU builds.
    #     release_gpu_linux:               Toolchain and CUDA options for Linux GPU builds.
    #     release_cpu_macos:               Toolchain and CUDA options for MacOS CPU builds.
    #     release_cpu_windows:             Toolchain and CUDA options for Windows CPU builds.
    # LINT.IfChange
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Fri Aug 22 21:03:34 UTC 2025
    - 56K bytes
    - Viewed (0)
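    These named configs are selected with Bazel's `--config` flag. A hedged sketch of how one might be used; the build target is a placeholder, not taken from the file:

    ```bash
    # Pick up the Linux GPU release toolchain and CUDA options defined in .bazelrc.
    # //path/to:target is a placeholder; substitute a real TensorFlow target.
    bazel build --config=release_gpu_linux //path/to:target
    ```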
  3. .github/bot_config.yml

       
       **1. Installing TensorFlow-GPU (TF) prebuilt binaries**
       
       
       Make sure you are using compatible TF and CUDA versions.
       Please refer to the following TF and CUDA version compatibility table.

       | TF | CUDA |
       | :---: | :---: |
       | 2.5.0 | 11.2 |
       | 2.4.0 | 11.0 |
       | 2.1.0 - 2.3.0 | 10.1 |
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Mon Jun 30 16:38:59 UTC 2025
    - 4K bytes
    - Viewed (0)
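    The point of the table is that the TF and CUDA columns must be matched. A sketch of installing one matching pair from the table, assuming CUDA 11.2 is already installed system-wide:

    ```bash
    # TF 2.5.0 pairs with CUDA 11.2 according to the table above.
    pip install tensorflow-gpu==2.5.0
    nvcc --version   # confirm the system CUDA toolkit reports release 11.2
    ```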
  4. ci/official/utilities/rename_and_verify_wheels.sh

      fi
    fi
    # VERY basic check to ensure the [and-cuda] package variant is installable.
    # Checks TFCI_BAZEL_COMMON_ARGS for "gpu" or "cuda", implying that the test is
    # relevant. All of the GPU test machines have CUDA installed via other means,
    # so I am not sure how to verify that the dependencies themselves are valid for
    # the moment.
    if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Fri Apr 25 00:22:38 UTC 2025
    - 4.7K bytes
    - Viewed (0)
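    The snippet gates a smoke test on whether the build flags mention GPU or CUDA. A minimal sketch of the kind of installability check it describes; the wheel path variable is a placeholder, not taken from the script:

    ```bash
    # Only exercise the [and-cuda] extra when the build was GPU/CUDA-flavored.
    if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then
      # "$WHEEL" is a placeholder for the wheel file produced by the build.
      pip install "${WHEEL}[and-cuda]"
    fi
    ```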
  5. configure.py

        write_repo_env_to_bazelrc('cuda', env_var, local_path)
    
    
    def set_other_cuda_vars(environ_cp):
      """Set other CUDA related variables."""
      # If CUDA is enabled, always use GPU during build and test.
      if environ_cp.get('TF_CUDA_CLANG') == '1':
        write_to_bazelrc('build --config=cuda_clang')
      else:
        write_to_bazelrc('build --config=cuda')
    
    
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Wed Apr 30 15:18:54 UTC 2025
    - 48.3K bytes
    - Viewed (0)
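    In this flow, configure.py turns environment variables into .bazelrc lines. A hedged sketch of driving it non-interactively; only TF_CUDA_CLANG appears in the snippet, and the other variables are assumptions about the usual configure prompts:

    ```bash
    # Answer the CUDA-related prompts via environment variables
    # (TF_NEED_CUDA and PYTHON_BIN_PATH are assumed prompt names, not taken from the snippet).
    TF_NEED_CUDA=1 TF_CUDA_CLANG=1 PYTHON_BIN_PATH="$(which python3)" python3 configure.py
    # With TF_CUDA_CLANG=1, the snippet writes "build --config=cuda_clang" to .bazelrc.
    ```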
  6. docs/id/docs/index.md

    * **Intuitive**: Great editor support. <abbr title="also known as auto-complete, autocompletion, IntelliSense">Completion</abbr> everywhere. Less *debugging*.
    * **Easy**: Designed to be easy to use and learn. Less time reading documentation.
    * **Short**: Minimize code duplication. Multiple features from every parameter declaration. Fewer *bugs*.
    Registered: Sun Sep 07 07:19:17 UTC 2025
    - Last Modified: Sun Aug 31 10:49:48 UTC 2025
    - 20.5K bytes
    - Viewed (0)
  7. CONTRIBUTING.md

            flag.
    
            ```bash
            export flags="--config=linux --config=cuda -k"
            ```
    
        *   For TensorFlow versions prior to v2.18.0: Add CUDA paths to
            LD_LIBRARY_PATH and add the `cuda` option flag.
    
            ```bash
            export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Sat Jan 11 04:47:59 UTC 2025
    - 15.9K bytes
    - Viewed (0)
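    Once `flags` is exported, it is typically expanded into the Bazel invocation. A sketch under that assumption; the target pattern is a placeholder:

    ```bash
    # Reuse the exported flags when running tests ($flags is left unquoted so it word-splits).
    # //tensorflow/... is a placeholder target pattern.
    bazel test ${flags} //tensorflow/...
    ```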
  8. ci/official/utilities/code_check_full.bats

        done < $BATS_TEST_TMPDIR/missing_deps
        exit 1
      fi
    }
    
    # The Python package is not allowed to depend on any CUDA packages.
    @test "Pip package doesn't depend on CUDA" {
      bazel cquery \
        --experimental_cc_shared_library \
        --@local_config_cuda//:enable_cuda \
        --@local_config_cuda//cuda:include_cuda_libs=false \
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Wed Aug 06 20:43:08 UTC 2025
    - 13.4K bytes
    - Viewed (0)
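    The truncated `bazel cquery` call checks that no CUDA target is reachable from the pip package. A hedged sketch of how such a query could be completed; the target labels and the `somepath` expression are illustrative, not taken from the file:

    ```bash
    # Prints a dependency path (and thus fails the check) if the pip package
    # can reach a CUDA target. Both labels are illustrative placeholders.
    bazel cquery \
      --experimental_cc_shared_library \
      --@local_config_cuda//:enable_cuda \
      --@local_config_cuda//cuda:include_cuda_libs=false \
      "somepath(//tensorflow/tools/pip_package:wheel, @local_config_cuda//cuda:cuda_headers)"
    ```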
  9. ci/official/containers/ml_build/setup.sources.cudnn.sh

    export DEBIAN_FRONTEND=noninteractive
    
    # Fetch the NVIDIA key.
    apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub;
    
    # Set up sources for NVIDIA CUDNN.
    cat >/etc/apt/sources.list.d/nvidia.list <<SOURCES
    # NVIDIA
    deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /
    
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Tue Feb 18 20:42:21 UTC 2025
    - 1.2K bytes
    - Viewed (0)
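    After the source list is written, the package index still has to be refreshed before cuDNN can be installed. A minimal sketch of the follow-up steps; the package name is an assumption, so verify it against the repository first:

    ```bash
    # Refresh metadata from the newly added NVIDIA repository, then install cuDNN.
    # libcudnn9-cuda-12 is an assumed package name; check with `apt-cache search cudnn`.
    apt-get update
    apt-get install -y libcudnn9-cuda-12
    ```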
  10. ci/official/containers/ml_build/README.md

    WIP ML Build Docker container for ML repositories (Tensorflow, JAX and XLA).
    
    This container branches off from
    /tensorflow/tools/tf_sig_build_dockerfiles/. However, since
    hermetic CUDA and hermetic Python are now available for Tensorflow, a lot of the
    requirements installed on the original container can be removed to reduce the
    footprint of the container and make it more reusable across different ML
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Tue Sep 24 20:45:58 UTC 2024
    - 416 bytes
    - Viewed (0)