Results 1 - 10 of 22 for GPU (1.99 sec)

  1. .bazelrc

    # CUDA WHEEL
    test:linux_cuda_wheel_test_filters --test_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
    test:linux_cuda_wheel_test_filters --build_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Dec 26 23:20:26 UTC 2025
    - 56.8K bytes
    - Viewed (0)
  2. .github/bot_config.yml

    cuda_comment: >
       From the template it looks like you are installing **TensorFlow** (TF) prebuilt binaries:
          * For TF-GPU - See point 1
          * For TF-CPU - See point 2
       -----------------------------------------------------------------------------------------------
       
       **1. Installing **TensorFlow-GPU** (TF) prebuilt binaries**
       
       
       Make sure you are using compatible TF and CUDA versions.
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Mon Jun 30 16:38:59 UTC 2025
    - 4K bytes
    - Viewed (1)
  3. README.md

    [pip package](https://www.tensorflow.org/install/pip), to
    [enable GPU support](https://www.tensorflow.org/install/gpu), use a
    [Docker container](https://www.tensorflow.org/install/docker), and
    [build from source](https://www.tensorflow.org/install/source).
    
    To install the current release, which includes support for
    [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
    Windows)*:
    
    ```
    $ pip install tensorflow
    ```
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Jul 18 14:09:03 UTC 2025
    - 11.6K bytes
    - Viewed (0)
  4. ci/official/envs/linux_x86_cuda

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
    TFCI_BUILD_PIP_PACKAGE_WHEEL_NAME_ARG="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ARGS="--gpus all"
    TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
    # TODO: Set back to 610M once the wheel size is fixed.
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Tue Feb 18 22:52:46 UTC 2025
    - 1.1K bytes
    - Viewed (0)
  5. configure.py

            (environ_cp.get('TF_NEED_ROCM', None) == '1')):
          test_and_build_filters += ['-no_windows_gpu', '-no_gpu']
        else:
          test_and_build_filters.append('-gpu')
      elif is_macos():
        test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded']
      elif is_linux():
        if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
            (environ_cp.get('TF_NEED_ROCM', None) == '1')):
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Wed Apr 30 15:18:54 UTC 2025
    - 48.3K bytes
    - Viewed (0)
  6. RELEASE.md

            `XNNPACK` delegate automatically when the model has a `fp32` operation.
    *   GPU
        *   Allow GPU acceleration starting with internal graph nodes
        *   Experimental support for quantized models with the Android GPU delegate
        *   Add GPU delegate whitelist.
        *   Rename GPU whitelist -> compatibility (list).
        *   Improve GPU compatibility list entries from crash reports.
    *   NNAPI
        *   Set default value for
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Tue Oct 28 22:27:41 UTC 2025
    - 740.4K bytes
    - Viewed (3)
  7. tensorflow/c/c_api_experimental.cc

      // threadpool of GPU event mgr, as that can trigger more callbacks to be
      // scheduled on that same threadpool, causing a deadlock in cases where the
      // caller of event_mgr->ThenExecute() blocks on the completion of the callback
      // (as in the case of ConstOp kernel creation on GPU, which involves copying a
      // CPU tensor to GPU).
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Sat Oct 04 05:55:32 UTC 2025
    - 29.4K bytes
    - Viewed (0)
  8. ci/official/wheel.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Mon Mar 03 17:29:53 UTC 2025
    - 3.8K bytes
    - Viewed (0)
  9. ci/official/utilities/rename_and_verify_wheels.sh

      if [[ "$TFCI_PYTHON_VERSION" == "3.13" ]]; then
        "$python" -m pip install numpy==1.26.4
      else
        "$python" -m pip install numpy==1.26.0
      fi
    fi
    if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then
      echo "Checking to make sure tensorflow[and-cuda] is installable..."
      "$python" -m pip install "$(echo *.whl)[and-cuda]" $TFCI_PYTHON_VERIFY_PIP_INSTALL_ARGS
    else
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Mon Sep 22 21:39:32 UTC 2025
    - 4.4K bytes
    - Viewed (0)
  10. .github/workflows/build.yml

            with:
              api-level: ${{ matrix.api-level }}
              arch: ${{ matrix.api-level == '34' && 'x86_64' || 'x86' }}
              force-avd-creation: false
              emulator-options: -no-window -gpu swiftshader_indirect -noaudio -no-boot-anim -camera-back none
              disable-animations: false
              script: echo "Generated AVD snapshot for caching."
    
          - name: Run Tests
    Registered: Fri Dec 26 11:42:13 UTC 2025
    - Last Modified: Fri Dec 12 04:49:37 UTC 2025
    - 18.6K bytes
    - Viewed (0)
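The bot comment in result 2 tells users to make sure their TensorFlow and CUDA versions are compatible. A minimal sketch of one way to check what a prebuilt wheel was built against, assuming a TensorFlow 2.x install (the exact keys in the build-info dict may vary by version):

```
# Compare the CUDA/cuDNN versions TensorFlow was built against with what is
# installed locally; mismatches are a common cause of GPU setup issues.
import tensorflow as tf

print("TensorFlow:", tf.__version__)
build_info = tf.sysconfig.get_build_info()  # build-time settings (TF 2.x)
print("Built with CUDA:", build_info.get("cuda_version", "n/a"))
print("Built with cuDNN:", build_info.get("cudnn_version", "n/a"))
```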
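Result 3's README installs the current release with `pip install tensorflow`; a quick way to confirm that the installed build can actually see a CUDA-enabled GPU (standard TensorFlow 2.x API, output depends on the local driver and CUDA setup):

```
# List the physical GPUs visible to TensorFlow; an empty list usually means
# the CUDA driver or toolkit is missing or incompatible with this build.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("Num GPUs available:", len(gpus))
for gpu in gpus:
    print(gpu)
```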