Results 1 - 10 of 29 for GPU (0.75 sec)

  1. .bazelrc

    # CUDA WHEEL
    test:linux_cuda_wheel_test_filters --test_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
    test:linux_cuda_wheel_test_filters --build_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Dec 26 23:20:26 UTC 2025
    - 56.8K bytes
    - Viewed (0)
  2. .github/bot_config.yml

    cuda_comment: >
       From the template it looks like you are installing **TensorFlow** (TF) prebuilt binaries:
          * For TF-GPU - See point 1
          * For TF-CPU - See point 2
       -----------------------------------------------------------------------------------------------
       
       **1. Installing **TensorFlow-GPU** (TF) prebuilt binaries**
       
       
       Make sure you are using compatible TF and CUDA versions.
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Mon Jun 30 16:38:59 UTC 2025
    - 4K bytes
    - Viewed (1)
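
    The bot comment above hinges on matching TF and CUDA versions. As an illustrative aside (not part of the indexed file), one way to check which CUDA/cuDNN versions a prebuilt TensorFlow binary was compiled against is `tf.sysconfig.get_build_info()`; the exact dictionary keys shown below are assumptions and can vary between releases:

    ```
    # Illustrative only: report the CUDA/cuDNN versions a prebuilt
    # TensorFlow wheel was built against. The key names ('is_cuda_build',
    # 'cuda_version', 'cudnn_version') are assumptions and may differ
    # between releases, hence the defensive .get() calls.
    import tensorflow as tf

    build_info = tf.sysconfig.get_build_info()
    print("Built with CUDA:", build_info.get("is_cuda_build", False))
    print("CUDA version:", build_info.get("cuda_version", "n/a"))
    print("cuDNN version:", build_info.get("cudnn_version", "n/a"))
    ```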
  3. README.md

    [pip package](https://www.tensorflow.org/install/pip), to
    [enable GPU support](https://www.tensorflow.org/install/gpu), use a
    [Docker container](https://www.tensorflow.org/install/docker), and
    [build from source](https://www.tensorflow.org/install/source).
    
    To install the current release, which includes support for
    [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
    Windows)*:
    
    ```
    $ pip install tensorflow
    ```
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Jul 18 14:09:03 UTC 2025
    - 11.6K bytes
    - Viewed (0)
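
    Following on from the README snippet above: after `pip install tensorflow`, a common sanity check is to confirm that the GPU build can actually see a CUDA device. A minimal sketch, assuming a working driver and CUDA setup (an empty list usually means a CPU-only build or an incompatible driver):

    ```
    # Minimal sketch: list the GPUs TensorFlow can see after installation.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"Num GPUs available: {len(gpus)}")
    for gpu in gpus:
        print("Found device:", gpu.name)
    ```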
  4. CONTRIBUTING.md

        and
        [GPU developer Dockerfile](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/devel-gpu.Dockerfile)
        for the required packages. Alternatively, use the said
        [tensorflow/build Docker images](https://hub.docker.com/r/tensorflow/build)
        (`tensorflow/tensorflow:devel` and `tensorflow/tensorflow:devel-gpu` are no
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Sat Jan 11 04:47:59 UTC 2025
    - 15.9K bytes
    - Viewed (0)
  5. ci/official/envs/linux_x86_cuda

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
    TFCI_BUILD_PIP_PACKAGE_WHEEL_NAME_ARG="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ARGS="--gpus all"
    TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
    # TODO: Set back to 610M once the wheel size is fixed.
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Tue Feb 18 22:52:46 UTC 2025
    - 1.1K bytes
    - Viewed (0)
  6. ci/official/libtensorflow.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Fri Jan 24 20:17:08 UTC 2025
    - 2K bytes
    - Viewed (0)
  7. configure.py

            (environ_cp.get('TF_NEED_ROCM', None) == '1')):
          test_and_build_filters += ['-no_windows_gpu', '-no_gpu']
        else:
          test_and_build_filters.append('-gpu')
      elif is_macos():
        test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded']
      elif is_linux():
        if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
            (environ_cp.get('TF_NEED_ROCM', None) == '1')):
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Wed Apr 30 15:18:54 UTC 2025
    - 48.3K bytes
    - Viewed (0)
  8. RELEASE.md

            `XNNPACK` delegate automatically when the model has a `fp32` operation.
    *   GPU
        *   Allow GPU acceleration starting with internal graph nodes
        *   Experimental support for quantized models with the Android GPU delegate
        *   Add GPU delegate whitelist.
        *   Rename GPU whitelist -> compatibility (list).
        *   Improve GPU compatibility list entries from crash reports.
    *   NNAPI
        *   Set default value for
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Tue Oct 28 22:27:41 UTC 2025
    - 740.4K bytes
    - Viewed (3)
  9. SECURITY.md

    ### Hardware attacks
    
    Physical GPUs or TPUs can also be the target of attacks. [Published
    research](https://scholar.google.com/scholar?q=gpu+side+channel) shows that it
    might be possible to use side channel attacks on the GPU to leak data from other
    running models or processes in the same system. GPUs can also have
    implementation bugs that might allow attackers to leave malicious code running
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Wed Oct 16 16:10:43 UTC 2024
    - 9.6K bytes
    - Viewed (0)
  10. tensorflow/c/c_api_experimental.cc

      // threadpool of GPU event mgr, as that can trigger more callbacks to be
      // scheduled on that same threadpool, causing a deadlock in cases where the
      // caller of event_mgr->ThenExecute() blocks on the completion of the callback
      // (as in the case of ConstOp kernel creation on GPU, which involves copying a
      // CPU tensor to GPU).
    - Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Sat Oct 04 05:55:32 UTC 2025
    - 29.4K bytes
    - Viewed (0)
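
    The comment in the snippet above describes a classic self-deadlock: code running on a thread pool blocks on work that can only be executed by that same pool. The following is a generic Python illustration of the pattern, not TensorFlow's actual event manager; the single-worker pool and the timeout are assumptions chosen so the demo terminates instead of hanging:

    ```
    # Generic illustration of the deadlock described above: a task running
    # on a pool blocks waiting for another task submitted to the same pool.
    # With only one worker, the inner task cannot start until the outer one
    # returns, so the wait would never finish without the timeout.
    import concurrent.futures

    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def outer():
        inner = pool.submit(lambda: "done")  # queued behind us on the same pool
        try:
            return inner.result(timeout=2)   # blocks the only worker thread
        except concurrent.futures.TimeoutError:
            return "deadlocked: inner task never got a worker"

    print(pool.submit(outer).result())
    pool.shutdown()
    ```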