Results 1 - 10 of 30 for gpu (0.06 sec)

  1. .bazelrc

    # CUDA WHEEL
    test:linux_cuda_wheel_test_filters --test_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310
    test:linux_cuda_wheel_test_filters --build_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310
    - Last Modified: Mon Oct 28 22:02:31 UTC 2024
    - 51.3K bytes
  2. ci/official/containers/linux_arm64/devel.usertools/aarch64.bazelrc

    # bazel test invocation as normal.
    test:nonpip_filters --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310
    test:nonpip_filters --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310
    - Last Modified: Fri Jul 12 20:16:57 UTC 2024
    - 5.7K bytes
  3. ci/official/containers/linux_arm64/devel.usertools/aarch64_clang.bazelrc

    # bazel test invocation as normal.
    test:nonpip_filters --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310
    test:nonpip_filters --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310
    - Last Modified: Fri Jul 12 20:16:57 UTC 2024
    - 6.2K bytes
  4. .github/bot_config.yml

    cuda_comment: >
       From the template it looks like you are installing **TensorFlow** (TF) prebuilt binaries:
          * For TF-GPU - See point 1
          * For TF-CPU - See point 2
       -----------------------------------------------------------------------------------------------
       
       **1. Installing **TensorFlow-GPU** (TF) prebuilt binaries**
       
       
       Make sure you are using compatible TF and CUDA versions.
    - Last Modified: Mon Jul 15 05:00:54 UTC 2024
    - 4K bytes
  5. ci/official/utilities/rename_and_verify_wheels.sh

    # VERY basic check to ensure the [and-cuda] package variant is installable.
    # Checks TFCI_BAZEL_COMMON_ARGS for "gpu" or "cuda", implying that the test is
    # relevant. All of the GPU test machines have CUDA installed via other means,
    # so I am not sure how to verify that the dependencies themselves are valid for
    # the moment.
    if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then
      echo "Checking to make sure tensorflow[and-cuda] is installable..."
    - Last Modified: Wed Oct 02 21:18:17 UTC 2024
    - 4.3K bytes
  6. CONTRIBUTING.md

        and
        [GPU developer Dockerfile](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/devel-gpu.Dockerfile)
        for the required packages. Alternatively, use the said
        [tensorflow/build Docker images](https://hub.docker.com/r/tensorflow/build)
        (`tensorflow/tensorflow:devel` and `tensorflow/tensorflow:devel-gpu` are no
    - Last Modified: Wed Oct 23 06:20:12 UTC 2024
    - 15.9K bytes
  7. ci/official/envs/linux_x86_cuda_build

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
    TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ARGS="--gpus all"
    TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
    - Last Modified: Mon Oct 28 17:57:41 UTC 2024
    - 1.1K bytes
  8. tensorflow/c/eager/dlpack.cc

      switch (ctx.device_type) {
        case DLDeviceType::kDLCPU:
          return "CPU:0";
        case DLDeviceType::kDLCUDA:
          return absl::StrCat("GPU:", ctx.device_id);
        case DLDeviceType::kDLROCM:
          return absl::StrCat("GPU:", ctx.device_id);
        default:
          return absl::nullopt;
      }
    }
    
    // Converts DLPack data type to TF_DATATYPE.
    - Last Modified: Sat Oct 12 05:11:17 UTC 2024
    - 12.9K bytes
  9. ci/official/libtensorflow.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    - Last Modified: Fri Jan 19 19:07:48 UTC 2024
    - 1.5K bytes
  10. RELEASE.md

            `XNNPACK` delegate automatically when the model has a `fp32` operation.
    *   GPU
        *   Allow GPU acceleration starting with internal graph nodes
        *   Experimental support for quantized models with the Android GPU delegate
        *   Add GPU delegate whitelist.
        *   Rename GPU whitelist -> compatibility (list).
        *   Improve GPU compatibility list entries from crash reports.
    *   NNAPI
        *   Set default value for
    - Last Modified: Tue Oct 22 14:33:53 UTC 2024
    - 735.3K bytes
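
A few of these hits can be tied together with short usage sketches.

The test:linux_cuda_wheel_test_filters and test:nonpip_filters entries in results 1-3 are named Bazel configs: the --test_tag_filters and --build_tag_filters values they carry only take effect when a matching --config= flag is passed to bazel test. A minimal sketch of such an invocation, assuming the target pattern //tensorflow/... (which is not taken from these files):

    # Illustrative invocation: picks up the GPU/CUDA tag filters defined in .bazelrc (result 1).
    # The target pattern //tensorflow/... is an assumption, not part of the config file.
    bazel test --config=linux_cuda_wheel_test_filters //tensorflow/...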
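
Result 5's comment describes a very basic check that the tensorflow[and-cuda] package variant is installable whenever TFCI_BAZEL_COMMON_ARGS mentions "gpu" or "cuda". A rough sketch of what such a check can look like, assuming a WHEEL_PATH variable and a Python import probe that are not taken from the actual script:

    # Sketch only, not the real rename_and_verify_wheels.sh logic:
    # install the wheel with its CUDA extras, then confirm TensorFlow can see a GPU.
    if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then
      python3 -m pip install "${WHEEL_PATH}[and-cuda]"   # WHEEL_PATH is hypothetical
      python3 -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'
    fi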
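
Results 6 and 7 both point at GPU-enabled containers: CONTRIBUTING.md refers to the tensorflow/tensorflow:devel-gpu development image, and the linux_x86_cuda_build environment passes "--gpus all" to Docker through TFCI_DOCKER_ARGS. A hedged example of running such a container locally, assuming a source checkout mounted at /tf (the mount and working directory are not from these files):

    # Sketch: start a GPU-enabled TensorFlow development container.
    # --gpus all requires the NVIDIA Container Toolkit on the host.
    docker run --gpus all -it -v "$PWD":/tf -w /tf tensorflow/tensorflow:devel-gpu bash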