Results 11 - 20 of 52 for GPU (0.04 sec)

  1. .github/ISSUE_TEMPLATE/tensorflow_issue_template.yaml

        attributes:
          label: GCC/compiler version
          description: If compiling from source
      - type: input
        id: Cuda
        attributes:
          label: CUDA/cuDNN version
      - type: input
        id: Gpu
        attributes:
          label: GPU model and memory
          description: If compiling from source
      - type: textarea
        id: what-happened
        attributes:
          label: Current behavior?
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Wed Jun 28 18:25:42 UTC 2023
    - 3.7K bytes
    - Viewed (0)
  2. ci/official/envs/linux_x86_cuda_build

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
    TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ARGS="--gpus all"
    TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Mon Oct 28 17:57:41 UTC 2024
    - 1.1K bytes
    - Viewed (0)
  3. RELEASE.md

            `XNNPACK` delegate automatically when the model has a `fp32` operation.
    *   GPU
        *   Allow GPU acceleration starting with internal graph nodes
        *   Experimental support for quantized models with the Android GPU delegate
        *   Add GPU delegate whitelist.
        *   Rename GPU whitelist -> compatibility (list).
        *   Improve GPU compatibility list entries from crash reports.
    *   NNAPI
        *   Set default value for
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Tue Oct 22 14:33:53 UTC 2024
    - 735.3K bytes
    - Viewed (0)
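    The "GPU delegate" in these notes is applied through the TFLite interpreter. A minimal C++ sketch of that flow, assuming the Android/OpenCL delegate header (tensorflow/lite/delegates/gpu/delegate.h) and a placeholder model path; it illustrates the API, not the exact code behind the release notes:

    #include <memory>

    #include "tensorflow/lite/delegates/gpu/delegate.h"
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    // Loads a model, attaches the GPU delegate, and runs one inference.
    // "model.tflite" is a placeholder path.
    bool RunWithGpuDelegate() {
      auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
      if (model == nullptr) return false;

      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
        return false;
      }

      // Hand the graph to the GPU delegate; ops it cannot run stay on CPU.
      TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
      TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(&options);
      bool ok = interpreter->ModifyGraphWithDelegate(delegate) == kTfLiteOk &&
                interpreter->AllocateTensors() == kTfLiteOk &&
                interpreter->Invoke() == kTfLiteOk;

      TfLiteGpuDelegateV2Delete(delegate);
      return ok;
    }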
  4. tensorflow/c/eager/c_api_test.cc

      TFE_Op* matmul = MatMulOp(ctx, m, m);
    
      // Disable the test if no GPU is present.
      string gpu_device_name;
      if (GetDeviceName(ctx, &gpu_device_name, "GPU")) {
        TFE_OpSetDevice(matmul, "GPU:0", status);
        ASSERT_TRUE(TF_GetCode(status) == TF_OK) << TF_Message(status);
        const char* device_name = TFE_OpGetDevice(matmul, status);
        ASSERT_TRUE(strstr(device_name, "GPU:0") != nullptr);
    
        TFE_OpSetDevice(matmul, "CPU:0", status);
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Thu Aug 03 20:50:20 UTC 2023
    - 94.6K bytes
    - Viewed (0)
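    For context, the GPU check in this test can be reproduced with only public TF C API calls; a standalone sketch (error handling trimmed, test helpers such as MatMulOp and GetDeviceName not reproduced):

    #include <cstdio>
    #include <cstring>

    #include "tensorflow/c/c_api.h"
    #include "tensorflow/c/eager/c_api.h"

    // Creates an eager context, lists its devices, and reports whether any of
    // them is a GPU -- roughly what the test does before pinning the matmul
    // to "GPU:0".
    int main() {
      TF_Status* status = TF_NewStatus();
      TFE_ContextOptions* opts = TFE_NewContextOptions();
      TFE_Context* ctx = TFE_NewContext(opts, status);
      TFE_DeleteContextOptions(opts);

      TF_DeviceList* devices = TFE_ContextListDevices(ctx, status);
      bool has_gpu = false;
      for (int i = 0; i < TF_DeviceListCount(devices); ++i) {
        const char* type = TF_DeviceListType(devices, i, status);
        if (type != nullptr && std::strcmp(type, "GPU") == 0) {
          std::printf("found %s\n", TF_DeviceListName(devices, i, status));
          has_gpu = true;
        }
      }

      TF_DeleteDeviceList(devices);
      TFE_DeleteContext(ctx);
      TF_DeleteStatus(status);
      return has_gpu ? 0 : 1;
    }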
  5. tensorflow/c/eager/dlpack.cc

      switch (ctx.device_type) {
        case DLDeviceType::kDLCPU:
          return "CPU:0";
        case DLDeviceType::kDLCUDA:
          return absl::StrCat("GPU:", ctx.device_id);
        case DLDeviceType::kDLROCM:
          return absl::StrCat("GPU:", ctx.device_id);
        default:
          return absl::nullopt;
      }
    }
    
    // Converts DLPack data type to TF_DATATYPE.
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Sat Oct 12 05:11:17 UTC 2024
    - 12.9K bytes
    - Viewed (0)
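    The switch above collapses both CUDA and ROCm devices onto TensorFlow's "GPU:<id>" device names. A self-contained sketch of the same mapping, using a stand-in enum rather than the real dlpack.h definitions:

    #include <optional>
    #include <string>

    // Stand-ins for the DLPack device types used above; the real enum lives
    // in dlpack.h and has more members.
    enum class DeviceType { kCPU, kCUDA, kROCm, kOther };

    struct Device {
      DeviceType device_type;
      int device_id;
    };

    // CPU maps to "CPU:0", CUDA and ROCm both map to "GPU:<id>", and anything
    // else has no TensorFlow device name.
    std::optional<std::string> DeviceNameFromDevice(const Device& dev) {
      switch (dev.device_type) {
        case DeviceType::kCPU:
          return "CPU:0";
        case DeviceType::kCUDA:
        case DeviceType::kROCm:
          return "GPU:" + std::to_string(dev.device_id);
        default:
          return std::nullopt;
      }
    }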
  6. ci/official/libtensorflow.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Fri Jan 19 19:07:48 UTC 2024
    - 1.5K bytes
    - Viewed (0)
  7. ci/official/envs/linux_x86_cuda

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
    TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ARGS="--gpus all"
    TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Mon Oct 14 23:45:36 UTC 2024
    - 1K bytes
    - Viewed (0)
  8. SECURITY.md

    ### Hardware attacks
    
    Physical GPUs or TPUs can also be the target of attacks. [Published
    research](https://scholar.google.com/scholar?q=gpu+side+channel) shows that it
    might be possible to use side channel attacks on the GPU to leak data from other
    running models or processes in the same system. GPUs can also have
    implementation bugs that might allow attackers to leave malicious code running
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Wed Oct 16 16:10:43 UTC 2024
    - 9.6K bytes
    - Viewed (0)
  9. tensorflow/c/c_api_experimental.cc

      // threadpool of GPU event mgr, as that can trigger more callbacks to be
      // scheduled on that same threadpool, causing a deadlock in cases where the
      // caller of event_mgr->ThenExecute() blocks on the completion of the callback
      // (as in the case of ConstOp kernel creation on GPU, which involves copying a
      // CPU tensor to GPU).
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Sat Oct 12 16:27:48 UTC 2024
    - 29.5K bytes
    - Viewed (0)
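    The comment describes a classic self-deadlock: work running on a single-threaded pool must not block on other work queued to that same pool. A small generic C++ sketch of the pattern (toy one-thread pool with illustrative names, not the TF event manager):

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <future>
    #include <mutex>
    #include <thread>

    // A one-thread "event manager" standing in for the threadpool in the
    // comment above.
    class SingleThreadPool {
     public:
      SingleThreadPool() : worker_([this] { Loop(); }) {}
      ~SingleThreadPool() {
        { std::lock_guard<std::mutex> lock(mu_); done_ = true; }
        cv_.notify_all();
        worker_.join();
      }
      void ThenExecute(std::function<void()> fn) {
        { std::lock_guard<std::mutex> lock(mu_); queue_.push_back(std::move(fn)); }
        cv_.notify_one();
      }

     private:
      void Loop() {
        for (;;) {
          std::function<void()> fn;
          {
            std::unique_lock<std::mutex> lock(mu_);
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            if (queue_.empty()) return;
            fn = std::move(queue_.front());
            queue_.pop_front();
          }
          fn();
        }
      }
      std::mutex mu_;
      std::condition_variable cv_;
      std::deque<std::function<void()>> queue_;
      bool done_ = false;
      std::thread worker_;
    };

    int main() {
      SingleThreadPool event_mgr;
      std::promise<void> done;
      // Safe: the blocking wait happens on the main thread. If this callback
      // instead ran on event_mgr's only thread and waited on another task
      // queued to event_mgr, the pool could never drain -- the deadlock the
      // comment warns about.
      event_mgr.ThenExecute([&] { done.set_value(); });
      done.get_future().wait();
      return 0;
    }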
  10. ci/official/wheel.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Mon Oct 14 23:45:36 UTC 2024
    - 2.2K bytes
    - Viewed (0)