Results 1 - 10 of 283 for gpus (0.04 sec)

  1. tensorflow/compiler/mlir/tensorflow/ir/tf_ops_device_helper.h

    namespace mlir {
    
    class Operation;
    
    namespace TF {
    
    class RuntimeDevices;
    
    // Returns true if at least one GPU device is available at runtime.
    bool CanUseGpuDevice(const RuntimeDevices &devices);
    
    // Returns true if all of the GPUs available at runtime support TensorCores
    // (NVIDIA compute capability >= 7.0).
    bool CanUseTensorCores(const RuntimeDevices &devices);
    
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri Nov 12 21:57:12 UTC 2021
    - 1.4K bytes
    - Viewed (0)
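    The header above only declares the two predicates. As a minimal, self-contained
    sketch (FakeRuntimeDevices and MaybeRewriteForGpu are hypothetical stand-ins,
    not the TensorFlow API), a pass might gate a GPU-only rewrite on the first
    predicate like this:

    #include <string>
    #include <vector>

    // Hypothetical stand-in for TF's RuntimeDevices; the real type is
    // populated from the devices registered with the runtime.
    struct FakeRuntimeDevices {
      std::vector<std::string> gpu_names;
    };

    // Mirrors the declared contract: true if at least one GPU is available.
    bool CanUseGpuDevice(const FakeRuntimeDevices &devices) {
      return !devices.gpu_names.empty();
    }

    // A pass would consult the predicate before committing to a GPU-specific
    // rewrite, keeping the portable lowering otherwise.
    void MaybeRewriteForGpu(const FakeRuntimeDevices &devices) {
      if (!CanUseGpuDevice(devices)) return;  // keep the portable lowering
      // ... apply the GPU-only pattern ...
    }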
  2. ci/official/envs/linux_x86_cuda

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
    TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ARGS="--gpus all"
    TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri Jan 19 00:24:30 UTC 2024
    - 1K bytes
    - Viewed (0)
  3. tensorflow/compiler/mlir/tensorflow/ir/tf_ops_device_helper.cc

      return device.type == ::tensorflow::DEVICE_GPU;
    }
    
    }  // namespace
    
    // Returns true if at least one GPU device is available at runtime.
    bool CanUseGpuDevice(const RuntimeDevices &devices) {
      return llvm::any_of(devices.device_names(), IsGpuDevice);
    }
    
    // Returns true if all of the GPUs available at runtime support TensorCores
    // (NVIDIA compute capability >= 7.0).
    bool CanUseTensorCores(const RuntimeDevices &devices) {
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue Jun 21 08:41:18 UTC 2022
    - 2.4K bytes
    - Viewed (0)
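    The snippet cuts off at the body of CanUseTensorCores. A plausible shape for
    that check, sketched here with a hypothetical GpuInfo record rather than
    TensorFlow's actual RuntimeDevices metadata, pairs the documented >= 7.0
    requirement with the standard-library equivalent of llvm::all_of:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Hypothetical per-device record; cc_major stands in for the NVIDIA
    // compute capability major version tracked per GPU.
    struct GpuInfo {
      std::string name;
      int cc_major;
    };

    // TensorCores require compute capability >= 7.0 (Volta and newer), so
    // the check quantifies over all runtime GPUs, not just one.
    bool AllGpusSupportTensorCores(const std::vector<GpuInfo> &gpus) {
      return std::all_of(gpus.begin(), gpus.end(),
                         [](const GpuInfo &g) { return g.cc_major >= 7; });
    }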
  4. CITATION.cff

    shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous “parameter server” designs the management of shared state is built into the system, TensorFlow enables developers to...
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Mon Sep 06 15:26:23 UTC 2021
    - 3.5K bytes
    - Viewed (0)
  5. tensorflow/compiler/mlir/tensorflow/tests/layout_optimization_to_nhwc.mlir

      // it to NCHW before padding, and does all computations in NCHW (this is the
      // default setup for ResNet model trained in fp32 on GPU).
      //
      // To be able to use Tensor Cores on latest NVIDIA GPUs this model has to be
      // converted to NHWC data format.
    
      // Padding in spatial dimension (NCHW)
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Mon Oct 30 06:52:55 UTC 2023
    - 7.3K bytes
    - Viewed (0)
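    The comment describes converting a ResNet-style NCHW model to NHWC so that
    Tensor Cores can be used. Independent of the MLIR pass itself, the dimension
    shuffle involved is just the permutation {0, 2, 3, 1}, as in this small sketch:

    #include <array>
    #include <cstdint>

    // NCHW -> NHWC: move the channel dimension from position 1 to
    // position 3, i.e. permute the dimensions with {0, 2, 3, 1}.
    std::array<std::int64_t, 4> NchwToNhwc(
        const std::array<std::int64_t, 4> &nchw) {
      return {nchw[0], nchw[2], nchw[3], nchw[1]};
    }
    // Example: a ResNet input of shape {8, 3, 224, 224} becomes
    // {8, 224, 224, 3}.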
  6. SECURITY.md

    ### Hardware attacks
    
    Physical GPUs or TPUs can also be the target of attacks. [Published
    research](https://scholar.google.com/scholar?q=gpu+side+channel) shows that it
    might be possible to use side channel attacks on the GPU to leak data from other
    running models or processes in the same system. GPUs can also have
    implementation bugs that might allow attackers to leave malicious code running
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Sun Oct 01 06:06:35 UTC 2023
    - 9.6K bytes
    - Viewed (0)
  7. tensorflow/compiler/jit/xla_device.h

        // the logical on-device shape without padding is used.
        PaddedShapeFn padded_shape_fn;
    
        // Set of devices to use. This controls which of the devices on the given
        // platform will have resources allocated. For GPUs this will be
        // filled from visible_gpu_devices list from session configuration.
        std::optional<std::set<int>> allowed_devices;
      };
    
      // Creates a new XLA Device.
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Feb 21 09:53:30 UTC 2024
    - 13.4K bytes
    - Viewed (0)
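    The allowed_devices field is an optional whitelist of device ordinals. A
    minimal sketch of that contract (DeviceAllowed is a hypothetical helper name,
    not part of the header):

    #include <optional>
    #include <set>

    // An unset optional means "no restriction"; a populated set admits
    // only the listed device ordinals.
    bool DeviceAllowed(const std::optional<std::set<int>> &allowed,
                       int ordinal) {
      return !allowed.has_value() || allowed->count(ordinal) > 0;
    }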
  8. configure.py

            'you want to build with.\nYou can find the compute capability of your '
            'device at: https://developer.nvidia.com/cuda-gpus. Each capability '
            'can be specified as "x.y" or "compute_xy" to include both virtual and'
            ' binary GPU code, or as "sm_xy" to only include the binary '
            'code.\nPlease note that each additional compute capability '
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Mon Jun 10 04:32:44 UTC 2024
    - 53.8K bytes
    - Viewed (1)
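    The prompt text documents three accepted spellings for a CUDA compute
    capability. A small C++ validation sketch covering exactly the quoted forms
    (configure.py's real validation may differ):

    #include <cctype>
    #include <string>

    // Accepts "x.y" or "compute_xy" (virtual PTX plus binary code) and
    // "sm_xy" (binary code only), per the prompt above.
    bool IsValidComputeCapability(const std::string &s) {
      auto is_digit = [](char c) {
        return std::isdigit(static_cast<unsigned char>(c)) != 0;
      };
      if (s.size() == 3 && is_digit(s[0]) && s[1] == '.' && is_digit(s[2]))
        return true;  // "x.y", e.g. "7.5"
      auto two_digits = [&](const std::string &d) {
        return d.size() == 2 && is_digit(d[0]) && is_digit(d[1]);
      };
      if (s.rfind("compute_", 0) == 0) return two_digits(s.substr(8));
      if (s.rfind("sm_", 0) == 0) return two_digits(s.substr(3));
      return false;
    }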
  9. cluster/gce/config-default.sh

    NODE_LOCAL_SSDS_EXT=${NODE_LOCAL_SSDS_EXT:-}
    # Accelerators to be attached to each node. Format "type=<accelerator-type>,count=<accelerator-count>"
    # More information on available GPUs here - https://cloud.google.com/compute/docs/gpus/
    NODE_ACCELERATORS=${NODE_ACCELERATORS:-""}
    export REGISTER_MASTER_KUBELET=${REGISTER_MASTER:-true}
    PREEMPTIBLE_NODE=${PREEMPTIBLE_NODE:-false}
    PREEMPTIBLE_MASTER=${PREEMPTIBLE_MASTER:-false}
    Registered: Sat Jun 15 01:39:40 UTC 2024
    - Last Modified: Sat Mar 16 20:16:32 UTC 2024
    - 26.9K bytes
    - Viewed (0)
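    NODE_ACCELERATORS is documented as
    "type=<accelerator-type>,count=<accelerator-count>". A minimal parsing sketch
    of that format, assuming well-formed input (the accelerator type in the
    example is purely illustrative):

    #include <cstddef>
    #include <string>
    #include <utility>

    // Splits "type=<accelerator-type>,count=<accelerator-count>"; real
    // code would validate both fields before use.
    std::pair<std::string, int> ParseAccelerator(const std::string &spec) {
      const std::size_t comma = spec.find(',');
      const std::string type = spec.substr(5, comma - 5);       // after "type="
      const int count = std::stoi(spec.substr(comma + 1 + 6));  // after "count="
      return {type, count};
    }
    // ParseAccelerator("type=nvidia-tesla-t4,count=2")
    //     -> {"nvidia-tesla-t4", 2}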
  10. tensorflow/compiler/jit/clone_constants_for_better_clustering.cc

      // constant" threshold, if there is one.
      const int kSmallTensorThreshold = 16;
      return total_elements < kSmallTensorThreshold;
    }
    
    // We only clone small constants since we want to avoid increasing memory
    // pressure on GPUs.
    absl::StatusOr<bool> IsSmallConstant(Node* n) {
      if (!n->IsConstant()) {
        return false;
      }
    
      return IsConstantSmall(n);
    }
    
    bool IsInPlaceOp(absl::string_view op_name) {
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue Mar 12 06:33:33 UTC 2024
    - 7.3K bytes
    - Viewed (0)
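    The snippet tests total_elements against a 16-element threshold. As a
    self-contained sketch, the element count behind that test is just the product
    of the constant's dimensions (the step the excerpt does not show; IsShapeSmall
    is a hypothetical name):

    #include <cstdint>
    #include <vector>

    // A constant qualifies for cloning when the product of its dimensions
    // stays under the 16-element cutoff, keeping duplication cheap.
    constexpr std::int64_t kSmallTensorThreshold = 16;

    bool IsShapeSmall(const std::vector<std::int64_t> &dims) {
      std::int64_t total_elements = 1;
      for (std::int64_t d : dims) total_elements *= d;
      return total_elements < kSmallTensorThreshold;
    }
    // Example: a {2, 3} constant (6 elements) is cloned; {8, 8} (64) is not.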