Results 1 - 5 of 5 for wait (0.15 sec)

  1. tensorflow/c/eager/parallel_device/parallel_device_lib.cc

      StatusPtr first_bad_status(nullptr);
    
      for (const auto& dt : device_threads_) {
        StatusPtr async_wait_status(TF_NewStatus());
        dt->AsyncWait(async_wait_status.get());
        // Prefer non-cancelled errors to uncover real failures.
        if (TF_GetCode(async_wait_status.get()) != TF_OK &&
            (first_bad_status == nullptr ||
             TF_GetCode(first_bad_status.get()) == TF_CANCELLED)) {
    C++
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Fri Feb 09 07:47:20 GMT 2024
    - 25.4K bytes
    - Viewed (1)
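
    A note on the pattern in this excerpt: the loop keeps the first error it sees, but lets a later non-cancelled error replace a cancellation, since cancellations usually just mask the root-cause failure. A minimal Python sketch of the same aggregation logic (the status codes here are illustrative stand-ins, not the TF C API):

      # Sketch: collect per-thread statuses, preferring the first
      # non-cancelled error so a real failure is not hidden by the
      # cancellations it triggered. Codes are stand-ins.
      OK, CANCELLED, INTERNAL = "OK", "CANCELLED", "INTERNAL"

      def first_bad_status(statuses):
          first_bad = None
          for code, message in statuses:
              if code != OK and (first_bad is None or first_bad[0] == CANCELLED):
                  first_bad = (code, message)
          return first_bad

      # One thread failed for real; the others were cancelled as a result.
      print(first_bad_status([
          (CANCELLED, "step cancelled"),
          (INTERNAL, "device lost"),
          (CANCELLED, "step cancelled"),
      ]))  # -> ('INTERNAL', 'device lost')
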
  2. configure.py

      proc = subprocess.Popen(
          [environ_cp['PYTHON_BIN_PATH'], paths[0]] + cuda_libraries,
          stdout=subprocess.PIPE,
          env=maybe_encode_env(environ_cp))
    
      if proc.wait():
        # Errors from find_cuda_config.py were sent to stderr.
        print('Asking for detailed CUDA configuration...\n')
        return False
    
      config = dict(
    Python
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Mon Apr 15 18:25:36 GMT 2024
    - 53.8K bytes
    - Viewed (1)
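
    A note on the `proc.wait()` check above: `subprocess.Popen.wait()` blocks until the child exits and returns its exit code, so a nonzero (truthy) value means failure. A self-contained sketch of the same pattern (the child command is an arbitrary stand-in for find_cuda_config.py):

      import subprocess
      import sys

      # wait() returns the child's exit code; nonzero is truthy, so
      # `if proc.wait():` reads as "if the child failed".
      proc = subprocess.Popen(
          [sys.executable, '-c', 'import sys; sys.exit(3)'],
          stdout=subprocess.PIPE)

      if proc.wait():
          print('child failed with exit code', proc.returncode)  # prints 3
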
  3. RELEASE.md

          `experimental_default_delegate_latest_features` to enable all default
          delegate features.
    
    * `tf.data`
        * Add `wait` to `tf.data.Dataset.load`. If `True`, for snapshots written
          with `distributed_save`, it reads the snapshot while it is being written.
          For snapshots written with regular `save`, it waits until the snapshot
          is finished. The default is `False` for backward compatibility. Users of
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Mon Apr 29 19:17:57 GMT 2024
    - 727.7K bytes
    - Viewed (8)
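
    Going by the release note above, the new flag would be used as in the sketch below; the snapshot path is hypothetical, and `wait=True` either reads a `distributed_save` snapshot while it is still being written or blocks until a regular `save` snapshot is finished:

      import tensorflow as tf

      # Hypothetical path to a snapshot produced elsewhere with
      # tf.data.Dataset.save or tf.data.experimental.distributed_save.
      path = '/tmp/my_snapshot'

      # wait defaults to False for backward compatibility (per the note).
      ds = tf.data.Dataset.load(path, wait=True)
      for element in ds.take(3):
          print(element)
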
  4. tensorflow/c/eager/tape.h

      // functions (and hence the tensors they keep alive). Instead, everything
      // is deleted in ~GradientTape. Persistent GradientTapes are useful when
      // users want to compute multiple gradients over the same tape.
      explicit GradientTape(bool persistent) : persistent_(persistent) {}
      ~GradientTape() {
        for (const auto& pair : op_tape_) {
    C++
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Tue Apr 02 12:40:29 GMT 2024
    - 47.2K bytes
    - Viewed (1)
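
    The comment above describes persistent tapes at the C++ layer; the same idea is exposed in Python as `tf.GradientTape(persistent=True)`, which keeps the recorded operations (and the tensors they hold) alive so several gradients can be computed from one tape. A short sketch:

      import tensorflow as tf

      x = tf.constant(3.0)
      with tf.GradientTape(persistent=True) as tape:
          tape.watch(x)  # x is a constant, so watch it explicitly
          y = x * x      # y = x^2
          z = y * y      # z = x^4

      # A persistent tape's resources are only freed when the tape object
      # is deleted, so gradient() may be called more than once.
      print(tape.gradient(y, x).numpy())  # dy/dx = 2x   = 6.0
      print(tape.gradient(z, x).numpy())  # dz/dx = 4x^3 = 108.0
      del tape  # release the tape's resources
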
  5. .bazelrc

    # Attempt to minimize the amount of data transfer between bazel and the remote
    # workers:
    build:rbe_base --remote_download_toplevel
    test:rbe_base --test_env=USER=anon
    
    # TODO(kanglan): Check if we want to merge rbe_linux into rbe_linux_cpu.
    build:rbe_linux --config=rbe_base
    build:rbe_linux --action_env=PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Thu May 02 19:34:20 GMT 2024
    - 52.8K bytes
    - Viewed (2)