Results 1 - 3 of 3 for prop (0.14 sec)

  1. tensorflow/c/c_api_experimental.h

    // from a placeholder node "arg_tensor_enqueue_<tensor_id>".
    //
    // `tensor` is still owned by the caller. This call will be blocked if the queue
    // has reached its capacity, and will be unblocked when the queued tensors again
    // drop below the capacity due to dequeuing.
    //
    // Tensors are dequeued via the corresponding TF dequeue op.
    // TODO(hongm): Add support for `timeout_ms`.
    TF_CAPI_EXPORT extern void TF_EnqueueNamedTensor(TF_Session* session,
    C
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Thu Apr 27 21:07:00 GMT 2023
    - 15.1K bytes
    - Viewed (0)
  2. README.md

    [![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/tensorflow.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow)
    [![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/tensorflow-py.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow-py)
    [![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/44)](https://ossrank.com/p/44)
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Thu Oct 05 15:00:10 GMT 2023
    - 11.9K bytes
    - Viewed (0)
  3. tensorflow/c/eager/parallel_device/parallel_device.cc

      return result;
    }
    
    // Used as an argument to TFE_NewCustomDeviceTensorHandle, indicating how
    // ParallelTensors wrapped in TFE_TensorHandles should be cleaned up once their
    // reference counts drop to zero.
    void ParallelTensorDeallocator(void* data) {
      delete reinterpret_cast<ParallelTensor*>(data);
    }
    
    // Used as an argument to TFE_NewCustomDeviceTensorHandle, for computing the
    C++
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Wed Mar 29 22:05:31 GMT 2023
    - 18.3K bytes
    - Viewed (0)
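
A usage sketch for the TF_EnqueueNamedTensor call described in result 1 follows. The declaration in the snippet is truncated after `TF_Session* session`, so the remaining parameters (a tensor id, the tensor, and a status object) are an assumption, not confirmed by the indexed file, and the error handling is illustrative only.

    // Minimal sketch (not from the indexed file): enqueue one tensor into the
    // TF-managed FIFO queue for tensor_id 0. The parameters after `session`
    // are assumed from the truncated declaration in result 1.
    #include <cstdio>
    #include "tensorflow/c/c_api.h"
    #include "tensorflow/c/c_api_experimental.h"

    void EnqueueExample(TF_Session* session, TF_Tensor* tensor) {
      TF_Status* status = TF_NewStatus();
      // Blocks while the queue is at capacity; unblocks once the corresponding
      // TF dequeue op drains tensors back below capacity. `tensor` stays owned
      // by the caller, per the comment in result 1.
      TF_EnqueueNamedTensor(session, /*tensor_id=*/0, tensor, status);
      if (TF_GetCode(status) != TF_OK) {
        std::fprintf(stderr, "Enqueue failed: %s\n", TF_Message(status));
      }
      TF_DeleteStatus(status);
    }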
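
ParallelTensorDeallocator in result 3 shows the general pattern of pairing type-erased `void*` data with a cleanup callback in a C-style handle API. The sketch below isolates that pattern only; Handle, ReleaseHandle, and ParallelTensorLike are hypothetical stand-ins, not TensorFlow symbols.

    // Hypothetical illustration of the deallocator pattern in result 3: the
    // handle stores type-erased data plus a callback, and the callback casts
    // back to the concrete type before deleting. None of these names are real
    // TensorFlow APIs.
    #include <vector>

    struct ParallelTensorLike {
      std::vector<float> values;
    };

    // Same shape as ParallelTensorDeallocator: recover the type, then delete.
    void ParallelTensorLikeDeallocator(void* data) {
      delete reinterpret_cast<ParallelTensorLike*>(data);
    }

    struct Handle {
      void* data;
      void (*deallocator)(void*);
    };

    // Stand-in for the point where the handle's reference count drops to zero.
    void ReleaseHandle(Handle h) {
      h.deallocator(h.data);
    }

    int main() {
      auto* t = new ParallelTensorLike{{1.0f, 2.0f}};
      Handle h{t, &ParallelTensorLikeDeallocator};  // Ownership moves into the handle.
      ReleaseHandle(h);                             // Calls ParallelTensorLikeDeallocator(t).
      return 0;
    }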