Results 1 - 7 of 7 for TensorHandles (0.12 sec)
tensorflow/c/eager/parallel_device/parallel_device_lib.h
// device-specific executors have scheduled the op.
//
// Accepts inferred shapes for outputs (`expected_output_shapes`), which if
// fully defined will avoid querying the shapes of the underlying
// TensorHandles when ParallelTensor::Shape is called. This allows async
// computation to continue without blocking.
//
// The return status and value is the same as `Execute`.
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Oct 21 04:14:14 UTC 2024 - 13.1K bytes
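The non-blocking pattern this comment describes (pre-supplying fully defined output shapes so a later ParallelTensor::Shape call need not wait on the async computation) can be sketched roughly as follows. This is a minimal sketch only: the StartExecute/Join names come from parallel_device_lib.h, but the argument lists are paraphrased here and may differ between TensorFlow versions.

// Schedule the op asynchronously on every component device (paraphrased call).
parallel_device.StartExecute(context, inputs, "AssignAddVariableOp", attributes,
                             /*expected_max_outputs=*/1, cancellation_manager);
// One shape per expected output; because the shape is fully defined, a later
// ParallelTensor::Shape call can answer without blocking on the TensorHandles.
std::vector<PartialTensorShape> expected_output_shapes{PartialTensorShape({2})};
absl::optional<std::vector<std::unique_ptr<ParallelTensor>>> outputs =
    parallel_device.Join(expected_output_shapes, status);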
tensorflow/c/eager/c_api_experimental.h
// maximum number of in-flight async nodes. Enqueuing additional async ops
// after the limit is reached blocks until some in-flight nodes finish.
// The effect is to bound the memory held by in-flight TensorHandles that are
// referenced by the in-flight nodes.
// A recommended value has not been established.
// A value of 0 removes the limit, which is the behavior of TensorFlow 2.11.
// When is_async is false, the value is ignored.
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Wed Feb 21 22:37:46 UTC 2024 - 39.5K bytes
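A hedged usage sketch of the limit described above, assuming it is the in_flight_nodes_limit argument of the three-argument TFE_NewExecutor overload in c_api_experimental.h (the exact arity has changed across TensorFlow releases; the rest is the stable C eager API).

// Create an async eager context, then bound the number of in-flight async
// nodes (and therefore the memory held by their TensorHandles) to 4.
TF_Status* status = TF_NewStatus();
TFE_ContextOptions* opts = TFE_NewContextOptions();
TFE_ContextOptionsSetAsync(opts, /*enable=*/1);
TFE_Context* ctx = TFE_NewContext(opts, status);
TFE_DeleteContextOptions(opts);

TFE_Executor* executor = TFE_NewExecutor(/*is_async=*/true,
                                         /*enable_streaming_enqueue=*/true,
                                         /*in_flight_nodes_limit=*/4);  // 0 = unlimited (TF 2.11 behavior)
TFE_ContextSetExecutorForThread(ctx, executor);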
tensorflow/c/eager/immediate_execution_context.h
  DEVICE_PLACEMENT_SILENT_FOR_INT32 = 3,
};
// LINT.ThenChange(//tensorflow/c/eager/c_api.h)

// Abstract interface to a context.
//
// A context is responsible for creating key objects such as Tensors,
// TensorHandles & Operations.
class ImmediateExecutionContext : public AbstractContext {
 public:
  // Optimized scalar creation functions
  virtual AbstractTensorInterface* CreateInt64Scalar(int64_t value) = 0;
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Sat Oct 12 05:11:17 UTC 2024 - 12.3K bytes
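A minimal sketch of calling the optimized scalar-creation entry point declared above; the helper function is hypothetical and only compiles inside the TensorFlow tree against immediate_execution_context.h.

// Hypothetical helper: build an int64 scalar through the context's optimized
// path rather than the generic tensor-construction route.
tensorflow::AbstractTensorInterface* MakeGlobalStepScalar(
    tensorflow::ImmediateExecutionContext* ctx) {
  // Ownership of the returned tensor follows the context's usual rules.
  return ctx->CreateInt64Scalar(/*value=*/0);
}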
tensorflow/c/eager/parallel_device/parallel_device.cc
    if (absl::holds_alternative<ParallelTensor*>(inputs[i])) {
      std::string message(absl::StrCat(
          "Expected all inputs to TPUReplicatedInput to be non-parallel "
          "TensorHandles. The input ",
          i, " was a parallel tensor (already placed on the parallel device)."));
      TF_SetStatus(status, TF_INVALID_ARGUMENT, message.c_str());
      return result;
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Oct 21 04:14:14 UTC 2024 - 18.3K bytes
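The check above rejects inputs that are already parallel tensors by inspecting a variant-typed input list. A self-contained illustration of the same pattern, using std::variant and hypothetical stand-in types (absl::holds_alternative mirrors std::holds_alternative):

#include <iostream>
#include <string>
#include <variant>
#include <vector>

struct SingleHandle {};    // stand-in for a plain TFE_TensorHandle*
struct ParallelHandle {};  // stand-in for a ParallelTensor* already on the parallel device

using Input = std::variant<SingleHandle, ParallelHandle>;

// Returns false and fills *error if any input is already a parallel tensor,
// mirroring the TPUReplicatedInput check above.
bool ValidateReplicatedInputs(const std::vector<Input>& inputs, std::string* error) {
  for (size_t i = 0; i < inputs.size(); ++i) {
    if (std::holds_alternative<ParallelHandle>(inputs[i])) {
      *error = "Expected all inputs to be non-parallel handles; input " +
               std::to_string(i) + " was already placed on the parallel device.";
      return false;
    }
  }
  return true;
}

int main() {
  std::vector<Input> inputs{SingleHandle{}, ParallelHandle{}};
  std::string error;
  if (!ValidateReplicatedInputs(inputs, &error)) std::cout << error << "\n";
  return 0;
}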
tensorflow/c/eager/parallel_device/parallel_device_lib.cc
    TF_SetStatus(status, TF_GetCode(first_bad_status.get()),
                 TF_Message(first_bad_status.get()));
    return result;
  }
  // For each output of the original operation, pack the per-device
  // TensorHandles we've computed into a single parallel TensorHandle.
  std::vector<std::unique_ptr<ParallelTensor>> per_device_outputs;
  per_device_outputs.reserve(first_op_output_count);
  for (int i = 0; i < first_op_output_count; ++i) {
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Oct 21 04:14:14 UTC 2024 - 25.9K bytes
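The loop above regroups results from "one vector of outputs per device" into "one parallel tensor per output of the original op". A stand-alone sketch of that regrouping, with plain ints standing in for TFE_TensorHandle components:

#include <utility>
#include <vector>

// per_device_output_tensors[d][i] is output i of the op as computed on device d.
// The result holds, for each output index i, the components from every device,
// ready to be packed into a single parallel tensor.
std::vector<std::vector<int>> GroupOutputsPerIndex(
    const std::vector<std::vector<int>>& per_device_output_tensors,
    int first_op_output_count) {
  std::vector<std::vector<int>> per_output_components;
  per_output_components.reserve(first_op_output_count);
  for (int i = 0; i < first_op_output_count; ++i) {
    std::vector<int> components;
    components.reserve(per_device_output_tensors.size());
    for (const auto& device_outputs : per_device_output_tensors) {
      components.push_back(device_outputs[i]);
    }
    per_output_components.push_back(std::move(components));
  }
  return per_output_components;
}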
tensorflow/c/eager/parallel_device/parallel_device_test.cc
  TensorHandlePtr value_one(FloatTensorHandle(1., status.get()));
  TensorHandlePtr value_two(FloatTensorHandle(2., status.get()));
  {
    // Try to pack two TensorHandles onto a parallel device with a single
    // component.
    ASSERT_EQ(TF_GetCode(status.get()), TF_OK) << TF_Message(status.get());
    std::array<TFE_TensorHandle*, 2> components{value_one.get(),
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Tue Aug 06 23:56:17 UTC 2024 - 29.4K bytes
RELEASE.md
* Removed `autotune_algorithm` from experimental optimization options.
* TF Core:
    * `tf.constant` always creates CPU tensors irrespective of the current device context.
    * Eager `TensorHandles` maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
    * For `tf.Tensor` & `tf.Variable`, `.experimental_ref()` is no longer
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Tue Oct 22 14:33:53 UTC 2024 - 735.3K bytes