Results 1 - 5 of 5 for TensorHandles (0.13 sec)
- tensorflow/c/eager/parallel_device/parallel_device_lib.h
  // device-specific executors have scheduled the op.
  //
  // Accepts inferred shapes for outputs (`expected_output_shapes`), which if
  // fully defined will avoid querying the shapes of the underlying
  // TensorHandles when ParallelTensor::Shape is called. This allows async
  // computation to continue without blocking.
  //
  // The return status and value is the same as `Execute`.
  Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Oct 21 04:14:14 UTC 2024 - 13.1K bytes - Viewed (0)
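The idea in this snippet — supplying fully defined output shapes up front so that a later `Shape` call need not block on the async computation — can be sketched with a minimal, self-contained mock. The type and method names below are illustrative stand-ins, not the actual TensorFlow classes:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

// Hypothetical stand-in for a per-device tensor whose true shape may not be
// known until async execution finishes.
class MockParallelTensor {
 public:
  explicit MockParallelTensor(std::optional<std::vector<int64_t>> expected)
      : expected_shape_(std::move(expected)) {}

  // Mirrors the pattern described in parallel_device_lib.h: if a fully
  // defined shape was supplied up front, return it immediately; otherwise
  // we must wait for the async computation before the shape is known.
  std::vector<int64_t> Shape() {
    if (expected_shape_.has_value()) return *expected_shape_;
    WaitForAsyncComputation();  // would block in a real implementation
    return computed_shape_;
  }

 private:
  void WaitForAsyncComputation() { computed_shape_ = {2, 3}; }

  std::optional<std::vector<int64_t>> expected_shape_;
  std::vector<int64_t> computed_shape_;
};
```

A caller that passes `expected_output_shapes` keeps the fast non-blocking path; omitting them forces the blocking query.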
- tensorflow/c/eager/immediate_execution_context.h
  DEVICE_PLACEMENT_SILENT_FOR_INT32 = 3,
  };
  // LINT.ThenChange(//tensorflow/c/eager/c_api.h)

  // Abstract interface to a context.
  //
  // A context is responsible for creating key objects such as Tensors,
  // TensorHandles & Operations.
  class ImmediateExecutionContext : public AbstractContext {
   public:
    // Optimized scalar creation functions
    virtual AbstractTensorInterface* CreateInt64Scalar(int64_t value) = 0;
  Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Sat Oct 12 05:11:17 UTC 2024 - 12.3K bytes - Viewed (0)
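The snippet describes an abstract-factory pattern: the context is the single creator of key objects, with specialized fast paths like `CreateInt64Scalar` for common scalar dtypes. A minimal sketch of that pattern, using hypothetical mock types rather than the real `AbstractTensorInterface` / `ImmediateExecutionContext`:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>

// Hypothetical stand-in for AbstractTensorInterface.
struct MockTensor {
  virtual ~MockTensor() = default;
  virtual int64_t AsInt64() const = 0;
};

struct MockInt64Scalar : MockTensor {
  explicit MockInt64Scalar(int64_t v) : value(v) {}
  int64_t AsInt64() const override { return value; }
  int64_t value;
};

// Hypothetical stand-in for the abstract context: it owns creation of key
// objects, and exposes a specialized scalar-creation path rather than only
// a generic tensor constructor.
struct MockContext {
  virtual ~MockContext() = default;
  virtual std::unique_ptr<MockTensor> CreateInt64Scalar(int64_t v) = 0;
};

// A concrete context, analogous to an eager-execution implementation.
struct MockEagerContext : MockContext {
  std::unique_ptr<MockTensor> CreateInt64Scalar(int64_t v) override {
    return std::make_unique<MockInt64Scalar>(v);
  }
};
```

Dedicated per-dtype scalar constructors let an implementation skip the shape/dtype dispatch a general-purpose creation path would need.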
- tensorflow/c/eager/parallel_device/parallel_device.cc
  if (absl::holds_alternative<ParallelTensor*>(inputs[i])) {
    std::string message(absl::StrCat(
        "Expected all inputs to TPUReplicatedInput to be non-parallel "
        "TensorHandles. The input ",
        i, " was a parallel tensor (already "
        "placed on the parallel device)."));
    TF_SetStatus(status, TF_INVALID_ARGUMENT, message.c_str());
    return result;
  Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Oct 21 04:14:14 UTC 2024 - 18.3K bytes - Viewed (0)
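The validation pattern in this snippet — inputs held in a variant, and any input that is already a parallel tensor rejected with an index-specific message — can be sketched with `std::variant` (standing in for the `absl` utilities) and hypothetical input types:

```cpp
#include <cassert>
#include <string>
#include <variant>
#include <vector>

// Hypothetical stand-ins for the two input kinds the snippet distinguishes.
struct SingleHandle {};          // an ordinary, non-parallel TensorHandle
struct ParallelTensorHandle {};  // already placed on the parallel device
using Input = std::variant<SingleHandle, ParallelTensorHandle>;

// Sketch of the check: reject any input already placed on the parallel
// device, reporting which index was bad. Returns an empty string when all
// inputs are valid (a real implementation would set a TF_Status instead).
std::string ValidateReplicatedInputs(const std::vector<Input>& inputs) {
  for (size_t i = 0; i < inputs.size(); ++i) {
    if (std::holds_alternative<ParallelTensorHandle>(inputs[i])) {
      return "Expected all inputs to be non-parallel TensorHandles. The "
             "input " + std::to_string(i) + " was a parallel tensor.";
    }
  }
  return "";
}
```

Reporting the offending index, as the original message does, makes the error actionable when an op takes many inputs.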
- tensorflow/c/eager/parallel_device/parallel_device_lib.cc
  TF_SetStatus(status, TF_GetCode(first_bad_status.get()),
               TF_Message(first_bad_status.get()));
    return result;
  }
  // For each output of the original operation, pack the per-device
  // TensorHandles we've computed into a single parallel TensorHandle.
  std::vector<std::unique_ptr<ParallelTensor>> per_device_outputs;
  per_device_outputs.reserve(first_op_output_count);
  for (int i = 0; i < first_op_output_count; ++i) {
  Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Oct 21 04:14:14 UTC 2024 - 25.9K bytes - Viewed (0)
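The packing step this snippet performs — the op ran once per device, so output `i` of the logical operation is assembled from output `i` of every device — can be illustrated with a self-contained sketch. The types here are placeholders, not the real per-device TensorHandles:

```cpp
#include <cassert>
#include <utility>
#include <vector>

using DeviceHandle = int;  // placeholder for a per-device TensorHandle
using PackedTensor = std::vector<DeviceHandle>;  // one "parallel" tensor

// Given per_device[d][i] = output i produced on device d, build one packed
// tensor per logical output, each gathering that output across all devices.
std::vector<PackedTensor> PackPerDeviceOutputs(
    const std::vector<std::vector<DeviceHandle>>& per_device,
    int output_count) {
  std::vector<PackedTensor> packed;
  packed.reserve(output_count);  // mirrors the reserve() in the snippet
  for (int i = 0; i < output_count; ++i) {
    PackedTensor t;
    for (const auto& device_outputs : per_device) {
      t.push_back(device_outputs[i]);  // output i from each device in turn
    }
    packed.push_back(std::move(t));
  }
  return packed;
}
```

The loop effectively transposes the `[device][output]` layout produced by execution into the `[output][device]` layout a parallel tensor exposes.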
- RELEASE.md
  * Removed `autotune_algorithm` from experimental optimization options.
  * TF Core:
    * `tf.constant` always creates CPU tensors irrespective of the current device context.
    * Eager `TensorHandles` maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
    * For `tf.Tensor` & `tf.Variable`, `.experimental_ref()` is no longer
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Tue Oct 22 14:33:53 UTC 2024 - 735.3K bytes - Viewed (0)