Results 1 - 5 of 5 for XlaLaunch (0.24 sec)
tensorflow/compiler/jit/encapsulate_xla_computations_pass.h
// functions contain the computations to be passed to XlaLaunch. During
// encapsulation, we sort the arguments into the order expected by
// XlaLaunch.
static Status Encapsulate(std::unique_ptr&lt;Graph&gt;* graph,
                          FunctionLibraryDefinition* flib_def);
// b) we rewrite the function calls generated in phase (a) into XlaLaunch
// operators. We also convert the XlaClusterOutput output nodes of the
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 22 06:59:07 UTC 2024 - 3.6K bytes - Viewed (0)
tensorflow/compiler/jit/xla_platform_info.h
// xla_device_metadata_ lives in the tensorflow::DeviceBase in which the
// XlaLaunch/_XlaCompile/_XlaRun op is placed and thus does not die before the
// XlaLaunch/_XlaCompile/_XlaRun OpKernel.
const XlaDevice::Metadata* xla_device_metadata_;
// pjrt_device_metadata_ lives in tensorflow::PjRtBaseDevice in which the
// XlaLaunch/XlaCompileOnDemand op is placed and thus does not die before the
// op kernel.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 7.2K bytes - Viewed (0)
tensorflow/compiler/jit/ops/xla_ops.cc
#include "absl/status/status.h"
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"

namespace tensorflow {
using shape_inference::InferenceContext;

REGISTER_OP("XlaLaunch")
    .Input("constants: Tconstants")
    .Attr("Tconstants: list(type) >= 0")
    .Input("args: Targs")
    .Attr("Targs: list(type) >= 0")
    .Input("resources: Nresources * resource")
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Apr 06 09:08:06 UTC 2024 - 4.5K bytes - Viewed (0)
tensorflow/compiler/jit/xla_compile_util.h
    const NodeDef& node_def, absl::Span<const XlaArgument> args,
    absl::Span<const DataType> result_types);
// Checks if single device compilation and execution with PJRT is enabled for
// `device_type` in either the XlaLaunch op or the XlaCompileOnDemand op.
bool UsePjRtForSingleDeviceCompilation(const DeviceType& device_type);
// Gets the resource name of the PjRt DeviceCompiler for `device_type`.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 2.4K bytes - Viewed (0)
tensorflow/compiler/mlir/tensorflow/transforms/host_runtime/lower_cluster_to_runtime_ops.cc
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 9.4K bytes - Viewed (0)