Results 1 - 10 of 15 for _XlaCompile (0.24 sec)
tensorflow/compiler/jit/ops/xla_ops.cc
node and associated metadata.

compilation_successful: If the `must_compile` attr is false the _XlaCompile op
can decide not to compile the clusters based on some profitability heuristics.
In that case `compilation_successful` is false if _XlaCompile chose not to
compile the cluster. If the `must_compile` attr is true then _XlaCompile
always attempts to compile the cluster and `compilation_successful` is always
true. )");
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Apr 06 09:08:06 UTC 2024 - 4.5K bytes
tensorflow/compiler/jit/variable_info.h
  VariableInfo(const VariableInfo&) = delete;
  VariableInfo& operator=(const VariableInfo&) = delete;

  // The index of the DT_RESOURCE input to the _XlaCompile/_XlaRun operator.
  // Note that the indices can be different between _XlaCompile and _XlaRun.
  int index() const { return index_; }

  // A pointer to the resource variable. May be null if this VariableInfo is
Last Modified: Tue Feb 14 21:57:02 UTC 2023 - 3.3K bytes
tensorflow/compiler/jit/build_xla_ops_pass_test.cc
  call->AddAttr(kXlaHasReferenceVarsAttr, false);
  Node* write_op = MakeWrite(root, Output(call), "write_result");
  write_op->AddAttr(kXlaHasReferenceVarsAttr, false);

  auto xla_compile = NodeWith(Op("_XlaCompile"), Attr("must_compile", false));
  auto predicated_compilation_key =
      NodeWith(Op("Switch"), Inputs(Out(0, xla_compile), Out(1, xla_compile)));
  auto xla_run =
Last Modified: Thu Feb 22 08:47:20 UTC 2024 - 12.2K bytes
tensorflow/compiler/jit/xla_platform_info.h
  }

 private:
  DeviceType device_type_;
  se::Platform::Id platform_id_;
  // xla_device_metadata_ lives in the tensorflow::DeviceBase in which the
  // XlaLaunch/_XlaCompile/_XlaRun op is placed and thus does not die before the
  // XlaLaunch/_XlaCompile/_XlaRun OpKernel.
  const XlaDevice::Metadata* xla_device_metadata_;
  // pjrt_device_metadata_ lives in tensorflow::PjRtBaseDevice in which the
Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 7.2K bytes
tensorflow/compiler/jit/kernels/xla_ops.h
  const bool has_ref_vars_;

  // cannot_compile_cluster_ is set to true if XLA returns an Unimplemented
  // error when compiling the cluster this _XlaCompile is supposed to compile.
  // If `cannot_compile_cluster_` is true then we avoid compiling this cluster
  // on any future calls to _XlaCompile.
  bool cannot_compile_cluster_ TF_GUARDED_BY(cannot_compile_cluster_mu_) = false;
  mutex cannot_compile_cluster_mu_;
};
Last Modified: Mon Oct 16 23:44:26 UTC 2023 - 4.8K bytes
tensorflow/compiler/jit/tests/device_compiler_test_helper.h
    RegisterXlaActivityListener(std::move(listener));
  }

  JitCompilationListener* listener() const { return listener_; }

  // Returns a test graph that will split into two XLA clusters (due to a node
  // with _XlaCompile = false).
  GraphDef GetTestGraph(const PartialTensorShape& input_shape);

  // Runs the graph using specified batch size both with and without XLA JIT
Last Modified: Fri Feb 09 08:24:16 UTC 2024 - 3.6K bytes
tensorflow/compiler/jit/mark_for_compilation_pass_test_helper.h
    return copy;
  }
};

// Runs the MarkForCompilation pass on `graph` after assigning all nodes in
// `graph` to the CPU device. To make testing easier, ignores device
// registration and _XlaCompile attributes.
static Status MarkForCompilation(std::unique_ptr<Graph>* graph,
                                 FunctionLibraryDefinition* flib_def,
                                 Options options = Options());
Last Modified: Thu Feb 09 19:51:48 UTC 2023 - 2.8K bytes
tensorflow/compiler/jit/build_xla_ops_pass.cc
      .NewSubScope(n->name())
      .WithDevice(n->requested_device())
      .WithAssignedDevice(device_name_str);
  ops::_XlaCompile xla_compile(root.WithOpName("xla_compile"),
                               /*constants=*/cluster_info.constant_inputs,
                               /*args=*/cluster_info.non_constant_inputs,
Last Modified: Tue Mar 12 06:33:33 UTC 2024 - 24.3K bytes
tensorflow/compiler/jit/tests/device_compiler_test_helper.cc
  {{"f"}, "Add", {"e", "a"}, {{"T", DT_FLOAT}}},
  {{"g"}, "Mul", {"f", "b"}, {{"T", DT_FLOAT}}},
  // Force two clusters by excluding this node explicitly.
  {{"h"}, "Add", {"g", "f"}, {{"T", DT_FLOAT}, {"_XlaCompile", false}}},
  {{"i"}, "Add", {"h", "e"}, {{"T", DT_FLOAT}}},
  {{"j"}, "Add", {"i", "h"}, {{"T", DT_FLOAT}}},
  {{"k"}, "Add", {"j", "h"}, {{"T", DT_FLOAT}}},
  {{"l"}, "Add", {"k", "h"}, {{"T", DT_FLOAT}}},
Last Modified: Fri Feb 09 08:24:16 UTC 2024 - 6.2K bytes
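The test graph above excludes node "h" from clustering by attaching the node attr `{"_XlaCompile", false}`. From the Python API the same attribute can be attached with `tf.xla.experimental.jit_scope(compile_ops=False)` — a minimal sketch assuming a TF 2.x install; the function name `add_one` is illustrative, not from the source:

```python
import tensorflow as tf

@tf.function
def add_one(x):
    # Ops created under jit_scope(compile_ops=False) carry the
    # _XlaCompile=False attribute, which excludes them from XLA
    # clustering, much like the {"_XlaCompile", false} attr above.
    with tf.xla.experimental.jit_scope(compile_ops=False):
        return x + 1.0

concrete = add_one.get_concrete_function(tf.TensorSpec([], tf.float32))
add_op = next(op for op in concrete.graph.get_operations()
              if op.type == "AddV2")
print(add_op.get_attr("_XlaCompile"))
```

Inspecting the concrete function's graph shows the attribute on the Add op, mirroring what mark_for_compilation_pass checks when forming clusters.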
tensorflow/compiler/jit/flags.h
};

// Flags common to the _Xla* ops and their kernels.
struct XlaOpsCommonFlags {
  // If true, _XlaCompile always refuses to compile the cluster, which means the
  // XLA clusters always run in the TF executor. Defaults to false.
  bool tf_xla_always_defer_compilation;

  // If true, _XlaCompile compiles the cluster asynchronously with respect to
  // the main execution. The fallback path is taken while compilation happens.
Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 14.5K bytes
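Flags in XlaOpsCommonFlags are typically set through the TF_XLA_FLAGS environment variable. A hedged sketch — `train.py` is a hypothetical script, and whether the flag is honored depends on the TensorFlow build:

```shell
# Ask _XlaCompile to always defer compilation: clusters then run through the
# plain TF executor instead of XLA, which can help isolate XLA-related bugs.
TF_XLA_FLAGS="--tf_xla_always_defer_compilation=true" python train.py
```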