Results 1 - 10 of 79 for computations (0.2 sec)
tensorflow/compiler/mlir/lite/stablehlo/transforms/legalize_hlo_conversions/scatter.h
```cpp
if (!operand_type.hasStaticShape() || !indices_type.hasStaticShape() ||
    !updates_type.hasStaticShape()) {
  return failure();
}

// Match the scatter computation against computations supported by TF.
if (failed(MatchBinaryReduceFunction<BinaryOp>(
        scatter_op.getUpdateComputation()))) {
  return failure();
}
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 10.1K bytes
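For context on the check above: a `MatchBinaryReduceFunction`-style helper typically succeeds only when the scatter op's update region is a single block that applies one binary op to the two block arguments and returns the result. A minimal sketch against the MLIR C++ API (the template shape and the `ReturnOp` parameter are illustrative assumptions, not the actual TensorFlow implementation):

```cpp
#include "mlir/IR/Block.h"
#include "mlir/IR/Region.h"
#include "mlir/Support/LogicalResult.h"

// Illustrative sketch: succeed only if `body` has the form
//   ^bb0(%lhs, %rhs):
//     %r = BinaryOp(%lhs, %rhs)
//     "return"(%r)
template <typename BinaryOp, typename ReturnOp>
mlir::LogicalResult MatchBinaryReduce(mlir::Region& body) {
  if (!body.hasOneBlock()) return mlir::failure();
  mlir::Block& block = body.front();
  if (block.getNumArguments() != 2) return mlir::failure();

  auto return_op = llvm::dyn_cast<ReturnOp>(block.getTerminator());
  if (!return_op || return_op->getNumOperands() != 1) return mlir::failure();

  auto binary_op = return_op->getOperand(0).template getDefiningOp<BinaryOp>();
  if (!binary_op) return mlir::failure();

  // The binary op must consume the two region arguments directly.
  return mlir::success(binary_op->getOperand(0) == block.getArgument(0) &&
                       binary_op->getOperand(1) == block.getArgument(1));
}
```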
tensorflow/compiler/mlir/tensorflow/utils/tpu_rewrite_device_util.h
```cpp
mlir::TF::RuntimeDevices devices, std::string* host_device);

// Parses XLA compilation and execution devices from a tf_device.cluster and
// returns the host device for the head and tail computations. For TPU device,
// if the computation is replicated, GetDeviceAliasForHostOfLogicalCore(0) is
// returned instead.
mlir::LogicalResult GetHostDeviceOutsideComputation(
    mlir::TF::RuntimeDevices devices, mlir::tf_device::ClusterOp cluster,
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Apr 26 09:37:10 UTC 2024 - 11.3K bytes
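Given the declaration above, typical usage looks like the hedged sketch below; the trailing `std::string*` out-parameter is inferred from the truncated signature, matching the `host_device` parameter visible in the preceding declaration:

```cpp
// Hedged usage sketch; the out-parameter is an assumption based on the
// truncated declaration above.
mlir::LogicalResult FindHostDevice(mlir::TF::RuntimeDevices devices,
                                   mlir::tf_device::ClusterOp cluster) {
  std::string host_device;
  if (mlir::failed(
          GetHostDeviceOutsideComputation(devices, cluster, &host_device)))
    return cluster.emitError("no host device for head/tail computations");
  // `host_device` now names the device that runs the head/tail computations.
  return mlir::success();
}
```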
tensorflow/compiler/mlir/tensorflow/utils/xla_sharding_util.cc
```cpp
    location, output_type, concat_dimension_op.getOutput(), inputs);
}

// For tile sharded inputs to TPU computation, inject split op between the
// input values and TPU computation so that tiled input values are passed in
// as inputs to TPU computations. If more than one dimension is sharded, then
// a tree of connected split ops are added before tf_device.parallel_execute op.
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 22 21:28:13 UTC 2024 - 34K bytes
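To make the "tree of connected split ops" concrete: an input tiled 2-way along dimension 0 and 4-way along dimension 1 is first split into 2 pieces, and each piece is split again into 4, yielding 2 × 4 = 8 leaves feeding `tf_device.parallel_execute`. The standalone C++ sketch below only illustrates that recursion; it is not the TensorFlow implementation:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Illustrative only: enumerate the leaves of the split tree. Each interior
// node corresponds to one split op in the real pass.
void EmitSplitTree(const std::vector<int>& splits_per_dim, size_t dim,
                   const std::string& prefix) {
  if (dim == splits_per_dim.size()) {
    std::printf("tiled input: %s\n", prefix.c_str());
    return;
  }
  for (int piece = 0; piece < splits_per_dim[dim]; ++piece)
    EmitSplitTree(splits_per_dim, dim + 1,
                  prefix + "[d" + std::to_string(dim) + ":" +
                      std::to_string(piece) + "]");
}

int main() { EmitSplitTree({2, 4}, 0, "input"); }  // prints the 8 leaves
```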
tensorflow/compiler/mlir/tensorflow/transforms/tf_device_passes.td
````tablegen
let summary = "Decompose composite resource variable operations into primitive Read/AssignVariableOp and raw computation.";

let description = [{
  A pass that decomposes composite resource operations into primitive ones like
  ReadVariableOp, AssignVariableOp and other computations to facilitate
  transformations like resource op lifting. For example:

  ```mlir
````
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 12.5K bytes
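The decomposition described here can be pictured as a rewrite pattern. The sketch below is a hedged approximation for one composite op, `tf.AssignAddVariableOp`; the accessor and builder signatures follow TF MLIR conventions but are assumptions, not code from the actual pass:

```cpp
#include "mlir/IR/PatternMatch.h"
#include "tensorflow/compiler/mlir/tensorflow/ir/tf_ops.h"

// Hedged sketch: AssignAddVariableOp(resource, value) becomes
//   %old = ReadVariableOp(resource)
//   %new = AddV2(%old, value)
//   AssignVariableOp(resource, %new)
struct DecomposeAssignAdd
    : public mlir::OpRewritePattern<mlir::TF::AssignAddVariableOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult matchAndRewrite(
      mlir::TF::AssignAddVariableOp op,
      mlir::PatternRewriter& rewriter) const override {
    auto loc = op.getLoc();
    mlir::Value resource = op.getResource();  // accessor names assumed
    mlir::Value value = op.getValue();
    auto old_value = rewriter.create<mlir::TF::ReadVariableOp>(
        loc, value.getType(), resource);
    auto new_value = rewriter.create<mlir::TF::AddV2Op>(
        loc, value.getType(), old_value, value);
    rewriter.create<mlir::TF::AssignVariableOp>(loc, resource, new_value);
    rewriter.eraseOp(op);
    return mlir::success();
  }
};
```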
tensorflow/compiler/mlir/tensorflow/transforms/passes.h
```cpp
// Creates a pass that lifts operations on external resource variables from
// device computation nested in `tf_device::LaunchOp` out so that resource
// variable load operations are all before device computation while resource
// variable store operations are all after device computation. After this pass,
// device computation no longer interacts with external resource variables.
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 31.8K bytes
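This header exposes pass factories, so wiring the lifting pass into a pipeline looks roughly like the sketch below (the factory name and namespace are assumptions based on TensorFlow conventions):

```cpp
#include "mlir/Pass/PassManager.h"
#include "tensorflow/compiler/mlir/tensorflow/transforms/passes.h"

// Hedged sketch: after this pass, resource reads happen before and writes
// after the device computation, so the launched body no longer touches
// external resource variables directly.
void AddResourceLifting(mlir::OpPassManager& pm) {
  pm.addPass(mlir::TFDevice::CreateResourceOpLiftingPass());  // name assumed
}
```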
docs/en/docs/deployment/concepts.md
Those worker processes would be the ones running your application, they would perform the main computations to receive a **request** and return a **response**, and they would load anything you put in variables in RAM.

<img src="/img/deployment/concepts/process-ram.svg">
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Thu May 02 22:37:31 UTC 2024 - 18K bytes
tensorflow/compiler/mlir/tensorflow/transforms/fused_kernel_matcher.cc
```cpp
#define GEN_PASS_DEF_FUSEDKERNELMATCHERPASS
#include "tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.h.inc"

// Optimizes TF computations by fusing subgraphs/nodes onto more efficient
// implementations to decrease the number of operations needed to perform a
// computation.
struct FusedKernelMatcherPass
    : public impl::FusedKernelMatcherPassBase<FusedKernelMatcherPass> {
  void runOnOperation() override;
};
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 14.9K bytes
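The classic rewrite this matcher performs is contraction-plus-epilogue fusion, e.g. `Conv2D` + `BiasAdd` (+ activation) into a single `_FusedConv2D`. A hedged sketch of registering the pass (factory name and nesting are assumptions based on TensorFlow conventions):

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/PassManager.h"
#include "tensorflow/compiler/mlir/tensorflow/transforms/passes.h"

// Hedged sketch: run the matcher per function; it rewrites e.g.
//   tf.Conv2D -> tf.BiasAdd -> tf.Relu
// into one _FusedConv2D carrying fused_ops = ["BiasAdd", "Relu"].
void AddKernelFusion(mlir::OpPassManager& pm) {
  pm.addNestedPass<mlir::func::FuncOp>(
      mlir::TF::CreateFusedKernelMatcherPass());  // name assumed
}
```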
tensorflow/compiler/mlir/tensorflow/utils/tf_xla_mlir_translate.cc
```cpp
    xla::XlaComputation computation,
    return_value.valid() ? builder.Build(return_value) : builder.Build());
auto hlo_module = computation.proto();
xla::HloProto hlo_proto;
hlo_proto.mutable_hlo_module()->Swap(&hlo_module);

compilation_result->computation = std::make_shared<xla::XlaComputation>();
xla::XlaComputation* xla_computation = compilation_result->computation.get();
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 18.8K bytes
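For orientation, the `builder.Build(...)` / `proto()` / `Swap(...)` flow in this snippet can be reproduced standalone with the XLA client API. A minimal hedged sketch (header paths move between TensorFlow versions):

```cpp
#include "xla/client/xla_builder.h"  // header path varies across versions
#include "xla/service/hlo.pb.h"

// Hedged sketch: build a tiny computation, then move its HloModuleProto
// into an HloProto, mirroring the Swap() above.
absl::StatusOr<xla::HloProto> BuildExampleHloProto() {
  xla::XlaBuilder builder("example");
  auto x = xla::Parameter(&builder, /*parameter_number=*/0,
                          xla::ShapeUtil::MakeShape(xla::F32, {2}), "x");
  auto sum = xla::Add(x, x);
  TF_ASSIGN_OR_RETURN(xla::XlaComputation computation, builder.Build(sum));

  xla::HloProto hlo_proto;
  auto hlo_module = computation.proto();  // copies the HloModuleProto
  hlo_proto.mutable_hlo_module()->Swap(&hlo_module);
  return hlo_proto;
}
```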
subprojects/core-api/src/main/java/org/gradle/api/provider/Provider.java
```java
 * </p>
 *
 * <p>
 * A typical use of a provider is to pass values from one Gradle model element to another, e.g. from a project extension
 * to a task, or between tasks. Providers also allow expensive computations to be deferred until their value is actually
 * needed, usually at task execution time.
 * </p>
 *
 * <p>
 * There are a number of ways to create a {@link Provider} instance. Some common methods:
 * </p>
 *
```
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Tue Apr 16 09:14:21 UTC 2024 - 10.8K bytes
tensorflow/compiler/mlir/tf2xla/transforms/tf2xla_rewriter.cc
```cpp
    XlaComputation& computation) {
  xla::DebugOptions debug_options;
  TF_ASSIGN_OR_RETURN(auto hlo_module_config,
                      xla::HloModule::CreateModuleConfigFromProto(
                          computation.proto(), debug_options));
  TF_ASSIGN_OR_RETURN(
      std::unique_ptr<xla::HloModule> hlo_module,
      xla::HloModule::CreateFromProto(computation.proto(), hlo_module_config));
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 02 09:16:07 UTC 2024 - 18.9K bytes
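Continuing the snippet above: once the `HloModule` is rebuilt from the computation's proto, the usual next step is to inspect or print it. A hedged sketch that wraps the same calls and returns the textual HLO:

```cpp
// Hedged sketch mirroring the snippet above; HloModule::ToString() gives
// the textual HLO dump, handy for debugging the tf2xla rewrite.
absl::StatusOr<std::string> HloTextFromComputation(
    const xla::XlaComputation& computation) {
  xla::DebugOptions debug_options;
  TF_ASSIGN_OR_RETURN(auto config,
                      xla::HloModule::CreateModuleConfigFromProto(
                          computation.proto(), debug_options));
  TF_ASSIGN_OR_RETURN(
      std::unique_ptr<xla::HloModule> hlo_module,
      xla::HloModule::CreateFromProto(computation.proto(), config));
  return hlo_module->ToString();
}
```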