Results 21 - 30 of 381 for computations (0.14 sec)
tensorflow/compiler/mlir/tensorflow/utils/tpu_rewrite_device_util.h
    mlir::TF::RuntimeDevices devices, std::string* host_device);

// Parses XLA compilation and execution devices from a tf_device.cluster and
// returns the host device for the head and tail computations. For TPU device,
// if the computation is replicated, GetDeviceAliasForHostOfLogicalCore(0) is
// returned instead.
mlir::LogicalResult GetHostDeviceOutsideComputation(
    mlir::TF::RuntimeDevices devices, mlir::tf_device::ClusterOp cluster,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Apr 26 09:37:10 UTC 2024 - 11.3K bytes - Viewed (0) -
src/time/time.go
// suggest a representation, namely using 1-1-1 00:00:00 UTC as the
// epoch, and that's what we do.
//
// The Add and Sub computations are oblivious to the choice of epoch.
//
// The presentation computations - year, month, minute, and so on - all
// rely heavily on division and modulus by positive constants. For
// calendrical calculations we want these divisions to round down, even
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 29 17:58:53 UTC 2024 - 50.7K bytes - Viewed (0) -
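The `time.go` comment above hinges on divisions that round down (toward negative infinity) rather than truncate toward zero, so that instants before the epoch still land in the correct earlier calendar unit. A minimal Python sketch of that point (the `days_to_year_approx` helper is hypothetical, simplified to 365-day years with no leap handling):

```python
def days_to_year_approx(days_since_epoch: int) -> int:
    """Map a day count to a year offset using floor division.

    Python's // floors toward negative infinity, which is exactly the
    rounding the Go comment asks for: day -1 belongs to year -1, not
    year 0 (truncating division would give 0).
    """
    days_per_year = 365  # simplified: ignores leap years for illustration
    return days_since_epoch // days_per_year
```

With truncating division (C-style), every day in the year before the epoch would collapse into year 0, off by one for all negative inputs.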
tensorflow/compiler/mlir/tensorflow/utils/xla_sharding_util.cc
      location, output_type, concat_dimension_op.getOutput(), inputs);
}

// For tile sharded inputs to TPU computation, inject split op between the
// input values and TPU computation so that tiled input values are passed in
// as inputs to TPU computations. If more than one dimension is sharded, then
// a tree of connected split ops are added before tf_device.parallel_execute op.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 22 21:28:13 UTC 2024 - 34K bytes - Viewed (0) -
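The comment above describes building a tree of split ops when more than one dimension is sharded: split once along the first sharded dimension, then split each resulting chunk along the next. A minimal Python sketch of that recursion (hypothetical `split`/`tile_input` helpers standing in for the MLIR split-op construction, operating on flat lists for illustration):

```python
def split(values, num_splits):
    """One 'split op': divide a list into num_splits equal chunks."""
    chunk = len(values) // num_splits
    return [values[i * chunk:(i + 1) * chunk] for i in range(num_splits)]

def tile_input(values, splits_per_dim):
    """Apply one split per sharded dimension, yielding a tree of chunks.

    splits_per_dim lists the split count for each sharded dimension;
    each level of the returned nesting corresponds to one split op.
    """
    if not splits_per_dim:
        return values
    head, *rest = splits_per_dim
    return [tile_input(part, rest) for part in split(values, head)]

# 8 input values sharded 2 ways on one dim, then 2 ways on the next:
# tile_input(list(range(8)), [2, 2]) == [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
```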
tensorflow/compiler/mlir/tensorflow/transforms/tf_device_passes.td
let summary = "Decompose composite resource variable operations into primitive Read/AssignVariableOp and raw computation.";

let description = [{
  A pass that decomposes composite resource operations into primitive ones
  like ReadVariableOp, AssignVariableOp and other computations to facilitate
  transformations like resource op lifting. For example:

  ```mlir
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 17 18:52:57 UTC 2024 - 12.5K bytes - Viewed (0) -
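The pass summary above says a composite resource op becomes a read, the raw computation, and an assign. A toy Python sketch of that rewrite (the tuple-based op encoding and the `decompose` helper are hypothetical, for illustration only; the real pass rewrites MLIR ops):

```python
def decompose(op):
    """Rewrite a composite resource op into primitive read/compute/assign ops.

    An op is a (kind, variable, operand) tuple. A composite op like
    AssignAddVariableOp expands into ReadVariableOp + the raw add +
    AssignVariableOp; already-primitive ops pass through unchanged.
    """
    kind, var, operand = op
    if kind == "AssignAddVariableOp":
        return [
            ("ReadVariableOp", var, None),     # load current value
            ("AddV2", var, operand),           # raw computation
            ("AssignVariableOp", var, None),   # store result back
        ]
    return [op]
```

After this rewrite, later passes (like resource op lifting) only ever see the primitive read/assign forms.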
tensorflow/compiler/mlir/tensorflow/transforms/passes.h
// Creates a pass that lifts operations on external resource variables from
// device computation nested in `tf_device::LaunchOp` out so that resource
// variable load operations are all before device computation while resource
// variable store operations are all after device computation. After this pass,
// device computation no longer interacts with external resource variables.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 31.8K bytes - Viewed (0) -
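The pass described above reorders a device body so that all variable loads come first and all stores come last. A minimal Python sketch of that reordering (the tuple-based op list and `lift_resource_ops` are hypothetical stand-ins for the MLIR rewrite):

```python
def lift_resource_ops(body):
    """Hoist variable reads before, and sink writes after, the device body.

    `body` is a list of ("read", var), ("write", var), or ("compute", name)
    tuples. Relative order within each group is preserved.
    """
    reads = [op for op in body if op[0] == "read"]
    computes = [op for op in body if op[0] == "compute"]
    writes = [op for op in body if op[0] == "write"]
    return reads + computes + writes

ops = [("read", "v0"), ("compute", "matmul"), ("write", "v0"), ("compute", "add")]
# → [("read", "v0"), ("compute", "matmul"), ("compute", "add"), ("write", "v0")]
```

Note the real pass must also check that no compute op between a write and a later read observes the variable; this sketch assumes that analysis has already passed.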
docs/en/docs/deployment/concepts.md
Those worker processes would be the ones running your application; they would perform the main computations to receive a **request** and return a **response**, and they would load anything you put in variables in RAM. <img src="/img/deployment/concepts/process-ram.svg">
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Thu May 02 22:37:31 UTC 2024 - 18K bytes - Viewed (0) -
guava-tests/test/com/google/common/hash/HashTestUtils.java
int numActions = 100;

// hashcodes from non-overlapping hash computations
HashCode expected1 = randomHash(hashFunction, new Random(1L), numActions);
HashCode expected2 = randomHash(hashFunction, new Random(2L), numActions);

// equivalent, but overlapping, computations (should produce the same results as above)
Random random1 = new Random(1L);
Random random2 = new Random(2L);
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Mon Oct 10 19:45:10 UTC 2022 - 25.3K bytes - Viewed (0) -
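The Guava test above checks that two hash computations produce the same digests whether they run back-to-back or with their per-chunk updates interleaved in time. A minimal Python analogue using the standard library's `hashlib` (SHA-256 here is just a convenient stand-in for whatever `hashFunction` the test exercises):

```python
import hashlib

# Two independent digests computed back-to-back (non-overlapping):
expected1 = hashlib.sha256(b"stream-one").hexdigest()
expected2 = hashlib.sha256(b"stream-two").hexdigest()

# The same two digests with interleaved ("overlapping") updates:
h1, h2 = hashlib.sha256(), hashlib.sha256()
h1.update(b"stream-")
h2.update(b"stream-")
h1.update(b"one")
h2.update(b"two")

# Interleaving independent hasher instances must not change the results.
assert h1.hexdigest() == expected1
assert h2.hexdigest() == expected2
```

This works because each hasher instance carries its own internal state, so updates to one cannot affect the other.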
tensorflow/compiler/mlir/tensorflow/transforms/fused_kernel_matcher.cc
#define GEN_PASS_DEF_FUSEDKERNELMATCHERPASS
#include "tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.h.inc"

// Optimizes TF computations by fusing subgraphs/nodes onto more efficient
// implementations to decrease the number of operations needed to perform a
// computation.
struct FusedKernelMatcherPass
    : public impl::FusedKernelMatcherPassBase<FusedKernelMatcherPass> {
  void runOnOperation() override;
};
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 14.9K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/utils/tf_xla_mlir_translate.cc
    xla::XlaComputation computation,
    return_value.valid() ? builder.Build(return_value) : builder.Build());
auto hlo_module = computation.proto();
xla::HloProto hlo_proto;
hlo_proto.mutable_hlo_module()->Swap(&hlo_module);
compilation_result->computation = std::make_shared<xla::XlaComputation>();
xla::XlaComputation* xla_computation = compilation_result->computation.get();
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 18.8K bytes - Viewed (0) -
pkg/scheduler/framework/plugins/volumezone/volume_zone.go
logger := klog.FromContext(ctx)

// If a pod doesn't have any volume attached to it, the predicate will always be true.
// Thus we make a fast path for it, to avoid unnecessary computations in this case.
if len(pod.Spec.Volumes) == 0 {
    return nil
}

var podPVTopologies []pvTopology

state, err := getStateData(cs)
if err != nil {
    // Fallback to calculate pv list here
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Sat Mar 16 14:13:06 UTC 2024 - 10.9K bytes - Viewed (0)
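The scheduler snippet above is a classic fast-path guard: when a pod has no volumes the predicate is trivially satisfied, so the expensive topology lookup is skipped entirely. A minimal Python sketch of the pattern (the `filter_pod` helper and its callback argument are hypothetical, standing in for the plugin's Filter method):

```python
def filter_pod(pod_volumes, compute_topologies):
    """Fast path: a pod with no volumes always passes the zone predicate.

    Returning None mirrors the Go code's `return nil` (no error means
    the pod is schedulable); the expensive computation runs only when
    there are volumes to check.
    """
    if not pod_volumes:
        return None  # schedulable; skip the expensive work entirely
    return compute_topologies(pod_volumes)

# With no volumes, the (possibly expensive) callback is never invoked:
calls = []
def expensive(vols):
    calls.append(vols)
    return "checked"

assert filter_pod([], expensive) is None
assert calls == []
```

The design point is that the guard costs one length check, while the skipped work (walking PVCs and PVs) can involve API lookups per volume.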