Results 1 - 10 of 333 for computations (0.34 sec)
tensorflow/compiler/jit/pjrt_base_device.h
// a) argument and return value, for entry computations b) variables, for
// all computations, should be represented in XLA. Parameters/return values
// will be shaped according to the function pair, and reshaped back to/from
// their declared shapes for computations. Must be non-empty.
std::vector&lt;XlaShapeLayoutHelpers::ShapeDeterminationFns&gt; shape_determination_fns;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 12:19:41 UTC 2024 - 4K bytes
tensorflow/compiler/jit/encapsulate_xla_computations_pass.h
// Rewrites computations generated by the xla.compile() Python code into
// XlaLaunch nodes.
//
// xla.compile() does two main things:
// a) marks operators that make up an XLA computation with the attribute
//    _xla_compile_id=XYZ, where XYZ is a unique key.
// b) adds XlaClusterOutput nodes to represent outputs of the computation.
//    These nodes are not marked with the _xla_compile_id attribute.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 22 06:59:07 UTC 2024 - 3.6K bytes
tensorflow/compiler/jit/xla_device.h
// a) argument and return value, for entry computations b) variables, for
// all computations, should be represented in XLA. Parameters/return values
// will be shaped according to the function pair, and reshaped back to/from
// their declared shapes for computations. Must be non-empty.
std::vector&lt;XlaShapeLayoutHelpers::ShapeDeterminationFns&gt;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 09:53:30 UTC 2024 - 13.4K bytes
platforms/core-execution/persistent-cache/src/test/groovy/org/gradle/cache/ManualEvictionInMemoryCacheTest.groovy
import java.util.function.Supplier

class ManualEvictionInMemoryCacheTest extends Specification {
    @Timeout(value = 5, unit = TimeUnit.SECONDS)
    def "supports #concurrency concurrent computations"() {
        def latch = new CountDownLatch(concurrency)
        def executor = Executors.newFixedThreadPool(concurrency)
        def cache = new ManualEvictionInMemoryCache<String, String>()
        when:
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Fri Sep 22 09:08:47 UTC 2023 - 1.9K bytes
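The Gradle test above spins up N threads that hit an in-memory cache concurrently, expecting each key's supplier to run safely. A minimal Python sketch of the same get-or-compute pattern (the class and method names here are illustrative, not Gradle's API; unlike Gradle's cache, this sketch holds the lock while the supplier runs, which is the simplest way to guarantee at-most-once computation per key):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class InMemoryCache:
    """Illustrative get-or-compute cache; computes each key at most once."""
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def get(self, key, supplier):
        # Holding the lock across the supplier call keeps the sketch simple;
        # a production cache would use per-key locking or futures instead.
        with self._lock:
            if key not in self._entries:
                self._entries[key] = supplier()
            return self._entries[key]

calls = []
cache = InMemoryCache()

def compute(key):
    calls.append(key)      # record how many times the supplier actually ran
    return key.upper()

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda k: cache.get(k, lambda: compute(k)),
                            ["a", "b", "a", "b"]))

print(results)        # ['A', 'B', 'A', 'B']
print(sorted(calls))  # each key computed exactly once: ['a', 'b']
```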
tensorflow/compiler/jit/encapsulate_xla_computations_pass.cc
return errors::InvalidArgument(
    "Undeclared output of XLA computation. Some common causes of this "
    "error are: 1) variable initializers that depend on the XLA "
    "computation; 2) gradient computations that depend on the XLA "
    "computation, which can be mitigated by moving gradient computations "
    "inside XLA computation. Offending edge: ",
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 12 06:33:33 UTC 2024 - 15.1K bytes
tensorflow/compiler/mlir/quantization/tensorflow/ops/tf_op_quant_spec.h
#include "tensorflow/compiler/mlir/quantization/tensorflow/quantization_options.pb.h"

namespace mlir {
namespace quant {

// Check if the op has data movement trait. Ops with this trait do not perform
// any computations but just move data and has one result operand.
bool IsOpWithDataMovementTrait(Operation* op);

// Check if the op is quantizable. Currently, the scope of quantizable op is
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 05 07:39:40 UTC 2024 - 2.5K bytes
src/vendor/golang.org/x/crypto/sha3/shake.go
// a customizable variant of SHAKE128.
// N is used to define functions based on cSHAKE, it can be empty when plain cSHAKE is
// desired. S is a customization byte string used for domain separation - two cSHAKE
// computations on same input with different S yield unrelated outputs.
// When N and S are both empty, this is equivalent to NewShake128.
func NewCShake128(N, S []byte) ShakeHash {
	if len(N) == 0 && len(S) == 0 {
		return NewShake128()
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Jun 04 16:19:04 UTC 2024 - 5.4K bytes
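Python's hashlib exposes plain SHAKE128 (though not cSHAKE), which is enough to sketch the two properties the Go comment describes: extendable output, and unrelated digests under different customization strings. Note the prefixing trick below only imitates cSHAKE's domain separation; it is not the real cSHAKE encoding of N and S:

```python
import hashlib

msg = b"same input"

# SHAKE128 is an XOF: the caller chooses the output length, and a longer
# read is a prefix-extension of a shorter one.
h16 = hashlib.shake_128(msg).digest(16)
h32 = hashlib.shake_128(msg).digest(32)
assert h32[:16] == h16

# Crude stand-in for cSHAKE's customization string S: prefix it to the
# input. This is NOT the real cSHAKE padding scheme; it only demonstrates
# that distinct "domains" yield unrelated outputs for the same message.
d1 = hashlib.shake_128(b"email-sig:" + msg).digest(16)
d2 = hashlib.shake_128(b"file-sig:" + msg).digest(16)
assert d1 != d2
```

Real cSHAKE (NIST SP 800-185) encodes N and S with a dedicated padding rule, which is why `NewCShake128` with empty N and S can fall back to `NewShake128` exactly.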
tensorflow/compiler/mlir/quantization/common/ir/FakeQuantSupport.h
limitations under the License.
==============================================================================*/
//
// This file defines support utilities for interoperating with FakeQuant* based
// QAT (Quantized Aware Training) computations, as implemented by TFLite. Note
// that FakeQuant* operators mix multiple concerns specific to how TFLite
// originally implemented quantization. As such, utilities here enforce
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Mar 21 11:52:27 UTC 2024 - 3.7K bytes
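FakeQuant-style QAT simulates quantization in floating point: values are clamped to a [qmin, qmax] range, snapped to a uniform grid of 2^bits levels, and dequantized back, so the training graph sees the rounding error. A simplified sketch of that arithmetic (TFLite's real FakeQuant additionally nudges the range so zero lands exactly on a grid point, which is omitted here):

```python
def fake_quant(x, qmin, qmax, num_bits=8):
    """Quantize-dequantize x onto a (2**num_bits - 1)-step grid over [qmin, qmax]."""
    levels = 2 ** num_bits - 1
    scale = (qmax - qmin) / levels
    x = min(max(x, qmin), qmax)      # clamp to the representable range
    q = round((x - qmin) / scale)    # integer code in [0, levels]
    return q * scale + qmin          # dequantize back to float

scale = (1.0 - 0.0) / 255
y = fake_quant(0.25, 0.0, 1.0)
assert abs(y - 0.25) <= scale / 2           # round-trip error is at most half a step
assert abs(fake_quant(2.0, 0.0, 1.0) - 1.0) < 1e-9   # out-of-range values saturate
```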
tensorflow/compiler/mlir/tf2xla/api/v2/legalize_tf.cc
compilation_result->computation->proto(), xla::DebugOptions()));
TF_ASSIGN_OR_RETURN(
    std::unique_ptr&lt;xla::HloModule&gt; hlo_module,
    xla::HloModule::CreateFromProto(compilation_result->computation->proto(),
                                    hlo_module_config));
std::string all_computations;
for (auto computation : hlo_module->computations()) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 29 00:40:46 UTC 2024 - 6.8K bytes
tensorflow/compiler/mlir/tensorflow/ir/tf_device_ops.td
}

def TfDevice_ReplicateOp : TfDevice_Op<"replicate",
    [SingleBlockImplicitTerminator<"ReturnOp">, AttrSizedOperandSegments]> {
  let summary = "Wraps an N-way replicated computation.";
  let description = [{
    The region held by this operation represents a computation that is
    replicated across multiple devices. The number of replications is based on
    the `n` attribute. Explicit devices can be populated in the `devices`
    attribute, and it
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jan 23 23:53:20 UTC 2024 - 14.8K bytes