Results 1 - 10 of 10 for Contraction (0.2 sec)
tensorflow/compiler/mlir/tensorflow/transforms/fused_kernel_matcher.cc
if (!isa<func::FuncOp, IfOp, WhileOp>(contraction->getParentOp())) {
  return rewriter.notifyMatchFailure(
      contraction,
      "fused operation must be nested inside a function, If or While");
}
// If the contraction is used in multiple places, fusing it will only create
// more contraction nodes, which is slower.
if (!contraction.getResult().hasOneUse())
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 14.9K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/ir/tf_op_interfaces.h
#include "tensorflow/core/framework/resource_mgr.h"

namespace mlir {
namespace TF {

//===----------------------------------------------------------------------===//
// TensorFlow Contraction Fusion.
//===----------------------------------------------------------------------===//

struct ContractionFusion {
  explicit ContractionFusion(
      StringRef output_kernel, ArrayRef<int> additional_arguments = {},
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 03 19:26:14 UTC 2023 - 6.5K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/transforms/fold_broadcast.cc
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 7.9K bytes - Viewed (0) -
tensorflow/compiler/jit/mark_for_compilation_pass.cc
// c. There is no path from B to A in the cycles graph (but there may
//    be a path from A to B).
//
// So check the legality of the edge contraction by checking if any of
// the n^2 pairs of resource variable operations are forbidden.
if (unsafe_resource_deps_.contains(
        {resource_var_from, resource_var_to})) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Feb 21 12:19:41 UTC 2024 - 85.3K bytes - Viewed (0) -
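The legality check above can be sketched outside TensorFlow: contracting a directed edge u→v merges the two endpoints, so the result stays acyclic only if the direct edge is the sole path from u to v. A minimal sketch (the function names `has_path` and `contraction_is_safe` and the example graphs are hypothetical, not from the TF source):

```python
from collections import defaultdict

def has_path(adj, src, dst, skip_edge):
    """DFS reachability from src to dst, ignoring one specific edge."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        for nxt in adj[node]:
            if (node, nxt) != skip_edge:
                stack.append(nxt)
    return False

def contraction_is_safe(edges, u, v):
    """Contracting u->v is cycle-safe iff no alternate u->v path exists."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    return not has_path(adj, u, v, skip_edge=(u, v))

# a->b->c plus direct a->c: contracting a->c would turn the indirect
# path through b into a cycle in the merged graph.
assert not contraction_is_safe([("a", "b"), ("b", "c"), ("a", "c")], "a", "c")
assert contraction_is_safe([("a", "b"), ("b", "c")], "a", "b")
```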
tensorflow/compiler/mlir/tensorflow/transforms/einsum.h
#include "tensorflow/compiler/mlir/tensorflow/ir/tf_ops.h"
#include "tensorflow/core/util/matmul_bcast.h"

namespace mlir {
namespace TF {

// TF.Einsum provides fully general tensor contractions. For a few select
// cases, we can convert this op to other TF Ops, which in later passes
// properly convert to TF Lite ops.
struct ConvertTFEinsumOp : public OpRewritePattern<TF::EinsumOp> {
 public:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Dec 12 02:01:03 UTC 2020 - 2.1K bytes - Viewed (0) -
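The rewrite relies on equivalences like the one below, shown here with NumPy standing in for the TF ops (the shapes and arrays are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 2, 3))  # batch of (2, 3) matrices
y = rng.standard_normal((5, 3, 4))  # batch of (3, 4) matrices

# The einsum "bij,bjk->bik" is exactly a batched matrix multiply, so a
# rewrite pass can replace this general contraction with a BatchMatMul op.
z_einsum = np.einsum("bij,bjk->bik", x, y)
z_matmul = x @ y  # @ batches over the leading dimension
assert np.allclose(z_einsum, z_matmul)
```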
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
let hasVerifier = 1;
}

def TF_EinsumOp : TF_Op<"Einsum", [Pure]> {
  let summary = [{
Tensor contraction according to Einstein summation convention.
  }];

  let description = [{
Implements generalized Tensor contraction and reduction. Each input Tensor must
have a corresponding input subscript appearing in the comma-separated left-hand
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes - Viewed (0) -
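The "generalized contraction and reduction" semantics the op description refers to can be demonstrated with NumPy's einsum, which follows the same subscript convention (the arrays here are illustrative assumptions):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# Repeated subscript on one operand with no output subscript: a trace.
assert np.einsum("ii->", a) == 5.0  # 1 + 4

# Dropping a subscript from the output sums (reduces) over it.
assert np.allclose(np.einsum("ij->i", a), [3.0, 7.0])

# A subscript shared across operands is contracted (summed) away.
b = np.array([1.0, 1.0])
assert np.allclose(np.einsum("ij,j->i", a, b), [3.0, 7.0])
```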
src/compress/bzip2/testdata/Isaac.Newton-Opticks.txt.bz2
to make all the several Colours fall successively upon the Object-glasses, and thereby to make the Rings contract and dilate: The Contraction or Dilatation of each Ring thus made by the variation of its Colour was swiftest in the red, and slowest in the violet, and in the intermediate Colours it had intermediate degrees of Celerity. Comparing the quantity of Contraction and Dilatation made by all the degrees of each Colour, I found that it was greatest in the red; less in the yellow, still less in...
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Sep 24 18:26:02 UTC 2018 - 129.4K bytes - Viewed (0) -
src/testdata/Isaac.Newton-Opticks.txt
Object-glasses, and thereby to make the Rings contract and dilate: The Contraction or Dilatation of each Ring thus made by the variation of its Colour was swiftest in the red, and slowest in the violet, and in the intermediate Colours it had intermediate degrees of Celerity. Comparing the quantity of Contraction and Dilatation made by all the degrees of each Colour, I found that it was greatest in the red; less in the
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Oct 01 16:16:21 UTC 2018 - 553.9K bytes - Viewed (0) -
tensorflow/cc/gradients/linalg_grad.cc
absl::string_view output_subs) {
  // Claim: For the einsum operation z = einsum("{eq_x},{eq_y}->{eq_z}", x, y),
  // where the equation involves only Tensor contractions, generalized traces
  // and transposes, the input gradients are given by the vector-jacobian
  // products (VJPs):
  //
  //   grad_wrt_x = einsum("{eq_y},{eq_z}->{eq_x}", y, grad_wrt_z)
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Mar 07 23:11:54 UTC 2022 - 20.4K bytes - Viewed (0) -
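The VJP claim above can be checked numerically for a simple contraction, using NumPy in place of the TF gradient machinery (subscripts eq_x = "ik", eq_y = "kj", eq_z = "ij", i.e. a plain matmul; the arrays are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))   # eq_x = "ik"
y = rng.standard_normal((3, 4))   # eq_y = "kj"
g = rng.standard_normal((2, 4))   # grad_wrt_z, eq_z = "ij"

# Claimed VJP: reuse the equation with x's subscripts moved to the output.
grad_x = np.einsum("kj,ij->ik", y, g)

# Reference: for z = x @ y and L = sum(g * z), dL/dx = g @ y.T.
assert np.allclose(grad_x, g @ y.T)
```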
RELEASE.md
  `batch_dims` case.
* Removing of dtype in the constructor of initializers and partition_info in
  call.
* Add `tf.math.nextafter` op.
* Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically
  dispatches the best kernel implementation based on CPU vector architecture.
  To disable them, build with
  `--define=tensorflow_mkldnn_contraction_kernel=0`.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)
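The `--define` flag from the release note above is passed at build time; a possible invocation might look like the following (the build target is an assumption, not taken from the release note):

```shell
# Build TensorFlow with the MKL-DNN contraction kernels disabled.
# The flag is from the release note; the target is a typical example.
bazel build --define=tensorflow_mkldnn_contraction_kernel=0 \
    //tensorflow/tools/pip_package:build_pip_package
```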