Results 51 - 58 of 58 for mat_mul (0.38 sec)
tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.td
```tablegen
    The pass also works across control flow and functional calls.
  }];
}

def UnrollBatchMatMulPass : Pass<"tf-unroll-batch-matmul", "mlir::func::FuncOp"> {
  let summary = "Unroll TF BatchMatMul op into Reshape, Slice, MatMul, Pack ops.";
  let constructor = "TF::CreateUnrollBatchMatMulPassPass()";
}

def ClusterFormationPass : Pass<"tf-device-cluster-formation", "mlir::ModuleOp"> {
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 99.6K bytes - Viewed (0) -
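The unrolling the pass summary describes (BatchMatMul decomposed into Reshape, Slice, per-batch MatMul, and Pack) can be sketched in NumPy. This is an illustrative reimplementation under assumed semantics, not the pass itself; the function name `unrolled_batch_matmul` is hypothetical.

```python
import numpy as np

def unrolled_batch_matmul(lhs, rhs):
    """Hypothetical sketch of tf-unroll-batch-matmul: reshape to a flat
    batch, slice out each 2-D operand pair, matmul each pair, pack back."""
    batch = lhs.shape[:-2]
    m, k = lhs.shape[-2:]
    k2, n = rhs.shape[-2:]
    assert k == k2, "contracting dimensions must match"
    flat_lhs = lhs.reshape(-1, m, k)         # Reshape
    flat_rhs = rhs.reshape(-1, k2, n)
    slices = [flat_lhs[i] @ flat_rhs[i]      # Slice + plain MatMul per batch element
              for i in range(flat_lhs.shape[0])]
    packed = np.stack(slices)                # Pack
    return packed.reshape(*batch, m, n)

lhs = np.arange(12, dtype=np.float32).reshape(2, 2, 3)
rhs = np.arange(12, dtype=np.float32).reshape(2, 3, 2)
assert np.array_equal(unrolled_batch_matmul(lhs, rhs), lhs @ rhs)
```

The final assertion checks the unrolled form against NumPy's native batched matmul.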
tensorflow/compiler/mlir/lite/stablehlo/transforms/uniform_quantized_stablehlo_to_tfl_pass.cc
```cpp
    UniformQuantizedStableHloToTflPass> {
 private:
  void runOnOperation() override;
};

// TODO: b/323645515 - Refactor reference functions.

// Bias scales for matmul-like ops should be input scale * filter scale. Here it
// is assumed that the input is per-tensor quantized and filter is per-channel
// quantized.
SmallVector<double> GetBiasScales(const double input_scale,
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Apr 22 09:00:19 UTC 2024 - 99.8K bytes - Viewed (0) -
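The bias-scale rule stated in the comment (per-tensor input scale times each per-channel filter scale) is simple enough to sketch. The function name `get_bias_scales` below is a hypothetical Python stand-in for the C++ `GetBiasScales`, written only to illustrate the arithmetic.

```python
# Hedged sketch of the rule above: with a per-tensor quantized input and a
# per-channel quantized filter, each bias channel's scale is
# input_scale * filter_scale[channel].
def get_bias_scales(input_scale, filter_scales):
    return [input_scale * fs for fs in filter_scales]

print(get_bias_scales(0.5, [0.1, 0.2, 0.4]))  # one bias scale per filter channel
```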
tensorflow/compiler/mlir/lite/transforms/optimize.cc
```cpp
if (fc_op.getFusedActivationFunction() != "NONE") return failure();

// Only fuse multiplier if all dimensions other than the depth dimension
// are equal to 1 since otherwise
// `matmul(x, filter) * cst != matmul(x, filter * cst)`
// even if `filter` and `cst` are broadcastable.
auto shape = cst.getType().getShape();
if (!IsDimensionsDegenerateExceptLastOne(shape)) return failure();
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 30 00:40:15 UTC 2024 - 102.3K bytes - Viewed (0) -
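The identity the pattern relies on can be verified numerically: a constant that varies only along the last (depth) dimension scales output columns, which is equivalent to scaling filter columns, so it can be folded into the filter. This is a NumPy sketch of the math, not the TFLite pattern itself.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
filt = rng.standard_normal((3, 5))

# cst is degenerate in every dimension except the last (depth) one,
# so multiplying the matmul output equals multiplying the filter.
cst = np.arange(1.0, 6.0)  # shape (5,)
assert np.allclose((x @ filt) * cst, x @ (filt * cst))
```

If `cst` instead varied along a non-depth dimension of the output, there would be no filter-side constant producing the same result, which is why the pattern bails out via `IsDimensionsDegenerateExceptLastOne`.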
tensorflow/compiler/mlir/tf2xla/tests/legalize-tf.mlir
```mlir
  // CHECK: "mhlo.dot"(%[[UPDATED_A]], %[[UPDATED_B]])
  %0 = "tf.MatMul"(%a, %b) {transpose_a = true, transpose_b = true} : (tensor<7x5xf32>, tensor<11x7xf32>) -> tensor<5x11xf32>
  func.return %0 : tensor<5x11xf32>
}

// Verify that MatMul with ranked inputs is lowered to HLO.
// CHECK-LABEL: matmul_ranked
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon May 06 18:46:23 UTC 2024 - 335.5K bytes - Viewed (0) -
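The shape arithmetic in the test above can be checked directly: with both transpose attributes set, tf.MatMul computes transpose(a) @ transpose(b), so a 7x5 and an 11x7 operand produce a 5x11 result (matching the lowering, which transposes both operands before `mhlo.dot`). A NumPy sketch:

```python
import numpy as np

a = np.ones((7, 5), dtype=np.float32)   # tensor<7x5xf32>
b = np.ones((11, 7), dtype=np.float32)  # tensor<11x7xf32>

# transpose_a = true, transpose_b = true: compute a^T @ b^T
result = a.T @ b.T
assert result.shape == (5, 11)          # tensor<5x11xf32>
```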
tensorflow/compiler/mlir/tensorflow/tests/canonicalize.mlir
```mlir
  func.return %0 : tensor<2x3x7xf32>
}

// CHECK-LABEL: testBatchMatMulToMatMul
func.func @testBatchMatMulToMatMul(%arg0: tensor<2x3xf32>, %arg1: tensor<3x2xf32>) -> tensor<2x2xf32> {
  // CHECK: %0 = "tf.MatMul"(%arg0, %arg1) <{grad_a = false, grad_b = false, transpose_a = false, transpose_b = false}> {device = "/job:localhost/replica:0/task:0/device:GPU:0"} : (tensor<2x3xf32>, tensor<3x2xf32>) -> tensor<2x2xf32>
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 09 22:07:10 UTC 2024 - 132.1K bytes - Viewed (0) -
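The canonicalization the test names is justified by a simple fact: a BatchMatMul on rank-2 operands has no batch dimensions, so it computes exactly an ordinary MatMul. A NumPy sketch of that equivalence:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(6.0).reshape(3, 2)

batch_result = np.matmul(a, b)  # batched matmul degenerates on rank-2 inputs
plain_result = a @ b            # plain 2-D matmul
assert np.array_equal(batch_result, plain_result)
assert batch_result.shape == (2, 2)
```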
tensorflow/compiler/mlir/tf2xla/transforms/legalize_tf.cc
```cpp
// - rhs: [RHSBATCHDIMS..., RHSROWS, RHSCOLS]
// - result: [broadcast(LHSBATCHDIMS, RHSBATCHDIMS)..., LHSROWS, RHSCOLS]
//
// To perform the matmul, we need to first broadcast lhs and rhs to a common
// set of leading dimensions before doing the actual matmul. That's what the
// code below does. In particular, we populate out_lhs and out_rhs to have
// dimension structure:
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 20:00:43 UTC 2024 - 291.8K bytes - Viewed (0) -
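The broadcast step the comment describes can be sketched with NumPy, whose batch-dimension broadcasting follows the same rule: batch dims of lhs and rhs are broadcast to a common shape before the 2-D matmuls run. This is an illustration of the shape logic, not the legalization code.

```python
import numpy as np

lhs = np.ones((1, 4, 2, 3))  # LHSBATCHDIMS = (1, 4), LHSROWS = 2, contracting = 3
rhs = np.ones((5, 1, 3, 2))  # RHSBATCHDIMS = (5, 1), contracting = 3, RHSCOLS = 2

# Broadcast both operands to common leading dims, mirroring out_lhs/out_rhs.
batch = np.broadcast_shapes(lhs.shape[:-2], rhs.shape[:-2])  # (5, 4)
out_lhs = np.broadcast_to(lhs, batch + lhs.shape[-2:])
out_rhs = np.broadcast_to(rhs, batch + rhs.shape[-2:])

result = out_lhs @ out_rhs
assert result.shape == (5, 4, 2, 2)  # broadcast(batch)... + LHSROWS, RHSCOLS
```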
RELEASE.md
* `tf.config.experimental.enable_tensor_float_32_execution`
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
```tablegen
}

def TF__FusedMatMulOp : TF_Op<"_FusedMatMul", [Pure, TF_SameOperandsAndResultElementTypeResolveRef]> {
  let summary = [{
Performs a MatMul followed by a specified series of operations.
  }];

  let description = [{
The inputs to the MatMul are specified by `a` and `b`. The series of operations
that follows is specified by the `fused_ops` attribute, which is a list of TF op
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes - Viewed (0)
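The op description above (a MatMul on `a` and `b`, then the ops named in `fused_ops` applied in order) can be sketched as follows. The function `fused_matmul` and its dispatch table are a hypothetical illustration of the semantics, not TensorFlow's implementation; `["BiasAdd", "Relu"]` mirrors a common fusion.

```python
import numpy as np

def fused_matmul(a, b, bias, fused_ops):
    """Hypothetical sketch: run a matmul, then apply fused ops in order."""
    out = a @ b
    for op in fused_ops:
        if op == "BiasAdd":
            out = out + bias
        elif op == "Relu":
            out = np.maximum(out, 0.0)
        else:
            raise ValueError(f"unsupported fused op: {op}")
    return out

a = np.array([[1.0, -2.0]])
b = np.array([[3.0], [4.0]])
out = fused_matmul(a, b, bias=np.array([1.0]), fused_ops=["BiasAdd", "Relu"])
# a @ b = -5.0, + bias = -4.0, Relu clamps to 0.0
assert np.array_equal(out, np.array([[0.0]]))
```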