Results 61 - 70 of 76 for mat_mul (0.42 sec)
tensorflow/compiler/mlir/tf2xla/transforms/legalize_tf_patterns.td
foreach src = [TF_PreventGradientOp, TF_CheckNumericsOp] in
  def : Pat<(src $op, $msg), (replaceWithValue $op)>;

//===----------------------------------------------------------------------===//
// MatMul op patterns.
//===----------------------------------------------------------------------===//

def GetPrecisionConfig : NativeCodeCall<
  "GetPrecisionConfig(&$_builder)">;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon May 06 18:46:23 UTC 2024 - 34.8K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/stablehlo/transforms/optimize.cc
// %1 = mhlo.reshape %param : (1xCxZ) -> CxZ
// mhlo.dot_general %input, %1 {batch_dims = []}
// To:
// mhlo.dot_general %input, %param {batch_dims = [0]}
//
// This usage will mostly come from tf-unroll-batch-matmul, so it's fine to only
// handle the case where the batching dim is the leftmost dim.
LogicalResult ConvertReshapeDotRhsToBatchedDot(mhlo::DotGeneralOp dot,
                                               PatternRewriter &rewriter) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 26.9K bytes - Viewed (0) -
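The comment in the snippet above describes folding a reshape of a (1, C, Z) parameter that feeds `mhlo.dot_general` into a single batched dot with the leading dim as the batch dim. A minimal NumPy sketch of why the two forms agree (the names and shapes here are illustrative, not from the pass):

```python
import numpy as np

B, C, Z, N = 1, 4, 5, 3
inp = np.random.rand(B, N, C)      # batched input
param = np.random.rand(B, C, Z)    # parameter with a size-1 leading batch dim

# Before: drop the size-1 leading dim via reshape, then a plain dot.
before = inp[0] @ param.reshape(C, Z)

# After: one batched matmul (batch dim 'b'), no reshape needed.
after = np.einsum('bnc,bcz->bnz', inp, param)

assert np.allclose(before, after[0])
```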
tensorflow/compiler/mlir/tensorflow/transforms/passes.h
// Guarantee that all FuncOps have a single use.
std::unique_ptr<OperationPass<ModuleOp>> CreateGuaranteeAllFuncsOneUsePass();

// Optional pass which will unroll BatchMatMul and use only MatMul.
std::unique_ptr<OperationPass<func::FuncOp>> CreateUnrollBatchMatMulPassPass();

// Optional pass which will map TF BatchMatMul to TF Einsum.
std::unique_ptr<OperationPass<func::FuncOp>> CreateBatchMatMulToEinsumPass();
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 31.8K bytes - Viewed (0) -
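The unroll pass declared above lowers a batched matmul to per-batch MatMuls plus a final Pack. A rough NumPy sketch of the idea (not the pass itself; the function name is made up):

```python
import numpy as np

def unrolled_batch_matmul(a, b):
    # a: (B, M, K), b: (B, K, N) -- slice out each batch, matmul, then
    # stack the results back together (the "Pack" step).
    return np.stack([a[i] @ b[i] for i in range(a.shape[0])])

a = np.random.rand(3, 2, 4)
b = np.random.rand(3, 4, 5)
assert np.allclose(unrolled_batch_matmul(a, b), a @ b)  # matches batched matmul
```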
tensorflow/compiler/jit/xla_launch_util.cc
//
// 2. Old fashion Tensor with raw device memory pointer. This case occurs
//    when the producer is a non-XLA TF GPU kernel or function (e.g.
//    tf.matmul).
//
// 3. AsyncValueTensor, containing a PjRtBuffer. This is the legacy mode
//    and certain device type (e.g. TPU) still uses this path.
AsyncValueTensor* av_tensor = AsyncValueTensor::FromTensor(tensor);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 16 00:36:08 UTC 2024 - 40.4K bytes - Viewed (0) -
tensorflow/compiler/mlir/quantization/stablehlo/passes/bridge/convert_tf_quant_to_mhlo_int_test.cc
    quantization_axis = -1 : i64,
    quantization_min_val = -128 : i64,
    quantization_max_val = 127 : i64
  } : (tensor<9x10x!tf_type.qint8>, tensor<f32>, tensor<i32>) -> tensor<9x10xf32>
  %0 = "tf.MatMul"(%input, %filter_new) { } : (tensor<8x9xf32>, tensor<9x10xf32>) -> tensor<8x10xf32>
  return %0 : tensor<8x10xf32>
})mlir";

constexpr absl::string_view kProgram = R"mlir(
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Apr 03 01:03:21 UTC 2024 - 35.8K bytes - Viewed (0) -
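The test IR above dequantizes a qint8 filter (per-tensor scale and zero point) to f32 before running a plain float `tf.MatMul`. A NumPy sketch of that affine dequantization, with made-up scale and zero-point values:

```python
import numpy as np

scale, zero_point = 0.05, 3          # illustrative per-tensor parameters
q_filter = np.random.randint(-128, 128, size=(9, 10), dtype=np.int8)

# Affine dequantization: real = scale * (quantized - zero_point).
filter_new = scale * (q_filter.astype(np.float32) - zero_point)

x = np.random.rand(8, 9).astype(np.float32)
y = x @ filter_new                   # corresponds to tf.MatMul(%input, %filter_new)
assert y.shape == (8, 10)
```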
tensorflow/compiler/mlir/lite/tests/optimize.mlir
func.func @FuseMulWithFullyConnectedWithBias(%arg: tensor<2x512xf32>) -> tensor<2x1024xf32> {
  %cst_mul = arith.constant dense<2.0> : tensor<512xf32>
  %cst_weights = arith.constant dense<3.0> : tensor<1024x512xf32>
  %cst_bias = arith.constant dense<5.0> : tensor<1024xf32>
  %0 = "tfl.mul"(%arg, %cst_mul) {fused_activation_function = "NONE"} : (tensor<2x512xf32>, tensor<512xf32>) -> tensor<2x512xf32>
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 16 20:31:41 UTC 2024 - 284.1K bytes - Viewed (0) -
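The fusion this test exercises folds an elementwise multiplier into the fully-connected weights: since the FC computes `x @ W.T + b` with `W` in (out, in) layout, `(x * m) @ W.T + b == x @ (W * m).T + b`. A NumPy sketch using the same shapes and constants as the test:

```python
import numpy as np

x = np.random.rand(2, 512)
m = np.full(512, 2.0)            # cst_mul
W = np.full((1024, 512), 3.0)    # cst_weights, (out, in) layout
b = np.full(1024, 5.0)           # cst_bias

unfused = (x * m) @ W.T + b      # tfl.mul followed by fully_connected
fused = x @ (W * m).T + b        # multiplier folded into the weights

assert np.allclose(unfused, fused)
```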
tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.td
  ```
  The pass also works across control flow and functional calls.
  }];
}

def UnrollBatchMatMulPass : Pass<"tf-unroll-batch-matmul", "mlir::func::FuncOp"> {
  let summary = "Unroll TF BatchMatMul op into Reshape, Slice, MatMul, Pack ops.";
  let constructor = "TF::CreateUnrollBatchMatMulPassPass()";
}

def ClusterFormationPass : Pass<"tf-device-cluster-formation", "mlir::ModuleOp"> {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 99.6K bytes - Viewed (0) -
src/cmd/vendor/golang.org/x/telemetry/package-lock.json
        "node": ">=8"
      },
      "funding": {
        "url": "https://github.com/sponsors/sindresorhus"
      }
    },
    "node_modules/mathml-tag-names": {
      "version": "2.1.3",
      "resolved": "https://registry.npmjs.org/mathml-tag-names/-/mathml-tag-names-2.1.3.tgz",
      "integrity": "sha512-APMBEanjybaPzUrfqU0IMU5I0AswKMH7k8OTLs0vvV4KZpExkTkY87nR/zpbuTPj+gARop7aGUbl11pnDfW6xg==",
      "dev": true,
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 04 17:57:25 UTC 2024 - 156K bytes - Viewed (0) -
tensorflow/compiler/mlir/lite/stablehlo/transforms/uniform_quantized_stablehlo_to_tfl_pass.cc
          UniformQuantizedStableHloToTflPass> {
 private:
  void runOnOperation() override;
};

// TODO: b/323645515 - Refactor reference functions.
// Bias scales for matmul-like ops should be input scale * filter scale. Here it
// is assumed that the input is per-tensor quantized and filter is per-channel
// quantized.
SmallVector<double> GetBiasScales(const double input_scale,
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Apr 22 09:00:19 UTC 2024 - 99.8K bytes - Viewed (0) -
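The comment above states the rule: for matmul-like ops with a per-tensor input scale and per-channel filter scales, each bias channel's scale is the product of the two. A tiny sketch of that rule with illustrative numbers:

```python
# Per-tensor input scale, per-channel filter scales (values are made up).
input_scale = 0.02
filter_scales = [0.1, 0.25, 0.5]

# Each bias channel's scale is input_scale * that channel's filter scale.
bias_scales = [input_scale * s for s in filter_scales]

expected = [0.002, 0.005, 0.01]
assert all(abs(a - b) < 1e-9 for a, b in zip(bias_scales, expected))
```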
tensorflow/compiler/mlir/lite/transforms/optimize.cc
if (fc_op.getFusedActivationFunction() != "NONE") return failure();

// Only fuse multiplier if all dimensions other than the depth dimension
// are equal to 1 since otherwise
//   `matmul(x, filter) * cst != matmul(x, filter * cst)`
// even if `filter` and `cst` are broadcastable.
auto shape = cst.getType().getShape();
if (!IsDimensionsDegenerateExceptLastOne(shape)) return failure();
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Apr 30 00:40:15 UTC 2024 - 102.3K bytes - Viewed (0)
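The condition in the snippet above can be checked concretely: scaling by `cst` commutes with the matmul only when every dimension of `cst` except the last (depth) one is 1, i.e. the scale applies per output column. A deterministic NumPy sketch showing both the safe case and a counterexample:

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
f = np.array([[5.0, 6.0], [7.0, 8.0]])

# Shape (1, 2): degenerate except the last dim -> per-column scale, fusion safe.
col = np.array([[2.0, 3.0]])
assert np.allclose((x @ f) * col, x @ (f * col))

# Shape (2, 1): not degenerate -> fusing into the filter changes the result.
row = np.array([[2.0], [3.0]])
assert not np.allclose((x @ f) * row, x @ (f * row))
```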