Results 1 - 10 of 80 for Convolution (0.33 sec)

  1. tensorflow/compiler/mlir/tensorflow/g3doc/space_to_depth.md

    speedup and reduce memory usage in the first convolution.
    
    The first convolution in many image models, including ResNet and ResNet-like
    models, is a (kernel=7, stride=2) 2D convolution. The input of this convolution
    is images, which usually have RGB channels. The input of this first convolution
    is of shape [batch_size, height, width, 3] and the kernel shape is [kernel_size,
    kernel_size, 3, out_channel]. Space to depth is to transform this first
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Sat Oct 24 02:51:43 UTC 2020
    - 8.3K bytes
    - Viewed (0)
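    As a rough illustration of the rearrangement this doc describes, here is a minimal
    NumPy sketch of space to depth on the [batch_size, height, width, 3] input, assuming
    a block size of 2 (the function and values are illustrative, not the TensorFlow pass):

      import numpy as np

      def space_to_depth(x, block):
          # [batch, height, width, channels] -> [batch, height/block, width/block, channels*block*block]
          b, h, w, c = x.shape
          x = x.reshape(b, h // block, block, w // block, block, c)
          x = x.transpose(0, 1, 3, 2, 4, 5)
          return x.reshape(b, h // block, w // block, c * block * block)

      x = np.random.rand(8, 224, 224, 3)   # typical first-convolution input
      print(space_to_depth(x, 2).shape)    # (8, 112, 112, 12)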
  2. tensorflow/compiler/mlir/quantization/stablehlo/tests/passes/nchw_convolution_to_nhwc.mlir

      %2 = stablehlo.convolution(%arg0, %0) dim_numbers = [b, 0, 1, f]x[o, i, 0, 1]->[b, f, 0, 1], window = {pad = [[1, 1], [1, 1]]} {batch_group_count = 1 : i64, feature_group_count = 1 : i64} : (tensor<1x4x4x8xf32>, tensor<8x8x3x3xf32>) -> tensor<1x8x4x4xf32>
      return %2 : tensor<1x8x4x4xf32>
    }
    
    // CHECK-NOT: stablehlo.transpose
    // CHECK: %[[CONV:.+]] = stablehlo.convolution
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Mon Mar 25 23:00:47 UTC 2024
    - 5.5K bytes
    - Viewed (0)
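    The test above rewrites an NCHW `stablehlo.convolution` into NHWC form. The layout
    changes boil down to the standard transposes below, shown generically in NumPy rather
    than mirroring the exact dim_numbers of the test:

      import numpy as np

      x_nchw = np.random.rand(1, 8, 4, 4)      # [batch, channels, height, width]
      x_nhwc = x_nchw.transpose(0, 2, 3, 1)    # NCHW -> NHWC, shape (1, 4, 4, 8)

      w_oihw = np.random.rand(8, 8, 3, 3)      # [out, in, kh, kw]
      w_hwio = w_oihw.transpose(2, 3, 1, 0)    # OIHW -> HWIO, shape (3, 3, 8, 8)

      y_nhwc = np.random.rand(1, 4, 4, 8)      # placeholder for the NHWC convolution result
      y_nchw = y_nhwc.transpose(0, 3, 1, 2)    # NHWC -> NCHW, back to the original layout
      print(x_nhwc.shape, w_hwio.shape, y_nchw.shape)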
  3. tensorflow/compiler/mlir/quantization/stablehlo/tests/pipelines/process_nchw_tensor.mlir

    // CHECK: return %[[TRANSPOSE_1]]
    
    // -----
    
    // Tests that an `add(convolution(%activation, %weight), %bias)` pattern with an
    // activation tensor in NCHW format and a non-constant bias is converted to an NHWC
    // convolution, but without the deferred transpose for `stablehlo.add`.
    // Transpose ops are inserted for the activation and the output of
    // `stablehlo.convolution`. The weight constant is transposed.
    
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu Apr 18 20:32:46 UTC 2024
    - 12.6K bytes
    - Viewed (0)
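    The "deferred transpose" mentioned above is possible because an elementwise add
    commutes with transposition, so the output transpose of the convolution can be pushed
    past `stablehlo.add` whenever the bias can be (pre-)transposed as well. A small NumPy
    check of that identity (not code from the pass):

      import numpy as np

      conv_out_nchw = np.random.rand(1, 8, 4, 4)
      bias_nchw = np.random.rand(1, 8, 4, 4)
      perm = (0, 2, 3, 1)                      # NCHW -> NHWC

      lhs = np.transpose(conv_out_nchw + bias_nchw, perm)                      # add, then transpose
      rhs = np.transpose(conv_out_nchw, perm) + np.transpose(bias_nchw, perm)  # transpose, then add
      print(np.allclose(lhs, rhs))             # True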
  4. tensorflow/compiler/mlir/quantization/stablehlo/tests/bridge/optimize.mlir

      ) -> tensor<?x2x2x1xi32> {
      // CHECK-DAG: %[[conv:.*]] = mhlo.convolution
      // CHECK-DAG: %[[combined:.*]] = chlo.broadcast_add %[[zp_offset:.*]], %[[bias:.*]]
      // CHECK-DAG: %[[result:.*]] = chlo.broadcast_add %[[conv]], %[[combined]]
      // CHECK: return %[[result]]
      %0 = mhlo.convolution(%lhs, %rhs)
          dim_numbers = [b, 0, 1, f]x[0, 1, i, o]->[b, 0, 1, f],
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Sat Feb 24 02:26:47 UTC 2024
    - 10.7K bytes
    - Viewed (0)
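    The CHECK-DAG lines above expect the zero-point offset and the bias to be added to
    each other first, and only that combined term added to the convolution result. The
    rewrite is plain associativity of addition, sketched in NumPy with made-up values:

      import numpy as np

      conv = np.random.randint(-100, 100, size=(1, 2, 2, 1)).astype(np.int32)
      zp_offset = np.random.randint(-8, 8, size=(1, 2, 2, 1)).astype(np.int32)
      bias = np.random.randint(-8, 8, size=(1,)).astype(np.int32)

      before = (conv + zp_offset) + bias       # two adds applied to the convolution result
      combined = zp_offset + bias              # constant-like terms folded once
      after = conv + combined
      print(np.array_equal(before, after))     # True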
  5. tensorflow/compiler/mlir/tensorflow/transforms/tpu_space_to_depth_pass.cc

      // Iterate through the block argument and its convolution users. The space to
      // depth transform will be applied only if all of the conditions below are
      // satisfied:
      //  1. All users of the block argument lead to convolutions;
      //  2. the block_size of the space to depth transform is the same for all of
      //     these convolutions;
      //  3. the block_size of the space to depth transform for these convolutions
      //     is larger than 1.
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu Apr 25 16:01:03 UTC 2024
    - 29.3K bytes
    - Viewed (0)
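    A minimal Python sketch of the three conditions listed in the comment above, assuming
    the per-user block sizes have already been computed (the helper name and its input are
    hypothetical, not the pass's API):

      def should_apply_space_to_depth(user_block_sizes):
          # One entry per user of the block argument; None marks a user that does
          # not lead to a convolution.
          if not user_block_sizes or any(bs is None for bs in user_block_sizes):
              return False                     # 1. every user must lead to a convolution
          if len(set(user_block_sizes)) != 1:
              return False                     # 2. all block sizes must be the same
          return user_block_sizes[0] > 1       # 3. the shared block size must be larger than 1

      print(should_apply_space_to_depth([2, 2, 2]))   # True
      print(should_apply_space_to_depth([2, 3]))      # False
      print(should_apply_space_to_depth([1, 1]))      # False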
  6. tensorflow/compiler/mlir/lite/stablehlo/tests/fuse_mhlo_convolution.mlir

    // RUN: odml-to-stablehlo-opt %s -fuse-mhlo-convolution-pass -cse | FileCheck %s
    
    // CHECK-LABEL: @fuseMulAndConv2D
    // CHECK-SAME: %[[INPUT:[^:[:space:]]+]]
    func.func @fuseMulAndConv2D(%input: tensor<1x256x256x3xf32>) -> (tensor<1x256x256x2xf32>) {
      // CHECK-DAG: %[[FILTER:.+]] = mhlo.constant dense<{{\[\[\[\[}}1.000000e+00, 2.000000e+00], [3.000000e+00, 4.000000e+00], [5.000000e+00, 6.000000e+00]]]]> : tensor<1x1x3x2xf32>
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Sat Apr 06 15:32:52 UTC 2024
    - 4.4K bytes
    - Viewed (0)
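    The fusion tested above relies on the identity that scaling the output channels of a
    convolution is the same as scaling the filter along its output-channel axis. A NumPy
    sketch using a 1x1 convolution written as a matmul (an illustration, not the pass's
    rewrite):

      import numpy as np

      x = np.random.rand(16, 3)                # 16 pixels, 3 input channels (1x1 conv as matmul)
      w = np.random.rand(3, 2)                 # 3 input channels -> 2 output channels
      scale = np.random.rand(2)                # per-output-channel multiplier

      fused = x @ (w * scale)                  # multiplier folded into the filter constant
      unfused = (x @ w) * scale                # multiplier applied after the convolution
      print(np.allclose(fused, unfused))       # True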
  7. tensorflow/compiler/mlir/quantization/stablehlo/tests/passes/lift_quantizable_spots_as_functions.mlir

      %3 = stablehlo.constant dense<6.000000e+00> : tensor<f32>
      %4 = stablehlo.convolution(%arg0, %0) dim_numbers = [b, 0, 1, f]x[0, 1, i, o]->[b, 0, 1, f], window = {pad = [[1, 1], [1, 1]]} {batch_group_count = 1 : i64, feature_group_count = 1 : i64} : (tensor<?x28x28x1xf32>, tensor<3x3x1x16xf32>) -> tensor<?x28x28x16xf32>
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri May 10 04:07:09 UTC 2024
    - 49.8K bytes
    - Viewed (0)
  8. tensorflow/compiler/mlir/lite/stablehlo/tests/legalize-tfl-stablehlo-conv.mlir

    module {
      func.func @main(%arg0: tensor<8x8x1x207xf32>, %arg1: tensor<3x3x16x207xf32>) -> tensor<16x8x8x1xf32> {
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jan 24 06:08:43 UTC 2024
    - 1.6K bytes
    - Viewed (0)
  9. tensorflow/compiler/mlir/lite/stablehlo/transforms/uniform_quantized_stablehlo_to_tfl_pass.cc

        rewriter.replaceAllUsesExcept(rhs, dq.getOutput(), dq);
      }
    };
    
    // Splits a hybrid quantized `stablehlo.convolution` into a `tfl.dequantize` op and
    // a float `stablehlo.convolution` op. The weight tensor is transposed to match the
    // filter tensor format of the TFLite convolution.
    // Legalization of the float `stablehlo.convolution` op relies on existing passes
    // for the StableHLO -> MHLO -> TF -> TFL conversion.
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Mon Apr 22 09:00:19 UTC 2024
    - 99.8K bytes
    - Viewed (0)
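    The split described in the comment above amounts to dequantizing the weight and handing
    a float weight to a float convolution, with the weight transposed into the TFLite filter
    layout. A rough NumPy sketch, assuming an HWIO-ordered int8 weight and illustrative
    per-tensor quantization parameters:

      import numpy as np

      w_q = np.random.randint(-128, 128, size=(3, 3, 1, 16)).astype(np.int8)   # HWIO int8 weight
      scale, zero_point = 6.0e-3, 0            # illustrative quantization parameters

      # tfl.dequantize equivalent: recover a float weight before the float convolution runs.
      w_float = scale * (w_q.astype(np.float32) - zero_point)
      # Transpose into the TFLite conv filter layout [out, kh, kw, in].
      w_tfl = w_float.transpose(3, 0, 1, 2)
      print(w_tfl.dtype, w_tfl.shape)          # float32 (16, 3, 3, 1)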
  10. tensorflow/compiler/mlir/quantization/stablehlo/tests/passes/quantize/quantize_weight_only.mlir

    // CHECK-SAME: (tensor<1x2xf32>, tensor<2x3x!quant.uniform<i8:f32, 6.000000e-03>>) -> tensor<1x3xf32>
    // CHECK: return %[[DOT]]
    
    // -----
    
    // Tests that a hybrid quantized convolution is produced when the q/dq pair exists
    // only for the weight.
    
    module attributes {tf_saved_model.semantics} {
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue May 14 17:10:32 UTC 2024
    - 4.8K bytes
    - Viewed (0)
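    The weight-only pattern tested above attaches a quantize/dequantize pair to the weight
    while the activation stays float, which is what makes the resulting convolution hybrid.
    A NumPy sketch of such a q/dq round trip with illustrative parameters:

      import numpy as np

      w = np.random.uniform(-0.7, 0.7, size=(3, 3, 1, 16)).astype(np.float32)
      scale, zero_point = 6.0e-3, 0            # illustrative i8 quantization parameters

      w_q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)   # quantize
      w_dq = scale * (w_q.astype(np.float32) - zero_point)                         # dequantize
      print(float(np.max(np.abs(w - w_dq))) <= scale / 2 + 1e-6)                   # True: error within half a step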