Results 11 - 14 of 14 for Motivation (0.21 sec)

  1. RELEASE.md

    *   Add `UnifiedGRU` as the new GRU implementation for TF 2.0. Change the default
        recurrent activation function for GRU from `hard_sigmoid` to `sigmoid`, and
        `reset_after` to True in 2.0. Historically, the recurrent activation was
        `hard_sigmoid` since it is faster than `sigmoid`. With the new unified backend
        between CPU and GPU mode, since the CuDNN kernel uses sigmoid, we change ...
    - Last Modified: Tue Jun 11 23:24:08 UTC 2024
    - 730.3K bytes
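    The new defaults are visible directly from the Keras API. A minimal sketch
    (assuming TensorFlow 2.x; the layer and argument names are the public
    `tf.keras` ones) contrasting the new defaults with the TF 1.x-era
    configuration:

        import tensorflow as tf

        # TF 2.0 defaults: recurrent_activation="sigmoid" and reset_after=True,
        # which allow the fused CuDNN kernel to be used on GPU.
        gru_v2 = tf.keras.layers.GRU(64)

        # The historical TF 1.x behavior must now be requested explicitly;
        # this configuration cannot use the CuDNN kernel.
        gru_v1_compat = tf.keras.layers.GRU(
            64,
            recurrent_activation="hard_sigmoid",
            reset_after=False,
        )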
  2. tensorflow/compiler/mlir/lite/ir/tfl_ops.td

        QuantizableResult,
        PredOpTrait<"input and output must have same element type",
          TFL_TCresVTEtIsSameAsOp<0, 0>>]> {
      let summary = "Hardswish activation function.";
      let description = [{
        Computes hard-swish activation function
          f(x) -> (x * relu6(x+3))/6
        element-wise.
      }];
    
      let arguments = (ins TFL_TensorOf<[F32, QUI8, QI8]>:$input);
    
    - Last Modified: Thu Jun 06 19:09:08 UTC 2024
    - 186K bytes
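    The formula is easy to check numerically. A minimal sketch (plain
    TensorFlow ops, an illustration rather than the TFLite kernel itself) of
    the element-wise computation:

        import tensorflow as tf

        def hard_swish(x):
            # f(x) = x * relu6(x + 3) / 6, applied element-wise.
            return x * tf.nn.relu6(x + 3.0) / 6.0

        x = tf.constant([-4.0, -1.0, 0.0, 1.0, 4.0])
        print(hard_swish(x).numpy())  # [-0. -0.3333 0. 0.6667 4.]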
  3. tensorflow/compiler/mlir/lite/transforms/passes.td

          Option<"quantize_signed_", "quantize-signed", "bool", "false",
                 "signed inference type. Only used in tests">,
          Option<"activation_number_of_bits_", "activation-number-of-bits", "int", "8",
                 "number of bits for inference type. Only used in tests">,
          Option<"post_training_quantize_", "post-training-quantize", "bool", "false",
    - Last Modified: Wed Apr 24 20:30:06 UTC 2024
    - 22.6K bytes
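    Each `Option` declaration becomes a flag on the generated pass, so the
    options above can be set from the `tf-opt` command line. A sketch of an
    invocation (mirroring the RUN line in the next result; `input.mlir` is a
    placeholder file name):

        tf-opt input.mlir \
          -tfl-prepare-quantize="quantize-signed=true post-training-quantize=true activation-number-of-bits=16"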
  4. tensorflow/compiler/mlir/lite/tests/prepare-quantize-post-training-16bits.mlir

    // RUN: tf-opt %s -tfl-prepare-quantize="quantize-signed=true post-training-quantize=true activation-number-of-bits=16" -cse | FileCheck %s
    
    // CHECK-LABEL: QuantizeUnidirectionalLstmFullPerTensor
    func.func @QuantizeUnidirectionalLstmFullPerTensor(%arg0: tensor<1x2x3xf32>) -> (tensor<1x2x3xf32>) {
      %input = "quantfork.stats"(%arg0) {layerStats = dense<[0.0, 1.0]> : tensor<2xf32>} : (tensor<1x2x3xf32>) -> tensor<1x2x3xf32>
    - Last Modified: Thu May 02 09:41:17 UTC 2024
    - 26.1K bytes
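    The `layerStats` range recorded by `quantfork.stats` is the calibration
    input for choosing quantization parameters. A minimal sketch (plain
    Python; a symmetric signed-16-bit scheme is assumed for illustration, not
    the pass's exact logic) of deriving a scale from the [0.0, 1.0] range
    above:

        def symmetric_int16_scale(min_val: float, max_val: float) -> float:
            # Signed 16-bit symmetric quantization maps the largest absolute
            # calibrated value onto the largest representable integer, 32767.
            max_abs = max(abs(min_val), abs(max_val))
            return max_abs / 32767.0

        print(symmetric_int16_scale(0.0, 1.0))  # ~3.0519e-05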