Results 1 - 10 of 19 for focusing (0.34 sec)

  1. docs/en/docs/tutorial/sql-databases.md

        In a real-life application you would need to hash the password and never save it in plaintext.
    
        For more details, go back to the Security section in the tutorial.
    
        Here we are focusing only on the tools and mechanics of databases.
    
    !!! tip
    Registered: Mon Jun 17 08:32:26 UTC 2024
    - Last Modified: Thu Apr 18 19:53:19 UTC 2024
    - 29.6K bytes
    - Viewed (0)
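
    The note above only gestures at password hashing. A minimal sketch of what it means in practice, using passlib's bcrypt support (the library the FastAPI security tutorial itself uses; the function names here are illustrative):

        from passlib.context import CryptContext

        # One shared context; bcrypt generates a salt and stretches each hash.
        pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

        def hash_password(plain: str) -> str:
            # Persist only this value, never the plaintext.
            return pwd_context.hash(plain)

        def verify_password(plain: str, hashed: str) -> bool:
            # Verification re-hashes the candidate against the stored salt.
            return pwd_context.verify(plain, hashed)
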
  2. samples/bookinfo/src/productpage/templates/productpage.html

        <div class="absolute right-0 top-0 hidden pr-4 pt-4 sm:block">
          <button id="close-dialog" type="button" class="rounded-md bg-white text-gray-400 hover:text-gray-500 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-offset-2">
            <span class="sr-only">Close</span>
            <svg class="h-6 w-6" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true">
    Registered: Fri Jun 14 15:00:06 UTC 2024
    - Last Modified: Mon Jun 03 19:54:05 UTC 2024
    - 10.9K bytes
    - Viewed (0)
  3. tensorflow/compiler/mlir/lite/tests/optimize.mlir

    // Fusing:  %[[add3:[0-9].*]] = tfl.add %[[add2]], %[[relu]] {fused_activation_function = "RELU6"} : tensor<1xf32>
    // Fusing:  return
    
    // NoFusing-LABEL: FusingaddRelu
    // NoFusing:  %[[add:[0-9].*]] = tfl.add %arg0, %arg1 {fused_activation_function = "NONE"} : tensor<1xf32>
    // NoFusing:  %[[add1:[0-9].*]] = tfl.add %arg0, %[[add]] {fused_activation_function = "RELU"} : tensor<1xf32>
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu May 16 20:31:41 UTC 2024
    - 284.1K bytes
    - Viewed (0)
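
    The FileCheck lines above assert that an add followed by a ReLU collapses into a single tfl.add with fused_activation_function="RELU". A minimal Python sketch of a model that exercises this fusing during conversion (assuming a standard TF 2.x install; the exact expectations live in optimize.mlir itself):

        import tensorflow as tf

        # relu(x + y): the TFLite optimizer folds the activation into the
        # preceding add rather than emitting a separate relu op.
        @tf.function(input_signature=[tf.TensorSpec([1], tf.float32),
                                      tf.TensorSpec([1], tf.float32)])
        def add_relu(x, y):
            return tf.nn.relu(x + y)

        converter = tf.lite.TFLiteConverter.from_concrete_functions(
            [add_relu.get_concrete_function()])
        tflite_model = converter.convert()
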
  4. platforms/documentation/docs/src/docs/css/base.css

    [type=button]{-webkit-appearance:button}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{border-style:none;padding:0}[type=button]:-moz-focusring,[type=reset]:-moz-focusring,[type=submit]:-moz-focusring,button:-moz-focusring{outline:1px dotted ButtonText}fieldset{padding:.35em .75em .625em}legend{box-sizing:border-box;color:inherit;display:table;max-width:100%;padding:0;white-space:normal}progress{display:inline-block;vertical...
    Registered: Wed Jun 12 18:38:38 UTC 2024
    - Last Modified: Sat May 25 05:15:02 UTC 2024
    - 30.5K bytes
    - Viewed (0)
  5. tensorflow/compiler/mlir/tensorflow/transforms/fused_kernel_matcher.cc

    // TODO(b/158266710): Support CPU MKL configurations.
    
    #define GEN_PASS_DEF_FUSEDKERNELMATCHERPASS
    #include "tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.h.inc"
    
    // Optimizes TF computations by fusing subgraphs/nodes onto more efficient
    // implementations to decrease the number of operations needed to perform a
    // computation.
    struct FusedKernelMatcherPass
        : public impl::FusedKernelMatcherPassBase<FusedKernelMatcherPass> {
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu Apr 25 16:01:03 UTC 2024
    - 14.9K bytes
    - Viewed (0)
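
    The comment above states the pass's goal in general terms. A sketch of the kind of subgraph it targets, conv2d -> bias_add -> relu, which can be rewritten as one fused kernel (the precise patterns and supported epilogues are defined in the pass source, not here):

        import tensorflow as tf

        # Three separate ops that a fused-kernel rewrite can collapse into a
        # single contraction-with-epilogue implementation.
        @tf.function
        def conv_bias_relu(x, w, b):
            y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
            return tf.nn.relu(tf.nn.bias_add(y, b))

        x = tf.random.normal([1, 8, 8, 3])    # NHWC input
        w = tf.random.normal([3, 3, 3, 16])   # HWIO kernel
        b = tf.zeros([16])
        y = conv_bias_relu(x, w, b)
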
  6. tensorflow/compiler/mlir/quantization/tensorflow/passes/prepare_lifting.cc

        return "quant-prepare-lifting";
      }
    
      StringRef getDescription() const final {
        // This is a brief description of the pass.
        return "Apply graph optimizations such as fusing and constant folding to "
               "prepare lifting.";
      }
    
      void getDependentDialects(DialectRegistry& registry) const override {
        registry.insert<TF::TensorFlowDialect, arith::ArithDialect>();
      }
    
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri May 17 17:58:54 UTC 2024
    - 13.3K bytes
    - Viewed (0)
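
    Of the two optimizations the description names, constant folding is the easier to illustrate: any op whose operands are all constants is replaced by its precomputed result before lifting runs. A toy Python sketch of the idea (not the pass's actual implementation):

        # Toy expression nodes: ("const", value) or (op, lhs, rhs).
        def fold(node):
            if node[0] == "const":
                return node
            op, lhs, rhs = node
            lhs, rhs = fold(lhs), fold(rhs)
            if lhs[0] == "const" and rhs[0] == "const":
                value = lhs[1] + rhs[1] if op == "add" else lhs[1] * rhs[1]
                return ("const", value)
            return (op, lhs, rhs)

        # add(2, 3) folds to a single constant; the graph shrinks.
        assert fold(("add", ("const", 2), ("const", 3))) == ("const", 5)
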
  7. tensorflow/compiler/mlir/lite/tf_tfl_passes.cc

            mlir::TFL::CreatePostQuantizePass(emit_quant_adaptor_ops));
      }
      pass_manager.addNestedPass<mlir::func::FuncOp>(
          mlir::TFL::CreateOptimizeOpOrderPass());
      // Add optimization pass after quantization for additional fusing
      // opportunities.
    
      if (!pass_config.unfold_batch_matmul) {
        // Enable an optimization pass that transforms FC to BatchMatmul only when
        // `unfold_batch_matmul=false`.
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu Jun 06 18:45:51 UTC 2024
    - 25.5K bytes
    - Viewed (0)
  8. tensorflow/compiler/mlir/quantization/stablehlo/passes/quantization_patterns.h

        // Safeguard check to ensure that there is at least one quantizable op.
        if (failed(candidate_ops) || candidate_ops->empty()) return failure();
    
        // Rewrite the floating-point ops to the quantized version, by fusing
        // preceding dequantize ops and succeeding quantize ops.
        for (Operation* candidate_op : *candidate_ops) {
          // If it is a requantize op, we shouldn't rewrite it.
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu Apr 25 16:01:03 UTC 2024
    - 10.9K bytes
    - Viewed (0)
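
    The rewrite described above collapses dequantize -> float op -> quantize chains into a single quantized op. A small numeric sketch of why that is sound, with illustrative scales and a doubling op standing in for the float computation (not the pass's code):

        import numpy as np

        def quantize(x, scale):
            return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

        def dequantize(q, scale):
            return q.astype(np.float32) * scale

        s_in, s_out = 0.05, 0.1
        q_in = quantize(np.array([0.5, -1.2, 3.0], np.float32), s_in)

        # Unfused: dequantize, run the float op (doubling), requantize.
        unfused = quantize(dequantize(q_in, s_in) * 2.0, s_out)

        # Fused: the same op directly on int8, with the scales folded
        # into one factor (s_in * 2.0 / s_out).
        factor = s_in * 2.0 / s_out
        fused = np.clip(np.round(q_in * factor), -128, 127).astype(np.int8)
        assert np.array_equal(unfused, fused)
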
  9. tensorflow/compiler/mlir/quantization/tensorflow/passes/passes.h

    std::unique_ptr<OperationPass<ModuleOp>>
    CreateLiftQuantizableSpotsAsFunctionsPass(
        const tensorflow::quantization::QuantizationOptions& quant_options);
    
    // Apply graph optimizations such as fusing and constant folding to prepare
    // lifting.
    std::unique_ptr<OperationPass<func::FuncOp>> CreatePrepareLiftingPass(
        tensorflow::quantization::OpSet target_opset);
    
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri May 10 04:07:09 UTC 2024
    - 12.3K bytes
    - Viewed (0)
  10. tensorflow/compiler/mlir/quantization/stablehlo/python/integration_test/quantize_model_test_base.py

                data_format='NHWC',
                name='sample/conv',
            )
            if bias_fn is not None:
              out = nn_ops.bias_add(out, self.bias)
            if has_batch_norm:
              # Fusing is supported for non-training case.
              out, _, _, _, _, _ = nn_ops.fused_batch_norm_v3(
                  out, scale, offset, mean, variance, is_training=False
              )
            if activation_fn is not None:
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue May 14 06:31:57 UTC 2024
    - 18.2K bytes
    - Viewed (0)
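
    With is_training=False, as in the test above, batch norm is a fixed per-channel affine transform, which is what makes fusing it into the preceding convolution legal. A numpy sketch of the folding arithmetic (hypothetical shapes, not the test's code):

        import numpy as np

        rng = np.random.default_rng(0)
        eps = 1e-3
        b = rng.normal(size=16)                        # conv bias
        gamma, beta = rng.normal(size=16), rng.normal(size=16)
        mean, var = rng.normal(size=16), rng.uniform(0.5, 2.0, size=16)

        # Fold the batch norm into the conv's parameters: each output
        # channel is scaled by `scale`, and the bias is shifted so the
        # affine transform is absorbed.
        scale = gamma / np.sqrt(var + eps)
        b_folded = (b - mean) * scale + beta

        # For one pre-bias output pixel y, BN(conv + bias) equals the
        # folded conv: scale * (y + b - mean) + beta == y * scale + b_folded.
        y = rng.normal(size=16)
        assert np.allclose(scale * (y + b - mean) + beta, y * scale + b_folded)
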