Results 1 - 10 of 212 for TfLite (0.2 sec)

  1. .github/ISSUE_TEMPLATE/tflite-other.md

    TensorFlower Gardener <******@****.***> 1672352909 -0800
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu Dec 29 22:28:29 UTC 2022
    - 3.4K bytes
    - Viewed (0)
  2. .github/ISSUE_TEMPLATE/tflite-converter-issue.md

    ---
    name: TensorFlow Lite Converter Issue
    about: Use this template for reporting issues during model conversion to TFLite
    labels: 'TFLiteConverter'
    
    ---
    
    ### 1. System information
    
    - OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    - TensorFlow installation (pip package or built from source):
    - TensorFlow library (version, if pip package or github SHA, if built from source):
    
    ### 2. Code
    
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jun 15 03:35:58 UTC 2022
    - 2.1K bytes
    - Viewed (0)
  3. .github/ISSUE_TEMPLATE/tflite-op-request.md

    A. Unique TensorFlower <******@****.***> 1655263956 -0700
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jun 15 03:35:58 UTC 2022
    - 879 bytes
    - Viewed (0)
  4. .github/ISSUE_TEMPLATE/tflite-in-play-services.md

    A. Unique TensorFlower <******@****.***> 1655263956 -0700
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jun 15 03:35:58 UTC 2022
    - 880 bytes
    - Viewed (0)
  5. tensorflow/compiler/mlir/lite/utils/convert_type.cc

        case tflite::TensorType_FLOAT16:
          return builder.getF16Type();
        case tflite::TensorType_BFLOAT16:
          return builder.getBF16Type();
        case tflite::TensorType_FLOAT32:
          return builder.getF32Type();
        case tflite::TensorType_FLOAT64:
          return builder.getF64Type();
        case tflite::TensorType_INT32:
          return builder.getIntegerType(32);
        case tflite::TensorType_UINT16:
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue May 07 23:04:40 UTC 2024
    - 8.2K bytes
    - Viewed (0)
  6. tensorflow/compiler/mlir/lite/quantization/lite/quantize_weights.cc

                                 const tflite::Model* input_model,
                                 BufferType quant_type,
                                 bool use_updated_hybrid_scheme) {
      tflite::TensorType inference_type;
      switch (quant_type) {
        case BufferType::QUANTIZED_FLOAT16:
          inference_type = tflite::TensorType_FLOAT16;
          break;
        default:
          inference_type = tflite::TensorType_INT8;
      }
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jun 12 23:15:24 UTC 2024
    - 9.5K bytes
    - Viewed (0)
  7. tensorflow/compiler/mlir/lite/utils/convert_type.h

    namespace mlir {
    class Builder;
    }  // namespace mlir
    
    namespace tflite {
    // Convert the MLIR type to the corresponding TFLite tensor.
    tflite::TensorType ConvertTypeToTensorType(mlir::Type type);
    
    // Convert the scalar type of a TFLite tensor to the corresponding MLIR type.
    mlir::Type ConvertElementType(tflite::TensorType type, mlir::Builder builder);
    
    // Convert the scalar type of a TFLite tensor to the corresponding
    // TensorFlow type
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri May 03 18:01:23 UTC 2024
    - 2.1K bytes
    - Viewed (0)
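
    A minimal usage sketch of the conversion helpers declared in convert_type.h, assuming an mlir::MLIRContext is available; the call sites below are illustrative, not taken from the indexed files:

        #include "mlir/IR/Builders.h"
        #include "mlir/IR/MLIRContext.h"
        #include "tensorflow/compiler/mlir/lite/utils/convert_type.h"

        void ConvertExamples() {
          mlir::MLIRContext context;
          mlir::Builder builder(&context);

          // TFLite tensor type -> MLIR element type.
          mlir::Type f16 =
              tflite::ConvertElementType(tflite::TensorType_FLOAT16, builder);

          // MLIR element type -> TFLite tensor type.
          tflite::TensorType tensor_type =
              tflite::ConvertTypeToTensorType(builder.getF32Type());
          (void)f16;
          (void)tensor_type;
        }
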
  8. tensorflow/compiler/mlir/lite/experimental/tac/tac_filter.proto

    // A list of filters for TAC users to run ops/functions on ML hardware. The
    // intuition is that, for ops/functions that can run on both ML hardware (e.g.
    // EdgeTPU) and TFLite CPU, TAC users can give a hint that they are more
    // performant to run on TFLite CPU. These filters give TAC users the freedom
    // to specify the parts they want other hardware to accelerate.
    message TacFilters {
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Fri May 19 19:32:06 UTC 2023
    - 1.8K bytes
    - Viewed (0)
  9. tensorflow/compiler/mlir/lite/mlir_tflite_runner.cc

        return 1;
    
      // Create TFLite interpreter & invoke converted program.
      std::unique_ptr<tflite::FlatBufferModel> model =
          tflite::FlatBufferModel::BuildFromBuffer(serialized_flatbuffer.c_str(),
                                                   serialized_flatbuffer.size());
      tflite::ops::builtin::BuiltinOpResolver builtins;
      std::unique_ptr<tflite::Interpreter> interpreter;
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Sat Jun 03 00:14:05 UTC 2023
    - 6.3K bytes
    - Viewed (0)
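
    The snippet above stops after declaring the interpreter; a minimal sketch of how the standard TFLite C++ API typically completes this sequence (InterpreterBuilder, AllocateTensors, Invoke), not quoted from mlir_tflite_runner.cc:

        #include <memory>
        #include <string>

        #include "tensorflow/lite/interpreter.h"
        #include "tensorflow/lite/interpreter_builder.h"
        #include "tensorflow/lite/kernels/register.h"
        #include "tensorflow/lite/model_builder.h"

        bool RunSerializedModel(const std::string& serialized_flatbuffer) {
          std::unique_ptr<tflite::FlatBufferModel> model =
              tflite::FlatBufferModel::BuildFromBuffer(serialized_flatbuffer.c_str(),
                                                       serialized_flatbuffer.size());
          if (!model) return false;

          tflite::ops::builtin::BuiltinOpResolver builtins;
          std::unique_ptr<tflite::Interpreter> interpreter;
          // Build the interpreter, allocate tensor buffers, then run the model.
          if (tflite::InterpreterBuilder(*model, builtins)(&interpreter) != kTfLiteOk)
            return false;
          if (interpreter->AllocateTensors() != kTfLiteOk) return false;
          return interpreter->Invoke() == kTfLiteOk;
        }
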
  10. tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir/importer_test_min_max.cc

    #include "tensorflow/lite/model.h"
    
    using llvm::cl::opt;
    
    // RUN: flatbuffer_translate -mlir-to-tflite-flatbuffer %s.mlir -o - \
    // RUN:   | %p/importer_test_min_max - \
    // RUN:   | flatbuffer_translate --tflite-flatbuffer-to-mlir - -o - \
    // RUN:   | FileCheck %s
    
    // RUN: flatbuffer_translate -mlir-to-tflite-flatbuffer %s.mlir -o - \
    // RUN:   | %p/importer_test_min_max - \
    // RUN:   | flatbuffer_to_string - \
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Tue May 21 18:21:50 UTC 2024
    - 6.8K bytes
    - Viewed (0)