Results 1 - 10 of 548 for tflite (0.13 sec)
.github/ISSUE_TEMPLATE/tflite-other.md
name: TensorFlow Lite Other Issue description: Use this template to report any issue in TensorFlow Lite that is not about Converters, Play Services or Ops body: - type: dropdown id: issue-type attributes: label: Issue Type description: What type of issue would you like to report? multiple: false options: - Bug - Build/Install - Performance - Support - Feature Request - Documentation Feature Request - Documentation Bug - Others validations: required: true - type:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Dec 29 22:28:29 UTC 2022 - 3.4K bytes - Viewed (0)
.github/ISSUE_TEMPLATE/tflite-converter-issue.md
2) Reference [TensorFlow Lite Model Colab](https://colab.research.google.com/gist/ymodak/0dfeb28255e189c5c48d9093f296e9a8/tensorflow-lite-debugger-colab.ipynb): Demonstrate how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite Inference (if possible). (You can paste links or attach files by dragging & dropping them below)
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 15 03:35:58 UTC 2022 - 2.1K bytes - Viewed (0)
.github/ISSUE_TEMPLATE/tflite-op-request.md
--- name: TensorFlow Lite Op Request about: Use this template for reporting Lite ops you are using or missing labels: 'comp:lite' --- **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): - TensorFlow installed from (source or binary): - TensorFlow version (or github SHA if from source): **Provide the text output from tflite_convert** ``` # Copy and paste here ```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 15 03:35:58 UTC 2022 - 879 bytes - Viewed (0)
.github/ISSUE_TEMPLATE/tflite-in-play-services.md
--- name: TensorFlow Lite in Play Services issue about: Use this template for issues with TensorFlow Lite in Google Play Services labels: 'comp:lite-in-play-services' --- **System information** - Android Device information (use `adb shell getprop ro.build.fingerprint` if possible): - TensorFlow Lite in Play Services SDK version (found in `build.gradle`): - Google Play Services version
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 15 03:35:58 UTC 2022 - 880 bytes - Viewed (0)
tensorflow/compiler/mlir/lite/utils/convert_type.cc
case tflite::TensorType_FLOAT16: return builder.getF16Type(); case tflite::TensorType_BFLOAT16: return builder.getBF16Type(); case tflite::TensorType_FLOAT32: return builder.getF32Type(); case tflite::TensorType_FLOAT64: return builder.getF64Type(); case tflite::TensorType_INT32: return builder.getIntegerType(32); case tflite::TensorType_UINT16:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 07 23:04:40 UTC 2024 - 8.2K bytes - Viewed (0)
tensorflow/compiler/mlir/lite/schema/schema_generated.h
tflite::Int32VectorT *AsInt32Vector() { return type == SparseIndexVector_Int32Vector ? reinterpret_cast<tflite::Int32VectorT *>(value) : nullptr; } const tflite::Int32VectorT *AsInt32Vector() const { return type == SparseIndexVector_Int32Vector ? reinterpret_cast<const tflite::Int32VectorT *>(value) : nullptr; } tflite::Uint16VectorT *AsUint16Vector() {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 21 18:21:50 UTC 2024 - 1M bytes - Viewed (0)
tensorflow/compiler/mlir/lite/mlir_tflite_runner.cc
return 1; // Create TFLite interpreter & invoke converted program. std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromBuffer(serialized_flatbuffer.c_str(), serialized_flatbuffer.size()); tflite::ops::builtin::BuiltinOpResolver builtins; std::unique_ptr<tflite::Interpreter> interpreter;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Jun 03 00:14:05 UTC 2023 - 6.3K bytes - Viewed (0)
tensorflow/compiler/mlir/lite/flatbuffer_export.cc
} else { return tflite::TensorType_INT4; } case 8: return itype.isUnsigned() ? tflite::TensorType_UINT8 : tflite::TensorType_INT8; case 16: return itype.isUnsigned() ? tflite::TensorType_UINT16 : tflite::TensorType_INT16; case 32: return itype.isUnsigned() ? tflite::TensorType_UINT32
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:41:49 UTC 2024 - 164.5K bytes - Viewed (0)
tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir/importer_test_min_max.cc
#include "llvm/Support/raw_ostream.h" #include "tensorflow/compiler/mlir/lite/schema/schema_generated.h" #include "tensorflow/compiler/mlir/lite/schema/schema_utils.h" #include "tensorflow/lite/model.h" using llvm::cl::opt; // RUN: flatbuffer_translate -mlir-to-tflite-flatbuffer %s.mlir -o - \ // RUN: | %p/importer_test_min_max - \ // RUN: | flatbuffer_translate --tflite-flatbuffer-to-mlir - -o - \ // RUN: | FileCheck %s
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 21 18:21:50 UTC 2024 - 6.8K bytes - Viewed (0)
tensorflow/compiler/mlir/lite/quantization/lite/quantize_weights.cc
const tflite::Model* input_model, BufferType quant_type, bool use_updated_hybrid_scheme) { tflite::TensorType inference_type; switch (quant_type) { case BufferType::QUANTIZED_FLOAT16: inference_type = tflite::TensorType_FLOAT16; break; default: inference_type = tflite::TensorType_INT8; }
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 23:15:24 UTC 2024 - 9.5K bytes - Viewed (0)