RELEASE.md
*   The new backend is used for 8-bit full-integer post-training quantization.
*   The new backend removes the redundant rescales and fixes some bugs (shared weight/bias, extremely small scales, etc.).
*   Set `experimental_new_quantizer` in `tf.lite.TFLiteConverter` to `False` to disable this change (see the sketch after this list).
*   `tf.keras`:
    *   `tf.keras.metrics.AUC` now supports logit predictions (see the second sketch below).
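
A minimal sketch of how the opt-out flag fits into a full-integer post-training quantization flow. The SavedModel path and the calibration-data shape are placeholders for illustration, not part of the release notes:

```python
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration data; the 224x224x3 shape is an assumption
    # about a hypothetical image model, used only for illustration.
    for _ in range(100):
        yield [tf.random.uniform([1, 224, 224, 3], dtype=tf.float32)]

# "/tmp/my_saved_model" is a hypothetical path.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# Opt out of the new quantization backend and fall back to the old one.
converter.experimental_new_quantizer = False

tflite_model = converter.convert()
```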
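And a minimal sketch of evaluating AUC directly on raw logits. It assumes the `from_logits` constructor argument, which is how current `tf.keras.metrics.AUC` exposes logit support; the labels and logits here are made-up values:

```python
import tensorflow as tf

# Binary labels and raw (un-sigmoided) model outputs.
y_true = tf.constant([0.0, 0.0, 1.0, 1.0])
logits = tf.constant([-2.0, -0.5, 0.3, 1.7])

# from_logits=True tells the metric to apply the sigmoid internally,
# so predictions do not have to be pre-activated probabilities.
auc = tf.keras.metrics.AUC(from_logits=True)
auc.update_state(y_true, logits)
print(float(auc.result()))
```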