Results 1 - 3 of 3 for Performance (0.07 sec)
- RELEASE.md
  replicas taking part in sync training. * Performance improvements for GPU multi-worker distributed training using `tf.distribute.experimental.MultiWorkerMirroredStrategy` * Update NVIDIA `NCCL` to `2.5.7-1` for better performance and performance tuning. Please see ...
  Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Aug 18 20:54:38 UTC 2025 - 740K bytes - Viewed (3)
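As context for the excerpt above, here is a minimal sketch of synchronous multi-worker training with `tf.distribute.experimental.MultiWorkerMirroredStrategy`. The model, layer sizes, dummy data, and `TF_CONFIG` addresses are illustrative assumptions, not taken from the indexed RELEASE.md:

```python
import json
import os

import tensorflow as tf

# Each worker needs a TF_CONFIG describing the cluster; the host addresses
# and the worker index below are placeholders for a two-worker cluster, and
# training only proceeds once both workers have joined.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0},
})

# Collective ops (NCCL on GPU) keep replica gradients in sync.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored across workers.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Dummy data; in practice each worker reads a shard of the real dataset.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((256, 10)), tf.random.normal((256, 1)))
).batch(32)

model.fit(dataset, epochs=2)
```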
- docs/en/docs/release-notes.md
  * **Safer** types. * Better **performance** and **less energy** consumption. * Better **extensibility**. * etc. ...all this while keeping the **same Python API**. In most cases, for simple models, you can simply upgrade the Pydantic version and get all the benefits. 🚀 In some cases, for pure data validation and processing, you can get performance improvements of **20x** or more. This means 2,000% or more. 🤯
  Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Fri Sep 05 12:48:45 UTC 2025 - 544.1K bytes - Viewed (0)
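A minimal illustration of the "pure data validation and processing" case mentioned in that excerpt, using Pydantic's standard `BaseModel` API; the model, payload, and timing harness are our own illustration, not from the release notes:

```python
import time

from pydantic import BaseModel


class Item(BaseModel):
    name: str
    price: float
    tags: list[str] = []


payload = {"name": "hammer", "price": "9.99", "tags": ["tools", "hardware"]}

# The same constructor-based validation runs on Pydantic v1 and v2; on v2
# the validation core is implemented in Rust (pydantic-core), which is where
# the speedups cited in the release notes come from.
start = time.perf_counter()
for _ in range(100_000):
    Item(**payload)  # parses and coerces "9.99" -> 9.99, checks field types
elapsed = time.perf_counter() - start
print(f"validated 100,000 payloads in {elapsed:.2f} s")
```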
- lib/fips140/v1.0.0.zip
  (2^256-1) else (x1 & (2^256-1)) - 2^256 package edwards25519 import "math/bits" type fiatScalarUint1 uint64 // We use uint64 instead of a more narrow type for performance reasons; see https://github.com/mit-plv/fiat-crypto/pull/1006#issuecomment-892625927 type fiatScalarInt1 int64 // We use uint64 instead of a more narrow type for performance reasons; see https://github.com/mit-plv/fiat-crypto/pull/1006#issuecomment-892625927 // The type fiatScalarMontgomery is a field element in the Montgomery domain....
  Registered: Tue Sep 09 11:13:09 UTC 2025 - Last Modified: Wed Jan 29 15:10:35 UTC 2025 - 635K bytes - Viewed (0)
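The `fiatScalar` types in that excerpt hold scalars in the Montgomery domain. As a rough sketch of what that means, here is the same idea in Python big-integer arithmetic; the helper names are ours, and the real fiat-crypto code above works on fixed-width 64-bit limbs with explicit carries rather than arbitrary-precision integers:

```python
# L is the order of the edwards25519 scalar field; R = 2^256 matches the
# 4x64-bit representation used in the excerpt.
L = 2**252 + 27742317777372353535851937790883648493
R = 2**256
R_INV = pow(R, -1, L)  # R^-1 mod L (Python 3.8+)


def to_montgomery(x: int) -> int:
    """Map x to its Montgomery representation x*R mod L."""
    return (x * R) % L


def from_montgomery(x_mont: int) -> int:
    """Map a Montgomery-domain value back to the plain domain."""
    return (x_mont * R_INV) % L


def mont_mul(a_mont: int, b_mont: int) -> int:
    """Multiply two Montgomery-domain values; the result stays in the domain."""
    return (a_mont * b_mont * R_INV) % L


a, b = 123456789, 987654321
assert from_montgomery(mont_mul(to_montgomery(a), to_montgomery(b))) == (a * b) % L
```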