Results 1 - 3 of 3 for opus (0.05 sec)
- RELEASE.md
* Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
* `tf.experimental.dtensor`
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Aug 18 20:54:38 UTC 2025 - 740K bytes - Viewed (2)
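The API named in the release note above is the real TensorFlow toggle; below is a minimal sketch of how it is used, with the tensor shapes and the `tf.matmul` check chosen purely for illustration (the precision effect described applies on TPUs).

```python
import tensorflow as tf

# Turn TensorFloat-32 execution off; per the release note, on TPUs this makes
# float32 matmuls and similar ops run in full float32 instead of bfloat16.
tf.config.experimental.enable_tensor_float_32_execution(False)

a = tf.random.normal([128, 128], dtype=tf.float32)
b = tf.random.normal([128, 128], dtype=tf.float32)

# The matmul below now keeps float32 precision for its float32 inputs.
c = tf.matmul(a, b)
print(c.dtype)  # float32
```

The setting is global; calling `enable_tensor_float_32_execution(True)` re-enables TensorFloat-32 execution.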
- CHANGELOG/CHANGELOG-1.19.md
- Fixes regression in CPUManager that caused freeing of exclusive CPUs at incorrect times ([#90377](https://github.com/kubernetes/kubernetes/pull/90377), [@cbf123](https://github.com/cbf123)) [SIG Cloud Provider and Node]
Registered: Fri Sep 05 09:05:11 UTC 2025 - Last Modified: Wed Jan 05 05:42:32 UTC 2022 - 489.7K bytes - Viewed (0)
- lib/fips140/v1.0.0.zip
package fiat // Invert sets e = 1/x, and returns e. // // If x == 0, Invert returns e = 0. func (e *Element) Invert(x *Element) *Element { // Inversion is implemented as exponentiation with exponent p − 2. // The sequence of {{ .Ops.Adds }} multiplications and {{ .Ops.Doubles }} squarings is derived from the // following addition chain generated with {{ .Meta.Module }} {{ .Meta.ReleaseTag }}. // {{- range lines (format .Script) }} // {{ . }} {{- end }} // var z = new(Element).Set(e) {{- range .Program.Temporaries...
Registered: Tue Sep 09 11:13:09 UTC 2025 - Last Modified: Wed Jan 29 15:10:35 UTC 2025 - 635K bytes - Viewed (0)
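The snippet above documents that `Invert` works by exponentiation with exponent p − 2 (Fermat's little theorem) via a generated addition chain. As a rough illustration of the same math, here is a short Python sketch that ignores the constant-time, generated-chain machinery of the Go template; the P-256 field prime in the example is an arbitrary choice, not something taken from the snippet.

```python
def invert(x: int, p: int) -> int:
    """Return 1/x mod p, or 0 when x == 0, matching Invert's documented contract."""
    x %= p
    if x == 0:
        return 0
    # Fermat's little theorem: x**(p-2) is the inverse of x modulo a prime p.
    return pow(x, p - 2, p)

# Illustrative check with the NIST P-256 field prime.
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
x = 123456789
assert (x * invert(x, p)) % p == 1
```

The generated Go code follows a precomputed addition chain rather than a generic square-and-multiply loop, which fixes the exact sequence of squarings and multiplications used for the exponentiation.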