Results 1 - 3 of 3 for Complex128 (0.2 sec)
tensorflow/compiler/mlir/tensorflow/ir/tf_generated_ops.td
(e.g. tf.complex64 or tf.complex128), as tf.cast() makes the imaginary part 0 while tf.bitcast() gives a module error. For example:

Example 1:

>>> a = [1., 2., 3.]
>>> equality_bitcast = tf.bitcast(a, tf.complex128)
Traceback (most recent call last):
...
InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]
>>> equality_cast = tf.cast(a, tf.complex128)
>>> print(equality_cast)
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 793K bytes
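The snippet above contrasts a value cast (tf.cast, which fills the imaginary part with 0) against a bit cast (tf.bitcast, which reinterprets raw bytes and fails when element sizes don't line up). A minimal NumPy sketch of the same distinction, assuming NumPy's astype/view stand in here for TensorFlow's tf.cast/tf.bitcast:

```python
import numpy as np

a = np.array([1., 2., 3.])  # float64, 8 bytes per element

# Value cast: each float becomes a complex number with imaginary part 0.
cast = a.astype(np.complex128)
print(cast)  # [1.+0.j 2.+0.j 3.+0.j]

# Bit cast: reinterprets the raw bytes. 3 * 8 = 24 bytes is not divisible
# by the 16-byte complex128 element size, so the view is rejected -- the
# NumPy analogue of the InvalidArgumentError in the docstring example.
try:
    a.view(np.complex128)
except ValueError as err:
    print("bitcast failed:", err)
```

As in the TensorFlow example, only the value cast succeeds; a bit cast to a wider type needs the byte counts to divide evenly.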
RELEASE.md
`tf.float64` (for which there was no GPU implementation). In this current release, GPU support for other floating-point types (`tf.float16`, `tf.float64`, `tf.complex64`, and `tf.complex128`) has been added for this op. If you were relying on the determinism of the `tf.float64` CPU implementation being automatically selected because of
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes
src/cmd/compile/internal/ssagen/ssa.go
{ir.OBITNOT, types.TINT64}: ssa.OpCom64,
{ir.OBITNOT, types.TUINT64}: ssa.OpCom64,
{ir.OIMAG, types.TCOMPLEX64}: ssa.OpComplexImag,
{ir.OIMAG, types.TCOMPLEX128}: ssa.OpComplexImag,
{ir.OREAL, types.TCOMPLEX64}: ssa.OpComplexReal,
{ir.OREAL, types.TCOMPLEX128}: ssa.OpComplexReal,
{ir.OMUL, types.TINT8}: ssa.OpMul8,
{ir.OMUL, types.TUINT8}: ssa.OpMul8,
{ir.OMUL, types.TINT16}: ssa.OpMul16,
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Jun 10 19:44:43 UTC 2024 - 284.9K bytes
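The ssa.go table maps Go's real() and imag() built-ins on TCOMPLEX128 to ssa.OpComplexReal and ssa.OpComplexImag: a complex128 is just two float64 halves, and these ops select one of them. A small NumPy sketch of that layout, assuming NumPy's complex128 has the same two-float64 representation as Go's:

```python
import numpy as np

# real/imag on a complex128 simply pick out its two float64 halves.
z = np.complex128(3.0 + 4.0j)
print(z.real, z.imag)  # 3.0 4.0

# Viewing the 16 raw bytes as two float64s exposes the same layout that
# ssa.OpComplexReal / ssa.OpComplexImag exploit: real part first, then imag.
halves = np.array([z]).view(np.float64)
print(halves)  # [3. 4.]
```

Because the layout is a plain pair of floats, neither op needs arithmetic; each compiles down to selecting one half of the value.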