Results 51 - 60 of 179 for computation (0.19 sec)
- RELEASE.md
or "true" allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If such variable is not set (service side), only a restricted set will be passed through. * Allow the XRTCompile op to return the ProgramShape resulted form the XLA compilation as a second return argument. * XLA HLO graphs can now be rendered as SVG/HTML. * Estimator
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 23:24:08 UTC 2024 - 730.3K bytes - Viewed (0)
- src/go/types/call.go
typ = &Pointer{base: typ} } } // If we created a synthetic pointer type above, we will throw // away the method set computed here after use. // TODO(gri) Method set computation should probably always compute // both, the value and the pointer receiver method set and represent // them in a single structure. // TODO(gri) Consider also using a method set cache for the lifetime
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 30 19:19:55 UTC 2024 - 33.5K bytes - Viewed (0)
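The call.go comment above is about which method sets go/types computes when it synthesizes a pointer receiver type. As a hedged illustration of the underlying language rule rather than of the go/types internals, the sketch below uses reflect to show that a pointer type's method set contains both value-receiver and pointer-receiver methods, while the value type's method set contains only the former; the type T and its methods are invented for the example.

```go
package main

import (
	"fmt"
	"reflect"
)

// T is a made-up type used only to illustrate method sets.
type T struct{}

// Value has a value receiver: part of both T's and *T's method sets.
func (T) Value() {}

// Pointer has a pointer receiver: part of *T's method set only.
func (*T) Pointer() {}

func main() {
	var v T
	// Method set of T: only Value.
	fmt.Println(reflect.TypeOf(v).NumMethod()) // 1
	// Method set of *T: Value and Pointer.
	fmt.Println(reflect.TypeOf(&v).NumMethod()) // 2
}
```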
- src/math/big/nat.go
	// We want z = x**y mod m.
	// z₁ = x**y mod m1 = (x**y mod m) mod m1 = z mod m1
	// z₂ = x**y mod m2 = (x**y mod m) mod m2 = z mod m2
	// (We are using the math/big convention for names here,
	// where the computation is z = x**y mod m, so its parts are z1 and z2.
	// The paper is computing x = a**e mod n; it refers to these as x2 and z1.)
	z1 := nat(nil).expNN(x, y, m1, false)
	z2 := nat(nil).expNN(x, y, m2, false)
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon May 13 21:31:58 UTC 2024 - 31.7K bytes - Viewed (0)
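The nat.go comment above splits z = x**y mod m into the residues z1 = z mod m1 and z2 = z mod m2 and later recombines them. expNN is unexported, so the sketch below checks the same identities with the public math/big API and adds a Garner-style recombination; the concrete numbers are illustrative only, not taken from the source.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Illustrative values: m = m1*m2 with m1, m2 coprime.
	x := big.NewInt(7)
	y := big.NewInt(123)
	m1 := big.NewInt(11)
	m2 := big.NewInt(13)
	m := new(big.Int).Mul(m1, m2) // m = 143

	// z = x**y mod m, computed directly.
	z := new(big.Int).Exp(x, y, m)

	// The two CRT residues: z1 = x**y mod m1, z2 = x**y mod m2.
	z1 := new(big.Int).Exp(x, y, m1)
	z2 := new(big.Int).Exp(x, y, m2)

	// The identities from the comment: z mod m1 == z1 and z mod m2 == z2.
	fmt.Println(new(big.Int).Mod(z, m1).Cmp(z1) == 0) // true
	fmt.Println(new(big.Int).Mod(z, m2).Cmp(z2) == 0) // true

	// Garner-style recombination (a sketch, not the exact expNN code):
	// z = z2 + m2 * ((z1 - z2) * m2^{-1} mod m1)
	inv := new(big.Int).ModInverse(m2, m1)
	t := new(big.Int).Sub(z1, z2)
	t.Mul(t, inv).Mod(t, m1)
	rec := new(big.Int).Mul(m2, t)
	rec.Add(rec, z2)
	fmt.Println(rec.Cmp(z) == 0) // true
}
```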
- tensorflow/compiler/mlir/lite/transforms/prepare_tf.cc
  }
};

// StridedSlice can have complicated attributes like begin_axis_mask,
// end_axis_mask, ellipsis_axis_mask, new_axis_mask, shrink_axis_mask. These
// masks will complicate the strided_slice computation logic; we can simplify
// the logic by inserting a reshape op to pad the inputs so strided_slice can
// be easier to handle.
//
// So the graph may look like below:
// original_input -> strided_slice -> output
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 28 21:49:50 UTC 2024 - 64.6K bytes - Viewed (0)
- tensorflow/compiler/mlir/lite/ir/tfl_ops.td
    DeclareOpInterfaceMethods<RegionBranchOpInterface>,
    SingleBlockImplicitTerminator<"YieldOp">]> {
  let summary = [{Poly call}];
  let description = [{
    Have multiple function bodies for the same computation. This allows a
    program compiler/interpreter to choose one of the available options to
    execute the program based on which one is most suitable for the target
    backend.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Jun 06 19:09:08 UTC 2024 - 186K bytes - Viewed (0)
- tensorflow/compiler/mlir/lite/transforms/optimize_patterns.td
    $input, (Arith_ConstantOp I32ElementsAttr:$axis), ConstBoolAttrFalse, $reverse),
  (replaceWithValue $input),
  [(AreInputDimensionsOneInAxes $input, $axis)]>;

// Fusing the raw computation of the GELU op into one native tfl_gelu op.
//
// Requires constants to be an exact match and only one use of all of the
// intermediate results.
//
// For GeluApproximate, replaces
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu May 16 20:31:41 UTC 2024 - 66.4K bytes - Viewed (0)
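The optimize_patterns.td comment above describes fusing a decomposed ("raw") GELU computation into a single tfl_gelu op. The snippet does not show the exact subgraph the pattern matches, so as a hedged reference the Go sketch below only evaluates the two standard GELU formulas (the exact erf form and the tanh approximation that GeluApproximate refers to); the function names are invented for the example.

```go
package main

import (
	"fmt"
	"math"
)

// geluExact computes GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2))).
func geluExact(x float64) float64 {
	return 0.5 * x * (1 + math.Erf(x/math.Sqrt2))
}

// geluApprox computes the common tanh approximation:
// 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715*x^3))).
func geluApprox(x float64) float64 {
	c := math.Sqrt(2 / math.Pi)
	return 0.5 * x * (1 + math.Tanh(c*(x+0.044715*x*x*x)))
}

func main() {
	for _, x := range []float64{-2, -0.5, 0, 0.5, 2} {
		fmt.Printf("x=%5.2f  exact=%8.5f  approx=%8.5f\n", x, geluExact(x), geluApprox(x))
	}
}
```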
- src/reflect/type.go
	var gcdata *byte
	var ptrdata uintptr

	size := abi.MapBucketCount*(1+ktyp.Size_+etyp.Size_) + goarch.PtrSize
	if size&uintptr(ktyp.Align_-1) != 0 || size&uintptr(etyp.Align_-1) != 0 {
		panic("reflect: bad size computation in MapOf")
	}

	if ktyp.Pointers() || etyp.Pointers() {
		nptr := (abi.MapBucketCount*(1+ktyp.Size_+etyp.Size_) + goarch.PtrSize) / goarch.PtrSize
		n := (nptr + 7) / 8
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 29 17:58:53 UTC 2024 - 85.5K bytes - Viewed (0)
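The panic above guards the size computed for one runtime map bucket in reflect.MapOf: MapBucketCount slots, each costing a tophash byte plus a key plus an element, followed by one overflow pointer. Below is a hedged arithmetic sketch of that expression, with constants assumed for a typical 64-bit platform (bucket count 8, pointer size 8; the real values come from internal/abi and internal/goarch).

```go
package main

import "fmt"

// Assumed constants for a 64-bit platform; the real code reads
// abi.MapBucketCount and goarch.PtrSize instead.
const (
	mapBucketCount = 8
	ptrSize        = 8
)

// bucketSize mirrors the size expression from the snippet: one tophash
// byte plus key plus element per slot, times the slot count, plus a
// trailing overflow pointer.
func bucketSize(keySize, elemSize uintptr) uintptr {
	return mapBucketCount*(1+keySize+elemSize) + ptrSize
}

func main() {
	// map[int64]int64: 8*(1+8+8) + 8 = 144 bytes per bucket.
	fmt.Println(bucketSize(8, 8)) // 144
	// map[string]bool (16-byte string header on 64-bit): 8*(1+16+1) + 8 = 152.
	fmt.Println(bucketSize(16, 1)) // 152
}
```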
- src/crypto/tls/conn.go
		// the MAC function as extra data, to be fed into the HMAC after
		// computing the digest. This makes the MAC roughly constant time as
		// long as the digest computation is constant time and does not
		// affect the subsequent write, modulo cache effects.
		paddingLen, paddingGood = extractPadding(payload)
	default:
		panic("unknown cipher type")
	}
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 23 03:10:12 UTC 2024 - 51.8K bytes - Viewed (0)
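The conn.go comment above concerns keeping CBC record verification constant time: the bytes stripped as padding are later fed to the MAC as extra data so the amount of data hashed does not depend on the padding length. The remoteMAC/extractPadding plumbing is not shown in this snippet, so the sketch below only demonstrates the constant-time standard-library building blocks it relies on, an HMAC digest and hmac.Equal for comparison; the key and record bytes are made up.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// computeMAC returns the HMAC-SHA256 of record under key.
func computeMAC(key, record []byte) []byte {
	h := hmac.New(sha256.New, key)
	h.Write(record)
	return h.Sum(nil)
}

func main() {
	key := []byte("illustrative-key")
	record := []byte("record contents without padding")

	sent := computeMAC(key, record)

	// On receipt, recompute the MAC and compare with hmac.Equal, which
	// runs in constant time regardless of where the two values differ.
	got := computeMAC(key, record)
	fmt.Println(hmac.Equal(sent, got)) // true

	// A tampered record fails verification.
	record[0] ^= 0xff
	fmt.Println(hmac.Equal(computeMAC(key, record), sent)) // false
}
```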
- tensorflow/compiler/mlir/tensorflow/transforms/shape_inference.cc
  // Returns true if a return type was changed.
  bool InferShapeForXlaCallModule(XlaCallModuleOp op);

  // Infers the shape of _XlaHostComputeMlir based on the host computation
  // module. Returns true if a return type was changed.
  bool InferShapeForXlaHostComputeMlir(_XlaHostComputeMlirOp op);

  // Infers the shape of function attached to XlaHostCompute.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Jun 08 07:28:49 UTC 2024 - 134.1K bytes - Viewed (0)
- src/crypto/tls/common.go
			return supportsRSAFallback(err)
		}
	}

	// In TLS 1.3 we are done because supported_groups is only relevant to the
	// ECDHE computation, point format negotiation is removed, cipher suites are
	// only relevant to the AEAD choice, and static RSA does not exist.
	if vers == VersionTLS13 {
		return nil
	}
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 23 03:10:12 UTC 2024 - 59.1K bytes - Viewed (0)