Results 1 - 10 of 130 for negligible (0.15 sec)
staging/src/k8s.io/apiserver/pkg/util/flowcontrol/max_seats_test.go
        InformerFactory:  informerFactory,
        FlowcontrolClient: flowcontrolClient,
        // for the purposes of this test, serverCL ~= nominalCL since there is
        // only 1 PL with large concurrency shares, making mandatory PLs negligible.
        ServerConcurrencyLimit: testcase.nominalCL,
        ReqsGaugeVec:           metrics.PriorityLevelConcurrencyGaugeVec,
        ExecSeatsGaugeVec:      metrics.PriorityLevelExecutionSeatsGaugeVec,
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Mon Oct 30 12:18:40 UTC 2023 - 4.2K bytes - Viewed (0) -
subprojects/core/src/main/java/org/gradle/api/internal/initialization/ResettableConfiguration.java
 * This method was originally added in order to release the resources of the {@code classpath}
 * configurations used for resolving buildscript classpaths, as they consumed a non-negligible
 * amount of memory even after the buildscript classpath was assembled.
 * <p>
 * Future work in this area should remove the need of this method by instead caching resolution
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Wed Dec 27 17:33:18 UTC 2023 - 3K bytes - Viewed (0) -
src/math/big/calibrate_test.go
    sqrModeKaratsuba = "karatsubaSqr(x)"
)

func TestCalibrate(t *testing.T) {
    if !*calibrate {
        return
    }

    computeKaratsubaThresholds()

    // compute basicSqrThreshold where overhead becomes negligible
    minSqr := computeSqrThreshold(10, 30, 1, 3, sqrModeMul, sqrModeBasic)
    // compute karatsubaSqrThreshold where karatsuba is faster
    maxSqr := computeSqrThreshold(200, 500, 10, 3, sqrModeBasic, sqrModeKaratsuba)
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Sep 05 23:35:29 UTC 2023 - 4.6K bytes - Viewed (0) -
src/crypto/ecdsa/ecdsa.go
    }
    // FIPS 186-4 makes us check k <= N - 2 and then add one.
    // Checking 0 < k <= N - 1 is strictly equivalent.
    // None of this matters anyway because the chance of selecting
    // zero is cryptographically negligible.
    if _, err = k.SetBytes(b, c.N); err == nil && k.IsZero() == 0 {
        break
    }
    if testingOnlyRejectionSamplingLooped != nil {
        testingOnlyRejectionSamplingLooped()
    }
}
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 23 00:11:18 UTC 2024 - 20.4K bytes - Viewed (0) -
guava/src/com/google/common/base/Throwables.java
 *
 * <ul>
 *   <li>{@code getStackTrace} takes {@code stackSize} time to return but then negligible time to
 *       retrieve each element of the returned list.
 *   <li>{@code lazyStackTrace} takes negligible time to return but then {@code 1/stackSize} time
 *       to retrieve each element of the returned list (probably slightly more than {@code
 *       1/stackSize}).
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Wed Mar 06 15:38:58 UTC 2024 - 20.6K bytes - Viewed (0) -
android/guava/src/com/google/common/base/Throwables.java
 *
 * <ul>
 *   <li>{@code getStackTrace} takes {@code stackSize} time to return but then negligible time to
 *       retrieve each element of the returned list.
 *   <li>{@code lazyStackTrace} takes negligible time to return but then {@code 1/stackSize} time
 *       to retrieve each element of the returned list (probably slightly more than {@code
 *       1/stackSize}).
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Wed Mar 06 15:38:58 UTC 2024 - 20.6K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/transforms/check_control_dependencies.cc
}

// Returns true iff there is any dependency between the IDs in `resource_ids`
// and the IDs in `other_resource_ids`.
// Note that this can be made more efficient if necessary. For current use cases
// this runtime is negligible (typically at least one of the resource ID vectors
// is small).
bool ResourceIdsHaveDependency(const ResourceIdVec& resource_ids,
                               const ResourceIdVec& other_resource_ids) {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Oct 05 23:50:19 UTC 2022 - 10.2K bytes - Viewed (0) -
src/runtime/memmove_arm64.s
// dstend  R5
// data    R6-R17
// tmp1    R14
// Copies are split into 3 main cases: small copies of up to 32 bytes, medium
// copies of up to 128 bytes, and large copies. The overhead of the overlap
// check is negligible since it is only required for large copies.
//
// Large copies use a software pipelined loop processing 64 bytes per iteration.
// The destination pointer is 16-byte aligned to minimize unaligned accesses.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Fri Mar 18 18:26:13 UTC 2022 - 6K bytes - Viewed (0) -
src/runtime/memmove_amd64.s
// comparable with the cost of main loop. So code is slightly messed there.
// There is more clean implementation of that algorithm for bigger sizes
// where the cost of unaligned part copying is negligible.
// You can see it after gobble_big_data_fwd label.
	LEAQ	(SI)(BX*1), CX
	MOVQ	DI, R10
	// CX points to the end of buffer so we need go back slightly. We will use negative offsets there.
	MOVOU	-0x80(CX), X5
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Sun Apr 10 15:52:08 UTC 2022 - 12.5K bytes - Viewed (0) -
android/guava/src/com/google/common/io/ByteStreams.java
 *      userspace buffer (byte[] or ByteBuffer), then copies them from that buffer into the
 *      destination channel.
 * </ol>
 *
 * This value is intended to be large enough to make the overhead of system calls negligible,
 * without being so large that it causes problems for systems with atypical memory management if
 * approaches 2 or 3 are used.
 */
private static final int ZERO_COPY_CHUNK_SIZE = 512 * 1024;
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Wed Jan 17 18:59:58 UTC 2024 - 29.7K bytes - Viewed (0)