Results 1 - 10 of 1,215 for evenly (0.16 sec)
releasenotes/notes/new_lb_algorithm_default.yaml
The `ROUND_ROBIN` algorithm can lead to overburdened endpoints, especially when weights are used. The `LEAST_REQUEST` algorithm distributes requests more evenly across endpoints and is far less likely to overburden them. A number of experiments (by both the Istio and Envoy teams) have shown that `LEAST_REQUEST` outperforms `ROUND_ROBIN` in virtually all
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Wed Feb 09 20:55:01 UTC 2022 - 856 bytes - Viewed (0) -
docs/en/docs/css/custom.css
.md-footer-meta { padding-bottom: 2em; } .user-list { display: flex; flex-wrap: wrap; margin-bottom: 2rem; } .user-list-center { justify-content: space-evenly; } .user { margin: 1em; min-width: 7em; } .user .avatar-wrapper { width: 80px; height: 80px; margin: 10px auto; overflow: hidden; border-radius: 50%; position: relative;
Registered: Mon Jun 17 08:32:26 UTC 2024 - Last Modified: Sun Jan 28 09:53:45 UTC 2024 - 2.8K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/tests/tf_device_ops_invalid.mlir
} // ----- // Check number of replicated inputs is evenly divisible by 'n'. func.func @verifier_replicate_bad_operandSegmentSizes(%arg0: tensor<*xi32>) { "tf_device.replicate" (%arg0, %arg0, %arg0, %arg0) ({ // expected-error@-1 {{'tf_device.replicate' op expects number of replicated inputs (4) to be evenly divisible by 'n' (3)}} ^entry(%input0: tensor<*xi32>, %input1: tensor<*xi32>): tf_device.return
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Aug 14 15:35:49 UTC 2023 - 9.8K bytes - Viewed (0) -
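The MLIR test above exercises a verifier rule: the number of replicated inputs must be evenly divisible by `n`. A hedged Go sketch of the same check (the function name is hypothetical; the real verifier lives in the `tf_device` dialect implementation):

```go
package main

import "fmt"

// checkReplicateOperands mirrors the tf_device.replicate verifier rule from
// the test above: the number of replicated inputs must be evenly divisible
// by the replica count n, so each replica receives the same number of inputs.
func checkReplicateOperands(numInputs, n int) error {
	if numInputs%n != 0 {
		return fmt.Errorf(
			"expects number of replicated inputs (%d) to be evenly divisible by 'n' (%d)",
			numInputs, n)
	}
	return nil
}

func main() {
	fmt.Println(checkReplicateOperands(4, 3)) // the failing case from the test
	fmt.Println(checkReplicateOperands(6, 3)) // valid: 2 inputs per replica
}
```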
pkg/kubelet/cm/cpumanager/cpu_assignment.go
// NUMA nodes to allocate any 'remainder' CPUs from (in cases where the total // number of CPUs to allocate cannot be evenly distributed across the chosen // set of NUMA nodes). This "balance score" is calculated as the standard // deviation of how many CPUs will be available on each NUMA node after all // evenly distributed and remainder CPUs are allocated. The subset with the
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Thu Jan 25 23:56:21 UTC 2024 - 36.3K bytes - Viewed (0) -
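The comment above defines a "balance score" as the standard deviation of the CPUs left free on each NUMA node after allocation. A small sketch of that metric (not the actual kubelet code, which operates on CPU sets rather than plain counts):

```go
package main

import (
	"fmt"
	"math"
)

// balanceScore returns the standard deviation of the CPU counts that would
// remain free on each NUMA node after an allocation; a lower score means a
// more even distribution, so the allocator prefers the subset minimizing it.
func balanceScore(freeCPUsPerNode []float64) float64 {
	var sum float64
	for _, v := range freeCPUsPerNode {
		sum += v
	}
	mean := sum / float64(len(freeCPUsPerNode))
	var variance float64
	for _, v := range freeCPUsPerNode {
		variance += (v - mean) * (v - mean)
	}
	variance /= float64(len(freeCPUsPerNode))
	return math.Sqrt(variance)
}

func main() {
	fmt.Println(balanceScore([]float64{4, 4, 4})) // perfectly even: 0
	fmt.Println(balanceScore([]float64{8, 0, 4})) // uneven: higher score
}
```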
.teamcity/src/main/kotlin/model/bucket-extensions.kt
if (expectedBucketSize == 0) { // The elements in the list are so small that they can't even be divided into {expectedBucketNumber}. // For example, how do you split [0,0,0,0,0] into 3 buckets? // In this case, we simply put the elements into these buckets evenly. return list.chunked(list.size / expectedBucketNumber, smallElementAggregateFunction) }
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Thu Nov 17 05:17:44 UTC 2022 - 4K bytes - Viewed (0) -
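The Kotlin fallback above relies on `List.chunked`, which splits a list into consecutive sub-lists of a fixed size. A Go sketch of the same operation, shown on the `[0,0,0,0,0]` example from the comment (assuming the aggregate function is identity; the real code also folds each chunk):

```go
package main

import "fmt"

// chunked splits list into consecutive sub-slices of the given size (the last
// chunk may be shorter), mirroring Kotlin's List.chunked from the snippet above.
func chunked(list []int, size int) [][]int {
	var buckets [][]int
	for start := 0; start < len(list); start += size {
		end := start + size
		if end > len(list) {
			end = len(list)
		}
		buckets = append(buckets, list[start:end])
	}
	return buckets
}

func main() {
	// Five zero-weight elements with chunk size 5/3 = 1 (integer division),
	// as in the "[0,0,0,0,0] into 3 buckets" example from the comment.
	fmt.Println(chunked([]int{0, 0, 0, 0, 0}, 5/3))
}
```

Note that with integer division `5/3 = 1`, this yields five single-element buckets rather than exactly three, which is the trade-off the comment accepts for the degenerate case.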
platforms/core-execution/persistent-cache/src/main/java/org/gradle/cache/internal/ProducerGuard.java
* guard instead. */ public static <T> ProducerGuard<T> adaptive() { return new AdaptiveProducerGuard<T>(); } /** * Creates a {@link ProducerGuard} which evenly spreads calls over a fixed number of locks. * This means that in some cases two different keys can block on the same lock. The benefit of * this strategy is that it uses only a fixed amount of memory. If your code depends on
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Fri Apr 26 16:02:31 UTC 2024 - 4.7K bytes - Viewed (0) -
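The `ProducerGuard` javadoc above describes striped locking: calls are spread evenly over a fixed pool of locks, trading occasional false sharing between keys for constant memory. A hedged Go sketch of the idea (type and method names are hypothetical, not Gradle's API):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// stripedGuard spreads keys evenly over a fixed pool of locks, like the
// striped ProducerGuard described above: two different keys may hash to the
// same lock, but memory use stays constant regardless of key cardinality.
type stripedGuard struct {
	locks []sync.Mutex
}

func newStripedGuard(n int) *stripedGuard {
	return &stripedGuard{locks: make([]sync.Mutex, n)}
}

// stripeFor deterministically maps a key to a lock index via hashing.
func (g *stripedGuard) stripeFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % len(g.locks)
}

// guardByKey runs fn while holding the lock the key hashes to, so at most one
// goroutine produces the value for any key mapped to that stripe at a time.
func (g *stripedGuard) guardByKey(key string, fn func()) {
	i := g.stripeFor(key)
	g.locks[i].Lock()
	defer g.locks[i].Unlock()
	fn()
}

func main() {
	g := newStripedGuard(16)
	g.guardByKey("some-cache-key", func() { fmt.Println("exclusive section") })
}
```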
tensorflow/compiler/mlir/tensorflow/utils/xla_sharding_util.cc
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed May 22 21:28:13 UTC 2024 - 34K bytes - Viewed (0) -
pkg/kubelet/cm/cpumanager/policy_options.go
// any possible naming scheme will lead to ambiguity to some extent. // We picked "pcpu" because the established docs hint at vCPU already. FullPhysicalCPUsOnly bool // Flag to evenly distribute CPUs across NUMA nodes in cases where more // than one NUMA node is required to satisfy the allocation. DistributeCPUsAcrossNUMA bool // Flag to ensure CPUs are considered aligned at socket boundary rather than
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Wed Sep 27 13:02:15 UTC 2023 - 5.1K bytes - Viewed (0) -
src/math/big/ratconv.go
for { if _, r = t.div(r, q, f); len(r) != 0 { break // f doesn't divide q evenly } tab = append(tab, f) f = nat(nil).sqr(f) // nat(nil) to ensure a new f for each table entry } // Factor q using the table entries, if any. // We start with the largest factor f = tab[len(tab)-1] // that evenly divides q. It does so at most once because // otherwise f·f would also divide q. That can't be true
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Nov 15 22:16:34 UTC 2023 - 12.3K bytes - Viewed (0) -
android/guava-tests/test/com/google/common/math/IntMathTest.java
try { assertEquals(p + "/" + q, p, IntMath.divide(p, q, UNNECESSARY) * q); assertTrue(p + "/" + q + " not expected to divide evenly", dividesEvenly); } catch (ArithmeticException e) { assertFalse(p + "/" + q + " expected to divide evenly", dividesEvenly); } } } } public void testZeroDivIsAlwaysZero() { for (int q : NONZERO_INTEGER_CANDIDATES) {
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Wed Feb 07 17:50:39 UTC 2024 - 24.5K bytes - Viewed (0)
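The Guava test above checks that `IntMath.divide(p, q, UNNECESSARY)` succeeds exactly when q divides p evenly and throws `ArithmeticException` otherwise. A Go sketch of that contract, using an error in place of the exception (the function name is hypothetical):

```go
package main

import "fmt"

// divideExact mirrors IntMath.divide(p, q, RoundingMode.UNNECESSARY) from the
// test above: it returns p/q when q divides p evenly, and an error otherwise,
// so no silent rounding can ever occur.
func divideExact(p, q int) (int, error) {
	if q == 0 {
		return 0, fmt.Errorf("division by zero")
	}
	if p%q != 0 {
		return 0, fmt.Errorf("%d/%d does not divide evenly", p, q)
	}
	return p / q, nil
}

func main() {
	if v, err := divideExact(12, 4); err == nil {
		fmt.Println(v)
	}
	if _, err := divideExact(7, 2); err != nil {
		fmt.Println(err)
	}
}
```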