Results 11 - 20 of 56 for incremented (0.16 sec)
src/runtime/mbitmap.go
        s.freeindex = snelems
        return snelems
    }
    s.allocCache >>= uint(bitIndex + 1)
    sfreeindex = result + 1
    if sfreeindex%64 == 0 && sfreeindex != snelems {
        // We just incremented s.freeindex so it isn't 0.
        // As each 1 in s.allocCache was encountered and used for allocation
        // it was shifted away. At this point s.allocCache contains all 0s.
        // Refill s.allocCache so that it corresponds
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 23 00:18:55 UTC 2024 - 60K bytes - Viewed (0) -
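The snippet above describes the allocation-cache pattern: a 64-bit word holds a 1 bit for each still-free object slot, the lowest set bit is located, the used bits are shifted away, and freeindex advances past the allocated object. A minimal standalone sketch of that idea (the `span` type and `nextFree` name here are simplified stand-ins, not the runtime's actual `mspan` API; refilling the cache is omitted):

```go
package main

import (
	"fmt"
	"math/bits"
)

// span is a simplified stand-in for the runtime's mspan:
// allocCache holds a 1 bit for every still-free slot at or after freeindex.
type span struct {
	allocCache uint64
	freeindex  uint
	nelems     uint
}

// nextFree mirrors the snippet's pattern: find the lowest set bit, shift it
// (and the zeros below it) out of the cache, and advance freeindex past the
// object just allocated.
func (s *span) nextFree() (uint, bool) {
	if s.allocCache == 0 || s.freeindex >= s.nelems {
		return s.nelems, false // nothing cached as free (refill omitted)
	}
	bitIndex := uint(bits.TrailingZeros64(s.allocCache))
	result := s.freeindex + bitIndex
	if result >= s.nelems {
		s.freeindex = s.nelems
		return s.nelems, false
	}
	s.allocCache >>= bitIndex + 1 // used bits are shifted away, as the comment says
	s.freeindex = result + 1
	return result, true
}

func main() {
	s := &span{allocCache: 0b1010, freeindex: 0, nelems: 64}
	i, ok := s.nextFree()
	fmt.Println(i, ok) // free slots are at bit positions 1 and 3
	i, ok = s.nextFree()
	fmt.Println(i, ok)
}
```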
staging/src/k8s.io/apiserver/pkg/storage/testing/store_tests.go
    cont := out.Continue
    // the second list call should try to get 2 more items from etcd
    // but since there is only one item left, that is all we should get with no continueValue
    // both read counters should be incremented for the singular calls they make in this case
    out = &example.PodList{}
    options = storage.ListOptions{
        // ResourceVersion should be unset when setting continuation token.
        ResourceVersion: "",
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Tue Jun 11 12:45:33 UTC 2024 - 91.4K bytes - Viewed (0) -
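The test snippet above exercises list pagination: a first List with a limit returns a continue token, and a second List with that token returns the remaining items with no further continueValue. A toy sketch of that contract (the `page` function and its integer token are illustrative, not the Kubernetes storage API):

```go
package main

import "fmt"

// page simulates a storage List call with a limit and a continue token.
// It returns a slice of items and the token for the next call, or -1 when
// everything has been returned (no continueValue).
func page(items []string, start, limit int) (out []string, cont int) {
	end := start + limit
	if end >= len(items) {
		return items[start:], -1 // only the remaining items, no continue token
	}
	return items[start:end], end
}

func main() {
	pods := []string{"pod-a", "pod-b", "pod-c"}
	first, cont := page(pods, 0, 2)
	fmt.Println(first, cont) // two items plus a token
	// The second call asks for 2 more, but only one item is left.
	second, cont := page(pods, cont, 2)
	fmt.Println(second, cont) // one item, token -1
}
```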
pkg/kubelet/pod_workers.go
        return TerminatedPod
    }
    if s.IsTerminationRequested() {
        return TerminatingPod
    }
    return SyncPod
}

// mergeLastUpdate records the most recent state from a new update. Pod and MirrorPod are
// incremented. KillPodOptions is accumulated. If RunningPod is set, Pod is synthetic and
// will *not* be used as the last pod state unless no previous pod state exists (because
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Tue Apr 02 13:22:37 UTC 2024 - 74.8K bytes - Viewed (0) -
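The snippet shows a priority order among pod work types: terminated wins over terminating, which wins over a normal sync. A tiny sketch of that ordering (the `status` struct and string results are illustrative, not the kubelet's actual types):

```go
package main

import "fmt"

// status is a simplified stand-in for the pod worker's sync status.
type status struct {
	terminated           bool
	terminationRequested bool
}

// workType mirrors the ordering in the snippet: terminated is checked first,
// then a pending termination request, otherwise a normal sync is due.
func workType(s status) string {
	if s.terminated {
		return "TerminatedPod"
	}
	if s.terminationRequested {
		return "TerminatingPod"
	}
	return "SyncPod"
}

func main() {
	fmt.Println(workType(status{terminated: true, terminationRequested: true}))
	fmt.Println(workType(status{terminationRequested: true}))
	fmt.Println(workType(status{}))
}
```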
src/runtime/mheap.go
    // if sweepgen == h->sweepgen + 1, the span was cached before sweep began and is still cached, and needs sweeping
    // if sweepgen == h->sweepgen + 3, the span was swept and then cached and is still cached
    // h->sweepgen is incremented by 2 after every GC
    sweepgen   uint32
    divMul     uint32 // for divide by elemsize
    allocCount uint16 // number of allocated objects
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 22 22:31:00 UTC 2024 - 78K bytes - Viewed (0) -
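Because h->sweepgen advances by 2 after every GC, the difference between a span's sweepgen and the heap's encodes the span's sweep/cache state. A sketch of that classification, following the mheap.go comments (the function name and strings are illustrative):

```go
package main

import "fmt"

// spanSweepState classifies a span by comparing its sweepgen to the heap's.
// The heap's sweepgen is always even and advances by 2 per GC, so small
// offsets encode the swept/cached states described in mheap.go.
func spanSweepState(spanGen, heapGen uint32) string {
	switch spanGen {
	case heapGen - 2:
		return "needs sweeping"
	case heapGen - 1:
		return "being swept"
	case heapGen:
		return "swept"
	case heapGen + 1:
		return "cached, needs sweeping"
	case heapGen + 3:
		return "swept and cached"
	}
	return "unknown"
}

func main() {
	h := uint32(10) // heap sweepgen after some number of GCs (hypothetical)
	fmt.Println(spanSweepState(10, h)) // swept
	fmt.Println(spanSweepState(8, h))  // one GC behind: needs sweeping
	fmt.Println(spanSweepState(11, h)) // cached before sweep began
	fmt.Println(spanSweepState(13, h)) // swept and then cached
}
```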
tensorflow/compiler/mlir/g3doc/_includes/tf_passes.md
for `tf.EmptyTensorList` or the specified size for `tf.TensorListReserve`. Each push will be turned into `tf.XlaDynamicUpdateSlice` with the incremented size, and each pop will be turned into a `tf.Slice` and a copy of the buffer with decremented size. Each `tf.TensorListSetItem` will be turned into a `tf.XlaDynamicUpdateSlice` with unchanged size, and each `tf.TensorListGetItem` will be rewritten to a `tf.Slice`.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Aug 02 02:26:39 UTC 2023 - 96.4K bytes - Viewed (0) -
subprojects/core/src/integTest/groovy/org/gradle/api/services/BuildServiceIntegrationTest.groovy
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Thu Jun 06 19:15:46 UTC 2024 - 61K bytes - Viewed (0) -
platforms/software/dependency-management/src/integTest/groovy/org/gradle/integtests/resolve/transform/ArtifactTransformInputArtifactIntegrationTest.groovy
        false       | PathSensitivity.NAME_ONLY
        inputChanges = incremental ? "@Inject abstract InputChanges getInputChanges()" : ""
        normalization = (sensitivity?.name()?.toLowerCase()?.replaceAll("_", " ") ?: "no") + " path sensitivity"
        type = (incremental ? "incremental" : "non-incremental")
    }

    def "re-runs incremental transform when input artifact file changes from file to missing"() {
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Fri Oct 27 19:15:32 UTC 2023 - 51.9K bytes - Viewed (0) -
platforms/documentation/docs/src/docs/userguide/optimizing-performance/incremental_build.adoc
    Implies `<<#incremental,@Incremental>>`.
    | [[incremental]]`@link:{javadocPath}/org/gradle/work/Incremental.html[Incremental]`
    | `Provider<FileSystemLocation>` or `FileCollection`
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Wed Jan 24 23:14:04 UTC 2024 - 63.9K bytes - Viewed (0) -
src/runtime/mprof.go
        next := prev | 0x1
        if c.value.CompareAndSwap(prev, next) {
            return cycle, alreadyFlushed
        }
    }
}

// increment increases the cycle count by one, wrapping the value at
// mProfCycleWrap. It clears the flushed flag.
func (c *mProfCycleHolder) increment() {
    // We explicitly wrap mProfCycle rather than depending on
    // uint wraparound because the memRecord.future ring does not
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 30 17:57:37 UTC 2024 - 53.3K bytes - Viewed (0) -
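The snippet's `prev | 0x1` suggests the low bit of the atomic word is a "flushed" flag with the cycle count in the remaining bits. A sketch of an increment that wraps explicitly and clears that flag via a CAS retry loop (the type name and the wrap constant here are hypothetical; the runtime's real wrap value depends on the memRecord.future ring size):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// mProfCycleWrap is a hypothetical wrap point standing in for the runtime's
// constant.
const mProfCycleWrap = 8

// cycleHolder packs a cycle count and a flushed flag into one atomic word:
// low bit = flushed, remaining bits = cycle.
type cycleHolder struct{ value atomic.Uint32 }

// increment adds one to the cycle count, wrapping explicitly at
// mProfCycleWrap rather than relying on uint wraparound, and clears the
// flushed flag. The CAS loop retries if another goroutine raced in between.
func (c *cycleHolder) increment() {
	for {
		prev := c.value.Load()
		cycle := (prev>>1 + 1) % mProfCycleWrap
		next := cycle << 1 // low (flushed) bit left clear
		if c.value.CompareAndSwap(prev, next) {
			return
		}
	}
}

func main() {
	var c cycleHolder
	for i := 0; i < 9; i++ {
		c.increment()
	}
	fmt.Println(c.value.Load() >> 1) // nine increments wrap 8 back around to 1
}
```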
pilot/pkg/model/push_context.go
    // Full determines whether a full push is required or not. If false, an incremental update will be sent.
    // Incremental pushes:
    // * Do not recompute the push context
    // * Do not recompute proxy state (such as ServiceInstances)
    // * Are not reported in standard metrics such as push time
    // As a result, configuration updates should never be incremental. Generally, only EDS will set this, but
    // in the future SDS will as well.
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Wed May 15 09:02:11 UTC 2024 - 91.8K bytes - Viewed (0)