Results 311 - 320 of 339 for Queues (0.18 sec)
- CHANGELOG.md
* Fix: Handle multiple 1xx responses.
* Fix: Address a performance bug in our internal task runner. We had a race condition that could result in OkHttp starting a thread for each queued task, even when a single thread could run all of them.
* Fix: Address a performance bug in `MultipartReader`. We were scanning the entire input stream...
Registered: Fri Sep 05 11:42:10 UTC 2025 - Last Modified: Mon Jul 07 19:32:33 UTC 2025 - 31.6K bytes - Viewed (1)
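The task-runner fix above describes a coalescing pattern: enqueue the work, but start a new worker only when no worker is already draining the queue. Below is a minimal Go sketch of that pattern; the `Runner` type and its fields are illustrative assumptions, not OkHttp's actual `TaskRunner` (which is Kotlin).

```go
// Sketch: one worker goroutine drains every queued task instead of a
// goroutine being spawned per task. Hypothetical types, not OkHttp's code.
package main

import (
	"fmt"
	"sync"
)

type Runner struct {
	mu      sync.Mutex
	queue   []func()
	running bool // true while a worker goroutine is draining the queue
}

// Enqueue adds a task and starts a worker only if none is running.
// Without the running check, every call would spawn its own goroutine,
// which is the shape of the bug the changelog entry describes.
func (r *Runner) Enqueue(task func()) {
	r.mu.Lock()
	r.queue = append(r.queue, task)
	if r.running {
		r.mu.Unlock()
		return // the existing worker will pick this task up
	}
	r.running = true
	r.mu.Unlock()

	go func() {
		for {
			r.mu.Lock()
			if len(r.queue) == 0 {
				r.running = false
				r.mu.Unlock()
				return
			}
			task := r.queue[0]
			r.queue = r.queue[1:]
			r.mu.Unlock()
			task()
		}
	}()
}

func main() {
	var wg sync.WaitGroup
	r := &Runner{}
	for i := 0; i < 5; i++ {
		i := i
		wg.Add(1)
		r.Enqueue(func() { defer wg.Done(); fmt.Println("task", i) })
	}
	wg.Wait()
}
```

Checking and setting `running` under the same lock is what closes the race: at most one worker goroutine can exist at a time.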
- src/main/resources/fess_indices/_aws/fess.json
"será", "seremos", "seréis", "serán", "sería", "serías", "seríamos", "seríais", "serían", "era", "eras", "éramos", "erais", "eran", "fui", "fuiste", "fue", "fuimos", "fuisteis", "fueron", "fuera", "fueras", "fuéramos", "fuerais", "fueran", "fuese", "fueses", "fuésemos", "fueseis", "fuesen", "siendo", "sido", "tengo", "tienes", "tiene", "tenemos", "tenéis", "tienen", "tenga", "tengas", "tengamos", "tengáis", "tengan", "tendré", "tendrás", "tendrá", "tendremos", "tendréis", "tendrán", "tendría", "tendrías",...
Registered: Thu Sep 04 12:52:25 UTC 2025 - Last Modified: Sat Jun 14 00:36:40 UTC 2025 - 117.3K bytes - Viewed (0)
- src/main/resources/fess_indices/_cloud/fess.json
"será", "seremos", "seréis", "serán", "sería", "serías", "seríamos", "seríais", "serían", "era", "eras", "éramos", "erais", "eran", "fui", "fuiste", "fue", "fuimos", "fuisteis", "fueron", "fuera", "fueras", "fuéramos", "fuerais", "fueran", "fuese", "fueses", "fuésemos", "fueseis", "fuesen", "siendo", "sido", "tengo", "tienes", "tiene", "tenemos", "tenéis", "tienen", "tenga", "tengas", "tengamos", "tengáis", "tengan", "tendré", "tendrás", "tendrá", "tendremos", "tendréis", "tendrán", "tendría", "tendrías",...
Registered: Thu Sep 04 12:52:25 UTC 2025 - Last Modified: Sat Feb 27 09:26:16 UTC 2021 - 117.3K bytes - Viewed (0)
- CHANGELOG/CHANGELOG-1.27.md
- Sometimes, the scheduler incorrectly placed a pod in the "unschedulable" queue instead of the "backoff" queue. This happened when some plugin had previously declared the pod "unschedulable" and a later scheduling attempt then encountered some other error. Scheduling of that pod was delayed by up to five minutes, after which periodic flushing moved the pod back into the "active" queue. ([#120334](https://github.com/kubernetes/kubernetes/pull/120334), [@pohly](https://github.com/pohly))...
Registered: Fri Sep 05 09:05:11 UTC 2025 - Last Modified: Wed Jul 17 07:48:22 UTC 2024 - 466.3K bytes - Viewed (2)
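The entry above hinges on which of two queues a failed pod lands in: the "backoff" queue is retried soon, while the "unschedulable" pool waits for a relevant cluster event or the periodic flush. A minimal Go sketch of the corrected routing rule follows; the types and names are illustrative, not the kube-scheduler's real ones.

```go
// Sketch of the routing rule the fix restores: any error from the latest
// attempt wins over a stale "unschedulable" verdict from an earlier attempt.
package main

import (
	"errors"
	"fmt"
)

type queueName string

const (
	backoffQ       queueName = "backoff"
	unschedulableQ queueName = "unschedulable"
)

type attemptResult struct {
	unschedulable bool  // a plugin declared the pod unschedulable
	err           error // some other failure during the attempt
}

// requeueTarget sends errored pods to backoff (retried soon) rather than
// the unschedulable pool (retried only on events or the periodic flush).
func requeueTarget(res attemptResult) queueName {
	if res.err != nil {
		return backoffQ
	}
	if res.unschedulable {
		return unschedulableQ
	}
	return backoffQ
}

func main() {
	fmt.Println(requeueTarget(attemptResult{unschedulable: true}))                            // unschedulable
	fmt.Println(requeueTarget(attemptResult{unschedulable: true, err: errors.New("plugin")})) // backoff
}
```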
- cmd/erasure-multipart.go
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Sun Sep 07 16:13:09 UTC 2025 - 47.3K bytes - Viewed (0)
- cmd/erasure-server-pool-decom.go
    Name   string
    Prefix string
}

func (db decomBucketInfo) String() string { return pathJoin(db.Name, db.Prefix) }

func (p *poolMeta) QueueBuckets(idx int, buckets []decomBucketInfo) {
    // add new queued buckets
    for _, bucket := range buckets {
        p.Pools[idx].Decommission.bucketPush(bucket)
    }
}

var (
    errDecommissionAlreadyRunning = errors.New("decommission is already in progress")
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 42.1K bytes - Viewed (1)
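The excerpt above is truncated mid-struct, so as a rough guide, here is a self-contained Go re-creation of its shape. The `decommission` struct and the `bucketPush` body are assumptions filled in only so the example runs, and the standard library's `path.Join` stands in for MinIO's internal `pathJoin` helper; none of this is MinIO's actual definition.

```go
// Sketch: queueing buckets onto a per-pool decommission list keyed by
// pool index, matching the loop shown in the snippet above.
package main

import (
	"fmt"
	"path"
)

type decomBucketInfo struct {
	Name   string
	Prefix string
}

func (db decomBucketInfo) String() string { return path.Join(db.Name, db.Prefix) }

type decommission struct{ QueuedBuckets []string } // assumed shape

func (d *decommission) bucketPush(b decomBucketInfo) {
	d.QueuedBuckets = append(d.QueuedBuckets, b.String())
}

type pool struct{ Decommission decommission }

type poolMeta struct{ Pools []pool }

// QueueBuckets appends each bucket to the decommission queue of pool idx.
func (p *poolMeta) QueueBuckets(idx int, buckets []decomBucketInfo) {
	for _, bucket := range buckets {
		p.Pools[idx].Decommission.bucketPush(bucket)
	}
}

func main() {
	p := &poolMeta{Pools: make([]pool, 1)}
	p.QueueBuckets(0, []decomBucketInfo{{Name: "photos", Prefix: "2024"}})
	fmt.Println(p.Pools[0].Decommission.QueuedBuckets) // [photos/2024]
}
```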
- CHANGELOG/CHANGELOG-1.32.md
- Fixes the bug in PodTopologySpread that only happens with QHint enabled,
Registered: Fri Sep 05 09:05:11 UTC 2025 - Last Modified: Wed Aug 13 14:49:49 UTC 2025 - 412.3K bytes - Viewed (0)
- go.sum
github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3/go.mod h1:YvSRo5mw33fLEx1+DlK6L2VV43tJt5Eyel9n9XBcR+0=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/eclipse/paho.mqtt.golang v1.5.0 h1:EH+bUVJNgttidWFkLLVKaQPGmkTUfQQqjOsyvMGvD6o=
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Sat Sep 06 17:33:19 UTC 2025 - 79.9K bytes - Viewed (0)
- okhttp/src/jvmTest/kotlin/okhttp3/internal/cache/DiskLruCacheTest.kt
val snapshot = cache["b"]!!
snapshot.close()
assertThat(cache.edit("d")).isNull()
assertThat(taskFaker.isIdle()).isFalse()

// On cache misses, no retry job is queued.
assertThat(cache["c"]).isNull()
assertThat(taskFaker.isIdle()).isFalse()

// Let the rebuild complete successfully.
filesystem.setFaultyRename(cacheDir / DiskLruCache.JOURNAL_FILE_BACKUP, false)
Registered: Fri Sep 05 11:42:10 UTC 2025 - Last Modified: Wed Mar 19 19:25:20 UTC 2025 - 75.7K bytes - Viewed (0)
- CHANGELOG/CHANGELOG-1.30.md
- Previously, the scheduling queue didn't notice any extenders' failures, potentially resulting in missed cluster events and Pods rejected by Extenders being stuck in the unschedulable pod pool for up to 5 minutes in the worst-case scenario. Now, the scheduling queue notices extenders' failures and requeues Pods rejected by Extenders appropriately.
Registered: Fri Sep 05 09:05:11 UTC 2025 - Last Modified: Wed Jun 18 18:59:10 UTC 2025 - 398.1K bytes - Viewed (0)
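Both scheduler entries on this page mention the same safety net: a periodic flush that eventually moves stuck pods out of the unschedulable pool, which is why a mis-routed pod was delayed "up to 5 minutes" rather than forever. Below is a hypothetical Go sketch of such a flush pass; the types are illustrative, not the kube-scheduler's.

```go
// Sketch: after maxWait, pods still in the unschedulable pool are flushed
// so the caller can move them back to the active queue.
package main

import (
	"fmt"
	"time"
)

type queuedPod struct {
	name     string
	queuedAt time.Time
}

// flushUnschedulable splits the pool into pods that have exceeded maxWait
// (to be re-activated) and pods that should keep waiting for an event.
func flushUnschedulable(pool []queuedPod, now time.Time, maxWait time.Duration) (flush, keep []queuedPod) {
	for _, p := range pool {
		if now.Sub(p.queuedAt) > maxWait {
			flush = append(flush, p)
		} else {
			keep = append(keep, p)
		}
	}
	return flush, keep
}

func main() {
	now := time.Now()
	pool := []queuedPod{
		{name: "old-pod", queuedAt: now.Add(-6 * time.Minute)},
		{name: "new-pod", queuedAt: now.Add(-30 * time.Second)},
	}
	flush, keep := flushUnschedulable(pool, now, 5*time.Minute)
	fmt.Println(len(flush), len(keep)) // 1 1: old-pod is re-activated
}
```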