Results 161 - 170 of 830 for wait (0.08 sec)
- .github/workflows/tests.yml
```yaml
env:
  POSTGRES_PASSWORD: gorm
  POSTGRES_USER: gorm
  POSTGRES_DB: gorm
  TZ: Asia/Shanghai
ports:
  - 9920:5432
# Set health checks to wait until postgres has started
options: >-
  --health-cmd pg_isready
  --health-interval 10s
  --health-timeout 5s
  --health-retries 5
steps:
```
Registered: Sun Nov 03 09:35:10 UTC 2024 - Last Modified: Mon Sep 30 03:21:19 UTC 2024 - 6.6K bytes - Viewed (0)
- architecture/networking/controllers.md
Normally, this just means running the queue. All informers created by `kube.Client` are tracked by the client and started in one go with `RunAndWait` in one centralized call. As a result, each individual controller should simply wait until its informers have synced, then run the queue to start processing things. A queue is used to give a few properties:
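For reference, here is a minimal sketch of that lifecycle using plain client-go primitives rather than Istio's actual `kube.Client`/`RunAndWait` helpers; the shared-informer factory, the pod informer, and the `reconcile` stub are illustrative assumptions, not the repo's real code.

```go
// Sketch: start all informers centrally, wait for caches to sync,
// then drain a workqueue — the pattern described above.
package main

import (
	"context"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

func run(ctx context.Context, client kubernetes.Interface) {
	factory := informers.NewSharedInformerFactory(client, 0)
	podInformer := factory.Core().V1().Pods().Informer() // illustrative informer

	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

	// Event handlers only enqueue keys; all processing happens on the queue.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj any) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
	})

	// Start all informers in one centralized call (RunAndWait's role in Istio).
	factory.Start(ctx.Done())

	// Wait until informers have synced before processing anything.
	if !cache.WaitForCacheSync(ctx.Done(), podInformer.HasSynced) {
		return
	}

	// Run the queue: drain keys one at a time.
	for {
		key, shutdown := queue.Get()
		if shutdown {
			return
		}
		// reconcile(key.(string)) would do the actual work here (hypothetical).
		queue.Done(key)
	}
}
```

The rate-limiting workqueue also deduplicates pending keys and retries failed items with backoff, which is presumably among the "few properties" the truncated sentence above goes on to list.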
Registered: Wed Nov 06 22:53:10 UTC 2024 - Last Modified: Fri Feb 09 17:41:25 UTC 2024 - 4.9K bytes - Viewed (0)
- docs/bucket/replication/delete-replication.sh
```sh
./mc cp README.md myminio1/foobucket/dir/file
versionId="$(./mc ls --json --versions myminio1/foobucket/dir/ | jq -r .versionId)"
kill ${pid2} && wait ${pid2} || true
aws s3api --endpoint-url http://localhost:9001 delete-object --bucket foobucket --key dir/file --version-id "$versionId"
out="$(./mc ls myminio1/foobucket/dir/)"
if [ "$out" != "" ]; then
```
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Fri Sep 06 09:42:21 UTC 2024 - 3.9K bytes - Viewed (0)
- cmd/data-scanner_test.go
```go
	obj  ObjectInfo
	want lifecycle.Action
}{
	{ // with object locking
		ilm:       *deleteAllLc,
		retention: lock.Retention{LockEnabled: true},
		obj:       obj,
		want:      lifecycle.NoneAction,
	},
	{ // without object locking
		ilm:       *deleteAllLc,
		retention: lock.Retention{},
		obj:       obj,
		want:      lifecycle.DeleteAllVersionsAction,
	},
```
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Fri May 03 11:18:58 UTC 2024 - 6.9K bytes - Viewed (0)
- cmd/erasure-metadata.go
```go
		fi.Erasure.Index = index + 1
		if fi.IsValid() {
			return disks[index].WriteMetadata(ctx, origbucket, bucket, prefix, fi)
		}
		return errFileCorrupt
	}, index)
}

// Wait for all the routines.
mErrs := g.Wait()

err := reduceWriteQuorumErrs(ctx, mErrs, objectOpIgnoredErrs, quorum)
if err != nil && revert {
	ng := errgroup.WithNErrs(len(disks))
	for index := range disks {
```
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Thu Oct 31 22:10:24 UTC 2024 - 21.3K bytes - Viewed (0)
- cmd/sftp-server-driver.go
```go
		return nil, err
	}
	return obj, nil
}

// TransferError will catch network errors during transfer.
// When TransferError() is called Close() will also
// be called, so we do not need to Wait() here.
func (w *writerAt) TransferError(err error) {
	_ = w.w.CloseWithError(err)
	_ = w.r.CloseWithError(err)
	w.err = err
}

func (w *writerAt) Close() (err error) {
	switch {
```
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Wed Jun 05 07:51:13 UTC 2024 - 11.1K bytes - Viewed (0)
- internal/store/batch_test.go
```go
	wg.Add(1)
	go func(key int) {
		defer wg.Done()
		if err := batch.Add(testItem); err != nil {
			t.Errorf("failed to add item %v; %v", key, err)
			return
		}
	}(i)
}
wg.Wait()

batchLen := batch.Len()
if batchLen != int(limit) {
	t.Fatalf("Expected batch.Len() %v; but got %v", limit, batchLen)
}

keys := store.List()
if len(keys) > 0 {
```
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Fri Sep 06 23:06:30 UTC 2024 - 5.6K bytes - Viewed (0)
- docs/en/docs/advanced/events.md
Let's imagine that you have some **machine learning models** that you want to use to handle requests. 🤖 The same models are shared among requests, so it's not one model per request, or one per user, or something similar. Let's imagine that loading the model can **take quite some time**, because it has to read a lot of **data from disk**. So you don't want to do it for every request.
Registered: Sun Nov 03 07:19:11 UTC 2024 - Last Modified: Mon Oct 28 10:36:22 UTC 2024 - 7.6K bytes - Viewed (0)
- docs/metrics/prometheus/alerts.md
2. Start Prometheus server and AlertManager.
3. Bring down a couple of MinIO instances to drop the Erasure Set tolerance to -1, and verify this with `mc admin prometheus metrics ALIAS | grep minio_cluster_health_erasure_set_status`.
4. Wait for 5 minutes (the alert is configured to fire after 5 minutes), and verify that an entry for the alert appears in the webhook as well as in the Prometheus console, as shown below:

```json
{
  "receiver": "web\\.hook",
```
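For context, a rule that fires after 5 minutes, as step 4 describes, might look like the following sketch; the group and alert names, expression threshold, and labels are illustrative assumptions, with only the metric name taken from step 3.

```yaml
groups:
  - name: minio-cluster-health # illustrative name
    rules:
      - alert: MinIOClusterTolerance # illustrative name
        # Assumed encoding: the metric drops below 1 when an erasure set
        # loses tolerance; the real expression may differ.
        expr: minio_cluster_health_erasure_set_status < 1
        for: 5m # matches "firing after 5 mins" in step 4
        labels:
          severity: critical
```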
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Sun Jan 28 20:53:59 UTC 2024 - 4.4K bytes - Viewed (0)
- docs/config/README.md
Registered: Sun Nov 03 19:28:11 UTC 2024 - Last Modified: Fri Aug 16 08:43:49 UTC 2024 - 17.9K bytes - Viewed (1)