Results 1 - 3 of 3 for resyncID (0.04 sec)
cmd/bucket-replication.go
}
p.resyncer.Lock()
p.resyncer.statusMap[bucket] = meta
p.resyncer.Unlock()
tgts := meta.cloneTgtStats()
for arn, st := range tgts {
	switch st.ResyncStatus {
	case ResyncFailed, ResyncStarted, ResyncPending:
		go p.resyncer.resyncBucket(ctx, objAPI, true, resyncOpts{
			bucket:   bucket,
			arn:      arn,
			resyncID: st.ResyncID,
Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 118K bytes
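The snippet above cuts off mid call, but the pattern it shows is clear: on startup, walk the persisted per-target resync stats and relaunch a worker goroutine for every resync left in a failed, started, or pending state. Below is a minimal, self-contained sketch of that resume pattern; the type definitions, constant values, and the worker body are stand-ins for illustration, not MinIO's actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// ResyncStatusType mirrors the status values seen in the snippet;
// the concrete values below are stand-ins, not MinIO's definitions.
type ResyncStatusType int

const (
	ResyncNone ResyncStatusType = iota
	ResyncPending
	ResyncStarted
	ResyncFailed
	ResyncCompleted
)

// TargetReplicationResyncStatus is a pared-down stand-in for the
// per-target stats the snippet iterates over.
type TargetReplicationResyncStatus struct {
	ResyncID     string
	ResyncStatus ResyncStatusType
}

// resumeResyncs relaunches a worker for every target whose previous
// resync did not complete, matching the switch in the snippet.
func resumeResyncs(ctx context.Context, bucket string, tgts map[string]TargetReplicationResyncStatus) {
	var wg sync.WaitGroup
	for arn, st := range tgts {
		switch st.ResyncStatus {
		case ResyncFailed, ResyncStarted, ResyncPending:
			wg.Add(1)
			go func(arn, id string) {
				defer wg.Done()
				// Stand-in for p.resyncer.resyncBucket(ctx, objAPI, true, resyncOpts{...}).
				fmt.Printf("resuming resync %s for bucket %s, target %s\n", id, bucket, arn)
			}(arn, st.ResyncID)
		}
	}
	wg.Wait()
}

func main() {
	tgts := map[string]TargetReplicationResyncStatus{
		"arn:minio:replication::tgt1": {ResyncID: "r-1", ResyncStatus: ResyncFailed},
		"arn:minio:replication::tgt2": {ResyncID: "r-2", ResyncStatus: ResyncCompleted},
	}
	resumeResyncs(context.Background(), "mybucket", tgts)
}
```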
cmd/site-replication.go
OpType: "start", ResyncID: rs.ResyncID, Buckets: res.Buckets, } if len(res.Buckets) > 0 { res.ErrDetail = "partial failure in starting site resync" } if len(buckets) != 0 && len(res.Buckets) == len(buckets) { return res, fmt.Errorf("all buckets resync failed") } return res, nil } // cancelResync stops an ongoing site level resync for the peer specified.
Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 184.7K bytes
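This snippet shows the tail of a partial-failure aggregation: buckets that failed to start are collected in the result, any failure sets ErrDetail, and only when every requested bucket failed does the call itself return an error. Here is a compact, runnable sketch of that logic; the result type models only the fields the snippet touches, and the function signature is invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// ResyncResult is a stand-in for the response type the snippet builds.
type ResyncResult struct {
	OpType    string
	ResyncID  string
	Buckets   []string // buckets that failed to start resync
	ErrDetail string
}

// startSiteResync sketches the snippet's error handling: partial
// failures are reported in the result, total failure is an error.
func startSiteResync(resyncID string, requested, failed []string) (ResyncResult, error) {
	res := ResyncResult{
		OpType:   "start",
		ResyncID: resyncID,
		Buckets:  failed,
	}
	if len(res.Buckets) > 0 {
		res.ErrDetail = "partial failure in starting site resync"
	}
	if len(requested) != 0 && len(res.Buckets) == len(requested) {
		return res, errors.New("all buckets resync failed")
	}
	return res, nil
}

func main() {
	res, err := startSiteResync("r-42", []string{"a", "b"}, []string{"a"})
	fmt.Printf("result: %+v, err: %v\n", res, err)
}
```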
CHANGELOG/CHANGELOG-1.4.md
If this happens to you, you can wait at most 10 minutes for the replication controller to start a resync; the extra pods will then be deleted. Or, you can manually trigger a resync by changing the replicas in the spec of the replication controller.
### kubectl delete: < v1.4.0 client vs >=v1.4.0 cluster
Last Modified: Thu Dec 24 02:28:26 UTC 2020 - 133.5K bytes
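The changelog's manual workaround is simply to edit .spec.replicas on the replication controller. Below is a hedged sketch of doing that programmatically with today's client-go API (which postdates the 1.4-era text); the namespace "default" and controller name "my-rc" are placeholders, and error handling is kept minimal for brevity.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	rcs := clientset.CoreV1().ReplicationControllers("default")

	// Fetch the controller, bump .spec.replicas, and write it back;
	// the spec change prompts the controller to resync its pods.
	rc, err := rcs.Get(ctx, "my-rc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(1)
	if rc.Spec.Replicas != nil {
		replicas = *rc.Spec.Replicas
	}
	replicas++
	rc.Spec.Replicas = &replicas
	if _, err := rcs.Update(ctx, rc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("scaled my-rc to %d replicas\n", replicas)
}
```

From the command line, `kubectl scale rc my-rc --replicas=N` performs the same spec edit.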