Results 1 - 3 of 3 for resyncID (0.08 sec)

  1. cmd/bucket-replication.go

    		Error:     errStr,
    		Bytes:     sz,
    	}
    }
    
    // deleteResyncMetadata deletes a bucket's resync metadata from the
    // in-memory replication resync state.
    func (p *ReplicationPool) deleteResyncMetadata(ctx context.Context, bucket string) {
    	if p == nil {
    		return
    	}
    	p.resyncer.Lock()
    	defer p.resyncer.Unlock()
    	delete(p.resyncer.statusMap, bucket)
    
    	globalSiteResyncMetrics.deleteBucket(bucket)
    }
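
    The snippet leans on MinIO-internal types (ReplicationPool, globalSiteResyncMetrics) that are not shown here. A minimal self-contained sketch of the same mutex-guarded map-delete pattern, using hypothetical names, might look like:

    package main
    
    import (
    	"fmt"
    	"sync"
    )
    
    // resyncTracker is a hypothetical stand-in for the resyncer above: a
    // status map guarded by a mutex so concurrent goroutines stay safe.
    type resyncTracker struct {
    	sync.RWMutex
    	statusMap map[string]string
    }
    
    // deleteBucket takes the write lock, arranges its release via defer, and
    // then removes the bucket's entry -- the same pattern as the snippet.
    func (r *resyncTracker) deleteBucket(bucket string) {
    	r.Lock()
    	defer r.Unlock()
    	delete(r.statusMap, bucket)
    }
    
    func main() {
    	t := &resyncTracker{statusMap: map[string]string{"photos": "running"}}
    	t.deleteBucket("photos")
    	fmt.Println(len(t.statusMap)) // prints 0
    }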
    
  2. cmd/site-replication.go

    		OpType:   "start",
    		ResyncID: rs.ResyncID,
    		Buckets:  res.Buckets,
    	}
    	if len(res.Buckets) > 0 {
    		res.ErrDetail = "partial failure in starting site resync"
    	}
    	if len(buckets) != 0 && len(res.Buckets) == len(buckets) {
    		return res, fmt.Errorf("all buckets resync failed")
    	}
    	return res, nil
    }
    
    // cancelResync stops an ongoing site level resync for the peer specified.
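
    The fragment above marks a run as a partial failure whenever any bucket fails, and returns a hard error only when every requested bucket fails. A compact stand-alone sketch of that aggregation rule, again with hypothetical names, could read:

    package main
    
    import (
    	"errors"
    	"fmt"
    )
    
    // summarize mirrors the rule in the fragment above (hypothetical names):
    // any failed bucket sets a detail message; an error is returned only
    // when every requested bucket failed.
    func summarize(requested, failed []string) (string, error) {
    	detail := ""
    	if len(failed) > 0 {
    		detail = "partial failure in starting site resync"
    	}
    	if len(requested) != 0 && len(failed) == len(requested) {
    		return detail, errors.New("all buckets resync failed")
    	}
    	return detail, nil
    }
    
    func main() {
    	detail, err := summarize([]string{"a", "b"}, []string{"a"})
    	fmt.Println(detail, err) // partial failure in starting site resync <nil>
    }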
  3. CHANGELOG/CHANGELOG-1.4.md

    If this happens to you, you can wait at most 10 minutes for the replication controller to start a resync; the extra pods will then be deleted. Or, you can manually trigger a resync by changing the replicas in the spec of the replication controller, as sketched below.
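
    As one illustration of "changing the replicas in the spec", here is a minimal client-go sketch that bumps a replication controller's replica count via the Scale subresource. The namespace "default" and name "my-rc" are assumptions, and today's client-go API postdates the 1.4-era cluster these notes describe:

    package main
    
    import (
    	"context"
    	"fmt"
    
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )
    
    func main() {
    	// Load kubeconfig from the default location (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    
    	rcs := cs.CoreV1().ReplicationControllers("default") // assumed namespace
    	scale, err := rcs.GetScale(context.TODO(), "my-rc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Changing spec.replicas makes the replication controller reconcile
    	// (resync) right away instead of waiting for the periodic pass.
    	scale.Spec.Replicas++
    	if _, err := rcs.UpdateScale(context.TODO(), "my-rc", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Printf("scaled my-rc to %d replicas\n", scale.Spec.Replicas)
    }

    The kubectl equivalent of the same spec change is `kubectl scale rc my-rc --replicas=<n>`.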
    
    ### kubectl delete: < v1.4.0 client vs >=v1.4.0 cluster
    