Results 21 - 30 of 569 for Pools (0.04 sec)
.github/workflows/run-mint.sh

```sh
docker volume rm $(docker volume ls -q -f dangling=true) || true

# Stop two nodes, one of each pool, to check that all S3 calls work while quorum is still there
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio2
[ "${MODE}" == "pools" ] && docker-compose -f minio-${MODE}.yaml stop minio6

# Pause one node, to check that all S3 calls work while one node goes wrong
```

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Mon Jan 20 14:49:07 UTC 2025 - 1.9K bytes - Viewed (0)
cmd/server-main.go

```go
if err != nil {
	return err
}
setDriveCount := uint64(v)

var pools []poolArgs
switch cv.Version {
case "v1":
	cfV1 := config.ServerConfigV1{}
	if err = yaml.Unmarshal(rd, &cfV1); err != nil {
		return err
	}
	pools = make([]poolArgs, 0, len(cfV1.Pools))
	for _, list := range cfV1.Pools {
		pools = append(pools, poolArgs{
			args:          list,
			setDriveCount: setDriveCount,
		})
```

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Tue May 27 15:18:36 UTC 2025 - 35.9K bytes - Viewed (4)
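For orientation, here is a minimal, self-contained sketch of what parsing such a v1 pools config could look like. The serverConfigV1 struct below is a simplified stand-in invented for this example (MinIO's real type lives in its internal config package), and gopkg.in/yaml.v2 is assumed as the YAML library:

```go
// Hypothetical, simplified stand-in for MinIO's internal config types;
// illustrates how a v1 pools config maps onto per-pool endpoint lists.
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type serverConfigV1 struct {
	Version string     `yaml:"version"`
	Address string     `yaml:"address"`
	Pools   [][]string `yaml:"pools"` // each pool is a list of endpoint args
}

func main() {
	raw := []byte(`
version: v1
address: ':9000'
pools:
  - - 'http://localhost:9001/mnt/disk{1...4}/'
  - - 'http://localhost:9002/mnt/disk{1...4}/'
`)
	var cfg serverConfigV1
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	for i, pool := range cfg.Pools {
		fmt.Printf("pool %d: %v\n", i+1, pool)
	}
}
```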
docs/distributed/DESIGN.md

- MinIO also supports expansion of existing clusters via server pools. Each pool is a self-contained entity with the same SLAs (read/write quorum) for each object as the original cluster. MinIO uses the existing namespace for lookup validation to ensure conflicting objects are not created; when no such object exists, MinIO simply places the new object in the least used pool (a sketch of that selection follows).

### There are no limits on how many server pools can be combined

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Wed Feb 26 09:25:50 UTC 2025 - 8K bytes - Viewed (2)
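A minimal sketch of the least-used-pool placement idea described above, using hypothetical names (pool, choosePool, free); MinIO's actual selection logic lives in cmd/erasure-server-pool.go and is more involved:

```go
// Hypothetical sketch of least-used-pool placement: pick the pool with the
// most free space for a new object. Names here are illustrative, not MinIO's.
package main

import "fmt"

type pool struct {
	name string
	free uint64 // free bytes reported by the pool
}

// choosePool returns the index of the pool with the most free space.
func choosePool(pools []pool) int {
	best := 0
	for i, p := range pools {
		if p.free > pools[best].free {
			best = i
		}
	}
	return best
}

func main() {
	pools := []pool{{"pool-1", 10 << 30}, {"pool-2", 42 << 30}}
	fmt.Println("placing new object in", pools[choosePool(pools)].name)
}
```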
guava/src/com/google/common/util/concurrent/MoreExecutors.java

```java
 * is complete. It does so by using daemon threads and adding a shutdown hook to wait for their
 * completion.
 *
 * <p>This is mainly for fixed thread pools. See {@link Executors#newFixedThreadPool(int)}.
 *
 * @param executor the executor to modify to make sure it exits when the application is finished
```

Registered: Fri Dec 26 12:43:10 UTC 2025 - Last Modified: Wed Oct 08 18:55:33 UTC 2025 - 45.2K bytes - Viewed (0)
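Since the rest of this page is Go, here is a rough Go analogue of that idea (wait for in-flight work on shutdown, but never block exit past a timeout), not Guava's implementation; the two-second timeout is an arbitrary choice for the example:

```go
// Go analogue of an "exiting" worker pool: a shutdown path that waits
// (with a timeout) for in-flight tasks before the process exits.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			time.Sleep(100 * time.Millisecond) // simulated task
			fmt.Println("task", id, "done")
		}(i)
	}

	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	// Wait for completion, but never block exit longer than the timeout,
	// mirroring the termination timeout of Guava's exiting executors.
	select {
	case <-done:
		fmt.Println("all tasks finished")
	case <-time.After(2 * time.Second):
		fmt.Println("timed out waiting; exiting anyway")
	}
}
```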
cmd/endpoint.go
Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Sun Sep 28 20:59:21 UTC 2025 - 34.5K bytes - Viewed (0)
schema/pool.go

```go
package schema

import (
	"reflect"
	"sync"
)

// sync pools
var (
	normalPool sync.Map

	poolInitializer = func(reflectType reflect.Type) FieldNewValuePool {
		v, _ := normalPool.LoadOrStore(reflectType, &sync.Pool{
			New: func() interface{} {
				return reflect.New(reflectType).Interface()
			},
		})
		return v.(FieldNewValuePool)
	}
```

Registered: Sun Dec 28 09:35:17 UTC 2025 - Last Modified: Mon Apr 11 13:37:44 UTC 2022 - 345 bytes - Viewed (0)
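The same pattern, one sync.Pool per reflect.Type created lazily via sync.Map.LoadOrStore, can be shown as a self-contained example; the User type and poolFor helper are invented for illustration:

```go
// Self-contained demo of the per-type pool pattern above: one sync.Pool per
// reflect.Type, created lazily and shared via sync.Map.LoadOrStore.
package main

import (
	"fmt"
	"reflect"
	"sync"
)

var pools sync.Map // reflect.Type -> *sync.Pool

func poolFor(t reflect.Type) *sync.Pool {
	v, _ := pools.LoadOrStore(t, &sync.Pool{
		New: func() interface{} { return reflect.New(t).Interface() },
	})
	return v.(*sync.Pool)
}

type User struct{ Name string }

func main() {
	p := poolFor(reflect.TypeOf(User{}))
	u := p.Get().(*User) // a fresh or recycled *User
	u.Name = "alice"
	fmt.Println(u.Name)
	*u = User{} // reset before returning, so no state leaks to the next borrower
	p.Put(u)
}
```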
docs/distributed/distributed-from-config-file.sh

```sh
consolePort="$((s3Port + 1000))"
cat <<EOF >/tmp/minio.configfile.$i
version: v1
address: ':${s3Port}'
console-address: ':${consolePort}'
rootUser: 'minr0otUS2r'
rootPassword: 'pBU94AGAY85e'
pools: # Specify the nodes and drives with pools
  - - 'http://localhost:9001/tmp/xl/node9001/mnt/disk{1...4}/'
    - 'http://localhost:9002/tmp/xl/node9002/mnt/disk{1,2,3,4}/'
  - - 'http://localhost:9003/tmp/xl/node9003/mnt/disk{1...4}/'
```

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Fri Jun 28 09:06:49 UTC 2024 - 3.3K bytes - Viewed (0)
cmd/metrics-v3-cluster-erasure-set.go

```go
const (
	poolIDL = "pool_id"
	setIDL  = "set_id"
)

var (
	erasureSetOverallWriteQuorumMD = NewGaugeMD(erasureSetOverallWriteQuorum,
		"Overall write quorum across pools and sets")
	erasureSetOverallHealthMD = NewGaugeMD(erasureSetOverallHealth,
		"Overall health across pools and sets (1=healthy, 0=unhealthy)")
	erasureSetReadQuorumMD = NewGaugeMD(erasureSetReadQuorum,
		"Read quorum for the erasure set in a pool", poolIDL, setIDL)
```

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Tue May 14 07:25:56 UTC 2024 - 4.4K bytes - Viewed (0)
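NewGaugeMD is MinIO-internal, but the metric shape maps naturally onto a Prometheus GaugeVec with pool_id and set_id labels. A sketch using the standard client_golang library, with an assumed metric name:

```go
// Sketch of the same metric shape using the standard Prometheus client;
// the metric name is illustrative, only the labels/help text mirror the snippet.
package main

import "github.com/prometheus/client_golang/prometheus"

var erasureSetReadQuorum = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "minio_cluster_erasure_set_read_quorum", // assumed name
		Help: "Read quorum for the erasure set in a pool",
	},
	[]string{"pool_id", "set_id"},
)

func main() {
	prometheus.MustRegister(erasureSetReadQuorum)
	// Report a read quorum of 4 for set 0 in pool 0.
	erasureSetReadQuorum.WithLabelValues("0", "0").Set(4)
}
```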
cmd/erasure-server-pool.go

```go
})
if err != nil {
	return nil, err
}
if deploymentID == "" {
	// all pools should have same deployment ID
	deploymentID = formats[i].ID
}
// Validate if users brought different DeploymentID pools.
if deploymentID != formats[i].ID {
	return nil, fmt.Errorf("all pools must have same deployment ID - expected %s, got %s for pool(%s)",
		deploymentID, formats[i].ID, humanize.Ordinal(i+1))
}
```

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Sun Sep 28 20:59:21 UTC 2025 - 89.2K bytes - Viewed (0)
docs/distributed/README.md

endlessly, so you can expand your clusters as needed. Restarts are immediate and non-disruptive to applications. Each group of servers on the command line is called a pool; there are 2 server pools in this example. New objects are placed in server pools in proportion to the amount of free space in each pool. Within each pool, the location of the erasure set of drives is determined by a deterministic hashing algorithm, as sketched below.

> **NOTE:** **Each pool you add must...

Registered: Sun Dec 28 19:28:13 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 8.9K bytes - Viewed (0)
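A toy version of that deterministic hashing step: the same object name always maps to the same erasure set. crc32 stands in here for whatever hash MinIO actually uses, and erasureSetIndex is an invented helper:

```go
// Illustrative deterministic erasure-set selection: hash the object name and
// take it modulo the number of sets, so placement is stable across lookups.
package main

import (
	"fmt"
	"hash/crc32"
)

func erasureSetIndex(object string, setCount int) int {
	return int(crc32.ChecksumIEEE([]byte(object)) % uint32(setCount))
}

func main() {
	for _, obj := range []string{"photos/a.jpg", "photos/b.jpg"} {
		fmt.Printf("%s -> set %d\n", obj, erasureSetIndex(obj, 16))
	}
}
```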