Results 1 - 10 of 66 for Parity (0.09 sec)
docs/erasure/storage-class/README.md
### Allowed values for STANDARD storage class

`STANDARD` storage class implies more parity than `REDUCED_REDUNDANCY` class. So, `STANDARD` parity drives should be

- Greater than or equal to 2, if `REDUCED_REDUNDANCY` parity is not set.
- Greater than `REDUCED_REDUNDANCY` parity, if it is set.

Parity blocks can not be higher than data blocks, so `STANDARD` storage class parity can not be higher than N/2. (N being total number of drives)

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 5.9K bytes - Viewed (0)
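The rules quoted above reduce to a small predicate. A minimal, self-contained sketch, assuming a hypothetical `validateStandardParity` helper (illustrative only, not MinIO's actual configuration code):

```go
package main

import (
	"errors"
	"fmt"
)

// validateStandardParity mirrors the README's constraints: STANDARD parity
// must be >= 2 when REDUCED_REDUNDANCY parity is unset, strictly greater
// than it when set, and never more than N/2 for N total drives.
func validateStandardParity(standard, rrs, totalDrives int) error {
	switch {
	case standard > totalDrives/2:
		return errors.New("STANDARD parity cannot exceed N/2")
	case rrs > 0 && standard <= rrs:
		return errors.New("STANDARD parity must exceed REDUCED_REDUNDANCY parity")
	case rrs == 0 && standard < 2:
		return errors.New("STANDARD parity must be at least 2")
	}
	return nil
}

func main() {
	fmt.Println(validateStandardParity(4, 2, 16)) // <nil>
	fmt.Println(validateStandardParity(9, 2, 16)) // error: exceeds N/2
}
```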
docs/distributed/SIZING.md
# Erasure code sizing guide

## Toy Setups

Capacity-constrained environments: MinIO will work, but this is not recommended for production.

| servers | drives (per node) | stripe_size | parity chosen (default) | tolerance for reads (servers) | tolerance for writes (servers) |
|--------:|------------------:|------------:|------------------------:|------------------------------:|-------------------------------:|

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 15 23:04:20 UTC 2023 - 3.9K bytes - Viewed (0)
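To make the "parity chosen (default)" column concrete, here is a sketch of the default parity schedule as the sizing guide describes it. This is an assumption to verify against the full table: `defaultParity` is an illustrative helper, and the tolerance figures below are per drive, not per server as in the guide's table.

```go
package main

import "fmt"

// defaultParity sketches the default parity picked for a given erasure
// stripe size (assumption: mirrors the documented MinIO defaults).
func defaultParity(stripeSize int) int {
	switch {
	case stripeSize <= 1:
		return 0
	case stripeSize <= 3:
		return 1
	case stripeSize <= 5:
		return 2
	case stripeSize <= 7:
		return 3
	default:
		return 4
	}
}

func main() {
	for _, stripe := range []int{4, 8, 12, 16} {
		parity := defaultParity(stripe)
		// Drive-level failure tolerance for reads equals the parity count;
		// writes tolerate one drive less when data == parity (assumption:
		// the usual data+1 write quorum applies in that case).
		readTol, writeTol := parity, parity
		if parity == stripe/2 {
			writeTol = parity - 1
		}
		fmt.Printf("stripe=%d parity=%d read_tolerance=%d write_tolerance=%d\n",
			stripe, parity, readTol, writeTol)
	}
}
```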
cmd/notification-summary.go
    for _, disk := range diskInfo {
        // Ignore invalid.
        if disk.PoolIndex < 0 || len(s.Backend.StandardSCData) <= disk.PoolIndex {
            // https://github.com/minio/minio/issues/16500
            continue
        }
        // Ignore parity disks
        if disk.DiskIndex < s.Backend.StandardSCData[disk.PoolIndex] {
            capacity += disk.TotalSpace
        }
    }
    return
    }

    // GetTotalCapacityFree gets the total capacity free in the cluster.

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Jun 20 00:53:08 UTC 2023 - 2.2K bytes - Viewed (0)
cmd/metrics-v3-cluster-config.go
    )

    var (
        configRRSParityMD      = NewGaugeMD(configRRSParity, "Reduced redundancy storage class parity")
        configStandardParityMD = NewGaugeMD(configStandardParity, "Standard storage class parity")
    )

    // loadClusterConfigMetrics - `MetricsLoaderFn` for cluster config
    // such as standard and RRS parity.
    func loadClusterConfigMetrics(ctx context.Context, m MetricValues, c *metricsCache) error {

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri May 24 12:50:46 UTC 2024 - 1.5K bytes - Viewed (0)
docs/erasure/README.md
reconstruct missing or corrupted data. MinIO uses Reed-Solomon code to shard objects into variable data and parity blocks. For example, in a 12 drive setup, an object can be sharded to a variable number of data and parity blocks across all the drives - ranging from six data and six parity blocks to ten data and two parity blocks. By default, MinIO shards the objects across N/2 data and N/2 parity drives. Though, you can use [storage classes](https://github.com/minio/minio/tree/master/docs/erasure/storage-class)...
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 4.2K bytes - Viewed (0)
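The sharding described in that README can be reproduced outside MinIO with the same family of Reed-Solomon codes. A standalone sketch using the github.com/klauspost/reedsolomon library, with an 8 data + 4 parity layout chosen to match the 12-drive example; this is not MinIO's internal erasure path in cmd/erasure-coding.go:

```go
package main

import (
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	// 12-drive example from the README: 8 data + 4 parity shards.
	const dataShards, parityShards = 8, 4
	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		log.Fatal(err)
	}

	object := []byte("example object payload that will be sharded across drives")

	// Split the object into data shards, then compute the parity shards.
	shards, err := enc.Split(object)
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil {
		log.Fatal(err)
	}

	// Simulate losing two shards (one data, one parity) and reconstruct them;
	// up to parityShards shards may be lost and still be recoverable.
	shards[0], shards[9] = nil, nil
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	ok, err := enc.Verify(shards)
	fmt.Println("all shards consistent:", ok, err)
}
```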
cmd/xl-storage-format-v1.go
        version == xlMetaVersion100) && format == xlMetaFormat)
    }

    // Verifies if the backend format metadata is sane by validating
    // the ErasureInfo, i.e. data and parity blocks.
    func isXLMetaErasureInfoValid(data, parity int) bool {
        return ((data >= parity) && (data > 0) && (parity >= 0))
    }

    //msgp:clearomitted
    //go:generate msgp -file=$GOFILE -unexported

    // A xlMetaV1Object represents `xl.meta` metadata header.

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Oct 22 15:30:50 UTC 2024 - 8.4K bytes - Viewed (0)
docs/distributed/DESIGN.md
of common SLA here: the original cluster had 1024 drives with 16 drives per erasure set and a default parity of '4'; the second pool is expected to have a minimum of 8 drives per erasure set to match the original cluster SLA (parity count) of '4'. A stripe of '12' drives per erasure set in the second pool satisfies the original pool's parity count. Refer to the sizing guide for details on the default parity count chosen for different erasure stripe sizes [here](https://github.com/minio/minio/blob/mas...

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Wed Feb 26 09:25:50 UTC 2025 - 8K bytes - Viewed (1)
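The expansion rule in that passage follows from the constraint that parity can never exceed half of a stripe. A minimal sketch of the arithmetic; the `canMatchParity` helper is hypothetical, used only to illustrate the check:

```go
package main

import "fmt"

// canMatchParity reports whether a new pool's stripe size can honour an
// existing parity count, i.e. parity may not exceed half the stripe.
func canMatchParity(stripeSize, requiredParity int) bool {
	return requiredParity <= stripeSize/2
}

func main() {
	const existingParity = 4 // original pool: 16-drive erasure sets with EC:4

	for _, stripe := range []int{6, 8, 12, 16} {
		fmt.Printf("stripe=%2d matches EC:%d? %v\n",
			stripe, existingParity, canMatchParity(stripe, existingParity))
	}
	// A 6-drive stripe cannot hold EC:4; 8, 12 and 16 can, matching the
	// minimum of 8 drives per erasure set quoted above.
}
```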
cmd/fmt-gen.go
    import (
        "encoding/json"
        "log"
        "os"
        "path/filepath"

        "github.com/klauspost/compress/zip"
        "github.com/minio/cli"
    )

    var fmtGenFlags = []cli.Flag{
        cli.IntFlag{
            Name:  "parity",
            Usage: "specify erasure code parity",
        },
        cli.StringFlag{
            Name:  "deployment-id",
            Usage: "deployment-id of the MinIO cluster for which format.json is needed",
        },
        cli.StringFlag{
            Name:  "address",

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 3.7K bytes - Viewed (0)
cmd/erasure-coding.go
    func erasureSelfTest() {
        // Approx runtime ~1ms
        var testConfigs [][2]uint8
        for total := uint8(4); total < 16; total++ {
            for data := total / 2; data < total; data++ {
                parity := total - data
                testConfigs = append(testConfigs, [2]uint8{data, parity})
            }
        }
        got := make(map[[2]uint8]map[ErasureAlgo]uint64, len(testConfigs))
        // Copied from output of fmt.Printf("%#v", got) at the end.

Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 8.5K bytes - Viewed (0)
cmd/erasure-sets_test.go
            t.Fatalf("Unable to format drives for erasure, %s", err)
        }
        ep := PoolEndpoints{Endpoints: endpoints}
        parity, err := ecDrivesNoConfig(16)
        if err != nil {
            t.Fatalf("Unexpected error during EC drive config: %v", err)
        }
        if _, err := newErasureSets(ctx, ep, storageDisks, format, parity, 0); err != nil {
            t.Fatalf("Unable to initialize erasure")
        }
    }
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 6.8K bytes - Viewed (0)