Results 21 - 30 of 388 for drives
- cmd/server-main_test.go
```go
ctx, cancel := context.WithCancel(t.Context())
defer cancel()

// Tests for ErasureSD object layer.
nDisks := 1
disks, err := getRandomDisks(nDisks)
if err != nil {
	t.Fatal("Failed to create drives for the backend")
}
defer removeRoots(disks)

obj, err := newObjectLayer(ctx, mustGetPoolEndpoints(0, disks...))
if err != nil {
	t.Fatal("Unexpected object layer initialization error", err)
}
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 3.1K bytes - Viewed (0)
- docs/throttle/README.md
If you have traditional spinning (HDD) drives, some applications with high concurrency might require the MinIO cluster to be tuned to avoid random I/O on the drives. The way to convert highly concurrent I/O into sequential I/O is to reduce the number of concurrent operations allowed per cluster. This makes the MinIO cluster operationally resilient to such workloads, while also keeping the drives at optimal efficiency and responsiveness.
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 16 08:43:49 UTC 2024 - 1.5K bytes - Viewed (1)
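The throttle this README describes caps concurrent operations cluster-wide. As a rough sketch of the idea only (plain Go, not MinIO's implementation; `maxRequests` and the request loop are invented for illustration), a buffered channel can act as a counting semaphore so a burst of requests drains in bounded batches instead of hitting every drive at once:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const maxRequests = 4 // assumed cap, tuned down for spinning drives
	sem := make(chan struct{}, maxRequests)

	var wg sync.WaitGroup
	for i := 0; i < 32; i++ { // a burst of 32 concurrent client requests
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // block until one of the 4 slots frees up
			defer func() { <-sem }() // release the slot when done
			fmt.Println("serving request", id)
		}(i)
	}
	wg.Wait()
}
```

With the cap in place, excess requests queue and drain in order rather than issuing random I/O to every drive simultaneously.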
- docs/metrics/prometheus/list.md
| Name | Description |
|------|-------------|
| `minio_node_drive_total_inodes` | Total inodes on a drive. |
| `minio_node_drive_used_inodes` | Total inodes used on a drive. |
| `minio_node_drive_reads_per_sec` | Reads per second on a drive. |
| `minio_node_drive_reads_kb_per_sec` | Kilobytes read per second on a drive. |
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 43.4K bytes - Viewed (0)
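For orientation, per-drive metrics like the ones in this table are conventionally exported as labeled gauges. A minimal sketch using the Prometheus Go client (this is not MinIO's exporter code; the drive label value and the sample number are placeholders):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// One gauge per metric name, with a "drive" label to distinguish drives.
	usedInodes := prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "minio_node_drive_used_inodes",
		Help: "Total inodes used on a drive.",
	}, []string{"drive"})
	prometheus.MustRegister(usedInodes)

	usedInodes.WithLabelValues("/mnt/disk1").Set(123456) // placeholder sample

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```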
- cmd/endpoint-ellipses.go
```go
	msg := fmt.Sprintf("Incorrect number of endpoints provided %s, number of drives %d is not divisible by any supported erasure set sizes %d", args, commonSize, setSizes)
	return nil, config.ErrInvalidNumberOfErasureEndpoints(nil).Msg(msg)
}

var setSize uint64
// A custom set drive count allows overriding the automatic distribution;
// only meant if you want to further optimize drive distribution.
if setDriveCount > 0 {
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 14.6K bytes - Viewed (0)
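The error in this excerpt fires when no supported erasure set size divides the drive count evenly. A simplified standalone sketch of that check, assuming set sizes 2 through 16 and preferring the largest divisor (MinIO's real selection logic in endpoint-ellipses.go is more involved):

```go
package main

import "fmt"

// pickSetSize returns the largest assumed-supported erasure set size
// (2..16) that evenly divides the total drive count, if any.
func pickSetSize(totalDrives int) (int, bool) {
	for size := 16; size >= 2; size-- {
		if totalDrives%size == 0 {
			return size, true
		}
	}
	return 0, false
}

func main() {
	for _, n := range []int{16, 24, 31} {
		if size, ok := pickSetSize(n); ok {
			fmt.Printf("%d drives -> %d set(s) of %d\n", n, n/size, size)
		} else {
			fmt.Printf("%d drives -> not divisible by any supported set size\n", n)
		}
	}
}
```

A drive count like 31 (prime and above 16) has no valid divisor, which is exactly the situation the error message above reports.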
- cmd/speedtest.go
```go
ch := make(chan madmin.SpeedTestResult, 1)
go func() {
	defer xioutil.SafeClose(ch)
	concurrency := opts.concurrencyStart

	if opts.autotune {
		// If we have fewer drives than the configured concurrency, start
		// with concurrency equal to the number of drives, since the
		// default of '32' might be too big to complete within the total
		// time of 10s.
		if globalEndpoints.NEndpoints() < concurrency {
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue May 27 15:19:03 UTC 2025 - 9.2K bytes - Viewed (0)
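The comment in the excerpt amounts to clamping the starting concurrency to the endpoint count. A minimal sketch of just that clamp (the function and parameter names are invented for illustration):

```go
package main

import "fmt"

// startConcurrency mirrors the clamp described above: with autotune on,
// never start with more concurrent requests than there are endpoints,
// since the default of 32 might not complete within the 10s test window.
func startConcurrency(configured, nEndpoints int, autotune bool) int {
	if autotune && nEndpoints < configured {
		return nEndpoints
	}
	return configured
}

func main() {
	fmt.Println(startConcurrency(32, 4, true))  // 4: fewer endpoints than the default
	fmt.Println(startConcurrency(32, 64, true)) // 32: the default is already smaller
}
```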
- docs/distributed/distributed-from-config-file.sh
```sh
cat <<EOF >/tmp/minio.configfile.$i
version: v1
address: ':${s3Port}'
console-address: ':${consolePort}'
rootUser: 'minr0otUS2r'
rootPassword: 'pBU94AGAY85e'
pools: # Specify the nodes and drives with pools
  - - 'http://localhost:9001/tmp/xl/node9001/mnt/disk{1...4}/'
    - 'http://localhost:9002/tmp/xl/node9002/mnt/disk{1,2,3,4}/'
  - - 'http://localhost:9003/tmp/xl/node9003/mnt/disk{1...4}/'
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Jun 28 09:06:49 UTC 2024 - 3.3K bytes - Viewed (0)
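`disk{1...4}` and `disk{1,2,3,4}` in the pool definitions above name the same four drives. A hypothetical Go helper, just to make the ellipsis notation concrete (MinIO has its own ellipses parser; `expand` here is invented for illustration):

```go
package main

import "fmt"

// expand unrolls a {start...end} range the way the pool notation implies.
func expand(prefix string, start, end int, suffix string) []string {
	var out []string
	for i := start; i <= end; i++ {
		out = append(out, fmt.Sprintf("%s%d%s", prefix, i, suffix))
	}
	return out
}

func main() {
	for _, e := range expand("http://localhost:9001/tmp/xl/node9001/mnt/disk", 1, 4, "/") {
		fmt.Println(e) // disk1/ through disk4/, same as the disk{1,2,3,4} form
	}
}
```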
- cmd/erasure-sets.go
```go
}

// Fetch all the drive info status.
beforeDrives := formatsToDrivesInfo(s.endpoints.Endpoints, formats, sErrs)
res.After.Drives = make([]madmin.HealDriveInfo, len(beforeDrives))
res.Before.Drives = make([]madmin.HealDriveInfo, len(beforeDrives))

// Copy the "before" drive state into "after" as well.
for k, v := range beforeDrives {
	res.Before.Drives[k] = v
	res.After.Drives[k] = v
}
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 37K bytes - Viewed (1)
- cmd/peer-s3-client.go
```go
	SetCount: -1, // explicitly set an invalid value -1, for bucket heal scenario
}

for i, err := range errs {
	if err == nil {
		res.Before.Drives = append(res.Before.Drives, healBucketResults[i].Before.Drives...)
		res.After.Drives = append(res.After.Drives, healBucketResults[i].After.Drives...)
	}
}

	return res, nil
}

// ListBuckets lists buckets across all nodes and returns a consistent view:
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 15.6K bytes - Viewed (0)
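The doc comment in this excerpt promises a consistent view across nodes. As a hedged illustration only (not MinIO's actual merge rule), one way to get a deterministic listing from per-node results is to de-duplicate by bucket name and sort; `bucketInfo` and the sample data below are invented:

```go
package main

import (
	"fmt"
	"sort"
)

type bucketInfo struct{ Name string }

// mergeBuckets de-duplicates per-node listings by name and sorts the
// result, so every caller sees the same view regardless of which node
// answered.
func mergeBuckets(perNode [][]bucketInfo) []bucketInfo {
	seen := map[string]bucketInfo{}
	for _, list := range perNode {
		for _, b := range list {
			seen[b.Name] = b
		}
	}
	out := make([]bucketInfo, 0, len(seen))
	for _, b := range seen {
		out = append(out, b)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Name < out[j].Name })
	return out
}

func main() {
	nodeA := []bucketInfo{{"alpha"}, {"beta"}}
	nodeB := []bucketInfo{{"beta"}, {"gamma"}}
	fmt.Println(mergeBuckets([][]bucketInfo{nodeA, nodeB})) // [{alpha} {beta} {gamma}]
}
```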
- docs/logging/README.md
- Additionally, in the case of an erasure coded setup, `tags.objectLocation` provides per-object details about:
  - The pool number the object operation was performed on.
  - The set number the object operation was performed on.
  - The list of drives participating in this operation that belong to the set.

```json
{
  "version": "1",
  "deploymentid": "90e81272-45d9-4fe8-9c45-c9a7322bf4b5",
  "time": "2024-05-09T07:38:10.449688982Z",
  "event": "",
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 10.5K bytes - Viewed (0)
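The JSON excerpt above cuts off before `tags.objectLocation` itself appears. Based only on the three bullets above, here is an assumed Go shape for that entry (the field names are guesses, not MinIO's actual log schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// objectLocation is an assumed shape for the tags.objectLocation entry:
// the pool and set the operation ran on, plus that set's participating drives.
type objectLocation struct {
	PoolNumber int      `json:"poolNumber"` // guessed field name
	SetNumber  int      `json:"setNumber"`  // guessed field name
	Drives     []string `json:"drives"`     // guessed field name
}

func main() {
	loc := objectLocation{PoolNumber: 1, SetNumber: 1, Drives: []string{"/mnt/disk1", "/mnt/disk2"}}
	b, _ := json.MarshalIndent(loc, "", "  ")
	fmt.Println(string(b))
}
```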
- docs/metrics/prometheus/grafana/node/minio-node.json
"legendFormat": "Free [{{drive}}]", "refId": "C" } ], "title": "Drive Usage", "type": "timeseries" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "description": "", "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" },
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Mon Aug 04 01:46:49 UTC 2025 - 22.5K bytes - Viewed (0)