Results 1 - 10 of 221 for inodes (0.04 sec)
docs/distributed/README.md
A stand-alone MinIO server goes down if the server hosting the drives goes offline. In contrast, a distributed MinIO setup with _m_ servers and _n_ drives keeps your data safe as long as _m/2_ servers, or _m*n_/2 or more drives, are online. For example, a 16-server distributed setup with 200 drives per node would continue serving files with up to 4 servers offline in the default configuration, i.e. with around 800 drives down, MinIO would continue to read and write objects.
Registered: 2025-05-25 19:28 - Last Modified: 2024-01-18 07:03 - 8.8K bytes - Viewed (0) -
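The quorum rule above can be sketched in Go. This is a hypothetical helper for illustration, not MinIO's actual code; the name `quorumOK` and the uniform drives-per-server assumption are mine:

```go
package main

import "fmt"

// quorumOK reports whether a distributed deployment still has quorum
// under the rule described above: data stays safe while at least
// m/2 servers, i.e. m*n/2 drives, remain online.
func quorumOK(servers, drivesPerServer, serversOffline int) bool {
	totalDrives := servers * drivesPerServer
	onlineDrives := (servers - serversOffline) * drivesPerServer
	// At least half of all drives must be online.
	return onlineDrives*2 >= totalDrives
}

func main() {
	// 16 servers x 200 drives: 4 servers (800 drives) offline is tolerated.
	fmt.Println(quorumOK(16, 200, 4)) // true
	// Exactly half online still satisfies the m*n/2 bound.
	fmt.Println(quorumOK(16, 200, 8)) // true
	// More than half offline loses quorum.
	fmt.Println(quorumOK(16, 200, 9)) // false
}
```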
cmd/metrics-v3-system-drive.go
driveUsedInodesMD = NewGaugeMD(driveUsedInodes, "Total used inodes on a drive", allDriveLabels...)
driveFreeInodesMD = NewGaugeMD(driveFreeInodes, "Total free inodes on a drive", allDriveLabels...)
driveTotalInodesMD = NewGaugeMD(driveTotalInodes, "Total inodes available on a drive", allDriveLabels...)
driveTimeoutErrorsMD = NewCounterMD(driveTimeoutErrorsTotal,
Registered: 2025-05-25 19:28 - Last Modified: 2025-03-30 00:56 - 7.8K bytes - Viewed (0) -
docs/metrics/v3.md
| `minio_cluster_health_drives_offline_count` | Count of offline drives in the cluster. <br><br>Type: gauge | |
| `minio_cluster_health_drives_online_count` | Count of online drives in the cluster. <br><br>Type: gauge | |
| `minio_cluster_health_drives_count` | Count of all drives in the cluster. <br><br>Type: gauge | |
Registered: 2025-05-25 19:28 - Last Modified: 2025-02-26 09:25 - 45.2K bytes - Viewed (0) -
docs/metrics/prometheus/list.md
|:------------------------------------|:--------------------------------------|
| `minio_cluster_drive_offline_total` | Total drives offline in this cluster. |
| `minio_cluster_drive_online_total` | Total drives online in this cluster. |
| `minio_cluster_drive_total` | Total drives in this cluster. |

## Cluster ILM Metrics
Registered: 2025-05-25 19:28 - Last Modified: 2024-11-06 15:44 - 43.2K bytes - Viewed (0) -
docs/metrics/prometheus/grafana/node/minio-node.json
"intervalFactor": 1, "legendFormat": "", "metric": "process_start_time_seconds", "refId": "A", "step": 60 } ], "title": "Total Drives", "type": "stat" }, { "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "description": "", "fieldConfig": {
Registered: 2025-05-25 19:28 - Last Modified: 2024-06-04 13:24 - 22.4K bytes - Viewed (0) -
cmd/metrics-resource.go
percUtil: "Percentage of time the disk was busy",
usedBytes: "Used bytes on a drive",
totalBytes: "Total bytes on a drive",
usedInodes: "Total inodes used on a drive",
totalInodes: "Total inodes on a drive",
cpuUser: "CPU user time",
cpuSystem: "CPU system time",
cpuIdle: "CPU idle time",
cpuIOWait: "CPU ioWait time",
Registered: 2025-05-25 19:28 - Last Modified: 2025-03-30 00:56 - 17.2K bytes - Viewed (0) -
cmd/globals.go
diskReserveFraction = 0.15
// diskAssumeUnknownSize is the size to assume when an unknown size upload is requested.
diskAssumeUnknownSize = 1 << 30
// diskMinInodes is the minimum number of inodes we want free on a disk to perform writes.
diskMinInodes = 1000
// tlsClientSessionCacheSize is the cache size for client sessions.
tlsClientSessionCacheSize = 100
)

func init() {
Registered: 2025-05-25 19:28 - Last Modified: 2024-09-03 18:23 - 16.2K bytes - Viewed (1) -
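A minimal sketch of how constants like these could gate writes on a disk. This is not MinIO's actual implementation; the helper `canWrite` and its parameters are assumptions for illustration, while the constant values come from the snippet above:

```go
package main

import "fmt"

const (
	diskReserveFraction   = 0.15    // fraction of disk capacity held in reserve
	diskMinInodes         = 1000    // minimum free inodes required before writing
	diskAssumeUnknownSize = 1 << 30 // assume 1 GiB for unknown-size uploads
)

// canWrite is a hypothetical check: refuse writes when free inodes run low,
// or when the upload (plus the reserve) would not fit in free space.
// uploadSize < 0 means the upload size is unknown.
func canWrite(totalBytes, freeBytes, freeInodes uint64, uploadSize int64) bool {
	if freeInodes < diskMinInodes {
		return false
	}
	size := uint64(diskAssumeUnknownSize)
	if uploadSize >= 0 {
		size = uint64(uploadSize)
	}
	reserve := uint64(float64(totalBytes) * diskReserveFraction)
	return freeBytes >= size+reserve
}

func main() {
	// 1 TiB disk, 512 GiB free, plenty of inodes: unknown-size upload fits.
	fmt.Println(canWrite(1<<40, 1<<39, 5000, -1)) // true
	// Only 1 GiB free: the write would dip into the 15% reserve.
	fmt.Println(canWrite(1<<40, 1<<30, 5000, -1)) // false
	// Too few free inodes blocks writes regardless of space.
	fmt.Println(canWrite(1<<40, 1<<39, 500, -1)) // false
}
```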
cmd/erasure-sets.go
beforeDrives := formatsToDrivesInfo(s.endpoints.Endpoints, formats, sErrs)
res.After.Drives = make([]madmin.HealDriveInfo, len(beforeDrives))
res.Before.Drives = make([]madmin.HealDriveInfo, len(beforeDrives))
// Copy "after" drive state too from before.
for k, v := range beforeDrives {
	res.Before.Drives[k] = v
	res.After.Drives[k] = v
}
if countErrs(sErrs, errUnformattedDisk) == 0 {
Registered: 2025-05-25 19:28 - Last Modified: 2025-01-19 23:13 - 37K bytes - Viewed (1) -
cmd/erasure-decode.go
return nil, fmt.Errorf("%w (offline-disks=%d/%d)", errErasureReadQuorum, disksNotFound, len(p.readers))
}

// Decode reads from readers, reconstructs data if needed and writes the data to the writer.
// A set of preferred drives can be supplied. In that case they will be used and the data reconstructed.
Registered: 2025-05-25 19:28 - Last Modified: 2024-08-29 01:40 - 9.5K bytes - Viewed (0) -
docs/erasure/storage-class/README.md
on a 16-drive MinIO deployment. If you use eight data and eight parity drives, file space usage is approximately doubled: a 100 MiB file will take 200 MiB of space. But if you use ten data and six parity drives, the same 100 MiB file takes around 160 MiB. With 14 data and two parity drives, a 100 MiB file takes only approximately 114 MiB.
Registered: 2025-05-25 19:28 - Last Modified: 2023-08-15 23:04 - 5.8K bytes - Viewed (0)
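The storage-class arithmetic above follows a simple ratio: raw usage = size × (data + parity) / data. A small sketch reproducing the figures from the excerpt (the helper `rawUsage` is an assumption, not MinIO code):

```go
package main

import "fmt"

// rawUsage returns the approximate raw space consumed by an object of the
// given size under erasure coding with the given data/parity split.
func rawUsage(sizeMiB float64, data, parity int) float64 {
	return sizeMiB * float64(data+parity) / float64(data)
}

func main() {
	fmt.Printf("%.0f MiB\n", rawUsage(100, 8, 8))  // 200 MiB
	fmt.Printf("%.0f MiB\n", rawUsage(100, 10, 6)) // 160 MiB
	fmt.Printf("%.0f MiB\n", rawUsage(100, 14, 2)) // 114 MiB
}
```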