
Results 1 - 10 of 561 for drives (0.09 sec)

  1. docs/distributed/README.md

    If one or more drives are offline at the start of a PutObject or NewMultipartUpload operation, the object will automatically have additional data protection bits added to provide extra safety for these objects.
    
    ### High availability
    
    A stand-alone MinIO server goes down if the server hosting its drives goes offline. In contrast, a distributed MinIO setup with _m_ servers and _n_ drives per server keeps your data safe as long as _m/2_ servers, or _m*n_/2 or more drives, are online.
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Thu Jan 18 07:03:17 UTC 2024
    - 8.8K bytes
    - Viewed (0)
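
    The availability rule quoted in this snippet is easy to sanity-check with a small worked example. A minimal sketch, assuming _n_ means drives per server; the function and values are illustrative, not MinIO's actual quorum logic:

    ```go
    package main

    import "fmt"

    // safe reports whether a deployment of m servers with n drives each
    // still satisfies the rule above: at least m/2 servers, or at least
    // (m*n)/2 drives, online. Illustrative only, not MinIO's quorum code.
    func safe(m, n, serversOnline, drivesOnline int) bool {
    	return serversOnline >= m/2 || drivesOnline >= (m*n)/2
    }

    func main() {
    	// 8 servers x 4 drives each: 16 of 32 drives online is still safe.
    	fmt.Println(safe(8, 4, 3, 16)) // true
    	// 3 servers and only 10 drives online is not.
    	fmt.Println(safe(8, 4, 3, 10)) // false
    }
    ```
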
  2. docs/erasure/storage-class/README.md

    two parity drives, a 100 MiB file takes only approximately 114 MiB.
    
    Below is a list of data/parity drive combinations and the corresponding _approximate_ storage space usage on a 16-drive MinIO deployment. The field _storage usage ratio_ is simply the drive space used by the file after erasure encoding, divided by the actual file size.
    
    | Total Drives (N) | Data Drives (D) | Parity Drives (P) | Storage Usage Ratio |
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Tue Aug 15 23:04:20 UTC 2023
    - 5.8K bytes
    - Viewed (0)
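
    The _storage usage ratio_ in this table works out to N / D: a file of size S erasure-coded across N drives with P parity drives occupies roughly S * N / (N - P). A minimal sketch (the function name is mine; real usage also includes metadata and block padding):

    ```go
    package main

    import "fmt"

    // approxUsage returns the approximate on-disk size of a file of the
    // given size on an N-drive erasure set with P parity drives:
    // size * N / (N - P). Real usage also includes metadata and padding.
    func approxUsage(sizeMiB float64, n, p int) float64 {
    	return sizeMiB * float64(n) / float64(n-p)
    }

    func main() {
    	// 100 MiB with 2 parity drives on 16 drives: ~114.3 MiB,
    	// matching the "approximately 114 MiB" figure above.
    	fmt.Printf("%.1f MiB\n", approxUsage(100, 16, 2))
    }
    ```
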
  3. docs/erasure/README.md

    drives. Therefore, the number of drives you present must be a multiple of one of these numbers. Each object is written to a single erasure-coding set.
    
    MinIO uses the largest possible EC set size that divides evenly into the number of drives given. For example, *18 drives* are configured as *2 sets of 9 drives*, and *24 drives* are configured as *2 sets of 12 drives*. This holds when running MinIO as a standalone erasure-coded deployment. In [distributed setup however node (affinity)...
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Thu Sep 29 04:28:45 UTC 2022
    - 4.1K bytes
    - Viewed (0)
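
    The set-sizing rule in this snippet amounts to picking the largest allowed set size that evenly divides the drive count. A rough sketch; the 16-down-to-2 search range is an assumption based on the docs, not lifted from MinIO's source:

    ```go
    package main

    import "fmt"

    // ecSetSize picks the largest erasure-set size that evenly divides
    // the drive count, scanning from 16 down to 2. The range is an
    // assumption drawn from the docs, not lifted from MinIO's source.
    func ecSetSize(drives int) int {
    	for size := 16; size >= 2; size-- {
    		if drives%size == 0 {
    			return size
    		}
    	}
    	return 0 // no valid set size
    }

    func main() {
    	fmt.Println(ecSetSize(18)) // 9  -> 2 sets of 9 drives
    	fmt.Println(ecSetSize(24)) // 12 -> 2 sets of 12 drives
    }
    ```
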
  4. cmd/peer-s3-server.go

    				Name:    v.Name,
    				Deleted: v.Created,
    			})
    		}
    	}
    
    	return buckets, nil
    }
    
    	// cloneDrives copies the drive map's values into a new slice,
    	// giving callers a snapshot they can iterate independently.
    	func cloneDrives(drives map[string]StorageAPI) []StorageAPI {
    	copyDrives := make([]StorageAPI, 0, len(drives))
    	for _, drive := range drives {
    		copyDrives = append(copyDrives, drive)
    	}
    	return copyDrives
    }
    
    func getBucketInfoLocal(ctx context.Context, bucket string, opts BucketOptions) (BucketInfo, error) {
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Thu Aug 22 21:57:20 UTC 2024
    - 8.1K bytes
    - Viewed (0)
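
    Copying the map's values into a fresh slice lets a caller snapshot the drive set under a lock and release the lock before iterating. A hypothetical call site (the mutex and map names here are my assumptions, not from this file):

    ```go
    // Hypothetical usage: snapshot the shared drive map under a read
    // lock, then work on the copy without holding the lock.
    localDrivesMu.RLock()
    drives := cloneDrives(localDrivesMap)
    localDrivesMu.RUnlock()

    for _, drive := range drives {
    	_ = drive // inspect or heal each drive here
    }
    ```
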
  5. cmd/erasure-healing.go

    			// Bucket or prefix/directory not found
    			hr.Before.Drives[i] = madmin.HealDriveInfo{Endpoint: drive, State: madmin.DriveStateMissing}
    			hr.After.Drives[i] = madmin.HealDriveInfo{Endpoint: drive, State: madmin.DriveStateMissing}
    		default:
    			hr.Before.Drives[i] = madmin.HealDriveInfo{Endpoint: drive, State: madmin.DriveStateCorrupt}
    			hr.After.Drives[i] = madmin.HealDriveInfo{Endpoint: drive, State: madmin.DriveStateCorrupt}
    		}
    	}
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Wed Oct 02 17:50:41 UTC 2024
    - 34.4K bytes
    - Viewed (0)
  6. docs/distributed/DESIGN.md

    - We limited the number of drives per erasure set to 16 because erasure-code shards beyond 16 become chatty without any performance advantage. Additionally, a 16-drive erasure set gives you a tolerance of 8 drives per object by default, which is plenty for any practical scenario.
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Tue Aug 15 23:04:20 UTC 2023
    - 8K bytes
    - Viewed (0)
  7. cmd/metrics-v3-cluster-health.go

    		"Count of offline drives in the cluster")
    	healthDrivesOnlineCountMD = NewGaugeMD(healthDrivesOnlineCount,
    		"Count of online drives in the cluster")
    	healthDrivesCountMD = NewGaugeMD(healthDrivesCount,
    		"Count of all drives in the cluster")
    )
    
    // loadClusterHealthDriveMetrics - `MetricsLoaderFn` for cluster storage drive metrics
    // such as online, offline and total drives.
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Sun Mar 10 09:15:15 UTC 2024
    - 3.9K bytes
    - Viewed (0)
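
    The three gauges are related by total = online + offline. A minimal sketch of a helper that keeps them consistent; the function name and signature are assumptions pieced together from these snippets, not MinIO's actual `MetricsLoaderFn`:

    ```go
    // Hypothetical helper: set the three drive gauges from a cluster
    // snapshot so the total always equals online plus offline.
    func setDriveHealthGauges(m *MetricValues, online, offline int) {
    	m.Set(healthDrivesOnlineCount, float64(online))
    	m.Set(healthDrivesOfflineCount, float64(offline))
    	m.Set(healthDrivesCount, float64(online+offline))
    }
    ```
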
  8. cmd/metrics-v3-system-drive.go

    }
    
    func (m *MetricValues) setDriveBasicMetrics(drive madmin.Disk, labels []string) {
    	m.Set(driveUsedBytes, float64(drive.UsedSpace), labels...)
    	m.Set(driveFreeBytes, float64(drive.AvailableSpace), labels...)
    	m.Set(driveTotalBytes, float64(drive.TotalSpace), labels...)
    	m.Set(driveUsedInodes, float64(drive.UsedInodes), labels...)
    	m.Set(driveFreeInodes, float64(drive.FreeInodes), labels...)
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Sun May 12 17:23:50 UTC 2024
    - 7.9K bytes
    - Viewed (0)
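
    A hypothetical call site for `setDriveBasicMetrics`, using only the `madmin.Disk` fields the snippet reads; the label key and values are made up for illustration:

    ```go
    // Hypothetical usage with illustrative values.
    disk := madmin.Disk{
    	UsedSpace:      40 << 30,  // 40 GiB
    	AvailableSpace: 60 << 30,  // 60 GiB
    	TotalSpace:     100 << 30, // 100 GiB
    	UsedInodes:     1_000_000,
    	FreeInodes:     9_000_000,
    }
    m.setDriveBasicMetrics(disk, []string{"drive", "/disk1"})
    ```
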
  9. docs/multi-tenancy/README.md

    ### 1.2 Host Multiple Tenants on Multiple Drives (Erasure Code)
    
    Use the following commands to host 3 tenants on multiple drives:
    
    ```sh
    minio server --address :9001 /disk{1...4}/data/tenant1
    minio server --address :9002 /disk{1...4}/data/tenant2
    minio server --address :9003 /disk{1...4}/data/tenant3
    ```
    
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Thu Sep 29 04:28:45 UTC 2022
    - 3K bytes
    - Viewed (0)
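
    Each `/disk{1...4}` pattern expands to four drive paths, so every tenant gets its own 4-drive erasure-coded backend. The first command, written out in full:

    ```sh
    minio server --address :9001 /disk1/data/tenant1 /disk2/data/tenant1 /disk3/data/tenant1 /disk4/data/tenant1
    ```
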
  10. docs/distributed/SIZING.md

    If one or more drives are offline at the start of a PutObject or NewMultipartUpload operation, the object will automatically have additional data
    protection bits added to preserve the regular safety of these objects, up to 50% of the number of drives.
    This allows normal write operations to take place on systems that would otherwise exceed the write tolerance.
    
    - Registered: Sun Nov 03 19:28:11 UTC 2024
    - Last Modified: Tue Aug 15 23:04:20 UTC 2023
    - 3.9K bytes
    - Viewed (0)
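
    The behavior described here can be pictured as raising the write-time parity to cover the offline drives, capped at half the drive count. A sketch of the idea, not MinIO's actual implementation:

    ```go
    package main

    import "fmt"

    // effectiveParity sketches the rule above: when drives are offline at
    // write time, parity grows to cover them, capped at half the drive
    // count. Illustrative only, not MinIO's actual implementation.
    func effectiveParity(configured, offline, totalDrives int) int {
    	p := configured
    	if offline > p {
    		p = offline
    	}
    	if limit := totalDrives / 2; p > limit {
    		p = limit
    	}
    	return p
    }

    func main() {
    	// 16 drives, EC:4 configured, 6 offline at write time -> parity 6.
    	fmt.Println(effectiveParity(4, 6, 16)) // 6
    	// Never beyond 50% of the drives: 10 offline still caps at 8.
    	fmt.Println(effectiveParity(4, 10, 16)) // 8
    }
    ```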