Results 1 - 10 of 114 for irides (0.19 sec)

  1. cmd/metrics-v3-system-drive.go

    const (
    	driveUsedBytes               = "used_bytes"
    	driveFreeBytes               = "free_bytes"
    	driveTotalBytes              = "total_bytes"
    	driveUsedInodes              = "used_inodes"
    	driveFreeInodes              = "free_inodes"
    	driveTotalInodes             = "total_inodes"
    	driveTimeoutErrorsTotal      = "timeout_errors_total"
    	driveIOErrorsTotal           = "io_errors_total"
    	driveAvailabilityErrorsTotal = "availability_errors_total"
    Go
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Thu Apr 25 22:01:31 GMT 2024
    - 7.8K bytes
    - Viewed (0)
  2. docs/distributed/README.md

    A stand-alone MinIO server would go down if the server hosting the drives goes offline. In contrast, a distributed MinIO setup with _m_ servers and _n_ drives will keep your data safe as long as _m/2_ servers, or _m*n/2_ or more drives, are online.
    
    For example, a 16-server distributed setup with 200 drives per node would continue serving files with up to 4 servers offline in the default configuration, i.e. with around 800 drives down MinIO would continue to read and write objects (the arithmetic is sketched after this result).
    
    Plain Text
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Thu Jan 18 07:03:17 GMT 2024
    - 8.8K bytes
    - Viewed (0)
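    A minimal sketch of the fault-tolerance arithmetic behind the example above. The one-drive-per-server-per-set layout and the EC:4 default parity for a 16-drive erasure set are assumptions made here for illustration; only the 16-server / 200-drive figures come from the snippet.

    	package main

    	import "fmt"

    	func main() {
    		servers := 16
    		drivesPerServer := 200
    		setSize := 16 // drives per erasure set (assumed)
    		parity := 4   // assumed default parity (EC:4)

    		totalDrives := servers * drivesPerServer
    		sets := totalDrives / setSize

    		// With one drive of every set placed on each server, losing a server
    		// removes exactly one drive from each set, so the number of servers
    		// that can go offline equals the per-set parity.
    		serversTolerated := parity
    		drivesTolerated := serversTolerated * drivesPerServer

    		fmt.Println("erasure sets:", sets)                            // 200
    		fmt.Println("servers that can be offline:", serversTolerated) // 4
    		fmt.Println("drives that can be offline:", drivesTolerated)   // 800
    	}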
  3. docs/erasure/README.md

    drives.  Therefore, the number of drives you present must be a multiple of one of these numbers.  Each object is written to a single erasure-coding set.
    
    MinIO uses the largest possible EC set size that divides evenly into the number of drives given. For example, *18 drives* are configured as *2 sets of 9 drives*, and *24 drives* are configured as *2 sets of 12 drives* (see the sketch after this result). This is true when running MinIO as a standalone erasure-coded deployment. In [distributed setup however node (affinity)...
    Plain Text
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Thu Sep 29 04:28:45 GMT 2022
    - 4.1K bytes
    - Viewed (0)
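    The set-size rule quoted above can be expressed as a short helper. The candidate range of 2..16 used here is an assumption, since the snippet's list of allowed sizes is cut off, and largestSetSize is an illustrative name, not a MinIO API.

    	package main

    	import "fmt"

    	// largestSetSize returns the largest erasure-set size, up to maxSet,
    	// that divides driveCount evenly.
    	func largestSetSize(driveCount, maxSet int) int {
    		for size := maxSet; size >= 2; size-- {
    			if driveCount%size == 0 {
    				return size
    			}
    		}
    		return 0 // no valid layout for this drive count
    	}

    	func main() {
    		for _, drives := range []int{18, 24} {
    			size := largestSetSize(drives, 16)
    			fmt.Printf("%d drives -> %d sets of %d drives\n", drives, drives/size, size)
    		}
    		// 18 drives -> 2 sets of 9 drives
    		// 24 drives -> 2 sets of 12 drives
    	}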
  4. cmd/speedtest.go

    	ch := make(chan madmin.SpeedTestResult, 1)
    	go func() {
    		defer xioutil.SafeClose(ch)
    
    		concurrency := opts.concurrencyStart
    
    		if opts.autotune {
    			// If we have fewer drives than the configured concurrency,
    			// start with the number of drives instead - the default of
    			// '32' might be too large to complete within the total test
    			// time of 10s.
    			if globalEndpoints.NEndpoints() < concurrency {
    Go
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Sun Jan 28 18:04:17 GMT 2024
    - 9K bytes
    - Viewed (0)
  5. cmd/erasure-decode.go

    	return nil, fmt.Errorf("%w (offline-disks=%d/%d)", errErasureReadQuorum, disksNotFound, len(p.readers))
    }
    
    // Decode reads from readers, reconstructs data if needed and writes the data to the writer.
    // A set of preferred drives can be supplied. In that case they will be used and the data reconstructed.
    Go
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Fri Apr 19 16:44:59 GMT 2024
    - 9.4K bytes
    - Viewed (0)
  6. cmd/peer-s3-server.go

    			buckets = append(buckets, BucketInfo{
    				Name:    v.Name,
    				Deleted: v.Created,
    			})
    		}
    	}
    
    	return buckets, nil
    }
    
    func cloneDrives(drives []StorageAPI) []StorageAPI {
    	newDrives := make([]StorageAPI, len(drives))
    	copy(newDrives, drives)
    	return newDrives
    }
    
    func getBucketInfoLocal(ctx context.Context, bucket string, opts BucketOptions) (BucketInfo, error) {
    	globalLocalDrivesMu.RLock()
    Go
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Fri Mar 08 19:08:18 GMT 2024
    - 8.4K bytes
    - Viewed (0)
  7. docs/distributed/SIZING.md

    # Erasure code sizing guide
    
    ## Toy Setups
    
    In capacity-constrained environments MinIO will work, but such setups are not recommended for production (a tolerance sketch follows the table below).
    
    | servers | drives (per node) | stripe_size | parity chosen (default) | tolerance for reads (servers) | tolerance for writes (servers) |
    |--------:|------------------:|------------:|------------------------:|------------------------------:|-------------------------------:|
    Plain Text
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Tue Aug 15 23:04:20 GMT 2023
    - 3.9K bytes
    - Viewed (0)
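    The table above is cut off before its data rows. The two tolerance columns are typically derived from the per-stripe parity; with one drive per server in each stripe, drive tolerance equals server tolerance. The quorum rule used below (reads survive up to `parity` lost drives, writes one fewer when parity is exactly half the stripe) is an assumption about MinIO's behaviour, not something shown in this snippet.

    	package main

    	import "fmt"

    	func readTolerance(stripeSize, parity int) int { return parity }

    	func writeTolerance(stripeSize, parity int) int {
    		if parity == stripeSize/2 {
    			return parity - 1 // write quorum needs one extra drive
    		}
    		return parity
    	}

    	func main() {
    		stripeSize, parity := 16, 8 // e.g. a 16-drive stripe at maximum parity
    		fmt.Println("read tolerance (drives):", readTolerance(stripeSize, parity))   // 8
    		fmt.Println("write tolerance (drives):", writeTolerance(stripeSize, parity)) // 7
    	}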
  8. docs/en/docs/deployment/cloud.md

    You might want to try their services and follow their guides:
    
    * <a href="https://docs.platform.sh/languages/python.html?utm_source=fastapi-signup&utm_medium=banner&utm_campaign=FastAPI-signup-June-2023" class="external-link" target="_blank">Platform.sh</a>
    * <a href="https://docs.porter.run/language-specific-guides/fastapi" class="external-link" target="_blank">Porter</a>
    Plain Text
    - Registered: Sun Apr 28 07:19:10 GMT 2024
    - Last Modified: Wed Jan 31 22:13:52 GMT 2024
    - 1.2K bytes
    - Viewed (0)
  9. docs/distributed/DESIGN.md

    - We limited the number of drives per erasure set to 16 because erasure-code shards beyond 16 become chatty and offer no performance advantage. Additionally, a 16-drive erasure set gives you a tolerance of 8 drives per object by default, which is plenty in any practical scenario.
    Plain Text
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Tue Aug 15 23:04:20 GMT 2023
    - 8K bytes
    - Viewed (0)
  10. docs/erasure/storage-class/README.md

    on a 16-drive MinIO deployment. If you use eight data and eight parity drives, the file space usage will be approximately double, i.e. a 100 MiB
    file will take 200 MiB of space. But if you use ten data and six parity drives, the same 100 MiB file takes around 160 MiB. If you use 14 data and
    two parity drives, a 100 MiB file takes only approximately 114 MiB (the overhead ratio is sketched after this result).
    
    Plain Text
    - Registered: Sun Apr 28 19:28:10 GMT 2024
    - Last Modified: Tue Aug 15 23:04:20 GMT 2023
    - 5.8K bytes
    - Viewed (1)
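    The space-usage figures above follow a simple ratio: stored size is roughly object size * (data + parity) / data. A small sketch of that arithmetic; the function name is illustrative, and real usage also includes metadata overhead, which the snippet does not quantify.

    	package main

    	import "fmt"

    	// approxOnDiskSize applies the overhead ratio implied by the examples above.
    	func approxOnDiskSize(sizeMiB float64, data, parity int) float64 {
    		return sizeMiB * float64(data+parity) / float64(data)
    	}

    	func main() {
    		for _, cfg := range []struct{ data, parity int }{{8, 8}, {10, 6}, {14, 2}} {
    			fmt.Printf("EC %d+%d: 100 MiB object -> ~%.0f MiB on disk\n",
    				cfg.data, cfg.parity, approxOnDiskSize(100, cfg.data, cfg.parity))
    		}
    		// EC 8+8: ~200 MiB, EC 10+6: ~160 MiB, EC 14+2: ~114 MiB
    	}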