Results 1 - 4 of 4

  1. docs/bucket/lifecycle/DESIGN.md

When an object transitions to the remote tier, its data is moved there in its entirety, leaving only the object metadata on MinIO.
    
The data on the backend is stored under the `bucket/prefix` specified in the tier configuration, with a custom name derived from a randomly generated uuid - e.g. `0b/c4/0bc4fab7-2daf-4d2f-8e39-5c6c6fb7e2d3`. The first two path prefixes are characters 1-2 and 3-4 of the uuid. This format allows tiering to any cloud, irrespective of whether the cloud in question supports versioning. The reference to the transitioned object name and transitioned tier...
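    As a rough illustration of this naming scheme, the Go sketch below derives the backend path from the configured bucket, prefix, and a generated uuid; the function and parameter names are hypothetical, not MinIO's actual API.
    
    ```go
    package main
    
    import (
    	"fmt"
    	"path"
    )
    
    // backendObjectPath builds the remote object name from a randomly
    // generated uuid string: the first two path components are characters
    // 1-2 and 3-4 of the uuid, followed by the full uuid. The bucket and
    // prefix come from the tier configuration.
    func backendObjectPath(bucket, prefix, uuid string) string {
    	return path.Join(bucket, prefix, uuid[0:2], uuid[2:4], uuid)
    }
    
    func main() {
    	fmt.Println(backendObjectPath("tier-bucket", "minio-tier",
    		"0bc4fab7-2daf-4d2f-8e39-5c6c6fb7e2d3"))
    	// Output: tier-bucket/minio-tier/0b/c4/0bc4fab7-2daf-4d2f-8e39-5c6c6fb7e2d3
    }
    ```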
  2. docs/bucket/versioning/DESIGN.md

    ## Description of `xl.meta`
    
    `xl.meta` is a new self-describing backend format used by MinIO to support AWS S3 compatible versioning.
    This file is the source of truth for each `version` at rest. `xl.meta` is a msgpack file serialized from a
    well-defined data structure. To understand `xl.meta`, here are a few things to start with.
    
    `xl.meta` carries in its first 8 bytes an XL header which describes the current format and the format version...
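    As a rough sketch of what parsing that header could look like, the Go snippet below reads the first 8 bytes of an `xl.meta` file and splits them into a 4-byte magic and two version fields. The little-endian major/minor layout is an assumption drawn from the MinIO source tree, not something this excerpt guarantees.
    
    ```go
    package main
    
    import (
    	"encoding/binary"
    	"fmt"
    	"io"
    	"os"
    )
    
    // xlHeader is the assumed layout of the 8-byte XL header: a 4-byte
    // magic identifying the format, then two little-endian uint16 fields
    // for the format's major and minor version. See
    // cmd/xl-storage-format-v2.go in the MinIO source for the
    // authoritative definition.
    type xlHeader struct {
    	Magic        [4]byte
    	Major, Minor uint16
    }
    
    func readXLHeader(name string) (xlHeader, error) {
    	var h xlHeader
    	f, err := os.Open(name)
    	if err != nil {
    		return h, err
    	}
    	defer f.Close()
    
    	var buf [8]byte
    	if _, err := io.ReadFull(f, buf[:]); err != nil {
    		return h, err
    	}
    	copy(h.Magic[:], buf[0:4])
    	h.Major = binary.LittleEndian.Uint16(buf[4:6])
    	h.Minor = binary.LittleEndian.Uint16(buf[6:8])
    	return h, nil
    }
    
    func main() {
    	h, err := readXLHeader("xl.meta")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("magic=%q version=%d.%d\n", string(h.Magic[:]), h.Major, h.Minor)
    }
    ```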
  3. docs/bucket/replication/DESIGN.md

    PutObjectTagging, PutObjectRetention, PutObjectLegalHold and the COPY API are replicated in a similar manner to the target version, with the `X-Amz-Replication-Status` header again cycling through the same states.
    
    The description above details one-way replication from source to target with respect to incoming object uploads and metadata changes to the source object version. If active-active replication is configured, any incoming uploads and metadata changes to versions created on the target will sync back to the source and...
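    As a small illustration of those states, here is a minimal Go sketch of the standard values `X-Amz-Replication-Status` takes on S3-compatible stores; the `settled` helper is hypothetical, added only to show how a client might decide when to stop polling.
    
    ```go
    package main
    
    import "fmt"
    
    // ReplicationStatus mirrors the standard values the
    // X-Amz-Replication-Status header cycles through on S3-compatible
    // stores: PENDING while replication is queued or in flight,
    // COMPLETED or FAILED once it settles, and REPLICA on the target copy.
    type ReplicationStatus string
    
    const (
    	Pending   ReplicationStatus = "PENDING"
    	Completed ReplicationStatus = "COMPLETED"
    	Failed    ReplicationStatus = "FAILED"
    	Replica   ReplicationStatus = "REPLICA"
    )
    
    // settled is a hypothetical helper: it reports whether a version's
    // replication has reached a terminal state, i.e. a client polling
    // HeadObject can stop retrying.
    func settled(s ReplicationStatus) bool {
    	return s == Completed || s == Failed || s == Replica
    }
    
    func main() {
    	for _, s := range []ReplicationStatus{Pending, Completed, Failed, Replica} {
    		fmt.Printf("%-9s settled=%v\n", s, settled(s))
    	}
    }
    ```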
  4. docs/distributed/DESIGN.md

    The erasure set size is calculated from the number of drives available; say, for example, there are 32 servers with 32 drives each, for a total of 1024 drives. In this scenario, 16 becomes the erasure set size. This is decided based on the greatest common divisor (GCD) of acceptable erasure set sizes ranging from *4 to 16*.
    
    - *If the total drive count has many common divisors, the algorithm chooses the minimum number of erasure sets possible for an erasure set size of any N*. In the example with 1024 drives, 4, 8 and 16 are GCD factors. With 16 drives we get...
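    A minimal Go sketch of the selection rule described above: among the acceptable erasure set sizes *4 to 16*, pick the largest one that divides the total drive count evenly, which yields the fewest erasure sets. This is a simplification; MinIO's actual placement logic also weighs server and drive symmetry.
    
    ```go
    package main
    
    import "fmt"
    
    // erasureSetSize returns the largest acceptable set size in [4,16]
    // that divides totalDrives evenly; the largest such divisor gives
    // the minimum number of erasure sets. Returns 0 if no size fits.
    func erasureSetSize(totalDrives int) int {
    	for size := 16; size >= 4; size-- {
    		if totalDrives%size == 0 {
    			return size
    		}
    	}
    	return 0
    }
    
    func main() {
    	total := 32 * 32 // 32 servers x 32 drives each = 1024 drives
    	size := erasureSetSize(total)
    	fmt.Printf("%d drives -> set size %d (%d sets)\n", total, size, total/size)
    	// Output: 1024 drives -> set size 16 (64 sets)
    }
    ```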