Results 1 - 10 of 276 for shard (0.02 sec)

  1. docs/debugging/xl-meta/main.go

    									valid := 0
    									for shardIdx, shard := range splitFilled[:k] {
    										shardConfig[shardIdx] = shard[offset]
    										valid += int(shard[offset])
    										if shard[offset] == 0 {
    											shards[shardIdx] = shards[shardIdx][:0]
    										} else {
    											shards[shardIdx] = append(shards[shardIdx][:0], splitData[shardIdx][offset])
    										}
    									}
    Registered: Sun Sep 07 19:28:11 UTC 2025
    - Last Modified: Fri Aug 29 02:39:48 UTC 2025
    - 40.3K bytes
    - Viewed (0)
  2. cmd/erasure-coding.go

    				fmt.Fprintf(os.Stderr, "%v: error on self-test [d:%d,p:%d]: want %#v, got %#v\n", algo, conf[0], conf[1], a, b)
    				ok = false
    				continue
    			}
    			// Delete first shard and reconstruct...
    			first := encoded[0]
    			encoded[0] = nil
    			failOnErr(e.DecodeDataBlocks(encoded))
    			if a, b := first, encoded[0]; !bytes.Equal(a, b) {
    Registered: Sun Sep 07 19:28:11 UTC 2025
    - Last Modified: Fri Aug 29 02:39:48 UTC 2025
    - 8.5K bytes
    - Viewed (0)
  3. docs/erasure/README.md

    Erasure code is a mathematical algorithm to reconstruct missing or corrupted data. MinIO uses Reed-Solomon code to shard objects into variable data and parity blocks. For example, in a 12-drive setup, an object can be sharded into a variable number of data and parity blocks across all the drives, ranging from six data and six parity blocks to ten data and two parity blocks.
    
    By default, MinIO shards the objects across N/2 data and N/2 parity drives. Though, you can use [storage classes](https://github...
    Registered: Sun Sep 07 19:28:11 UTC 2025
    - Last Modified: Tue Aug 12 18:20:36 UTC 2025
    - 4.2K bytes
    - Viewed (0)
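
The erasure README excerpt above describes how an object is split into data and parity shards. Below is a minimal sketch of that split/reconstruct cycle using github.com/klauspost/reedsolomon, the library MinIO's erasure coding builds on; the 6 data + 6 parity layout mirrors the 12-drive example, while the payload and shard-loss pattern are purely illustrative and not MinIO's internal code path.

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"

    	"github.com/klauspost/reedsolomon"
    )

    func main() {
    	const dataShards, parityShards = 6, 6 // the 12-drive example with parity of 6

    	enc, err := reedsolomon.New(dataShards, parityShards)
    	if err != nil {
    		log.Fatal(err)
    	}

    	payload := bytes.Repeat([]byte("minio-object-data-"), 1024)

    	// Split the object into 6 data shards, then compute 6 parity shards.
    	shards, err := enc.Split(payload)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := enc.Encode(shards); err != nil {
    		log.Fatal(err)
    	}

    	// Simulate two lost drives: one data shard and one parity shard.
    	shards[0], shards[7] = nil, nil

    	// Rebuild the missing data shards from the survivors and join them back.
    	if err := enc.ReconstructData(shards); err != nil {
    		log.Fatal(err)
    	}
    	var out bytes.Buffer
    	if err := enc.Join(&out, shards, len(payload)); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("recovered original object:", bytes.Equal(out.Bytes(), payload))
    }
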
  4. cmd/xl-storage_test.go

    	shardSize := int64(1024 * 1024)
    	shard := make([]byte, shardSize)
    	w := newStreamingBitrotWriter(storage, "", volName, fileName, size, algo, shardSize)
    	reader := bytes.NewReader(data)
    	for {
    		// Using io.Copy instead of this loop will not work for us, as io.Copy
    		// will use bytes.Reader.WriteTo(), which will not do shardSize'ed writes,
    		// causing an error.
    		n, err := reader.Read(shard)
    		w.Write(shard[:n])
    Registered: Sun Sep 07 19:28:11 UTC 2025
    - Last Modified: Fri Aug 29 02:39:48 UTC 2025
    - 66K bytes
    - Viewed (0)
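
The comment in the excerpt above explains why the test loops manually: io.Copy prefers bytes.Reader.WriteTo, which hands the whole payload to the writer in one call instead of shardSize'd chunks. A common Go idiom for forcing fixed-size writes is to hide the WriterTo method behind an anonymous struct and stage through io.CopyBuffer; the sketch below shows that idiom. copyInShards is a hypothetical helper, not part of the MinIO test, and it assumes the destination writer does not implement io.ReaderFrom (which would also bypass the buffer).

    package shardcopy

    import (
    	"bytes"
    	"io"
    )

    // copyInShards streams data to w in shardSize'd writes. Wrapping the
    // bytes.Reader in an anonymous struct hides its WriteTo method, so
    // io.CopyBuffer falls back to the fixed-size buffer instead of letting
    // the reader push the whole payload in a single Write.
    func copyInShards(w io.Writer, data []byte, shardSize int64) (int64, error) {
    	src := struct{ io.Reader }{bytes.NewReader(data)}
    	buf := make([]byte, shardSize)
    	// bytes.Reader fills buf completely until the tail of the data, so every
    	// Write except possibly the last is exactly shardSize bytes long.
    	return io.CopyBuffer(w, src, buf)
    }
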
  5. cmd/notification.go

    	// To avoid these problems we must split the work at scale. With 1000-node
    	// setups becoming a reality, we must shard the work properly: for example,
    	// pick 10 nodes that can precisely send those 100 requests; the first node
    	// in the 10-node shard would coordinate with the other 9 shards to get the
    	// rest of the `99*9` requests.
    	//
    	// This essentially splits the workload properly and also allows for network
    Registered: Sun Sep 07 19:28:11 UTC 2025
    - Last Modified: Fri Aug 29 02:39:48 UTC 2025
    - 45.9K bytes
    - Viewed (0)
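
The comment above reasons about fanning work out through shard coordinators instead of having one node contact every peer directly. The toy sketch below shows just the partitioning step: peers are grouped into shards of 10 and the first node of each group acts as its coordinator. shardPeers and the string-based peer list are illustrative assumptions, not the actual notification implementation.

    package main

    import "fmt"

    const groupSize = 10 // nodes per shard, matching the comment's example

    // shardPeers splits peers into groups of groupSize; element 0 of each group
    // is the coordinator that relays requests on behalf of the rest.
    func shardPeers(peers []string) [][]string {
    	var shards [][]string
    	for len(peers) > 0 {
    		n := groupSize
    		if len(peers) < n {
    			n = len(peers)
    		}
    		shards = append(shards, peers[:n])
    		peers = peers[n:]
    	}
    	return shards
    }

    func main() {
    	peers := make([]string, 1000)
    	for i := range peers {
    		peers[i] = fmt.Sprintf("node-%04d", i)
    	}
    	groups := shardPeers(peers)
    	fmt.Printf("%d shards of up to %d nodes; first coordinator: %s\n",
    		len(groups), groupSize, groups[0][0])
    }
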
  6. cmd/erasure-healing_test.go

    			disks := er.getDisks()
    			distribution := hashOrder(pathJoin(bucket, object), nDisks)
    			shuffledDisks := shuffleDisks(disks, distribution)
    
    			// remove last data shard
    			err = removeAll(pathJoin(shuffledDisks[11].String(), bucket, object))
    			if err != nil {
    				t.Fatalf("Failed to delete a file - %v", err)
    			}
    			_, err = obj.HealObject(ctx, bucket, object, "", madmin.HealOpts{
    Registered: Sun Sep 07 19:28:11 UTC 2025
    - Last Modified: Fri Aug 29 02:39:48 UTC 2025
    - 48.5K bytes
    - Viewed (0)
  7. RELEASE.md

          tensors will be split into shards when the saver writes the checkpoint
          shards to disk. `tf.train.experimental.ShardByTaskPolicy` is the default
          sharding behavior, but `tf.train.experimental.MaxShardSizePolicy` can be
          used to shard the checkpoint with a maximum shard file size. Users with
          advanced use cases can also write their own custom
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Mon Aug 18 20:54:38 UTC 2025
    - 740K bytes
    - Viewed (2)
  8. src/test/java/jcifs/smb/SmbFileIntegrationTest.java

                            .withCopyFileToContainer(MountableFile.forHostPath(tempDir.resolve("shared")), "/share/shared")
                            .withCommand("-u", USERNAME + ";" + PASSWORD, "-s", "public;/share/public;yes;no;yes;all;;all;all", "-s",
                                    "shared;/share/shared;no;no;no;all;" + USERNAME + ";all;all", "-g", "log level = 1", "-g",
    Registered: Sun Sep 07 00:10:21 UTC 2025
    - Last Modified: Sat Aug 30 05:58:03 UTC 2025
    - 56K bytes
    - Viewed (0)
  9. src/test/java/jcifs/smb/SmbTreeConnectionTest.java

            SmbTreeConnection c1 = newConn();
            SmbTreeConnection c2 = newConn();
            SmbTreeImpl shared = mock(SmbTreeImpl.class);
            when(shared.acquire(false)).thenReturn(shared);
    
            setTree(c1, shared);
            setTree(c2, shared);
            assertTrue(c1.isSame(c2));
    
            SmbTreeImpl other = mock(SmbTreeImpl.class);
            when(other.acquire(false)).thenReturn(other);
    Registered: Sun Sep 07 00:10:21 UTC 2025
    - Last Modified: Thu Aug 14 07:14:38 UTC 2025
    - 13K bytes
    - Viewed (0)
  10. src/main/java/jcifs/internal/smb1/com/SmbComTreeConnectAndX.java

        /**
         * Constructs a tree connect AndX request to establish a connection to a shared resource.
         *
         * @param ctx the CIFS context containing configuration
         * @param server the server data containing security information
         * @param path the UNC path to the shared resource
         * @param service the service type (e.g., "A:" for disk share, "LPT1:" for printer)
         * @param andx the next command in the AndX chain, or null
         */
    Registered: Sun Sep 07 00:10:21 UTC 2025
    - Last Modified: Sat Aug 16 01:32:48 UTC 2025
    - 7.1K bytes
    - Viewed (0)