Results 211 - 220 of 698 for Writes (0.05 sec)
-
guava/src/com/google/common/io/CharSource.java
  }

  @Override
  public long copyTo(CharSink sink) throws IOException {
    checkNotNull(sink);
    Closer closer = Closer.create();
    try {
      Writer writer = closer.register(sink.openStream());
      writer.write((String) seq);
      return seq.length();
    } catch (Throwable e) {
      throw closer.rethrow(e);
    } finally {
      closer.close();
    }
  }
}
Registered: Fri Sep 05 12:43:10 UTC 2025 - Last Modified: Wed Jul 16 17:42:14 UTC 2025 - 25.3K bytes - Viewed (0) -
cmd/erasure-healing.go
            xlMetaToHealCount++
        }
    }
    driveState := objectErrToDriveState(reason)
    result.Before.Drives = append(result.Before.Drives, madmin.HealDriveInfo{
        UUID:     "",
        Endpoint: storageEndpoints[i].String(),
        State:    driveState,
    })
    result.After.Drives = append(result.After.Drives, madmin.HealDriveInfo{
        UUID:     "",
        Endpoint: storageEndpoints[i].String(),
        State:    driveState,
    })
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 34.6K bytes - Viewed (0) -
docs/distributed/DESIGN.md
- We limited the number of drives per erasure set to 16 because erasure-code shards beyond 16 become chatty and offer no performance advantage. Additionally, a 16-drive erasure set gives you a tolerance of 8 drives per object by default, which is plenty for any practical scenario.
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Wed Feb 26 09:25:50 UTC 2025 - 8K bytes - Viewed (2) -
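The tolerance arithmetic in the DESIGN.md excerpt above can be illustrated with a minimal sketch, assuming the default of parity equal to half the set size that the excerpt describes; erasureTolerance is a hypothetical helper, not a MinIO API.

package main

import "fmt"

// erasureTolerance is a hypothetical helper (not a MinIO API) illustrating the
// arithmetic from the excerpt: with parity equal to half the set size, an
// object survives the loss of up to `parity` drives in its erasure set.
func erasureTolerance(setSize int) (data, parity int) {
    parity = setSize / 2 // default described in the excerpt: N/2 parity shards
    data = setSize - parity
    return data, parity
}

func main() {
    data, parity := erasureTolerance(16)
    fmt.Printf("16-drive set: %d data + %d parity shards, tolerates %d drive failures per object\n",
        data, parity, parity)
}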
cmd/admin-handlers.go
// if a "restart/stop" was successful or not. Service signal now supports
// a dry-run that helps skip the nodes that may have hung drives. By default
// restart/stop will ignore the servers that are hung on drives. You can use
// 'force' param to force restart even with hung drives if needed.
func (a adminAPIHandlers) ServiceV2Handler(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    vars := mux.Vars(r)
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 99.6K bytes - Viewed (0) -
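The comment in the admin-handlers.go result describes a dry-run mode and a 'force' parameter for nodes with hung drives. Below is a minimal sketch of that decision flow, not the actual MinIO handler; the parameter parsing and the hasHungDrives/restartNode helpers are hypothetical placeholders.

package main

import (
    "fmt"
    "net/http"
)

// serviceSignal sketches how 'dry-run' and 'force' parameters could gate a
// restart, per the comment quoted above. Illustrative only; not MinIO code.
func serviceSignal(w http.ResponseWriter, r *http.Request) {
    dryRun := r.URL.Query().Get("dry-run") == "true" // hypothetical param parsing
    force := r.URL.Query().Get("force") == "true"

    for _, node := range []string{"node1", "node2"} {
        if hasHungDrives(node) && !force {
            continue // default behavior per the comment: skip servers hung on drives
        }
        if dryRun {
            fmt.Fprintf(w, "would restart %s\n", node)
            continue
        }
        restartNode(node)
    }
}

func hasHungDrives(node string) bool { return false } // placeholder
func restartNode(node string)        {}               // placeholder

func main() {
    http.HandleFunc("/service", serviceSignal)
    _ = http.ListenAndServe(":8080", nil)
}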
utils/tests/dummy_dialecter.go
    return clause.Expr{SQL: "DEFAULT"}
}

func (DummyDialector) Migrator(*gorm.DB) gorm.Migrator {
    return nil
}

func (DummyDialector) BindVarTo(writer clause.Writer, stmt *gorm.Statement, v interface{}) {
    writer.WriteByte('?')
}

func (DummyDialector) QuoteTo(writer clause.Writer, str string) {
    var (
        underQuoted, selfQuoted bool
        continuousBacktick      int8
        shiftDelimiter          int8
    )
Registered: Sun Sep 07 09:35:13 UTC 2025 - Last Modified: Mon Mar 06 06:03:31 UTC 2023 - 2.2K bytes - Viewed (0) -
okhttp-tls/src/main/kotlin/okhttp3/tls/internal/der/Adapters.kt
        bytes = bytes,
      )
    }
  }

  override fun toDer(
    writer: DerWriter,
    value: AnyValue,
  ) {
    writer.write("ANY", value.tagClass, value.tag) {
      writer.writeOctetString(value.bytes)
      writer.constructed = value.constructed
    }
  }
}

internal fun parseGeneralizedTime(string: String): Long {
Registered: Fri Sep 05 11:42:10 UTC 2025 - Last Modified: Mon Jan 08 01:13:22 UTC 2024 - 15K bytes - Viewed (0) -
cmd/format-erasure.go
        DistributionAlgo string `json:"distributionAlgo"`
    } `json:"xl"`

    Info DiskInfo `json:"-"`
}

func (f *formatErasureV3) Drives() (drives int) {
    for _, set := range f.Erasure.Sets {
        drives += len(set)
    }
    return drives
}

func (f *formatErasureV3) Clone() *formatErasureV3 {
    b, err := json.Marshal(f)
    if err != nil {
        panic(err)
    }
    var dst formatErasureV3
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 23.1K bytes - Viewed (0) -
docs/erasure/README.md
drives. Therefore, the number of drives you present must be a multiple of one of these numbers. Each object is written to a single erasure-coding set. MinIO uses the largest possible EC set size that divides evenly into the number of drives given. For example, *18 drives* are configured as *2 sets of 9 drives*, and *24 drives* are configured as *2 sets of 12 drives*. This holds when running MinIO as a standalone erasure-coded deployment. In [distributed setup however node (affinity)...
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 4.2K bytes - Viewed (0) -
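The sizing rule quoted from docs/erasure/README.md can be sketched as a small calculation. The 4..16 range of valid set sizes is an assumption drawn from MinIO's documented limits, and largestECSetSize is a hypothetical helper, not MinIO's actual implementation.

package main

import "fmt"

// largestECSetSize picks the largest allowed erasure set size that divides the
// total drive count evenly, mirroring the rule in the README excerpt. The 4..16
// range is an assumption; this is illustrative, not MinIO code.
func largestECSetSize(totalDrives int) int {
    for size := 16; size >= 4; size-- {
        if totalDrives%size == 0 {
            return size
        }
    }
    return 0 // no valid layout for this drive count
}

func main() {
    for _, drives := range []int{18, 24} {
        size := largestECSetSize(drives)
        fmt.Printf("%d drives -> %d sets of %d drives\n", drives, drives/size, size)
    }
}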
statement.go
        writer.WriteByte('(')
        for idx, d := range v {
            if idx > 0 {
                writer.WriteByte(',')
            }
            stmt.QuoteTo(writer, d)
        }
        writer.WriteByte(')')
    case clause.Expr:
        v.Build(stmt)
    case string:
        stmt.DB.Dialector.QuoteTo(writer, v)
    case []string:
        writer.WriteByte('(')
        for idx, d := range v {
            if idx > 0 {
                writer.WriteByte(',')
            }
Registered: Sun Sep 07 09:35:13 UTC 2025 - Last Modified: Thu Sep 04 13:13:16 UTC 2025 - 20.8K bytes - Viewed (0) -
docs/distributed/README.md
A stand-alone MinIO server would go down if the server hosting the drives goes offline. In contrast, a distributed MinIO setup with _m_ servers and _n_ drives keeps your data safe as long as _m/2_ servers, or _m*n_/2 or more drives, are online. For example, a 16-server distributed setup with 200 drives per node would continue serving files: up to 4 servers can be offline in the default configuration, i.e. with around 800 drives down, MinIO would continue to read and write objects.
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 8.9K bytes - Viewed (0)
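The availability numbers in the docs/distributed/README.md excerpt follow from simple arithmetic; here is a worked sketch of that calculation (illustrative only, not MinIO code).

package main

import "fmt"

func main() {
    const (
        servers       = 16  // m in the excerpt
        drivesPerNode = 200 // n in the excerpt
    )
    totalDrives := servers * drivesPerNode

    // Upper bound stated in the excerpt: data stays safe while at least m/2
    // servers, or m*n/2 drives, remain online.
    fmt.Printf("total drives: %d, maximum tolerable drive loss (m*n/2): %d\n",
        totalDrives, totalDrives/2)

    // The excerpt's default-configuration example: 4 servers offline.
    offlineServers := 4
    fmt.Printf("%d servers offline -> about %d drives down; reads and writes continue\n",
        offlineServers, offlineServers*drivesPerNode)
}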