Results 1 - 10 of 131 for Worker (0.05 sec)
- docs/en/docs/deployment/server-workers.md
  /// ## Multiple Workers { #multiple-workers } You can start multiple workers with the `--workers` command line option: //// tab | `fastapi` If you use the `fastapi` command: <div class="termy"> ```console
  Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sun Aug 31 09:15:41 UTC 2025 - 8.3K bytes - Viewed (0)
- docs/es/llm-prompt.md
  * mount (verb): montar * statement (as in code statement): statement (do not translate to "declaración" or "sentencia") * worker process: worker process (do not translate to "proceso trabajador" or "proceso de trabajo") * worker processes: worker processes (do not translate to "procesos trabajadores" or "procesos de trabajo") * worker: worker (do not translate to "trabajador") * load balancer: load balancer (do not translate to "balanceador de carga")
  Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sat Jul 26 18:57:50 UTC 2025 - 5.3K bytes - Viewed (0)
- docs/en/docs/deployment/concepts.md
  /// tip Don't worry if some of these items about **containers**, Docker, or Kubernetes don't make a lot of sense yet. I'll tell you more about container images, Docker, Kubernetes, etc. in a future chapter: [FastAPI in Containers - Docker](docker.md){.internal-link target=_blank}. /// ## Previous Steps Before Starting { #previous-steps-before-starting }
  Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sun Aug 31 09:15:41 UTC 2025 - 18.6K bytes - Viewed (0)
- docs/en/docs/deployment/docker.md
  #### Docker Compose { #docker-compose } You could be deploying to a **single server** (not a cluster) with **Docker Compose**, so you wouldn't have an easy way to manage replication of containers (with Docker Compose) while preserving the shared network and **load balancing**. Then you could want to have **a single container** with a **process manager** starting **several worker processes** inside. ---
  Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sun Aug 31 09:15:41 UTC 2025 - 29.5K bytes - Viewed (1)
- cmd/bucket-lifecycle.go
  workers = workers[:len(workers)-1] worker <- expiryOp(nil) es.stats.workers.Add(-1) } // Atomically replace workers. es.workers.Store(&workers) } // Worker handles 4 types of expiration tasks. // 1. Expiry of objects, includes regular and transitioned objects // 2. Expiry of noncurrent versions due to NewerNoncurrentVersions
  Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 33.7K bytes - Viewed (0)
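The `bucket-lifecycle.go` snippet shrinks MinIO's expiry worker pool by handing the last worker a `nil` sentinel and then publishing the truncated slice atomically, so readers never observe a half-updated pool. Below is a minimal sketch of that pattern in plain Go; the `task` and `pool` types, the buffer size, and the `resize` signature are illustrative stand-ins, not MinIO's actual API.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// task stands in for expiryOp; a nil task tells a worker to exit.
type task *string

type pool struct {
	workers atomic.Pointer[[]chan task] // senders load the current worker set lock-free
	wg      sync.WaitGroup
}

// resize grows or shrinks the pool to n workers, then atomically
// publishes the new slice so concurrent senders see a consistent set.
func (p *pool) resize(n int) {
	old := *p.workers.Load()
	workers := make([]chan task, len(old))
	copy(workers, old) // copy-on-write: never mutate the published slice

	for len(workers) > n {
		last := workers[len(workers)-1]
		workers = workers[:len(workers)-1]
		last <- nil // sentinel: the worker finishes queued work and returns
	}
	for len(workers) < n {
		ch := make(chan task, 128)
		workers = append(workers, ch)
		p.wg.Add(1)
		go p.worker(ch)
	}
	p.workers.Store(&workers) // atomically replace the workers slice
}

func (p *pool) worker(ch chan task) {
	defer p.wg.Done()
	for t := range ch {
		if t == nil {
			return // sentinel received
		}
		fmt.Println("expiring", *t)
	}
}

func main() {
	p := &pool{}
	empty := []chan task{}
	p.workers.Store(&empty)

	p.resize(2)
	obj := "stale-object-v3"
	(*p.workers.Load())[0] <- &obj

	p.resize(0) // drain and stop every worker
	p.wg.Wait()
}
```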
- cmd/bucket-replication.go
  if (checkOld > 0 && len(p.workers) != checkOld) || n == len(p.workers) || n < 1 { // Either already satisfied or worker count changed while we waited for the lock. return } for len(p.workers) < n { input := make(chan ReplicationWorkerOperation, 10000) p.workers = append(p.workers, input) go p.AddWorker(input, &p.activeWorkers) } for len(p.workers) > n { worker := p.workers[len(p.workers)-1]
  Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 118K bytes - Viewed (0)
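In `bucket-replication.go` the notable part is the guard at the top of the resize: the caller passes the worker count it last observed (`checkOld`), and the resize is dropped if another goroutine already changed the pool while this one waited for the lock. A small compare-then-resize sketch under those assumptions; the `ReplicationPool` name, the `int` payload type, and stopping a worker by closing its channel are illustrative choices, not the MinIO implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// ReplicationPool is a hypothetical stand-in for the snippet's pool type.
type ReplicationPool struct {
	mu      sync.Mutex
	workers []chan int // each worker drains its own input channel
}

// ResizeWorkers grows or shrinks the pool to n workers. checkOld is the
// count the caller observed before requesting the resize; if the pool
// changed in the meantime, the request is stale and is dropped.
func (p *ReplicationPool) ResizeWorkers(n, checkOld int) {
	p.mu.Lock()
	defer p.mu.Unlock()

	if (checkOld > 0 && len(p.workers) != checkOld) || n == len(p.workers) || n < 1 {
		// Either already satisfied or the count changed while we waited for the lock.
		return
	}
	for len(p.workers) < n {
		input := make(chan int, 1000)
		p.workers = append(p.workers, input)
		go func(in <-chan int) {
			for op := range in {
				_ = op // process the replication operation
			}
		}(input)
	}
	for len(p.workers) > n {
		worker := p.workers[len(p.workers)-1]
		p.workers = p.workers[:len(p.workers)-1]
		close(worker) // ending the channel lets the goroutine's range loop finish
	}
}

func main() {
	p := &ReplicationPool{}
	p.ResizeWorkers(4, 0) // grow to 4
	p.ResizeWorkers(2, 4) // shrink to 2: observed count matches, so it runs
	p.ResizeWorkers(8, 4) // stale request: observed 4, but the pool now has 2
	fmt.Println("workers:", len(p.workers)) // workers: 2
}
```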
- okhttp/src/commonJvmAndroid/kotlin/okhttp3/internal/concurrent/TaskRunner.kt
  import java.util.logging.Logger import okhttp3.internal.addIfAbsent import okhttp3.internal.concurrent.TaskRunner.Companion.INSTANCE import okhttp3.internal.okHttpName import okhttp3.internal.threadFactory /** * A set of worker threads that are shared among a set of task queues. * * Use [INSTANCE] for a task runner that uses daemon threads. There is not currently a shared * instance for non-daemon threads. *
  Registered: Fri Sep 05 11:42:10 UTC 2025 - Last Modified: Sat Aug 30 11:30:11 UTC 2025 - 10.4K bytes - Viewed (0)
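OkHttp's `TaskRunner` multiplexes many task queues over a small shared set of worker threads, so idle queues cost nothing. The sketch below is a rough Go analogue of that shape only; the `Runner`/`Queue` names, the single shared channel, and the buffer size are assumptions for illustration, and it deliberately omits TaskRunner's real concerns (delayed scheduling, coordinator wake-ups, daemon threads).

```go
package main

import (
	"fmt"
	"sync"
)

// Task is a unit of work; a Queue groups related tasks but owns no goroutine.
type Task func()

type Queue struct {
	name   string
	runner *Runner
}

// Schedule hands the task to the runner's shared workers.
func (q *Queue) Schedule(t Task) { q.runner.tasks <- t }

// Runner owns the shared workers; every queue funnels into the same channel.
type Runner struct {
	tasks chan Task
	wg    sync.WaitGroup
}

func NewRunner(workers int) *Runner {
	r := &Runner{tasks: make(chan Task, 64)}
	for i := 0; i < workers; i++ {
		r.wg.Add(1)
		go func() {
			defer r.wg.Done()
			for t := range r.tasks {
				t()
			}
		}()
	}
	return r
}

func (r *Runner) NewQueue(name string) *Queue { return &Queue{name: name, runner: r} }

// Shutdown stops accepting work and waits for the workers to drain.
func (r *Runner) Shutdown() { close(r.tasks); r.wg.Wait() }

func main() {
	runner := NewRunner(2) // two workers shared by every queue
	a, b := runner.NewQueue("connection-pool"), runner.NewQueue("cache")
	a.Schedule(func() { fmt.Println("evict idle connections") })
	b.Schedule(func() { fmt.Println("trim cache") })
	runner.Shutdown()
}
```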
- cmd/erasure-server-pool-rebalance.go
  if err != nil { rebalanceLogIf(ctx, fmt.Errorf("invalid workers value err: %v, defaulting to %d", err, len(pool.sets))) workerSize = len(pool.sets) } // Each decom worker needs one List() goroutine/worker // add that many extra workers. workerSize += len(pool.sets) wk, err := workers.New(workerSize) if err != nil { return err }
  Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Thu Sep 04 20:47:24 UTC 2025 - 28.9K bytes - Viewed (0)
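The rebalance code sizes a bounded worker group with `workers.New(workerSize)`, adding one extra worker per pool set for the `List()` goroutines. The sketch below is a stdlib-only stand-in for such a group (a semaphore channel plus a `WaitGroup`); the `Workers` type, its `Go`/`Wait` methods, and the numbers in `main` are assumptions, not the API of `github.com/minio/pkg/workers`.

```go
package main

import (
	"fmt"
	"sync"
)

// Workers caps how many jobs run at once, like a fixed-size worker group.
type Workers struct {
	sem chan struct{}
	wg  sync.WaitGroup
}

func NewWorkers(n int) (*Workers, error) {
	if n <= 0 {
		return nil, fmt.Errorf("invalid workers value: %d", n)
	}
	return &Workers{sem: make(chan struct{}, n)}, nil
}

// Go blocks until a slot is free, then runs fn in its own goroutine.
func (w *Workers) Go(fn func()) {
	w.sem <- struct{}{} // acquire a slot
	w.wg.Add(1)
	go func() {
		defer func() { <-w.sem; w.wg.Done() }() // release the slot
		fn()
	}()
}

// Wait blocks until every submitted job has finished.
func (w *Workers) Wait() { w.wg.Wait() }

func main() {
	sets := 4
	workerSize := 8 + sets // one extra worker per set, as in the snippet
	wk, err := NewWorkers(workerSize)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 20; i++ {
		i := i
		wk.Go(func() { fmt.Println("rebalancing shard", i) })
	}
	wk.Wait()
}
```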
- cmd/batch-handlers.go
  if err != nil { return err } wk, err := workers.New(workerSize) if err != nil { // invalid worker size. return err } retry := false for attempts := 1; attempts <= retryAttempts; attempts++ { attempts := attempts // one of source/target is s3, skip delete marker and all versions under the same object name.
  Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 63.5K bytes - Viewed (0)
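Two details in `batch-handlers.go` are worth calling out: the job runs inside a bounded retry loop (`for attempts := 1; attempts <= retryAttempts; attempts++`), and `attempts := attempts` re-declares the loop variable so goroutines started inside the loop capture the per-iteration value (needed before Go 1.22's per-iteration loop variables). A minimal sketch of the retry shape only, with a hypothetical `runJob` and a fixed back-off:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const retryAttempts = 3

// runJob is a hypothetical stand-in for one pass of the batch job.
func runJob(attempt int) error {
	if attempt < 3 {
		return errors.New("transient source/target error")
	}
	return nil
}

func main() {
	var err error
	for attempts := 1; attempts <= retryAttempts; attempts++ {
		if err = runJob(attempts); err == nil {
			fmt.Println("job finished on attempt", attempts)
			break
		}
		fmt.Println("attempt", attempts, "failed:", err)
		time.Sleep(100 * time.Millisecond) // back off before retrying
	}
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
```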
- cmd/data-scanner_test.go
  t.Run(fmt.Sprintf("TestApplyNewerNoncurrentVersionsLimit-%d", i), func(t *testing.T) { workers := []chan expiryOp{make(chan expiryOp)} es.workers.Store(&workers) workerReady := make(chan struct{}) var wg sync.WaitGroup wg.Add(1) var gotExpired []ObjectToDelete go expiryWorker(&wg, workerReady, workers[0], &gotExpired) <-workerReady item := scannerItem{ Path: obj,
  Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 12K bytes - Viewed (0)
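The scanner test wires a single expiry worker to an unbuffered channel, stores that one-element slice into `es.workers`, and uses a ready channel plus a `WaitGroup` so it can feed items and collect what the worker received deterministically. A stripped-down version of that synchronization pattern, with a hypothetical `op` type standing in for `expiryOp`/`ObjectToDelete`:

```go
package main

import (
	"fmt"
	"sync"
)

type op struct{ name string }

// expiryWorker signals readiness once it is running, then records every op
// it receives until the channel is closed.
func expiryWorker(wg *sync.WaitGroup, ready chan<- struct{}, in <-chan op, got *[]op) {
	defer wg.Done()
	close(ready) // the caller may now send without racing the goroutine start
	for o := range in {
		*got = append(*got, o)
	}
}

func main() {
	workers := []chan op{make(chan op)}
	ready := make(chan struct{})

	var wg sync.WaitGroup
	wg.Add(1)
	var got []op
	go expiryWorker(&wg, ready, workers[0], &got)
	<-ready // block until the worker goroutine is up

	workers[0] <- op{name: "stale-version-1"}
	workers[0] <- op{name: "stale-version-2"}
	close(workers[0])
	wg.Wait() // wg.Wait establishes happens-before, so reading got is race-free

	fmt.Println("expired:", got)
}
```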