Results 1 - 10 of 58 for cpus (0.03 sec)
cmd/update.go
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 18.9K bytes - Viewed (0) -
docs/en/docs/deployment/server-workers.md
## Recap { #recap }

You can use multiple worker processes with the `--workers` CLI option with the `fastapi` or `uvicorn` commands to take advantage of **multi-core CPUs**, to run **multiple processes in parallel**. You could use these tools and ideas if you are setting up **your own deployment system** while taking care of the other deployment concepts yourself.
Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sun Aug 31 09:15:41 UTC 2025 - 8.3K bytes - Viewed (0) -
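As a concrete illustration of the `--workers` option described in the server-workers.md excerpt above, a minimal invocation might look like this (assuming a hypothetical app object `app` in `main.py`):

```console
$ fastapi run main.py --workers 4
```

or equivalently, running Uvicorn directly:

```console
$ uvicorn main:app --workers 4
```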
docs/en/docs/deployment/concepts.md
On the other hand, if you have 2 servers and you are using **100% of their CPU and RAM**, at some point one process will ask for more memory, and the server will have to use the disk as "memory" (which can be thousands of times slower), or even **crash**. Or one process might need to do some computation and would have to wait until the CPU is free again.
Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sun Aug 31 09:15:41 UTC 2025 - 18.6K bytes - Viewed (0) -
cmd/notification.go
```go
		if nErr.Err != nil {
			peersLogOnceIf(logger.SetReqInfo(ctx, reqInfo), nErr.Err, nErr.Host.String())
		}
	}
}

// GetCPUs - Get all CPU information.
func (sys *NotificationSys) GetCPUs(ctx context.Context) []madmin.CPUs {
	reply := make([]madmin.CPUs, len(sys.peerClients))
	g := errgroup.WithNErrs(len(sys.peerClients))
	for index, client := range sys.peerClients {
		if client == nil {
			continue
		}
```
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Aug 29 02:39:48 UTC 2025 - 45.9K bytes - Viewed (0) -
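The cmd/notification.go excerpt above fans a request out to every peer and collects one result and one error per peer index (via minio's `errgroup.WithNErrs` helper). Below is a minimal self-contained sketch of that pattern using only the standard library; `Peer`, `fetchCPUInfo`, and `CPUInfo` are hypothetical stand-ins for `peerClients`, `client.GetCPUs`, and `madmin.CPUs`:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// CPUInfo is a hypothetical stand-in for madmin.CPUs.
type CPUInfo struct{ Cores int }

// Peer is a hypothetical stand-in for a peer REST client.
type Peer struct{ addr string }

// fetchCPUInfo stands in for the per-peer RPC (client.GetCPUs(ctx)).
func fetchCPUInfo(ctx context.Context, p *Peer) (CPUInfo, error) {
	return CPUInfo{Cores: 8}, nil
}

// getCPUs queries every peer concurrently and records one result and
// one error per peer index, mirroring the fan-out in the snippet.
func getCPUs(ctx context.Context, peers []*Peer) ([]CPUInfo, []error) {
	reply := make([]CPUInfo, len(peers))
	errs := make([]error, len(peers))
	var wg sync.WaitGroup
	for index, client := range peers {
		if client == nil {
			continue // offline slot: leave the zero value and a nil error
		}
		wg.Add(1)
		go func(index int, client *Peer) {
			defer wg.Done()
			reply[index], errs[index] = fetchCPUInfo(ctx, client)
		}(index, client)
	}
	wg.Wait()
	return reply, errs
}

func main() {
	reply, errs := getCPUs(context.Background(),
		[]*Peer{{addr: "peer-1"}, nil, {addr: "peer-2"}})
	fmt.Println(reply, errs)
}
```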
docs/metrics/v3.md
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 45.2K bytes - Viewed (0) -
.bazelrc
# See https://developer.nvidia.com/cuda-gpus#compute
# `compute_XY` enables PTX embedding in addition to SASS. PTX
# is forward compatible beyond the current compute capability major
# release while SASS is only forward compatible inside the current
# major release. Example: sm_80 kernels can run on sm_89 GPUs but
# not on sm_90 GPUs. compute_80 kernels though can also run on sm_90 GPUs.
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Fri Aug 22 21:03:34 UTC 2025 - 56K bytes - Viewed (0) -
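The .bazelrc comment above explains the sm_XY/compute_XY distinction; in hermetic CUDA setups like the one in the WORKSPACE result below, the capability list is usually pinned via a repo env. A hedged sketch follows; the `HERMETIC_CUDA_COMPUTE_CAPABILITIES` variable name is an assumption here, not confirmed by the excerpt:

```
# Assumed repo env consumed by the hermetic CUDA rules.
# sm_80 emits Ampere SASS only; compute_80 also embeds PTX,
# so the kernels can still be JIT-compiled on sm_90 GPUs.
build:cuda --repo_env=HERMETIC_CUDA_COMPUTE_CAPABILITIES="sm_80,compute_80"
```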
RELEASE.md
* `.predict` is available for Cloud TPUs, for all types of Keras models (sequential, functional and subclassing models).
* Automatic outside compilation is now enabled for Cloud TPUs. This allows `tf.summary` to be used more conveniently with Cloud TPUs.
* Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Aug 18 20:54:38 UTC 2025 - 740K bytes - Viewed (2) -
doc/go_mem.html
Note that the prohibition on introducing data races does not apply if the compiler can prove that the races do not affect correct execution on the target platform. For example, on essentially all CPUs, it is valid to rewrite

```go
n := 0
for i := 0; i < m; i++ {
	n += *shared
}
```

into:

```go
n := 0
local := *shared
for i := 0; i < m; i++ {
	n += local
}
```
Registered: Tue Sep 09 11:13:09 UTC 2025 - Last Modified: Tue Aug 05 15:41:37 UTC 2025 - 26.6K bytes - Viewed (0) -
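To see the doc/go_mem.html excerpt as a whole program, here is a minimal self-contained reconstruction of both loop forms (`m` and `shared` are given toy values here; in the scenario the text describes, `shared` would be written concurrently by another goroutine):

```go
package main

import "fmt"

func main() {
	m := 4
	value := 10
	shared := &value

	// Original form: *shared is re-read on every iteration.
	n := 0
	for i := 0; i < m; i++ {
		n += *shared
	}

	// Rewritten form: the load is hoisted out of the loop. Per the
	// memory model text, this rewrite is valid on essentially all
	// CPUs even if another goroutine races on *shared.
	n2 := 0
	local := *shared
	for i := 0; i < m; i++ {
		n2 += local
	}

	fmt.Println(n, n2) // 40 40 in this race-free toy program
}
```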
CHANGELOG/CHANGELOG-1.31.md
- Users can choose a different static policy option `SpreadPhysicalCPUsPreferredOption` to spread CPUs across physical CPUs for some specific applications ([#123733](https://github.com/kubernetes/kubernetes/pull/123733), [@Jeffwan](https://github.com/Jeffwan)) [SIG Node]
Registered: Fri Sep 05 09:05:11 UTC 2025 - Last Modified: Wed Aug 13 19:49:57 UTC 2025 - 429.6K bytes - Viewed (0) -
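For context, static CPU manager policy options are set in the kubelet configuration. A hedged sketch follows; the option key `distribute-cpus-across-cores` is an assumption inferred from the PR, since the changelog entry names only the internal constant `SpreadPhysicalCPUsPreferredOption`:

```yaml
# Hypothetical kubelet config sketch; the alpha option key is assumed.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true   # alpha policy options are gated
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  distribute-cpus-across-cores: "true" # spread CPUs across physical cores
```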
WORKSPACE
load( "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_json_init_repository.bzl", "cuda_json_init_repository", ) cuda_json_init_repository() load( "@cuda_redist_json//:distributions.bzl", "CUDA_REDISTRIBUTIONS", "CUDNN_REDISTRIBUTIONS", ) load( "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Wed Sep 03 23:57:17 UTC 2025 - 4.4K bytes - Viewed (0)
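The last `load` in the WORKSPACE excerpt is cut off mid-statement. A hedged sketch of how such a hermetic CUDA WORKSPACE typically continues; the loaded symbol is guessed from the .bzl file name, and the exact argument shape is an assumption:

```bzl
load(
    "@rules_ml_toolchain//third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",
    "cuda_redist_init_repositories",  # assumed symbol, named after the file
)

# Assumed call shape: feed the JSON-derived distribution lists back in.
cuda_redist_init_repositories(
    cuda_redistributions = CUDA_REDISTRIBUTIONS,
)
```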