Results 1 - 10 of 37 for CPU (0.01 sec)
cmd/metrics-v3-system-cpu.go
sysCPUAvgIdleMD   = NewGaugeMD(sysCPUAvgIdle, "Average CPU idle time")
sysCPUAvgIOWaitMD = NewGaugeMD(sysCPUAvgIOWait, "Average CPU IOWait time")
sysCPULoadMD      = NewGaugeMD(sysCPULoad, "CPU load average 1min")
sysCPULoadPercMD  = NewGaugeMD(sysCPULoadPerc, "CPU load average 1min (percentage)")
sysCPUNiceMD      = NewGaugeMD(sysCPUNice, "CPU nice time")
sysCPUStealMD     = NewGaugeMD(sysCPUSteal, "CPU steal time")
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Thu Jun 20 17:55:03 UTC 2024 - 3K bytes - Viewed (0) -
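The MinIO lines above only register gauge metadata: a metric name paired with a one-line description. As a rough illustration of that pattern, here is a minimal sketch in Python using prometheus_client and psutil; the metric names, port, and collection logic are assumptions for illustration, not MinIO's actual exported metrics:
    import os
    import psutil
    from prometheus_client import Gauge, start_http_server

    # Illustrative gauges mirroring the descriptions quoted above (names are made up).
    cpu_avg_idle = Gauge("sys_cpu_avg_idle", "Average CPU idle time")
    cpu_avg_iowait = Gauge("sys_cpu_avg_iowait", "Average CPU IOWait time")
    cpu_load_1min = Gauge("sys_cpu_load", "CPU load average 1min")

    def collect():
        times = psutil.cpu_times_percent(interval=1)       # system-wide percentages
        cpu_avg_idle.set(times.idle)
        cpu_avg_iowait.set(getattr(times, "iowait", 0.0))  # iowait is Linux-only
        cpu_load_1min.set(os.getloadavg()[0])              # 1-minute load average

    if __name__ == "__main__":
        start_http_server(9100)  # hypothetical scrape port
        while True:
            collect()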
tensorflow/c/README.md
- Nightly builds:
  - [Linux CPU-only](https://storage.googleapis.com/tensorflow-nightly/github/tensorflow/lib_package/libtensorflow-cpu-linux-x86_64.tar.gz)
  - [Linux GPU](https://storage.googleapis.com/tensorflow-nightly/github/tensorflow/lib_package/libtensorflow-gpu-linux-x86_64.tar.gz)
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Tue Oct 23 01:38:30 UTC 2018 - 539 bytes - Viewed (0) -
.github/bot_config.yml
Therefore, on any CPU that does not have these instruction sets, either the CPU or GPU version of TF will fail to load. Apparently, your CPU model does not support AVX instruction sets. You can still use TensorFlow with the alternatives given below:
* Try Google Colab to use TensorFlow.
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Jun 30 16:38:59 UTC 2025 - 4K bytes - Viewed (0) -
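The snippet above (from TensorFlow's issue-triage bot_config.yml) concerns CPUs lacking AVX. As a hedged, Linux-only sketch, one way to check whether the current CPU advertises the AVX flag before attempting a TensorFlow import is to read /proc/cpuinfo; the function name and fallback message here are illustrative:
    # Minimal sketch: look for the "avx" flag on Linux before importing TensorFlow.
    def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
        return False

    if cpu_has_avx():
        import tensorflow as tf
        print("TensorFlow", tf.__version__)
    else:
        print("No AVX flag found; prebuilt TF wheels may fail to load on this CPU.")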
cmd/metrics-realtime.go
}
if types.Contains(madmin.MetricsCPU) {
    m.Aggregated.CPU = &madmin.CPUMetrics{
        CollectedAt: UTCNow(),
    }
    cm, err := c.Times(false)
    if err != nil {
        m.Errors = append(m.Errors, fmt.Sprintf("%s: %v (cpuTimes)", byHostName, err.Error()))
    } else {
        // not collecting per-cpu stats, so there will be only one element
        if len(cm) == 1 {
            m.Aggregated.CPU.TimesStat = &cm[0]
        } else {
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Sat Jun 01 05:16:24 UTC 2024 - 6.3K bytes - Viewed (0) -
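In the Go fragment above, c.Times(false) requests one aggregated CPU-times entry rather than one per core (hence the len(cm) == 1 check). The same aggregate-versus-per-CPU distinction can be sketched with Python's psutil; this is an illustration, not MinIO code:
    import psutil

    # Aggregated: a single entry for the whole machine (analogous to Times(false)).
    agg = psutil.cpu_times(percpu=False)
    print("aggregate user/system/idle:", agg.user, agg.system, agg.idle)

    # Per-CPU: one entry per logical core (analogous to Times(true)).
    per_cpu = psutil.cpu_times(percpu=True)
    print("logical cores reported:", len(per_cpu))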
ci/official/envs/linux_arm64
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --repo_env=USE_PYWRAP_RULES=True --config release_arm64_linux"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_arm64
# Note: this is not set to "--cpu", because that changes the package name
# to tensorflow_cpu. These ARM builds are supposed to have the name "tensorflow"
# despite lacking Nvidia CUDA support.
TFCI_BUILD_PIP_PACKAGE_WHEEL_NAME_ARG="--repo_env=WHEEL_NAME=tensorflow"
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Wed Jun 04 01:09:09 UTC 2025 - 1.6K bytes - Viewed (0) -
docs/tuning/tuned.conf
[main]
summary=Maximum server performance for MinIO
[vm]
transparent_hugepage=madvise
[sysfs]
/sys/kernel/mm/transparent_hugepage/defrag=defer+madvise
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none=0
[cpu]
force_latency=1
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[sysctl]
fs.xfs.xfssyncd_centisecs=72000
net.core.busy_read=50
net.core.busy_poll=50
kernel.numa_balancing=1
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Fri Jul 12 23:31:18 UTC 2024 - 1.9K bytes - Viewed (0) -
ci/official/utilities/rename_and_verify_wheels.sh
if [[ "$TFCI_WHL_NUMPY_VERSION" == 1 ]]; then # Uninstall tf nightly wheel built with numpy1. "$python" -m pip uninstall -y tf_nightly_numpy1 # Install tf nightly cpu wheel built with numpy2.x from PyPI in numpy1.x env. "$python" -m pip install tf-nightly-cpu if [[ "$TFCI_WHL_IMPORT_TEST_ENABLE" == "1" ]]; then "$python" -c 'import tensorflow as tf; t1=tf.constant([1,2,3,4]); t2=tf.constant([5,6,7,8]); print(tf.add(t1,t2).shape)'
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Fri Apr 25 00:22:38 UTC 2025 - 4.7K bytes - Viewed (0) -
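The import smoke test embedded in the script above is a single shell one-liner; expanded into a standalone form for readability, it performs the same operations:
    import tensorflow as tf

    # Same check as the one-liner in the script above.
    t1 = tf.constant([1, 2, 3, 4])
    t2 = tf.constant([5, 6, 7, 8])
    print(tf.add(t1, t2).shape)  # expected output: (4,)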
docs/compression/README.md
streaming compression due to its stability and performance. This algorithm is specifically optimized for machine-generated content. Write throughput is typically at least 500MB/s per CPU core, and scales with the number of available CPU cores. Decompression speed is typically at least 1GB/s. This means that in cases where raw IO is below these numbers, compression will not only reduce disk usage but also help increase system throughput.
Registered: Sun Sep 07 19:28:11 UTC 2025 - Last Modified: Tue Aug 12 18:20:36 UTC 2025 - 5.2K bytes - Viewed (0) -
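A rough reading of the claim above: when the disks sustain less raw throughput than the compressor, compression raises effective ingest because fewer bytes reach the disk. A back-of-the-envelope sketch of that arithmetic, with purely illustrative numbers (not benchmarks):
    # Illustrative arithmetic for the compression claim above; every number is an assumption.
    cores = 8
    compress_per_core_mb_s = 500    # "at least 500MB/s per CPU core"
    raw_disk_mb_s = 300             # assumed raw disk write throughput
    compression_ratio = 3.0         # assumed ratio for machine-generated content

    cpu_limit = cores * compress_per_core_mb_s      # what the CPUs can compress
    disk_limit = raw_disk_mb_s * compression_ratio  # uncompressed bytes/s the disk can absorb
    effective_ingest = min(cpu_limit, disk_limit)

    print(f"effective ingest ~ {effective_ingest:.0f} MB/s vs {raw_disk_mb_s} MB/s uncompressed")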
src/main/java/org/codelibs/fess/timer/SystemMonitorTarget.java
append(buf, "open", () -> processProbe.getOpenFileDescriptorCount()).append(','); append(buf, "max", () -> processProbe.getMaxFileDescriptorCount()); buf.append("},"); buf.append("\"cpu\":{"); append(buf, "percent", () -> processProbe.getProcessCpuPercent()).append(','); append(buf, "total", () -> processProbe.getProcessCpuTotalTime()); buf.append("},");
Registered: Thu Sep 04 12:52:25 UTC 2025 - Last Modified: Thu Jul 17 08:28:31 UTC 2025 - 7.8K bytes - Viewed (0) -
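The Java snippet above hand-assembles a JSON object carrying per-process CPU figures. A comparable structure can be sketched in Python with psutil and json; the field names merely mirror the snippet and are not Fess's actual output format:
    import json
    import psutil

    proc = psutil.Process()
    cpu_times = proc.cpu_times()
    payload = {
        "cpu": {
            "percent": proc.cpu_percent(interval=0.1),    # process CPU percent
            "total": cpu_times.user + cpu_times.system,   # total CPU seconds used
        }
    }
    print(json.dumps(payload))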
ci/official/envs/linux_x86
TFCI_DOCKER_IMAGE=us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build:latest
TFCI_DOCKER_PULL_ENABLE=1
TFCI_DOCKER_REBUILD_ARGS="--target=devel ci/official/containers/ml_build"
TFCI_INDEX_HTML_ENABLE=1
TFCI_LIB_SUFFIX="-cpu-linux-x86_64"
TFCI_OUTPUT_DIR=build_output
TFCI_WHL_AUDIT_ENABLE=1
TFCI_WHL_AUDIT_PLAT=manylinux_2_27_x86_64
TFCI_WHL_BAZEL_TEST_ENABLE=1
TFCI_WHL_SIZE_LIMIT=260M
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Wed Jul 16 22:21:17 UTC 2025 - 1.4K bytes - Viewed (0)