Results 31 - 40 of 52 for gpu (0.03 sec)
tensorflow/c/c_api_test.cc
TEST(CAPI, Session_Min_GPU) {
  const string gpu_device = GPUDeviceName();
  // Skip this test if no GPU is available.
  if (gpu_device.empty()) return;
  RunMinTest(gpu_device, /*use_XLA=*/false);
}

TEST(CAPI, Session_Min_XLA_GPU) {
  const string gpu_device = GPUDeviceName();
  // Skip this test if no GPU is available.
  if (gpu_device.empty()) return;
  RunMinTest(gpu_device, /*use_XLA=*/true);
}
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Sat Oct 12 16:27:48 UTC 2024 - 97K bytes - Viewed (0) -
tensorflow/c/eager/c_api_experimental.h
// completed. This is only valid on local TFE_TensorHandles. The pointer
// returned will be on the device in which the TFE_TensorHandle resides (so e.g.
// for a GPU tensor this will return a pointer to GPU memory). The pointer is
// only guaranteed to be valid until TFE_DeleteTensorHandle is called on this
// TensorHandle. Only supports POD data types.
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Wed Feb 21 22:37:46 UTC 2024 - 39.5K bytes - Viewed (0) -
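The header comment excerpted in the previous result appears to describe TFE_TensorHandleDevicePointer from c_api_experimental.h. Below is a minimal, hedged sketch of how such a call might be used; the helper name InspectDeviceBuffer and the pairing with TFE_TensorHandleDeviceMemorySize are assumptions for illustration, not taken from the excerpt.

#include "tensorflow/c/eager/c_api.h"
#include "tensorflow/c/eager/c_api_experimental.h"
#include "tensorflow/c/tf_status.h"

// Hypothetical helper: inspects the raw device buffer behind a local eager
// tensor handle. The call blocks until the op producing `h` has completed;
// for a GPU tensor the returned pointer refers to GPU memory.
void InspectDeviceBuffer(TFE_TensorHandle* h) {
  TF_Status* status = TF_NewStatus();
  void* data = TFE_TensorHandleDevicePointer(h, status);
  size_t num_bytes = TFE_TensorHandleDeviceMemorySize(h, status);
  if (TF_GetCode(status) == TF_OK && data != nullptr) {
    // The buffer is only valid until TFE_DeleteTensorHandle(h) is called,
    // so copy out up to num_bytes bytes here if the data is needed later.
    (void)num_bytes;
  }
  TF_DeleteStatus(status);
}

As the comment notes, this is only valid for local handles holding POD data types.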
docs/de/docs/advanced/events.md
/// tip | "Tipp" Das *Herunterfahren* würde erfolgen, wenn Sie die Anwendung **stoppen**. Möglicherweise müssen Sie eine neue Version starten, oder Sie haben es einfach satt, sie auszuführen. 🤷
Registered: Sun Nov 03 07:19:11 UTC 2024 - Last Modified: Sun Oct 06 20:36:54 UTC 2024 - 9.1K bytes - Viewed (0) -
build-logic/cleanup/src/test/groovy/gradlebuild/cleanup/services/LeakingProcessKillPatternTest.groovy
def "matches google-chrome-for-testing"() { def line = '3723579 /usr/bin/google-chrome-for-testing --allow-pre-commit-input --disable-background-networking --disable-client-side-phishing-detection --disable-default-apps --disable-gpu --disable-hang-monitor --disable-popup-blocking --disab' def projectDir = "/whatever" expect: (line =~ KillLeakingJavaProcesses.generateLeakingProcessKillPattern(projectDir)).find() }
Registered: Wed Nov 06 11:36:14 UTC 2024 - Last Modified: Fri Jul 12 03:42:46 UTC 2024 - 14.8K bytes - Viewed (0) -
tensorflow/c/c_test_util.cc
TF_AddInput(desc, {zero, 0});
TF_AddInput(desc, {input, 0});
TF_SetAttrInt(desc, "num_split", 3);
TF_SetAttrType(desc, "T", TF_INT32);
// Set device to CPU since there is no version of split for int32 on GPU
// TODO(iga): Convert all these helpers and tests to use floats because
// they are usually available on GPUs. After doing this, remove TF_SetDevice
// call in c_api_function_test.cc
TF_SetDevice(desc, "/cpu:0");
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Fri Oct 15 03:16:52 UTC 2021 - 17.8K bytes - Viewed (0) -
CHANGELOG/CHANGELOG-1.3.md
* Do not query the metadata server to find out if running on GCE. Retry metadata server query for gcr if running on gce. ([#28871](https://github.com/kubernetes/kubernetes/pull/28871), [@vishh](https://github.com/vishh))
* Fix GPU resource validation ([#28743](https://github.com/kubernetes/kubernetes/pull/28743), [@therc](https://github.com/therc))
Registered: Fri Nov 01 09:05:11 UTC 2024 - Last Modified: Thu Dec 24 02:28:26 UTC 2020 - 84K bytes - Viewed (0) -
tensorflow/BUILD
)

config_setting(
    name = "with_xla_support",
    define_values = {"with_xla_support": "true"},
    visibility = ["//visibility:public"],
)

# By default, XLA GPU is compiled into tensorflow when building with
# --config=cuda even when `with_xla_support` is false. The config setting
# here allows us to override the behavior if needed.
config_setting(
    name = "no_xla_deps_in_cuda",
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Wed Oct 16 05:28:35 UTC 2024 - 53.5K bytes - Viewed (0) -
CHANGELOG/CHANGELOG-1.7.md
* Fix stop hook failure on kubernetes-worker charm
* Fix handling of juju kubernetes-worker.restart-needed state
* Fix nagios checks in charms
* Enable GPU mode if GPU hardware detected ([#43467](https://github.com/kubernetes/kubernetes/pull/43467), [@tvansteenburgh](https://github.com/tvansteenburgh))
Registered: Fri Nov 01 09:05:11 UTC 2024 - Last Modified: Thu May 05 13:44:43 UTC 2022 - 308.7K bytes - Viewed (1) -
tensorflow/c/c_api_function_test.cc
for (auto input : inputs) {
  TF_AddInput(desc, input);
}
// Set device to CPU because some ops inside the function might not be
// available on GPU.
TF_SetDevice(desc, "/cpu:0");
*op = TF_FinishOperation(desc, s_);
ASSERT_EQ(TF_OK, TF_GetCode(s_)) << TF_Message(s_);
ASSERT_NE(*op, nullptr);
}

FunctionDef fdef() {
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Thu Jul 20 22:08:54 UTC 2023 - 63.6K bytes - Viewed (0) -
CHANGELOG/CHANGELOG-1.11.md
* Support for the `alpha.kubernetes.io/nvidia-gpu` resource, which was deprecated in 1.10, has been removed. Please use the resource exposed by DevicePlugins instead (`nvidia.com/gpu`). ([#61498](https://github.com/kubernetes/kubernetes/pull/61498), [@mindprince](https://github.com/mindprince))
Registered: Fri Nov 01 09:05:11 UTC 2024 - Last Modified: Thu Feb 06 06:04:15 UTC 2020 - 328.4K bytes - Viewed (0)