Results 1 - 10 of 14 for GPU (0.01 seconds)
README.md
[pip package](https://www.tensorflow.org/install/pip), to [enable GPU support](https://www.tensorflow.org/install/gpu), use a [Docker container](https://www.tensorflow.org/install/docker), and [build from source](https://www.tensorflow.org/install/source). To install the current release, which includes support for [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and Windows)*:

```
pip install tensorflow
```
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Thu Apr 02 10:38:57 GMT 2026 - 11.6K bytes
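The README result above covers installing the GPU-enabled release. As a quick sanity check after `pip install tensorflow`, a minimal sketch (assuming a TensorFlow 2.x build with CUDA drivers already installed) that only queries device visibility:

```python
# Minimal check that the installed TensorFlow build can see a GPU.
# Assumes `pip install tensorflow` has completed and CUDA drivers are present.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("Visible GPUs:", [gpu.name for gpu in gpus])
else:
    print("No GPU detected; TensorFlow will fall back to CPU.")
```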
CONTRIBUTING.md
and [GPU developer Dockerfile](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/devel-gpu.Dockerfile) for the required packages. Alternatively, use the said [tensorflow/build Docker images](https://hub.docker.com/r/tensorflow/build) (`tensorflow/tensorflow:devel` and `tensorflow/tensorflow:devel-gpu` are no
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Sat Jan 11 04:47:59 GMT 2025 - 15.9K bytes
configure.py
```python
      (environ_cp.get('TF_NEED_ROCM', None) == '1')):
    test_and_build_filters += ['-no_windows_gpu', '-no_gpu']
  else:
    test_and_build_filters.append('-gpu')
elif is_macos():
  test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded']
elif is_linux():
  if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
      (environ_cp.get('TF_NEED_ROCM', None) == '1')):
```
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Fri Dec 19 16:32:04 GMT 2025 - 48.3K bytes
tensorflow/c/eager/dlpack.cc
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Thu Mar 13 23:41:52 GMT 2025 - 13K bytes
tensorflow/c/c_api_experimental.cc
```cpp
// threadpool of GPU event mgr, as that can trigger more callbacks to be
// scheduled on that same threadpool, causing a deadlock in cases where the
// caller of event_mgr->ThenExecute() blocks on the completion of the callback
// (as in the case of ConstOp kernel creation on GPU, which involves copying a
// CPU tensor to GPU).
```
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Sat Oct 04 05:55:32 GMT 2025 - 29.4K bytes
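The comment above describes a same-threadpool deadlock: a callback scheduled on the GPU event manager's pool blocks waiting for work that can only run on that same pool. As a loose, non-TensorFlow illustration of the pattern, here is a minimal Python sketch using `concurrent.futures`, with a timeout added so the example terminates instead of hanging:

```python
# Generic illustration (not TensorFlow code) of the deadlock hazard described
# above: a task running on a single-worker pool blocks on the result of a
# second task submitted to that same pool, so the second task can never start.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=1)  # stands in for the event-mgr threadpool

def callback():
    return "done"

def outer():
    inner = pool.submit(callback)      # scheduled on the same pool
    return inner.result(timeout=2)     # blocks the only worker; would hang without a timeout

try:
    print(pool.submit(outer).result())
except TimeoutError:
    print("deadlock: the inner callback never got a worker")
finally:
    pool.shutdown(wait=False)
```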
tensorflow/c/eager/c_api_experimental_test.cc
```cpp
ASSERT_EQ(TF_OK, TF_GetCode(status.get())) << TF_Message(status.get());
ASSERT_EQ(0, device_id) << device_id;

// Disable the test if no GPU is present.
string gpu_device_name;
if (GetDeviceName(ctx, &gpu_device_name, "GPU")) {
  TFE_TensorHandle* hgpu = TFE_TensorHandleCopyToDevice(
      hcpu, ctx, gpu_device_name.c_str(), status.get());
```
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Thu Oct 09 05:56:18 GMT 2025 - 31.5K bytes
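The test snippet above skips when no GPU is available and otherwise copies a CPU tensor handle to the GPU with TFE_TensorHandleCopyToDevice. A rough Python-level analogue of the same pattern (an illustration, not the C API test itself):

```python
# Python-level analogue of the pattern above: skip when no GPU is present,
# otherwise copy a host tensor onto the GPU and check where it landed.
import tensorflow as tf

if not tf.config.list_physical_devices("GPU"):
    print("No GPU present; skipping the copy test.")
else:
    with tf.device("/CPU:0"):
        hcpu = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # created in host memory
    with tf.device("/GPU:0"):
        hgpu = tf.identity(hcpu)                       # forces a copy to GPU memory
    print(hgpu.device)                                 # e.g. ".../device:GPU:0"
```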
tensorflow/c/eager/c_api_experimental.h
```cpp
// completed. This is only valid on local TFE_TensorHandles. The pointer
// returned will be on the device in which the TFE_TensorHandle resides (so e.g.
// for a GPU tensor this will return a pointer to GPU memory). The pointer is
// only guaranteed to be valid until TFE_DeleteTensorHandle is called on this
// TensorHandle. Only supports POD data types.
```
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Wed Feb 21 22:37:46 GMT 2024 - 39.5K bytes
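There is no direct Python equivalent of the device-pointer API described above, but the DLPack bridge (the dlpack.cc result earlier in this list) exposes the related idea of handing out a tensor's underlying buffer without a copy. A rough sketch, which runs on CPU if no GPU is visible:

```python
# Rough Python-level analogue (not the C API itself): DLPack shares a tensor's
# underlying buffer with other frameworks without copying, much as
# TFE_TensorHandleDevicePointer exposes the raw device pointer in C.
import tensorflow as tf

device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
with tf.device(device):
    t = tf.constant([1.0, 2.0, 3.0])

capsule = tf.experimental.dlpack.to_dlpack(t)           # shares t's buffer, no copy
roundtrip = tf.experimental.dlpack.from_dlpack(capsule) # wraps the same memory
print(roundtrip.device, roundtrip.numpy())
```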
.github/workflows/build.yml
Created: Fri Apr 03 11:42:14 GMT 2026 - Last Modified: Tue Mar 10 16:19:02 GMT 2026 - 11.6K bytes
build-logic/cleanup/src/test/groovy/gradlebuild/cleanup/services/LeakingProcessKillPatternTest.groovy
def "matches google-chrome-for-testing"() { def line = '3723579 /usr/bin/google-chrome-for-testing --allow-pre-commit-input --disable-background-networking --disable-client-side-phishing-detection --disable-default-apps --disable-gpu --disable-hang-monitor --disable-popup-blocking --disab' def projectDir = "/whatever" expect: (line =~ KillLeakingJavaProcesses.generateLeakingProcessKillPattern(projectDir)).find() }
Created: Wed Apr 01 11:36:16 GMT 2026 - Last Modified: Fri Jul 12 03:42:46 GMT 2024 - 14.8K bytes
tensorflow/c/c_api_experimental.h
```cpp
// Sets XLA's auto jit mode according to the specified string, which is parsed
// as if passed in XLA_FLAGS. This has global effect.
TF_CAPI_EXPORT void TF_SetXlaAutoJitMode(const char* mode);

// Returns whether the single GPU or general XLA auto jit optimizations are
// enabled through MarkForCompilationPassFlags.
TF_CAPI_EXPORT unsigned char TF_GetXlaAutoJitEnabled();

// Sets XLA's minimum cluster size. This has global effect.
```
Created: Tue Apr 07 12:39:13 GMT 2026 - Last Modified: Thu Apr 27 21:07:00 GMT 2023 - 15.1K bytes
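The header above exposes XLA auto-jit as a process-global switch in the C API. At the Python level, the closest public knob is `tf.config.optimizer.set_jit`; a small sketch showing the global setting alongside the more common per-function `jit_compile` (any speed-up depends entirely on the model and hardware):

```python
# Sketch of toggling XLA auto-clustering globally from Python, loosely
# corresponding to the TF_SetXlaAutoJitMode / TF_GetXlaAutoJitEnabled C APIs.
import tensorflow as tf

tf.config.optimizer.set_jit("autoclustering")  # enable XLA auto-jit globally
print(tf.config.optimizer.get_jit())           # reports the current setting

# Per-function compilation is the explicitly scoped alternative:
@tf.function(jit_compile=True)
def scaled_sum(x, y):
    return tf.reduce_sum(x * y)

print(scaled_sum(tf.ones([4]), tf.range(4.0)).numpy())
```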