Results 1 - 10 of 12 for GPU (0.02 sec)
.bazelrc
# CUDA WHEEL
test:linux_cuda_wheel_test_filters --test_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_cuda_wheel_test_filters --build_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Fri Aug 22 21:03:34 UTC 2025 - 56K bytes
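Bazel's tag-filter semantics decide which targets these lines select: a target is kept only if it carries at least one of the positive tags (`gpu`, `requires-gpu`) and none of the negated `-...` tags. A rough Python sketch of that rule, for illustration only (the tag sets and the abbreviated filter list are hypothetical):

```
def matches(tags, filters):
    # Approximate Bazel --test_tag_filters semantics: when any positive
    # filters are present, a target needs at least one of them, and it
    # must carry none of the negated ones.
    positive = {f for f in filters if not f.startswith("-")}
    negative = {f[1:] for f in filters if f.startswith("-")}
    if positive and not (positive & set(tags)):
        return False
    return not (negative & set(tags))

filters = ["gpu", "requires-gpu", "-no_gpu", "-no_oss"]  # abbreviated list
print(matches({"gpu"}, filters))            # True: has a positive tag
print(matches({"gpu", "no_oss"}, filters))  # False: carries an excluded tag
print(matches({"cpu_only"}, filters))       # False: no positive tag
```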
.github/bot_config.yml
cuda_comment: >
  From the template it looks like you are installing **TensorFlow** (TF) prebuilt binaries:
  * For TF-GPU - See point 1
  * For TF-CPU - See point 2
  -----------------------------------------------------------------------------------------------
  **1. Installing TensorFlow-GPU (TF) prebuilt binaries**
  Make sure you are using compatible TF and CUDA versions.
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Jun 30 16:38:59 UTC 2025 - 4K bytes
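The version compatibility the bot asks about can be checked from Python itself; prebuilt GPU wheels record the CUDA and cuDNN versions they were compiled against. A small sketch using TensorFlow's build-info API (key names as published for TF 2.x):

```
import tensorflow as tf

# Prebuilt GPU wheels expose the toolchain versions they were built with.
info = tf.sysconfig.get_build_info()
print("CUDA build:", info.get("is_cuda_build", False))
print("CUDA version:", info.get("cuda_version"))
print("cuDNN version:", info.get("cudnn_version"))
```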
README.md
[pip package](https://www.tensorflow.org/install/pip), to [enable GPU support](https://www.tensorflow.org/install/gpu), use a [Docker container](https://www.tensorflow.org/install/docker), and [build from source](https://www.tensorflow.org/install/source). To install the current release, which includes support for [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and Windows)*:

```
$ pip install tensorflow
```
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Fri Jul 18 14:09:03 UTC 2025 - 11.6K bytes
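Once the wheel is installed, the usual smoke test for GPU support is to list the visible devices; this assumes a CUDA-capable card and a matching driver are already set up:

```
import tensorflow as tf

# An empty list means TensorFlow will fall back to CPU-only execution.
gpus = tf.config.list_physical_devices("GPU")
print("Num GPUs available:", len(gpus))
```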
ci/official/utilities/rename_and_verify_wheels.sh
# Checks TFCI_BAZEL_COMMON_ARGS for "gpu" or "cuda", implying that the test is
# relevant. All of the GPU test machines have CUDA installed via other means,
# so I am not sure how to verify that the dependencies themselves are valid for
# the moment.
if [[ "$TFCI_BAZEL_COMMON_ARGS" =~ gpu|cuda ]]; then
  echo "Checking to make sure tensorflow[and-cuda] is installable..."
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Fri Apr 25 00:22:38 UTC 2025 - 4.7K bytes
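The guard above is plain substring matching on an environment variable; the same check sketched in Python for clarity (only `TFCI_BAZEL_COMMON_ARGS` comes from the script, the rest is illustrative):

```
import os
import re

# Run the CUDA-wheel check only when the common Bazel args mention gpu/cuda.
args = os.environ.get("TFCI_BAZEL_COMMON_ARGS", "")
if re.search(r"gpu|cuda", args):
    print("Checking to make sure tensorflow[and-cuda] is installable...")
```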
RELEASE.md
`XNNPACK` delegate automatically when the model has a `fp32` operation.
* GPU
  * Allow GPU acceleration starting with internal graph nodes
  * Experimental support for quantized models with the Android GPU delegate
  * Add GPU delegate whitelist.
  * Rename GPU whitelist -> compatibility (list).
  * Improve GPU compatibility list entries from crash reports.
* NNAPI
  * Set default value for
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Aug 18 20:54:38 UTC 2025 - 740K bytes
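For context, the GPU delegate these notes refer to is attached when a TFLite interpreter is constructed. A hedged sketch via the Python API; the delegate library name and model path are placeholders that vary by platform and build:

```
import tensorflow as tf

# Placeholder names: the GPU delegate binary and model path differ per platform.
delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()
```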
configure.py
      (environ_cp.get('TF_NEED_ROCM', None) == '1')):
    test_and_build_filters += ['-no_windows_gpu', '-no_gpu']
  else:
    test_and_build_filters.append('-gpu')
elif is_macos():
  test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded']
elif is_linux():
  if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
      (environ_cp.get('TF_NEED_ROCM', None) == '1')):
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Wed Apr 30 15:18:54 UTC 2025 - 48.3K bytes
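configure.py collects these filters and then emits them as flag lines in a generated bazelrc; a simplified sketch of that last step (the exact flag lines the real script writes may differ):

```
# Append the accumulated filters as Bazel flag lines, configure.py-style.
test_and_build_filters = ['-gpu', '-nomac', '-no_mac', '-mac_excluded']

with open('.tf_configure.bazelrc', 'a') as f:
    f.write('test --test_tag_filters=%s\n' % ','.join(test_and_build_filters))
    f.write('test --build_tag_filters=%s\n' % ','.join(test_and_build_filters))
```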
tensorflow/c/c_api_experimental.cc
// threadpool of GPU event mgr, as that can trigger more callbacks to be
// scheduled on that same threadpool, causing a deadlock in cases where the
// caller of event_mgr->ThenExecute() blocks on the completion of the callback
// (as in the case of ConstOp kernel creation on GPU, which involves copying a
// CPU tensor to GPU).
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Mon Aug 18 03:53:25 UTC 2025 - 29.5K bytes
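The hazard described is the classic self-deadlock: a pool worker blocks on work that can only run on the same pool. A Python illustration of the pattern (the single-worker pool stands in for a saturated event-mgr threadpool, and a timeout stands in for the real hang):

```
from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=1)  # one worker: the pool is "full"

def outer():
    # Schedules more work on the same pool, then blocks on its completion,
    # which is exactly the pattern the comment warns against.
    inner = pool.submit(lambda: "CPU tensor copied to GPU")
    return inner.result(timeout=2)  # without the timeout this waits forever

try:
    print(pool.submit(outer).result())
except TimeoutError:
    print("deadlock: the only worker is waiting on work it must run itself")
```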
.github/workflows/build.yml
with:
  api-level: ${{ matrix.api-level }}
  arch: ${{ matrix.api-level == '34' && 'x86_64' || 'x86' }}
  force-avd-creation: false
  emulator-options: -no-window -gpu swiftshader_indirect -noaudio -no-boot-anim -camera-back none
  disable-animations: false
  script: echo "Generated AVD snapshot for caching."
- name: Run Tests
Registered: Fri Sep 05 11:42:10 UTC 2025 - Last Modified: Thu Aug 21 07:15:58 UTC 2025 - 18.1K bytes
docs/en/docs/advanced/events.md
And then, right after the `yield`, we unload the model. This code will be executed **after** the application **finishes handling requests**, right before the *shutdown*. This could, for example, release resources like memory or a GPU.

/// tip

The `shutdown` would happen when you are **stopping** the application. Maybe you need to start a new version, or you just got tired of running it. 🤷

///
Registered: Sun Sep 07 07:19:17 UTC 2025 - Last Modified: Sun Aug 31 09:15:41 UTC 2025 - 7.9K bytes
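The page is describing FastAPI's lifespan context manager: everything before the `yield` runs at startup, everything after it runs at shutdown. A condensed sketch of the pattern (the `load_model` helper is a stand-in for an expensive load):

```
from contextlib import asynccontextmanager
from fastapi import FastAPI

def load_model():
    return {"answer": 42}  # stand-in for loading a real ML model

@asynccontextmanager
async def lifespan(app: FastAPI):
    app.state.model = load_model()  # runs before the app starts serving
    yield                           # the application handles requests here
    app.state.model = None          # runs at shutdown: release memory or a GPU

app = FastAPI(lifespan=lifespan)
```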
tensorflow/BUILD
)

config_setting(
    name = "with_xla_support",
    define_values = {"with_xla_support": "true"},
    visibility = ["//visibility:public"],
)

# By default, XLA GPU is compiled into tensorflow when building with
# --config=cuda even when `with_xla_support` is false. The config setting
# here allows us to override the behavior if needed.
config_setting(
    name = "no_xla_deps_in_cuda",
Registered: Tue Sep 09 12:39:10 UTC 2025 - Last Modified: Thu Aug 28 19:11:51 UTC 2025 - 53.4K bytes