Results 21 - 30 of 52 for gpu (0.01 sec)
- configure.py
      (environ_cp.get('TF_NEED_ROCM', None) == '1')):
    test_and_build_filters += ['-no_windows_gpu', '-no_gpu']
  else:
    test_and_build_filters.append('-gpu')
elif is_macos():
  test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded']
elif is_linux():
  if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
      (environ_cp.get('TF_NEED_ROCM', None) == '1')):
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Wed Oct 02 22:16:02 UTC 2024 - 48.2K bytes - Viewed (0)
- tensorflow/c/eager/c_api_experimental_test.cc
ASSERT_EQ(TF_OK, TF_GetCode(status.get())) << TF_Message(status.get());
ASSERT_EQ(0, device_id) << device_id;

// Disable the test if no GPU is present.
string gpu_device_name;
if (GetDeviceName(ctx, &gpu_device_name, "GPU")) {
  TFE_TensorHandle* hgpu = TFE_TensorHandleCopyToDevice(
      hcpu, ctx, gpu_device_name.c_str(), status.get());
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Thu Aug 03 03:14:26 UTC 2023 - 31.5K bytes - Viewed (0)
- ci/official/containers/linux_arm64/devel.usertools/wheel_verification.bats
# Googlers: search for "test_tf_whl_size"
case "$TF_WHEEL" in
  # CPU:
  *cpu*manylinux*) LARGEST_OK_SIZE=220 ;;
  # GPU:
  *manylinux*) LARGEST_OK_SIZE=580 ;;
  # Unknown:
  *)
    echo "The wheel's name is in an unknown format."
    exit 1
    ;;
esac
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Tue Jan 23 02:14:00 UTC 2024 - 2.7K bytes - Viewed (0)
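The bats snippet above is a size gate keyed on the wheel filename. A rough Python sketch of the same check, assuming the thresholds are megabytes (the units are not stated in the excerpt):

```python
import os
import sys
from fnmatch import fnmatch

def largest_ok_size_mb(wheel_name: str) -> int:
    # Thresholds copied from the snippet above; the CPU pattern must match first.
    if fnmatch(wheel_name, "*cpu*manylinux*"):
        return 220
    if fnmatch(wheel_name, "*manylinux*"):
        return 580
    sys.exit("The wheel's name is in an unknown format.")

def check_wheel(wheel_path: str) -> None:
    size_mb = os.path.getsize(wheel_path) / (1024 * 1024)
    limit = largest_ok_size_mb(os.path.basename(wheel_path))
    if size_mb > limit:
        sys.exit(f"{wheel_path}: {size_mb:.1f} MB exceeds the {limit} MB limit")
```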
- ci/official/README.md
You may invoke a CI script of your choice by following these instructions:

```bash
cd tensorflow-git-dir
# Here is a single-line example of running a script on Linux to build the
# GPU version of TensorFlow for Python 3.12, using the public TF bazel cache and
# a local build cache:
TFCI=py312,linux_x86_cuda,public_cache,disk_cache ci/official/wheel.sh
```
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Thu Feb 01 03:21:19 UTC 2024 - 8K bytes - Viewed (0)
- tensorflow/c/eager/c_api_test_util.h
TFE_Op* MinOp(TFE_Context* ctx, TFE_TensorHandle* input,
              TFE_TensorHandle* axis);

// If there is a device of type `device_type`, returns true
// and sets 'device_name' accordingly.
// `device_type` must be either "GPU" or "TPU".
bool GetDeviceName(TFE_Context* ctx, tensorflow::string* device_name,
                   const char* device_type);

// Create a ServerDef with the given `job_name` and add `num_tasks` tasks in it.
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Mon Jul 17 23:43:59 UTC 2023 - 7.7K bytes - Viewed (0)
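The helper above probes the C API for a device of a given type. The user-facing Python API offers a similar capability check; a minimal sketch (tf.config.list_physical_devices is the real TF2 call, the skip logic is illustrative):

```python
import tensorflow as tf

# Analogue of GetDeviceName(ctx, &name, "GPU"): list registered GPUs
# and skip GPU-specific work when none is present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("first GPU:", gpus[0].name)
else:
    print("no GPU present; skipping GPU test")
```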
- tensorflow/c/eager/abstract_operation.h
// logic to refer to the specific device chosen.
//
// Example: If one calls `op->SetDeviceName("/device:GPU")`, the value
// returned by DeviceName should be "/device:GPU:*" until a particular GPU is
// chosen for the operation by the device placement logic in the
// executor. After that, the value returned by DeviceName will be a full
// device name such as "/job:localhost/replica:0/task:0/device:GPU:1".
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Sat Oct 12 05:11:17 UTC 2024 - 7.3K bytes - Viewed (0)
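The comment above describes how a partial device name is resolved to a fully qualified one by the placement logic. A user-level Python illustration of the same behavior (not the C++ internals; soft placement keeps the sketch runnable on CPU-only machines):

```python
import tensorflow as tf

# Fall back to another device if the requested one is unavailable.
tf.config.set_soft_device_placement(True)

# Request a device; the executor resolves it to a fully qualified name
# once the op is placed.
with tf.device("/device:GPU:0"):
    y = tf.constant([1.0, 2.0]) * 2.0

# Prints the full name actually chosen, e.g.
# "/job:localhost/replica:0/task:0/device:GPU:0" (or a CPU if no GPU).
print(y.device)
```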
- .github/workflows/build.yml
with:
  api-level: ${{ matrix.api-level }}
  arch: ${{ matrix.api-level == '34' && 'x86_64' || 'x86' }}
  force-avd-creation: false
  emulator-options: -no-window -gpu swiftshader_indirect -noaudio -no-boot-anim -camera-back none
  disable-animations: false
  script: echo "Generated AVD snapshot for caching."
- name: Run Tests
Registered: Fri Nov 01 11:42:11 UTC 2024 - Last Modified: Sat Aug 17 10:05:29 UTC 2024 - 17.2K bytes - Viewed (0)
- docs/pt/docs/advanced/events.md
And then, right after the `yield`, we unload the model. This code will be executed **after** the application **finishes handling requests**, right before the *shutdown*. This could, for example, release resources like memory or a GPU.

/// tip | "Tip"

The `shutdown` would happen when you are **stopping** the application. Maybe you need to start a new version, or you just got tired of running it. 🤷

///

### The _lifespan_ function
Registered: Sun Nov 03 07:19:11 UTC 2024 - Last Modified: Sun Oct 06 20:36:54 UTC 2024 - 8.6K bytes - Viewed (0)
- docs/en/docs/advanced/events.md
And then, right after the `yield`, we unload the model. This code will be executed **after** the application **finishes handling requests**, right before the *shutdown*. This could, for example, release resources like memory or a GPU.

/// tip

The `shutdown` would happen when you are **stopping** the application. Maybe you need to start a new version, or you just got tired of running it. 🤷

///

### Lifespan function
Registered: Sun Nov 03 07:19:11 UTC 2024 - Last Modified: Mon Oct 28 10:36:22 UTC 2024 - 7.6K bytes - Viewed (0)
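Both events.md results describe the same lifespan pattern. A minimal self-contained sketch of it, with a trivial stand-in for the model being loaded and unloaded:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

ml_models = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: load the model before requests are served.
    ml_models["answer"] = lambda x: x * 2  # stand-in for a real model
    yield
    # Shutdown: runs after the app finishes handling requests,
    # e.g. to release memory or GPU resources.
    ml_models.clear()

app = FastAPI(lifespan=lifespan)
```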
- tensorflow/c/c_api_experimental.h
// Sets XLA's auto jit mode according to the specified string, which is parsed
// as if passed in XLA_FLAGS. This has global effect.
TF_CAPI_EXPORT void TF_SetXlaAutoJitMode(const char* mode);

// Returns whether the single GPU or general XLA auto jit optimizations are
// enabled through MarkForCompilationPassFlags.
TF_CAPI_EXPORT unsigned char TF_GetXlaAutoJitEnabled();

// Sets XLA's minimum cluster size. This has global effect.
Registered: Tue Nov 05 12:39:12 UTC 2024 - Last Modified: Thu Apr 27 21:07:00 UTC 2023 - 15.1K bytes - Viewed (0)
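The C API above toggles XLA auto-jit globally. In the Python API, the closest user-level knob is tf.config.optimizer.set_jit; a minimal sketch (this is a related setting, not a direct binding of TF_SetXlaAutoJitMode):

```python
import tensorflow as tf

# Enable XLA auto-clustering globally, roughly analogous to the
# auto-jit mode exposed through the C API above.
tf.config.optimizer.set_jit("autoclustering")

# Read the current setting back (compare TF_GetXlaAutoJitEnabled).
print(tf.config.optimizer.get_jit())
```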