Results 21 - 30 of 44 for GPU (0.8 sec)

  1. docs/es/docs/advanced/events.md

    /// tip | Tip
    
    The `shutdown` would happen when you are **stopping** the application.
    
    Maybe you need to start a new version, or you simply got tired of running it. 🤷
    
    ///
    
    Registered: Sun Dec 28 07:19:09 UTC 2025
    - Last Modified: Wed Dec 17 20:41:43 UTC 2025
    - 8.5K bytes
    - Viewed (0)
  2. tensorflow/c/c_api_test.cc

    TEST(CAPI, Session_Min_GPU) {
      const string gpu_device = GPUDeviceName();
      // Skip this test if no GPU is available.
      if (gpu_device.empty()) return;
    
      RunMinTest(gpu_device, /*use_XLA=*/false);
    }
    
    TEST(CAPI, Session_Min_XLA_GPU) {
      const string gpu_device = GPUDeviceName();
      // Skip this test if no GPU is available.
      if (gpu_device.empty()) return;
    
      RunMinTest(gpu_device, /*use_XLA=*/true);
    }
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Mon Nov 17 00:00:38 UTC 2025
    - 97K bytes
    - Viewed (0)
  3. docs/de/docs/advanced/events.md

    /// tip | Tip
    
    The `shutdown` would happen when you **stop** the application.
    
    You might need to start a new version, or you may simply have gotten tired of running it. 🤷
    
    ///
    Registered: Sun Dec 28 07:19:09 UTC 2025
    - Last Modified: Wed Dec 17 20:41:43 UTC 2025
    - 9.5K bytes
    - Viewed (0)
  4. tensorflow/c/c_api_experimental.h

    // Sets XLA's auto jit mode according to the specified string, which is parsed
    // as if passed in XLA_FLAGS. This has global effect.
    TF_CAPI_EXPORT void TF_SetXlaAutoJitMode(const char* mode);
    
    // Returns whether the single GPU or general XLA auto jit optimizations are
    // enabled through MarkForCompilationPassFlags.
    TF_CAPI_EXPORT unsigned char TF_GetXlaAutoJitEnabled();
    
    // Sets XLA's minimum cluster size. This has global effect.
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Thu Apr 27 21:07:00 UTC 2023
    - 15.1K bytes
    - Viewed (0)
  5. docs/zh/docs/advanced/events.md

    {!../../docs_src/events/tutorial003.py!}
    ```
    
    Here, before the `yield`, we put the (dummy) model function into the dictionary of machine learning models, simulating the time-consuming **startup** work of loading a model. This code runs during **startup**, **before the application starts handling requests**.
    
    Then, after the `yield`, we unload the model. This code runs **after the application finishes handling requests**, right before **shutdown**. It can release resources such as memory or a GPU.
    
    /// tip | Tip
    
    The **shutdown** event is only triggered when you stop the application.
    
    Maybe you need to start a new version, or you just got tired of running it. 🤷
    
    ///
    
    ## Lifespan function
    
    The first thing to notice is that we define an async function with `yield`. This is very similar to dependencies with `yield` (a minimal sketch follows this entry).
    
    ```Python hl_lines="14-19"
    Registered: Sun Dec 28 07:19:09 UTC 2025
    - Last Modified: Sat Oct 11 17:48:49 UTC 2025
    - 7K bytes
    - Viewed (0)
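    The snippet above references `{!../../docs_src/events/tutorial003.py!}` but does not render it. As an illustration of the lifespan pattern described there, here is a minimal Python sketch, assuming FastAPI's `lifespan` parameter and `contextlib.asynccontextmanager`; the model function, the `ml_models` dictionary, and the `/predict` route are placeholders, not taken from the indexed file.
    
    ```Python
    from contextlib import asynccontextmanager

    from fastapi import FastAPI


    def fake_answer_to_everything_ml_model(x: float):
        # Placeholder for an expensive-to-load ML model.
        return x * 42


    ml_models = {}


    @asynccontextmanager
    async def lifespan(app: FastAPI):
        # Runs during startup, before the app starts handling requests:
        # load the (dummy) model into the dictionary.
        ml_models["answer_to_everything"] = fake_answer_to_everything_ml_model
        yield
        # Runs after the app finishes handling requests, right before shutdown:
        # unload the model and release resources (e.g. memory or a GPU).
        ml_models.clear()


    app = FastAPI(lifespan=lifespan)


    @app.get("/predict")
    async def predict(x: float):
        result = ml_models["answer_to_everything"](x)
        return {"result": result}
    ```
    
    Started with, for example, `uvicorn main:app`, the dictionary is populated once before the first request is served and cleared when the server shuts down.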
  6. build-logic/cleanup/src/test/groovy/gradlebuild/cleanup/services/LeakingProcessKillPatternTest.groovy

        def "matches google-chrome-for-testing"() {
            def line = '3723579 /usr/bin/google-chrome-for-testing --allow-pre-commit-input --disable-background-networking --disable-client-side-phishing-detection --disable-default-apps --disable-gpu --disable-hang-monitor --disable-popup-blocking --disab'
    
            def projectDir = "/whatever"
    
            expect:
            (line =~ KillLeakingJavaProcesses.generateLeakingProcessKillPattern(projectDir)).find()
        }
    
    Registered: Wed Dec 31 11:36:14 UTC 2025
    - Last Modified: Fri Jul 12 03:42:46 UTC 2024
    - 14.8K bytes
    - Viewed (0)
  7. tensorflow/c/c_test_util.cc

      TF_AddInput(desc, {zero, 0});
      TF_AddInput(desc, {input, 0});
      TF_SetAttrInt(desc, "num_split", 3);
      TF_SetAttrType(desc, "T", TF_INT32);
      // Set device to CPU since there is no version of split for int32 on GPU
      // TODO(iga): Convert all these helpers and tests to use floats because
      // they are usually available on GPUs. After doing this, remove TF_SetDevice
      // call in c_api_function_test.cc
      TF_SetDevice(desc, "/cpu:0");
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Sat Oct 04 05:55:32 UTC 2025
    - 17.8K bytes
    - Viewed (1)
  8. CHANGELOG/CHANGELOG-1.3.md

    * Do not query the metadata server to find out if running on GCE.  Retry metadata server query for gcr if running on gce. ([#28871](https://github.com/kubernetes/kubernetes/pull/28871), [@vishh](https://github.com/vishh))
    * Fix GPU resource validation ([#28743](https://github.com/kubernetes/kubernetes/pull/28743), [@therc](https://github.com/therc))
    Registered: Fri Dec 26 09:05:12 UTC 2025
    - Last Modified: Thu Dec 24 02:28:26 UTC 2020
    - 84K bytes
    - Viewed (0)
  9. tensorflow/BUILD

    )
    
    config_setting(
        name = "with_xla_support",
        define_values = {"with_xla_support": "true"},
        visibility = ["//visibility:public"],
    )
    
    # By default, XLA GPU is compiled into tensorflow when building with
    # --config=cuda even when `with_xla_support` is false. The config setting
    # here allows us to override the behavior if needed.
    config_setting(
        name = "no_xla_deps_in_cuda",
    Registered: Tue Dec 30 12:39:10 UTC 2025
    - Last Modified: Wed Nov 12 19:21:56 UTC 2025
    - 53.1K bytes
    - Viewed (0)
  10. CHANGELOG/CHANGELOG-1.7.md

        * Fix stop hook failure on kubernetes-worker charm
    
        * Fix handling of juju kubernetes-worker.restart-needed state
    
        * Fix nagios checks in charms
    
      * Enable GPU mode if GPU hardware detected ([#43467](https://github.com/kubernetes/kubernetes/pull/43467), [@tvansteenburgh](https://github.com/tvansteenburgh))
    
    Registered: Fri Dec 26 09:05:12 UTC 2025
    - Last Modified: Thu May 05 13:44:43 UTC 2022
    - 308.7K bytes
    - Viewed (1)