Results 1 - 10 of 12 for xla (0.02 sec)

  1. tensorflow/BUILD

            [],
            otherwise = [
                "@local_xla//xla/stream_executor/cuda:all_runtime",
                "@local_xla//xla/stream_executor/cuda:cuda_platform",
                "@local_xla//xla/stream_executor/cuda:cudnn_plugin",
                "@local_xla//xla/stream_executor/cuda:cufft_plugin",
                "@local_xla//xla/stream_executor:cuda_platform",
            ],
        ),
        deps = if_cuda([
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Thu Aug 28 19:11:51 UTC 2025
    - 53.4K bytes
    - Viewed (0)
  2. tensorflow/c/BUILD

            "//tensorflow/core/transforms:__subpackages__",
        ],
        deps = [
            ":c_api_macros_hdrs",
            "@local_xla//xla/tsl/c:tsl_status",
            "@local_xla//xla/tsl/c:tsl_status_internal",
            "@local_xla//xla/tsl/platform:status",
        ] + select({
            "//tensorflow:android": [
                "//tensorflow/core:portable_tensorflow_lib_lite",  # TODO(annarev): exclude runtime srcs
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Mon Aug 18 03:53:25 UTC 2025
    - 30.6K bytes
    - Viewed (0)
  3. ci/official/containers/ml_build/README.md

    WIP ML Build Docker container for ML repositories (Tensorflow, JAX and XLA).
    
    This container branches off from
    /tensorflow/tools/tf_sig_build_dockerfiles/. However, since
    hermetic CUDA and hermetic Python is now available for Tensorflow, a lot of the
    requirements installed on the original container can be removed to reduce the
    footprint of the container and make it more reusable across different ML
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Tue Sep 24 20:45:58 UTC 2024
    - 416 bytes
    - Viewed (0)
  4. ci/official/utilities/repack_libtensorflow.sh

      cp tensorflow/core/platform/ctstring.h \
        tensorflow/core/platform/ctstring_internal.h \
        ${LIB_PKG}/include/tensorflow/core/platform
      cp third_party/xla/xla/tsl/c/tsl_status.h ${LIB_PKG}/include/xla/tsl/c
      cp third_party/xla/third_party/tsl/tsl/platform/ctstring.h \
         third_party/xla/third_party/tsl/tsl/platform/ctstring_internal.h \
         ${LIB_PKG}/include/tsl/platform
      cd ${LIB_PKG}
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Fri Jan 17 16:25:18 UTC 2025
    - 5.7K bytes
    - Viewed (0)
  5. tensorflow/c/c_api_experimental.h

    #ifdef __cplusplus
    extern "C" {
    #endif
    
    // When `enable` is true, set
    // tensorflow.ConfigProto.OptimizerOptions.global_jit_level to ON_1, and also
    // set XLA flag values to prepare for XLA compilation. Otherwise set
    // global_jit_level to OFF.
    //
    // This and the next API are syntax sugar over TF_SetConfig(), and is used by
    // clients that cannot read/write the tensorflow.ConfigProto proto.
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Thu Apr 27 21:07:00 UTC 2023
    - 15.1K bytes
    - Viewed (0)
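
    The header comment above documents a C-API shortcut for clients that cannot read or write the tensorflow.ConfigProto themselves. As a point of reference, a minimal Python sketch of the same setting, assuming the TF1-style tf.compat.v1 session API (which is not part of the header shown), looks like this:

        import tensorflow as tf

        # Mirror of what the C API comment describes: set
        # OptimizerOptions.global_jit_level to ON_1 before creating a session.
        config = tf.compat.v1.ConfigProto()
        config.graph_options.optimizer_options.global_jit_level = (
            tf.compat.v1.OptimizerOptions.ON_1
        )
        with tf.compat.v1.Session(config=config) as sess:
            pass  # build and run an XLA-eligible graph here
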
  6. .bazelrc

    #     --config=dbg --per_file_copt=+tensorflow/core/kernels/identity_op.*@-g
    # Since this .bazelrc file is synced between the tensorflow/tensorflow repo and
    # the openxla/xla repo, also include debug info for files under xla/.
    build:dbg --per_file_copt=+.*,-tensorflow.*,-xla.*@-g0
    build:dbg --per_file_copt=+tensorflow/core/kernels.*@-g0
    # for now, disable arm_neon. see: https://github.com/tensorflow/tensorflow/issues/33360
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Fri Aug 22 21:03:34 UTC 2025
    - 56K bytes
    - Viewed (0)
  7. README.md

    **Linux XLA**                 | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-xla.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-xla.html)         | TBA
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Fri Jul 18 14:09:03 UTC 2025
    - 11.6K bytes
    - Viewed (0)
  8. tensorflow/c/c_api_experimental.cc

      auto* optimizer_options =
          config.mutable_graph_options()->mutable_optimizer_options();
      if (enable) {
        optimizer_options->set_global_jit_level(tensorflow::OptimizerOptions::ON_1);
    
        // These XLA flags are needed to trigger XLA properly from C (more generally
        // non-Python) clients. If this API is called again with `enable` set to
        // false, it is safe to keep these flag values as is.
        tensorflow::MarkForCompilationPassFlags* flags =
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Mon Aug 18 03:53:25 UTC 2025
    - 29.5K bytes
    - Viewed (0)
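
    The implementation above sets the same ConfigProto field shown in result 5 and additionally adjusts MarkForCompilationPassFlags so XLA is triggered for non-Python clients. In TF2 eager code the comparable user-facing switch is tf.config.optimizer.set_jit; the sketch below illustrates that switch and is not code from the file above:

        import tensorflow as tf

        # TF2-style counterpart of the global JIT switch: turn on XLA
        # auto-clustering for graphs built afterwards.
        tf.config.optimizer.set_jit(True)

        @tf.function
        def scale(x):
            return x * 2.0 + 1.0

        print(scale(tf.constant([1.0, 2.0, 3.0])))
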
  9. SECURITY.md

    reachable and exploitable through production-grade, benign models.
    
    ### Compilation
    
    Compiling models via the recommended entry points described in
    [XLA](https://www.tensorflow.org/xla) and
    [JAX](https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html)
    documentation should be safe, while some of the testing and debugging tools that
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Wed Oct 16 16:10:43 UTC 2024
    - 9.6K bytes
    - Viewed (0)
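
    For concreteness, the JAX entry point referenced above is jax.jit; the sketch below is an illustration of that recommended path, not content of SECURITY.md:

        import jax
        import jax.numpy as jnp

        # jax.jit traces the function once and hands the result to XLA for
        # compilation; this is the documented, supported compilation path.
        @jax.jit
        def affine(w, x, b):
            return jnp.dot(w, x) + b

        w = jnp.ones((3, 3))
        x = jnp.arange(3.0)
        b = jnp.zeros(3)
        print(affine(w, x, b))
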
  10. RELEASE.md

            Eager mode.
    
    *   `tf.lite`:
    
        *   Enable TFLite experimental new converter by default.
    
    *   XLA
    
        *   XLA now builds and works on windows. All prebuilt packages come with XLA
            available.
        *   XLA can be
            [enabled for a `tf.function`](https://www.tensorflow.org/xla#explicit_compilation_with_tffunction)
            with “compile or throw exception” semantics on CPU and GPU.
    
    Registered: Tue Sep 09 12:39:10 UTC 2025
    - Last Modified: Mon Aug 18 20:54:38 UTC 2025
    - 740K bytes
    - Viewed (2)
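
    The release note above refers to explicit per-function compilation. A minimal sketch of that usage follows; in current TensorFlow releases the flag is spelled jit_compile=True (earlier releases used experimental_compile=True), and compilation either succeeds or raises an error, i.e. “compile or throw exception” semantics:

        import tensorflow as tf

        # Explicit XLA compilation of a single tf.function, as described in
        # the release note; an unsupported op raises an error instead of
        # silently falling back to the regular executor.
        @tf.function(jit_compile=True)
        def dense(x, w, b):
            return tf.nn.relu(tf.matmul(x, w) + b)

        x = tf.random.normal([4, 8])
        w = tf.random.normal([8, 16])
        b = tf.zeros([16])
        print(dense(x, w, b).shape)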