Results 1 - 10 of 13 for xla (0.03 sec)

  1. tensorflow/c/BUILD

            "//tensorflow/core/transforms:__subpackages__",
        ],
        deps = [
            ":c_api_macros_hdrs",
            "@local_tsl//tsl/platform:status",
            "@local_xla//xla/tsl/c:tsl_status",
            "@local_xla//xla/tsl/c:tsl_status_internal",
        ] + select({
            "//tensorflow:android": [
                "//tensorflow/core:portable_tensorflow_lib_lite",  # TODO(annarev): exclude runtime srcs
            ],
    - Last Modified: Sat Nov 02 06:47:06 UTC 2024
    - 30.4K bytes
  2. tensorflow/BUILD

            [],
            otherwise = [
                "@local_xla//xla/stream_executor/cuda:all_runtime",
                "@local_xla//xla/stream_executor/cuda:cuda_driver",
                "@local_xla//xla/stream_executor/cuda:cuda_platform",
                "@local_xla//xla/stream_executor/cuda:cudnn_plugin",
                "@local_xla//xla/stream_executor/cuda:cufft_plugin",
                "@local_xla//xla/stream_executor:cuda_platform",
            ],
        ),
    - Last Modified: Wed Oct 16 05:28:35 UTC 2024
    - 53.5K bytes
  3. ci/official/containers/ml_build/README.md

    WIP ML Build Docker container for ML repositories (Tensorflow, JAX and XLA).
    
    This container branches off from
    /tensorflow/tools/tf_sig_build_dockerfiles/. However, since
    hermetic CUDA and hermetic Python are now available for Tensorflow, a lot of the
    requirements installed on the original container can be removed to reduce the
    footprint of the container and make it more reusable across different ML
    - Last Modified: Tue Sep 24 20:45:58 UTC 2024
    - 416 bytes
  4. tensorflow/c/eager/parallel_device/parallel_device_lib_test.cc

    #include "tensorflow/c/eager/tfe_context_internal.h"
    #include "tensorflow/c/tf_buffer.h"
    #include "tensorflow/c/tf_datatype.h"
    #include "tensorflow/c/tf_status.h"
    #include "xla/tsl/lib/core/status_test_util.h"
    #include "tensorflow/core/common_runtime/eager/context.h"
    #include "tensorflow/core/framework/cancellation.h"
    #include "tensorflow/core/framework/function.h"
    - Last Modified: Mon Oct 21 04:14:14 UTC 2024
    - 15.6K bytes
  5. .bazelrc

    #     --config=dbg --per_file_copt=+tensorflow/core/kernels/identity_op.*@-g
    # Since this .bazelrc file is synced between the tensorflow/tensorflow repo and
    # the openxla/xla repo, also include debug info for files under xla/.
    build:dbg --per_file_copt=+.*,-tensorflow.*,-xla.*@-g0
    build:dbg --per_file_copt=+tensorflow/core/kernels.*@-g0
    # for now, disable arm_neon. see: https://github.com/tensorflow/tensorflow/issues/33360
    - Last Modified: Mon Oct 28 22:02:31 UTC 2024
    - 51.3K bytes
  6. tensorflow/c/c_api_experimental.cc

      auto* optimizer_options =
          config.mutable_graph_options()->mutable_optimizer_options();
      if (enable) {
        optimizer_options->set_global_jit_level(tensorflow::OptimizerOptions::ON_1);
    
        // These XLA flags are needed to trigger XLA properly from C (more generally
        // non-Python) clients. If this API is called again with `enable` set to
        // false, it is safe to keep these flag values as is.
        tensorflow::MarkForCompilationPassFlags* flags =
    - Last Modified: Sat Oct 12 16:27:48 UTC 2024
    - 29.5K bytes
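    The excerpt is from the C API's XLA toggle (TF_EnableXLACompilation), which
    sets the session's global JIT level so XLA auto-clustering fires for C
    (non-Python) clients. As a rough sketch only, the Python-level analogue of
    that global toggle in a TensorFlow 2.x install is tf.config.optimizer.set_jit:

        # Minimal sketch (not from the repository): the Python analogue of the
        # global JIT toggle the C snippet sets via OptimizerOptions::ON_1.
        import tensorflow as tf

        tf.config.optimizer.set_jit(True)  # enable XLA auto-clustering for TF graphs

        @tf.function
        def matmul_twice(x):
            # Clusterable ops in this traced graph may now be compiled by XLA.
            return tf.matmul(tf.matmul(x, x), x)

        print(matmul_twice(tf.random.normal([256, 256])).shape)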
  7. SECURITY.md

    reachable and exploitable through production-grade, benign models.
    
    ### Compilation
    
    Compiling models via the recommended entry points described in
    [XLA](https://www.tensorflow.org/xla) and
    [JAX](https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html)
    documentation should be safe, while some of the testing and debugging tools that
    - Last Modified: Wed Oct 16 16:10:43 UTC 2024
    - 9.6K bytes
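    The recommended JAX entry point linked above is jax.jit. A minimal sketch of
    that usage (assuming jax is installed; none of this code comes from the
    repository):

        # Minimal sketch of the documented jitting entry point: jax.jit compiles
        # the traced function with XLA.
        import jax
        import jax.numpy as jnp

        @jax.jit
        def predict(w, x):
            # A benign, model-like computation.
            return jnp.tanh(x @ w)

        print(predict(jnp.ones((4, 2)), jnp.ones((3, 4))))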
  8. tensorflow/c/eager/c_api_experimental.cc

    #include "tensorflow/c/eager/tfe_tensorhandle_internal.h"
    #include "tensorflow/c/tf_status.h"
    #include "tensorflow/c/tf_status_helper.h"
    #include "xla/tsl/c/tsl_status_internal.h"
    #include "xla/tsl/distributed_runtime/coordination/coordination_service_agent.h"
    #include "xla/tsl/framework/cancellation.h"
    #include "tensorflow/core/common_runtime/composite_device.h"
    #include "tensorflow/core/common_runtime/device.h"
    - Last Modified: Sat Oct 12 05:11:17 UTC 2024
    - 35.9K bytes
  9. RELEASE.md

            Eager mode.
    
    *   `tf.lite`:
    
        *   Enable TFLite experimental new converter by default.
    
    *   XLA
    
        *   XLA now builds and works on windows. All prebuilt packages come with XLA
            available.
        *   XLA can be
            [enabled for a `tf.function`](https://www.tensorflow.org/xla#explicit_compilation_with_tffunction)
            with “compile or throw exception” semantics on CPU and GPU.
    
    - Last Modified: Tue Oct 22 14:33:53 UTC 2024
    - 735.3K bytes
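    The "compile or throw exception" behaviour described in the release note is
    the explicit-compilation flag on tf.function. A hedged sketch, assuming
    TensorFlow 2.x (recent releases spell the flag jit_compile=True; older
    releases used experimental_compile=True):

        # Sketch of explicit XLA compilation of a tf.function: XLA must compile
        # the function, and an error is raised if it cannot, instead of silently
        # falling back to the regular executor.
        import tensorflow as tf

        @tf.function(jit_compile=True)
        def dense_layer(x, w, b):
            return tf.nn.relu(tf.matmul(x, w) + b)

        print(dense_layer(tf.ones([8, 16]), tf.ones([16, 4]), tf.zeros([4])).shape)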
  10. configure.py

                    ' of compute capabilities excluding version %s.' % ver)
                all_valid = False
              if ver < 35:
                print('WARNING: XLA does not support CUDA compute capabilities '
                      'lower than sm_35. Disable XLA when running on older GPUs.')
          else:
            ver = float(m.group(0))
            if ver < 3.0:
    - Last Modified: Wed Oct 02 22:16:02 UTC 2024
    - 48.2K bytes
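    The configure.py excerpt validates user-entered CUDA compute capabilities and
    warns that XLA needs at least sm_35. A rough standalone sketch of that check
    (the helper below is hypothetical; only the sm_35 threshold and the warning
    text come from the excerpt):

        # Hypothetical standalone version of the capability check shown above.
        import re

        def warn_if_unsupported_by_xla(capability):
            m = re.match(r'(\d+)\.(\d+)', capability)
            if not m:
                raise ValueError('Invalid compute capability: %r' % capability)
            ver = int(m.group(1)) * 10 + int(m.group(2))  # e.g. "3.5" -> 35
            if ver < 35:
                print('WARNING: XLA does not support CUDA compute capabilities '
                      'lower than sm_35. Disable XLA when running on older GPUs.')

        warn_if_unsupported_by_xla('3.0')  # prints the warning
        warn_if_unsupported_by_xla('7.5')  # no warning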