Results 11 - 20 of 23 for cuja (0.14 sec)

  1. ci/official/README.md

    #    build. This should also match the system you're using--you cannot build
    #    the TF MacOS package from Linux.
    #      Ex. linux_x86        -- x86_64 Linux platform
    #      Ex. linux_x86_cuda   -- x86_64 Linux platform, with Nvidia CUDA support
    #      Ex. macos_arm64      -- arm64 MacOS platform
    # 3. Add modifiers. Some modifiers for local execution are:
    #      Ex. disk_cache -- Use a local cache
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Thu Feb 01 03:21:19 GMT 2024
    - 8K bytes
    - Viewed (0)
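
    The naming scheme above composes a platform name (optionally with a `_cuda` suffix) plus
    modifiers such as `disk_cache`. As a hedged sketch of how such a selection might be exported
    for a local run; the `TFCI` variable name and the comma-separated format are assumptions not
    shown in this snippet:

    ```bash
    # Hypothetical: select an x86_64 Linux + CUDA config with a local disk cache.
    # The TFCI variable name and comma-separated syntax are assumptions.
    export TFCI=linux_x86_cuda,disk_cache
    ```
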
  2. ci/official/containers/linux_arm64/Dockerfile

    # Install devtoolset devel dependencies
    COPY setup.sources.sh /setup.sources.sh
    COPY setup.packages.sh /setup.packages.sh
    COPY devel.packages.txt /devel.packages.txt
    COPY cuda.packages.txt /cuda.packages.txt
    RUN /setup.sources.sh && /setup.packages.sh /devel.packages.txt
    
    # Install various tools.
    # - bats: bash unit testing framework
    #         NOTE: v1.6.0 seems to have a bug that made "git" in setup_file break
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Mon Jan 08 09:32:19 GMT 2024
    - 4.1K bytes
    - Viewed (1)
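
    The COPY-then-RUN pattern above keeps package lists in plain *.packages.txt files and feeds
    them to a helper script. A plausible sketch of such a helper, assuming a Debian/Ubuntu base
    image; the real setup.packages.sh is not shown in this snippet:

    ```bash
    #!/usr/bin/env bash
    # Plausible sketch only: install every package named in the list file passed
    # as $1, skipping blank lines and '#' comments.
    set -euo pipefail
    apt-get update
    grep -vE '^[[:space:]]*(#|$)' "$1" | xargs apt-get install -y --no-install-recommends
    ```
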
  3. CONTRIBUTING.md

        export flags="--config=opt -k"
        ```
    
        If the tests are to be run on the GPU, add CUDA paths to LD_LIBRARY_PATH and
        add the `cuda` option flag
    
        ```bash
        export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
        export flags="--config=opt --config=cuda -k"
        ```
    
        For example, to run all tests under tensorflow/python, do:
    
        ```bash
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Thu Mar 21 11:45:51 GMT 2024
    - 15.6K bytes
    - Viewed (0)
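
    The snippet is truncated right before the example command, but given the `flags` variable
    defined above, the invocation presumably takes the form below; the exact target pattern is
    an assumption:

    ```bash
    # Hypothetical completion of the truncated example: run every test target
    # under tensorflow/python with the exported flags (-k keeps going after failures).
    bazel test ${flags} //tensorflow/python/...
    ```
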
  4. ci/official/libtensorflow.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    Shell Script
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Fri Jan 19 19:07:48 GMT 2024
    - 1.5K bytes
    - Viewed (0)
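
    The `source "${BASH_SOURCE%/*}/utilities/setup.sh"` line strips the script's filename with a
    parameter expansion, so the sibling setup script is resolved relative to the script's own
    directory rather than the caller's working directory. A minimal standalone illustration:

    ```bash
    #!/usr/bin/env bash
    # ${BASH_SOURCE%/*} removes the shortest trailing "/<name>" component, leaving
    # the directory part of the path this script was invoked as. Note it only
    # works when the invocation path contains a slash (e.g. ./demo.sh).
    echo "This script's directory: ${BASH_SOURCE%/*}"
    ```
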
  5. ci/official/wheel.sh

    # limitations under the License.
    # ==============================================================================
    source "${BASH_SOURCE%/*}/utilities/setup.sh"
    
    # Record GPU count and CUDA version status
    if [[ "$TFCI_NVIDIA_SMI_ENABLE" == 1 ]]; then
      tfrun nvidia-smi
    fi
    
    # Update the version numbers for Nightly only
    if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
    Shell Script
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Wed Mar 06 21:54:13 GMT 2024
    - 1.8K bytes
    - Viewed (0)
  6. ci/official/envs/linux_arm64

    TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_arm64
    # Note: this is not set to "--cpu", because that changes the package name
    # to tensorflow_cpu. These ARM builds are supposed to have the name "tensorflow"
    # despite lacking Nvidia CUDA support.
    TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow"
    TFCI_DOCKER_ENABLE=1
    TFCI_DOCKER_IMAGE=gcr.io/tensorflow-sigs/build-arm64:tf-2-16-multi-python
    TFCI_DOCKER_PULL_ENABLE=1
    Plain Text
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Thu Feb 15 23:12:40 GMT 2024
    - 1.5K bytes
    - Viewed (1)
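
    A hedged illustration of how `TFCI_BUILD_PIP_PACKAGE_ARGS` might be forwarded to a Bazel
    invocation; the target below is a placeholder, not the real pip-package target:

    ```bash
    # Hypothetical: pass the env file's wheel-naming argument through to Bazel.
    # //path/to:pip_package is a placeholder target.
    bazel build ${TFCI_BUILD_PIP_PACKAGE_ARGS} //path/to:pip_package
    ```
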
  7. ci/official/containers/linux_arm64/devel.usertools/aarch64.bazelrc

    # Change the value of CACHEBUSTER when upgrading the toolchain, or when testing
    # different compilation methods. E.g. for a PR to test a new CUDA version, set
    # the CACHEBUSTER to the PR number.
    build --action_env=CACHEBUSTER=20220325
    
    # Use Python 3.X as installed in container image
    build --action_env PYTHON_BIN_PATH="/usr/local/bin/python3"
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Tue Nov 21 12:25:39 GMT 2023
    - 5.8K bytes
    - Viewed (0)
  8. ci/official/containers/linux_arm64/devel.usertools/aarch64_clang.bazelrc

    # Change the value of CACHEBUSTER when upgrading the toolchain, or when testing
    # different compilation methods. E.g. for a PR to test a new CUDA version, set
    # the CACHEBUSTER to the PR number.
    build --action_env=CACHEBUSTER=20220325
    
    # Use Python 3.X as installed in container image
    build --action_env PYTHON_BIN_PATH="/usr/local/bin/python3"
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Tue Nov 21 12:25:39 GMT 2023
    - 6.3K bytes
    - Viewed (0)
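
    Because CACHEBUSTER is only an `--action_env` value, the cache-invalidation trick described
    in the comment can presumably also be applied on the command line, since later flags override
    the .bazelrc value. The PR number and target pattern below are illustrative:

    ```bash
    # Hypothetical: bust the cache for a CUDA-toolchain PR by overriding
    # CACHEBUSTER with the (made-up) PR number.
    bazel build --action_env=CACHEBUSTER=54321 //tensorflow/...
    ```
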
  9. lib/time/zoneinfo.zip

    Australia/Yancowinna Brazil/Acre Brazil/DeNoronha Brazil/East Brazil/West CET CST6CDT Canada/Atlantic Canada/Central Canada/Eastern Canada/Mountain Canada/Newfoundland Canada/Pacific Canada/Saskatchewan Canada/Yukon Chile/Continental Chile/EasterIsland Cuba EET EST EST5EDT Egypt Eire Etc/GMT Etc/GMT+0 Etc/GMT+1 Etc/GMT+10 Etc/GMT+11 Etc/GMT+12 Etc/GMT+2 Etc/GMT+3 Etc/GMT+4 Etc/GMT+5 Etc/GMT+6 Etc/GMT+7 Etc/GMT+8 Etc/GMT+9 Etc/GMT-0 Etc/GMT-1 Etc/GMT-10 Etc/GMT-11 Etc/GMT-12 Etc/GMT-13 Etc/GMT-14 Etc/GMT-2...
    ZIP Archive
    - Registered: Tue Apr 30 11:13:12 GMT 2024
    - Last Modified: Fri Feb 02 18:20:41 GMT 2024
    - 392.3K bytes
    - Viewed (1)
  10. RELEASE.md

    *   Move `layers_dense_variational_impl.py` to `layers_dense_variational.py`.
    
    ## Known Bugs
    
    *   Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or
        `CUDA_ILLEGAL_ADDRESS` failures.
    
        Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA
        9 and CUDA 9.1 sometimes does not properly compute the carry bit when
    Plain Text
    - Registered: Tue May 07 12:40:20 GMT 2024
    - Last Modified: Mon Apr 29 19:17:57 GMT 2024
    - 727.7K bytes
    - Viewed (8)