Results 1 - 3 of 3 for Developer (0.18 sec)

  1. .bazelrc

    build:cuda_clang --config=cuda
    # Enable TensorRT optimizations https://developer.nvidia.com/tensorrt
    build:cuda_clang --config=tensorrt
    build:cuda_clang --action_env=TF_CUDA_CLANG="1"
    build:cuda_clang --@local_config_cuda//:cuda_compiler=clang
    # Select supported compute capabilities (supported graphics cards).
    # This is the same as the official TensorFlow builds.
    # See https://developer.nvidia.com/cuda-gpus#compute
    Plain Text
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Wed Apr 24 20:50:35 GMT 2024
    - 52.6K bytes
    - Viewed (2)
  2. configure.py

        ask_cuda_compute_capabilities = (
            'Please specify a list of comma-separated CUDA compute capabilities '
            'you want to build with.\nYou can find the compute capability of your '
            'device at: https://developer.nvidia.com/cuda-gpus. Each capability '
            'can be specified as "x.y" or "compute_xy" to include both virtual and'
            ' binary GPU code, or as "sm_xy" to only include the binary '
    Python
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Mon Apr 15 18:25:36 GMT 2024
    - 53.8K bytes
    - Viewed (0)
  3. RELEASE.md

            `tf.distribute.experimental.MultiWorkerMirroredStrategy`
        *   Update NVIDIA `NCCL` to `2.5.7-1` for better performance and performance
            tuning. Please see
            [nccl developer guide](https://docs.nvidia.com/deeplearning/sdk/nccl-developer-guide/docs/env.html)
            for more information on this.
        *   Support gradient `allreduce` in `float16`. See this
    Plain Text
    - Registered: Tue Apr 30 12:39:09 GMT 2024
    - Last Modified: Mon Apr 29 19:17:57 GMT 2024
    - 727.7K bytes
    - Viewed (8)
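The `cuda_clang` config in result 1 (.bazelrc) chains the `cuda` and `tensorrt` configs and builds CUDA code with Clang; the supported compute capabilities are supplied separately at configure time. Below is a minimal sketch of driving such a build from Python. The Bazel target label, the capability list, and the use of the `TF_CUDA_COMPUTE_CAPABILITIES` environment variable to carry it are illustrative assumptions, not lines taken from the indexed file.

    # Sketch: invoke a Bazel build using the cuda_clang config shown in result 1.
    # The target label and the compute-capability list are assumptions for
    # illustration; adjust them to your checkout and GPU.
    import os
    import subprocess

    def build_with_cuda_clang(
        target: str = "//tensorflow/tools/pip_package:build_pip_package",
        capabilities: str = "compute_70,compute_80",
    ) -> None:
        env = dict(os.environ)
        # TensorFlow's configure step normally records this value; exporting it
        # here directly is an assumption about how the list reaches the build.
        env["TF_CUDA_COMPUTE_CAPABILITIES"] = capabilities
        subprocess.run(
            ["bazel", "build", "--config=cuda_clang", target],
            check=True,
            env=env,
        )

    if __name__ == "__main__":
        build_with_cuda_clang()

In a normal TensorFlow checkout these values are usually written by the configure script (result 2) rather than exported by hand.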
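The prompt in result 2 (configure.py) accepts each capability as "x.y", "compute_xy" (virtual plus binary GPU code), or "sm_xy" (binary only). Below is a minimal sketch of validating a comma-separated list in those three formats; the helper name and the regular expression are illustrative, not code from the indexed file.

    import re

    # Accepts "x.y" (e.g. 7.5), "compute_xy" (virtual + binary), or "sm_xy"
    # (binary only), mirroring the formats described in the configure.py prompt.
    _CAPABILITY_RE = re.compile(r"^(\d+\.\d+|compute_\d{2,}|sm_\d{2,})$")

    def parse_compute_capabilities(raw: str) -> list[str]:
        """Split a comma-separated capability list and reject malformed entries."""
        caps = [c.strip() for c in raw.split(",") if c.strip()]
        bad = [c for c in caps if not _CAPABILITY_RE.match(c)]
        if bad:
            raise ValueError(f"Invalid compute capabilities: {bad}")
        return caps

    # Example: all three accepted forms.
    print(parse_compute_capabilities("7.5,compute_80,sm_86"))
    # -> ['7.5', 'compute_80', 'sm_86']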