Results 1 - 2 of 2 for PRECISION (0.08 sec)

  1. RELEASE.md

    *   TF Core:
    
        *   Certain float32 ops run in lower precision on Ampere based GPUs,
            including matmuls and convolutions, due to the use of
            [TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/).
            Specifically, inputs to such ops are rounded from 23 bits of precision
            to 10 bits of precision. This is unlikely to cause issues in practice
    Registered: Tue Nov 05 12:39:12 UTC 2024
    - Last Modified: Tue Oct 22 14:33:53 UTC 2024
    - 735.3K bytes
    - Viewed (0)
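    The RELEASE.md excerpt above notes that Ampere GPUs round float32 matmul
    and convolution inputs from 23 bits of mantissa down to 10 when
    TensorFloat-32 is in use. A minimal sketch of opting out of that behavior,
    assuming TensorFlow 2.4 or later (the tf.config.experimental calls are
    standard TensorFlow API, not part of the excerpt itself):

    ```python
    import tensorflow as tf

    # TensorFloat-32 defaults to enabled on Ampere and newer GPUs.
    print(tf.config.experimental.tensor_float_32_execution_enabled())

    # Disable TF32 so matmuls and convolutions keep the full 23-bit
    # float32 mantissa, trading away the Ampere TF32 speedup.
    tf.config.experimental.enable_tensor_float_32_execution(False)

    # Ops built after this point run in standard float32 precision.
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.linalg.matmul(a, b)
    ```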
  2. src/main/webapp/css/admin/bootstrap.min.css.map

    {
          $remainder: $remainder - $divisor;
          $quotient: $quotient + 1;
        }
        $result: $result * 10 + $quotient;
        $factor: $factor * .1;
        $remainder: $remainder * 10;
        $precision: $precision - 1;
        @if ($precision < 0 and $remainder >= $divisor * 5) {
          $result: $result + 1;
        }
      }
      $result: $result * $factor * $sign;
      $dividend-unit: unit($dividend);
      $divisor-unit: unit($divisor);
      $unit-map: (
        "px": 1px,
        "rem": 1rem,
        "em": 1em,
    ...
    Registered: Thu Oct 31 13:40:30 UTC 2024
    - Last Modified: Sat Oct 26 01:49:09 UTC 2024
    - 639.3K bytes
    - Viewed (0)
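    The bootstrap.min.css.map excerpt is the tail of a Sass helper that divides
    two numbers digit by digit to a fixed decimal precision, rounding the last
    digit up when the remaining fraction is at least half the divisor. A
    plain-Python sketch of that loop, assuming the surrounding function takes a
    dividend, a divisor, and a precision argument (the name `divide` and the
    defaults below are illustrative, not taken from the file):

    ```python
    def divide(dividend: float, divisor: float, precision: int = 10) -> float:
        """Digit-by-digit division to a fixed number of decimal digits,
        rounding the final digit half-up."""
        if divisor == 0:
            raise ZeroDivisionError("Cannot divide by 0")
        sign = 1 if (dividend >= 0) == (divisor >= 0) else -1
        remainder, divisor = abs(dividend), abs(divisor)
        result, factor = 0, 10.0
        while remainder > 0 and precision >= 0:
            quotient = 0
            while remainder >= divisor:      # extract the next digit
                remainder -= divisor
                quotient += 1
            result = result * 10 + quotient  # append the digit
            factor *= 0.1                    # shift the decimal point
            remainder *= 10
            precision -= 1
            if precision < 0 and remainder >= divisor * 5:
                result += 1                  # round the last digit up
        return result * factor * sign

    print(divide(10, 3, precision=4))  # roughly 3.3333
    print(divide(5, 2))                # 2.5
    ```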