
Results 1 - 4 of 4 for directionality (0.17 sec)

  1. src/go/types/unify.go

    				// inexactly unifying channel type remains assignable (go.dev/issue/62157).
    				//
    				// If we have multiple defined channel types, they are either identical or we
    				// have assignment conflicts, so we can ignore directionality in this case.
    				//
    				// If we have defined and literal channel types, a defined type wins to avoid
    				// order dependencies.
    				if mode&exact == 0 {
    					switch {
    					case xn:
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Tue Jun 11 16:24:39 UTC 2024
    - 27.9K bytes
    - Viewed (0)
  2. src/cmd/compile/internal/types2/unify.go

    				// inexactly unifying channel type remains assignable (go.dev/issue/62157).
    				//
    				// If we have multiple defined channel types, they are either identical or we
    				// have assignment conflicts, so we can ignore directionality in this case.
    				//
    				// If we have defined and literal channel types, a defined type wins to avoid
    				// order dependencies.
    				if mode&exact == 0 {
    					switch {
    					case xn:
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Tue Jun 11 16:24:39 UTC 2024
    - 27.8K bytes
    - Viewed (0)
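
Results 1 and 2 are the same comment, mirrored between go/types and cmd/compile/internal/types2. As a minimal sketch of the behavior it describes, assuming a toolchain with the fix referenced by go.dev/issue/62157 (Go 1.22 or later): when the same type parameter is matched against a bidirectional and a directed channel, inexact unification keeps the directed channel, so the inferred type remains assignable from both arguments.

    package main

    // pair is a stand-in generic function: both arguments constrain the same
    // type parameter P, so their channel types are unified inexactly.
    func pair[P any](x, y P) P { return x }

    func main() {
    	var bidi chan int
    	var recv <-chan int

    	// Unifying chan int with <-chan int keeps the directed channel:
    	// P is inferred as <-chan int, and chan int remains assignable to
    	// <-chan int, so the call type-checks.
    	_ = pair(bidi, recv)
    }

If the bidirectional channel were chosen instead, the receive-only argument would no longer be assignable to the inferred type, which is the assignability property the comment is protecting.
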
  3. doc/go_spec.html

    </pre>
    
    <p>
    the variable <code>s</code> of type <code>Slice</code> must be assignable to
    the function parameter type <code>S</code> for the program to be valid.
    To reduce complexity, type inference ignores the directionality of assignments,
    so the type relationship between <code>Slice</code> and <code>S</code> can be
    expressed via the (symmetric) type equation <code>Slice ≡<sub>A</sub> S</code>
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Tue Jun 04 21:07:21 UTC 2024
    - 281.5K bytes
    - Viewed (1)
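
The snippet is cut off mid-sentence; the spec passage it comes from builds on the dedup example earlier in that section. A runnable reconstruction, assuming that example (the function body here is illustrative, the spec elides it):

    package main

    // dedup mirrors the signature from the spec's example: S ~[]E, E comparable.
    // The body is illustrative only.
    func dedup[S ~[]E, E comparable](s S) S {
    	seen := make(map[E]bool)
    	out := s[:0:0] // empty slice that keeps the result type S
    	for _, v := range s {
    		if !seen[v] {
    			seen[v] = true
    			out = append(out, v)
    		}
    	}
    	return out
    }

    type Slice []int

    func main() {
    	s := Slice{1, 1, 2}
    	// Only "Slice is assignable to S" is required at the call site, but
    	// inference ignores that directionality and solves the symmetric
    	// equation Slice ≡A S, giving S = Slice and E = int.
    	s = dedup(s) // same as s = dedup[Slice, int](s)
    	_ = s
    }
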
  4. tensorflow/compiler/mlir/lite/transforms/optimize_patterns.td

      // 2. The rank of the input to reshape is <= reshape output.
      // 3. The rank of the output to reshape is <= binary rhs.
      // Conditions 2 and 3 make sure that any required increase in
      // dimensionality due to the reshape op is not lost.
      def RemoveRedundantReshapeUsedAsLhsTo#BinaryOp : Pat<
        (BinaryOp (TFL_ReshapeOp:$lhs $input, (Arith_ConstantOp:$shape $s)),
                  $rhs, $act_fn),
    Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Thu May 16 20:31:41 UTC 2024
    - 66.4K bytes
    - Viewed (0)
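
The TableGen pattern is truncated, but the conditions quoted in its comment reduce to two rank comparisons. A hedged restatement in Go (the helper name and the example ranks are made up for illustration; this is not part of TensorFlow):

    package main

    import "fmt"

    // reshapeIsRedundant restates conditions 2 and 3 from the pattern's comment:
    // rank(reshape input) <= rank(reshape output) <= rank(binary rhs).
    // Per the comment, under these conditions the rank increase provided by the
    // reshape is not lost when the reshape is removed.
    func reshapeIsRedundant(inputRank, reshapeOutRank, rhsRank int) bool {
    	return inputRank <= reshapeOutRank && reshapeOutRank <= rhsRank
    }

    func main() {
    	// A rank-1 input reshaped to rank 2 before combining with a rank-2 rhs:
    	// the conditions hold, so the reshape on the lhs can be dropped.
    	fmt.Println(reshapeIsRedundant(1, 2, 2)) // true
    }
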