Results 1 - 10 of 11 for x_1 (0.04 sec)
tensorflow/cc/gradients/math_grad.cc
auto x_1 = ConjugateHelper(scope, op.input(0));
auto x_2 = ConjugateHelper(scope, op.input(1));
// y = (x_1 - x_2)^2
// dy/dx_1 = 2 * (x_1 - x_2)
// dy/dx_2 = -2 * (x_1 - x_2)
auto two = Cast(scope, Const(scope, 2), grad_inputs[0].type());
auto gx_1 = Mul(scope, grad_inputs[0], Mul(scope, two, Sub(scope, x_1, x_2)));
auto gx_2 = Neg(scope, gx_1);
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Aug 25 18:20:20 UTC 2023 - 50.7K bytes - Viewed (0) -
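The analytic gradients in the snippet above can be sanity-checked outside TensorFlow. A minimal sketch in plain Python (the helper names `y` and `num_grad` are illustrative, not part of the TensorFlow API), comparing dy/dx_1 = 2*(x_1 - x_2) and dy/dx_2 = -2*(x_1 - x_2) against central finite differences:

```python
def y(x1, x2):
    # The forward function whose gradient the snippet registers.
    return (x1 - x2) ** 2

def num_grad(f, a, b, wrt, h=1e-6):
    # Central finite difference with respect to argument 0 or 1.
    if wrt == 0:
        return (f(a + h, b) - f(a - h, b)) / (2 * h)
    return (f(a, b + h) - f(a, b - h)) / (2 * h)

x1, x2 = 3.0, 1.5
gx1 = 2 * (x1 - x2)    # analytic dy/dx_1
gx2 = -2 * (x1 - x2)   # analytic dy/dx_2, the negation of gx1
assert abs(num_grad(y, x1, x2, 0) - gx1) < 1e-4
assert abs(num_grad(y, x1, x2, 1) - gx2) < 1e-4
```

The negation relationship is why the C++ code computes `gx_2` simply as `Neg(scope, gx_1)`.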
tensorflow/cc/ops/const_op_test.cc
auto c_1 = ops::Const(root, {{2.0}, {3.0}});
EXPECT_EQ(c_1.node()->name(), "Const_1");
auto x = ops::Const(root.WithOpName("x"), 1);
EXPECT_EQ(x.node()->name(), "x");
auto x_1 = ops::Const(root.WithOpName("x"), 1);
EXPECT_EQ(x_1.node()->name(), "x_1");
Scope child = root.NewSubScope("c");
auto c_y = ops::Const(child.WithOpName("y"), 1);
EXPECT_EQ(c_y.node()->name(), "c/y");
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Aug 12 14:38:21 UTC 2019 - 4.9K bytes - Viewed (0) -
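The behavior the test exercises is name uniquification: a scope appends "_1", "_2", ... to duplicate op names, and a subscope prefixes names with "scope/". A minimal Python sketch of that behavior (this is an illustration of the semantics shown in the assertions, not the actual TensorFlow `Scope` implementation):

```python
class Scope:
    """Toy scope that hands out unique, prefix-qualified node names."""

    def __init__(self, prefix=""):
        self.prefix = prefix
        self.counts = {}  # full name -> number of times requested

    def unique_name(self, name):
        full = self.prefix + name
        n = self.counts.get(full, 0)
        self.counts[full] = n + 1
        # First request keeps the plain name; later ones get _1, _2, ...
        return full if n == 0 else f"{full}_{n}"

    def sub_scope(self, name):
        child = Scope(self.unique_name(name) + "/")
        child.counts = self.counts  # share the counter table with the parent
        return child

root = Scope()
assert root.unique_name("x") == "x"
assert root.unique_name("x") == "x_1"
child = root.sub_scope("c")
assert child.unique_name("y") == "c/y"
```

This mirrors the test above: the second `Const` named "x" becomes "x_1", and a node named "y" in subscope "c" becomes "c/y".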
src/crypto/internal/edwards25519/scalarmult.go
// as described in the Ed25519 paper
//
// Group even and odd coefficients
// x*B = x_0*16^0*B + x_2*16^2*B + ... + x_62*16^62*B
//     + x_1*16^1*B + x_3*16^3*B + ... + x_63*16^63*B
// x*B = x_0*16^0*B + x_2*16^2*B + ... + x_62*16^62*B
//     + 16*( x_1*16^0*B + x_3*16^2*B + ... + x_63*16^62*B)
//
// We use a lookup table for each i to get x_i*16^(2*i)*B
// and do four doublings to multiply by 16.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 05 21:53:10 UTC 2022 - 6.3K bytes - Viewed (0) -
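The even/odd regrouping in the comment above is a pure digit identity, so it can be checked numerically with plain integers standing in for scalar multiples of the base point B (an assumption for illustration; the real code works over the curve group). The scalar splits into its even-indexed base-16 digits plus 16 times the odd-indexed digits with each exponent shifted down by one:

```python
x = 0x123456789ABCDEF0  # an arbitrary 64-bit scalar for the check

# Base-16 digits x_0 .. x_15 (low nibble first).
digits = [(x >> (4 * i)) & 0xF for i in range(16)]

# Even-indexed digits keep their powers of 16.
even = sum(d * 16**i for i, d in enumerate(digits) if i % 2 == 0)
# Odd-indexed digits drop one power of 16 each ...
odd = sum(d * 16**(i - 1) for i, d in enumerate(digits) if i % 2 == 1)

# ... so one multiplication by 16 restores them, as in the comment.
assert even + 16 * odd == x
```

This is exactly why four doublings (one multiplication by 16) applied to the odd-group accumulator suffice before adding the even group.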
tensorflow/cc/gradients/math_grad_test.cc
}

TEST_F(NaryGradTest, Div) {
  TensorShape x_shape({3, 2, 5});
  auto x = Placeholder(scope_, DT_FLOAT, Placeholder::Shape(x_shape));
  // Test x / (1 + |x|) rather than x_1 / x_2 to avoid triggering large
  // division errors in the numeric estimator used by the gradient checker.
  auto y = Div(scope_, x, Add(scope_, Const<float>(scope_, 1), Abs(scope_, x)));
  RunTest({x}, {x_shape}, {y}, {x_shape});
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Aug 25 18:20:20 UTC 2023 - 36K bytes - Viewed (0) -
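The comment's rationale can be illustrated numerically: y = x / (1 + |x|) has derivative 1 / (1 + |x|)^2, which is bounded by 1 everywhere, so a finite-difference estimate stays well conditioned; a raw quotient x_1 / x_2 has unbounded derivatives as x_2 approaches 0. A plain-Python sketch (not the C++ gradient checker itself):

```python
def f(x):
    # The well-conditioned test function from the snippet.
    return x / (1 + abs(x))

def num_deriv(f, x, h=1e-6):
    # Central-difference estimate, as a numeric gradient checker would use.
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, -0.5, 0.5, 3.0):  # skip 0, where |x| is not differentiable
    analytic = 1 / (1 + abs(x)) ** 2
    assert abs(num_deriv(f, x) - analytic) < 1e-5
    assert analytic <= 1.0  # the bound that keeps the estimator stable
```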
internal/s3select/sql/parser_test.go
p := participle.MustBuild(
	&ObjectKey{},
	participle.Lexer(sqlLexer),
	participle.CaseInsensitive("Keyword"),
)
validCases := []string{
	"['abc']",
	"['ab''c']",
	"['a''b''c']",
	"['abc-x_1##@(*&(#*))/\\']",
}
for i, tc := range validCases {
	err := p.ParseString(tc, &k)
	if err != nil {
		t.Fatalf("%d: %v", i, err)
	}
	if string(*k.Lit) == "" {
		t.Fatalf("Incorrect parse %#v", k)
Registered: Sun Jun 16 00:44:34 UTC 2024 - Last Modified: Thu Jan 18 07:03:17 UTC 2024 - 9.2K bytes - Viewed (0) -
tensorflow/cc/framework/gradient_checker.cc
// every pair y_i in y and x_j in x. Note that the Jacobian is defined directly
// over the elements of tensors y and x, and doesn't depend on their shapes.
//
// If x = (x_1, x_2, ..., x_m) and y = (y_1, y_2, ..., y_n) the matrix evaluated
// is actually the Jacobian transpose, defined as this m x n matrix:
//   dy_1/dx_1  dy_2/dx_1  ...  dy_n/dx_1
//   dy_1/dx_2  dy_2/dx_2  ...  dy_n/dx_2
//   .
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Apr 13 05:57:22 UTC 2024 - 18.2K bytes - Viewed (0) -
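The m x n transposed layout described above can be sketched numerically: entry (i, j) holds dy_j/dx_i, estimated by central differences. The function `f` below is chosen purely for illustration (it is not from the TensorFlow source):

```python
def f(x):
    x1, x2, x3 = x             # m = 3 inputs
    return [x1 * x2, x2 + x3]  # n = 2 outputs

def jacobian_transpose(f, x, h=1e-6):
    # Build the m x n matrix jt with jt[i][j] ~= dy_j / dx_i.
    m, n = len(x), len(f(x))
    jt = [[0.0] * n for _ in range(m)]
    for i in range(m):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        yp, ym = f(xp), f(xm)
        for j in range(n):
            jt[i][j] = (yp[j] - ym[j]) / (2 * h)
    return jt

jt = jacobian_transpose(f, [2.0, 3.0, 4.0])
# Analytically: dy_1/dx_1 = x_2 = 3, dy_2/dx_3 = 1, dy_2/dx_1 = 0.
assert abs(jt[0][0] - 3.0) < 1e-4
assert abs(jt[2][1] - 1.0) < 1e-4
assert abs(jt[0][1]) < 1e-4
```

Note the shape: rows are indexed by the inputs x_i and columns by the outputs y_j, matching the transposed matrix in the comment.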
tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.td
For example, the code

  x.a()
  x.b()
  %c = y.c()
  x.d(%c)

would be transformed into something like

  call @x_1()
  %c = call @y_1()
  call @x_2(%c)

with @x_1, @x_2 and @y_1 filled in.
}];
let constructor = "TF::CreateGroupByDialectPass()";
}

def RemoveUnusedArgumentsPass : Pass<"tf-remove-unused-arguments", "ModuleOp"> {
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Jun 12 21:18:05 UTC 2024 - 99.6K bytes - Viewed (0) -
tensorflow/compiler/mlir/g3doc/_includes/tf_passes.md
"top" function is configurable. For example, the code

    x.a()
    x.b()
    %c = y.c()
    x.d(%c)

would be transformed into something like

    call @x_1()
    %c = call @y_1()
    call @x_2(%c)

with @x_1, @x_2 and @y_1 filled in.

### `-tf-guarantee-all-funcs-one-use`

_Guarantee all FuncOp's have only a single use._

### `-tf-hoist-loop-invariant`
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Aug 02 02:26:39 UTC 2023 - 96.4K bytes - Viewed (0) -
src/cmd/compile/internal/ssa/rewritegeneric.go
for {
	if v_0.Op != OpTrunc32to16 {
		break
	}
	x := v_0.Args[0]
	if x.Op != OpRsh32x64 {
		break
	}
	_ = x.Args[1]
	x_1 := x.Args[1]
	if x_1.Op != OpConst64 {
		break
	}
	s := auxIntToInt64(x_1.AuxInt)
	if !(s >= 16) {
		break
	}
	v.copyOf(x)
	return true
}
return false
}
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Apr 22 18:24:47 UTC 2024 - 812.2K bytes - Viewed (0) -
tensorflow/compiler/mlir/tf2xla/transforms/legalize_tf.cc
// `ReduceWindowOp` with `AddOp` as body.
//
// Example:
// Let f : R^4 -> R^2 be an average pool function with window size 3, stride 2,
// and SAME padding with 0's. It is defined by
//   f(x) = [ (x_1 + x_2 + x_3) / 3 ]   ( x = (x_1, x_2, x_3, x_4) )
//          [ (x_3 + x_4 + 0) / 2 ]     (the 0 results from right padding)
// Note that for SAME padding in `AvgPool` the padded entries are not counted
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Jun 11 20:00:43 UTC 2024 - 291.8K bytes - Viewed (0)
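The example above can be reproduced with a small 1-D sketch (assumptions: 1-D input, zeros-padded SAME layout, and padded entries excluded from the divisor, as the comment states; this is illustrative, not the MLIR lowering itself):

```python
import math

def avg_pool_same(x, window=3, stride=2):
    # SAME padding: output length is ceil(n / stride); pad the remainder,
    # splitting it left-biased-low as in the example (all padding on the right).
    n = len(x)
    out_len = math.ceil(n / stride)
    pad_total = max((out_len - 1) * stride + window - n, 0)
    pad_left = pad_total // 2
    out = []
    for o in range(out_len):
        start = o * stride - pad_left
        # Keep only in-bounds entries: padded zeros are not counted.
        vals = [x[i] for i in range(start, start + window) if 0 <= i < n]
        out.append(sum(vals) / len(vals))
    return out

x = [1.0, 2.0, 3.0, 4.0]
# First window: (x_1 + x_2 + x_3) / 3; second: (x_3 + x_4) / 2.
assert avg_pool_same(x) == [(1 + 2 + 3) / 3, (3 + 4) / 2]
```

With window 3, stride 2, and input length 4, one element of right padding is needed, and the second window's divisor is 2 rather than 3, exactly as in the comment's f(x).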