Results 11 - 17 of 17 for Edges (0.05 sec)
tensorflow/compiler/jit/deadness_analysis.cc
```cpp
// node in `should_revisit` denotes that the deadness flowing out from any
// output from said node may have changed. This is fine; only switches
// propagate different deadness along different output edges, and since the
// delta is solely due to the input *values* (and not input deadness), the
// delta should not change in the second iteration.
std::vector<bool> should_revisit;
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue Mar 12 06:33:33 UTC 2024 - 60.4K bytes
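The `should_revisit` vector in the snippet above is the classic fixed-point worklist pattern: a node is re-enqueued only when the deadness flowing out of one of its outputs actually changes. A minimal sketch of that pattern, using hypothetical graph types rather than the TensorFlow implementation:

```go
package main

import "fmt"

// node is a hypothetical graph node; preds are its input edges.
type node struct {
	id    int
	preds []*node
	dead  bool // current deadness estimate
}

// recompute derives a node's deadness from its predecessors:
// in this toy lattice, a node is dead if any predecessor is dead.
func recompute(n *node) bool {
	for _, p := range n.preds {
		if p.dead {
			return true
		}
	}
	return false
}

// propagate runs the worklist to a fixed point, re-enqueueing a
// successor only when its input's deadness actually changed.
func propagate(nodes []*node, succs map[int][]*node) {
	shouldRevisit := make([]bool, len(nodes))
	work := append([]*node(nil), nodes...)
	for len(work) > 0 {
		n := work[len(work)-1]
		work = work[:len(work)-1]
		shouldRevisit[n.id] = false
		if len(n.preds) == 0 {
			continue // source nodes keep their given deadness
		}
		if d := recompute(n); d != n.dead {
			n.dead = d
			for _, s := range succs[n.id] {
				if !shouldRevisit[s.id] {
					shouldRevisit[s.id] = true
					work = append(work, s)
				}
			}
		}
	}
}

func main() {
	a := &node{id: 0, dead: true}
	b := &node{id: 1, preds: []*node{a}}
	c := &node{id: 2, preds: []*node{b}}
	propagate([]*node{a, b, c}, map[int][]*node{0: {b}, 1: {c}})
	fmt.Println(b.dead, c.dead)
}
```

The `shouldRevisit` bitmap keeps a node from being enqueued twice while it is already pending, which is what bounds the work per deadness change.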
src/cmd/compile/internal/ssa/debug.go
```go
	// Otherwise, it is ignored.
	GetPC func(block, value ID) int64
}

type BlockDebug struct {
	// State at the start and end of the block. These are initialized,
	// and updated from new information that flows on back edges.
	startState, endState abt.T

	// Use these to avoid excess work in the merge. If none of the
	// predecessors has changed since the last check, the old answer is
	// still good.
	lastCheckedTime, lastChangedTime int32
```
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Jun 10 19:44:43 UTC 2024 - 58.4K bytes
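The `lastCheckedTime`/`lastChangedTime` pair in that snippet is a memoization guard: the merge over predecessors is recomputed only if some predecessor changed after this block was last checked. A hedged sketch of the guard, with a toy `blockState` standing in for the real `BlockDebug`:

```go
package main

import "fmt"

// blockState is a hypothetical stand-in for ssa.BlockDebug's
// bookkeeping fields; it is not the real compiler type.
type blockState struct {
	lastCheckedTime, lastChangedTime int32
}

var mergeCount int

// mergePredecessors redoes the (expensive) merge only if some
// predecessor changed after our last check; otherwise the old
// answer is still good.
func mergePredecessors(b *blockState, preds []*blockState, now int32) {
	stale := false
	for _, p := range preds {
		if p.lastChangedTime > b.lastCheckedTime {
			stale = true
			break
		}
	}
	b.lastCheckedTime = now
	if !stale {
		return // cached merge result is still valid
	}
	mergeCount++ // stand-in for the real merge work
	b.lastChangedTime = now
}

func main() {
	p := &blockState{lastChangedTime: 5}
	b := &blockState{lastCheckedTime: 3}
	mergePredecessors(b, []*blockState{p}, 10) // p changed at 5 > 3: merge
	mergePredecessors(b, []*blockState{p}, 11) // nothing new since 10: skip
	fmt.Println(mergeCount)
}
```

On back edges this matters because the same block can be visited many times before the state reaches a fixed point; the timestamps make the repeat visits cheap.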
tensorflow/c/c_api.h
```cpp
// the body. In particular, it is an error to have a control edge going from
// a node outside of the body into a node in the body. This applies to control
// edges going from nodes referenced in `inputs` to nodes in the body when
// the former nodes are not in the body (automatically skipped or not
// included in explicitly specified body).
//
// Returns:
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Oct 26 21:08:15 UTC 2023 - 82.3K bytes
tensorflow/c/c_api_test.cc
```cpp
ASSERT_EQ(1, TF_OperationNumInputs(neg));
TF_Output neg_input = TF_OperationInput({neg, 0});
EXPECT_EQ(scalar, neg_input.oper);
EXPECT_EQ(0, neg_input.index);

// Test that we can't see control edges involving the source and sink nodes.
TF_Operation* control_ops[100];
EXPECT_EQ(0, TF_OperationNumControlInputs(scalar));
EXPECT_EQ(0, TF_OperationGetControlInputs(scalar, control_ops, 100));
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Apr 15 03:35:10 UTC 2024 - 96.9K bytes
src/cmd/go/internal/test/test.go
```go
// pass as meta-data file for "a" (emitted during the
// package "a" build) to the package "c" run action, so
// that it can be incorporated with "c"'s regular
// metadata. To do this, we add edges from each compile
// action to a "writeCoverMeta" action, then from the
// writeCoverMeta action to each run action. Updated
// graph:
//
//	build("a")  build("b")  build("c")
```
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 16 14:34:32 UTC 2024 - 71.9K bytes
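The coverage wiring described in that comment (every compile action feeding one `writeCoverMeta` action, which in turn feeds every run action) can be sketched with a toy dependency graph. The `action` type here is a hypothetical stand-in, not cmd/go's real `work.Action`:

```go
package main

import "fmt"

// action is a hypothetical stand-in for cmd/go's work.Action:
// deps are the actions that must complete before this one runs.
type action struct {
	name string
	deps []*action
}

func main() {
	// Compile and run actions for packages a, b, c.
	builds := []*action{{name: `build("a")`}, {name: `build("b")`}, {name: `build("c")`}}
	runs := []*action{{name: `run("a")`}, {name: `run("b")`}, {name: `run("c")`}}

	// One writeCoverMeta action depends on every build, and every run
	// action depends on it, so each test run can see the meta-data
	// files emitted by all three builds.
	wcm := &action{name: "writeCoverMeta", deps: builds}
	for _, r := range runs {
		r.deps = append(r.deps, wcm)
	}

	fmt.Println(len(wcm.deps), len(runs[2].deps))
}
```

Funneling through one intermediate node keeps the edge count at builds+runs instead of builds×runs while still ordering every run after every build.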
src/runtime/mheap.go
```go
extraPages := physPageSize / pageSize

// Find a big enough region first, but then only allocate the
// aligned portion. We can't just allocate and then free the
// edges because we need to account for scavenged memory, and
// that's difficult with alloc.
//
// Note that we skip updates to searchAddr here. It's OK if
// it's stale and higher than normal; it'll operate correctly,
```
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 22 22:31:00 UTC 2024 - 78K bytes
tensorflow/compiler/mlir/g3doc/_includes/tf_passes.md
```markdown
### `-tf-tpu-colocate-splits`

_Colocates each Split op with its predecessor_

It is beneficial for performance to assign a `Split` op to the same device as
its predecessor. This is because the weight of cut edges is always minimized
when the `Split` is with its predecessor. This colocation constraint will be
used by the placer graph optimization to assign a device to the op.
```
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Wed Aug 02 02:26:39 UTC 2023 - 96.4K bytes