Results 1 - 7 of 7 for Edges (0.06 sec)
platforms/software/dependency-management/src/main/java/org/gradle/api/internal/artifacts/ivyservice/resolveengine/graph/builder/NodeState.java
// If none of the incoming edges are transitive, remove previous state and do not traverse.
// If not traversed before, simply add all selected outgoing edges (either hard or pending edges)
// If traversed before:
//   If net exclusions for this node have not changed, ignore
//   If net exclusions for this node have changed, remove previous state and traverse outgoing edges again.
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Fri Jun 07 14:19:34 UTC 2024 - 58.9K bytes - Viewed (0) -
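The traversal rules in the NodeState comment above can be sketched as a small decision function. This is a minimal illustration of the four cases, with hypothetical names; it is not Gradle's actual API.

```python
# Sketch of the outgoing-edge traversal decision described in the
# NodeState comment above. Names are illustrative, not Gradle's API.

def decide_traversal(incoming_edges, traversed_before,
                     previous_exclusions, current_exclusions):
    """Return one of 'skip', 'traverse', 'ignore', 'retraverse'."""
    # If none of the incoming edges are transitive, remove previous
    # state and do not traverse.
    if not any(e["transitive"] for e in incoming_edges):
        return "skip"
    # If not traversed before, simply add all selected outgoing edges.
    if not traversed_before:
        return "traverse"
    # Traversed before and net exclusions unchanged: nothing to do.
    if previous_exclusions == current_exclusions:
        return "ignore"
    # Exclusions changed: remove previous state and traverse again.
    return "retraverse"
```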
tensorflow/compiler/mlir/tensorflow/transforms/sparsecore/embedding_pipelining.cc
// that was extracted.
// Find the input edges to form the set of operands to the new function call.
llvm::SetVector<Value> inputs;
for (Operation* op : ops) {
  for (Value operand : op->getOperands()) {
    Operation* defining_op = operand.getDefiningOp();
    if (!ops.contains(defining_op)) inputs.insert(operand);
  }
}
// Find the output edges to form the set of results of the new function call.
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 92.9K bytes - Viewed (0) -
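The snippet above computes the cut edges of a subgraph being outlined into a new function: operands defined outside the op set become call operands, and results used outside it become call results. A minimal sketch of the same idea, using plain dicts in place of MLIR operations (all names here are illustrative):

```python
# Sketch of the input/output-edge computation from the snippet above.
# Each op is a dict with 'operands' and 'results' lists of value names;
# this is an illustration, not the MLIR API.

def subgraph_io(ops, all_ops):
    """Return (inputs, outputs) for extracting `ops` out of `all_ops`."""
    produced = {v for op in ops for v in op["results"]}
    # Input edges: operands whose defining op lies outside the set.
    inputs = []
    for op in ops:
        for v in op["operands"]:
            if v not in produced and v not in inputs:
                inputs.append(v)
    # Output edges: results consumed by ops outside the set.
    outside = [op for op in all_ops if not any(op is o for o in ops)]
    consumed_outside = {v for op in outside for v in op["operands"]}
    outputs = [v for op in ops for v in op["results"] if v in consumed_outside]
    return inputs, outputs
```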
tensorflow/compiler/mlir/lite/flatbuffer_import.cc
    context, {mlir::StringAttr::get(context, signature->signature_key)}));
}
// There are control nodes at each end of each control edge. For each of them,
// we store the source vertices of the incoming edges (if any) and the control
// node's output token. To improve testability, we use an ordered set for the
// source vertices.
struct ControlNodeDesc {
  std::set<int> incoming;
  std::optional<mlir::Value> outgoing;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Tue May 21 18:21:50 UTC 2024 - 66.8K bytes - Viewed (0) -
src/cmd/compile/internal/ssa/debug.go
// Otherwise, it is ignored.
GetPC func(block, value ID) int64
}

type BlockDebug struct {
	// State at the start and end of the block. These are initialized,
	// and updated from new information that flows on back edges.
	startState, endState abt.T

	// Use these to avoid excess work in the merge. If none of the
	// predecessors has changed since the last check, the old answer is
	// still good.
	lastCheckedTime, lastChangedTime int32
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Jun 10 19:44:43 UTC 2024 - 58.4K bytes - Viewed (0) -
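The lastCheckedTime/lastChangedTime pair in BlockDebug above implements a common fixed-point optimization: skip recomputing a block's merged state when no predecessor has changed since the last check. A minimal sketch of that pattern (illustrative only, not the ssa package's code):

```python
# Sketch of the "skip the merge if no predecessor changed" idea from
# BlockDebug above. Timestamps stand in for lastChecked/lastChanged.

class Block:
    def __init__(self):
        self.state = None
        self.last_checked = -1
        self.last_changed = -1

def merge(block, preds, clock, compute):
    # If no predecessor has changed since we last checked,
    # the old answer is still good.
    if all(p.last_changed <= block.last_checked for p in preds):
        block.last_checked = clock
        return block.state
    new_state = compute(preds)
    if new_state != block.state:
        block.state = new_state
        block.last_changed = clock
    block.last_checked = clock
    return new_state
```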
tensorflow/c/c_api_test.cc
ASSERT_EQ(1, TF_OperationNumInputs(neg));
TF_Output neg_input = TF_OperationInput({neg, 0});
EXPECT_EQ(scalar, neg_input.oper);
EXPECT_EQ(0, neg_input.index);

// Test that we can't see control edges involving the source and sink nodes.
TF_Operation* control_ops[100];
EXPECT_EQ(0, TF_OperationNumControlInputs(scalar));
EXPECT_EQ(0, TF_OperationGetControlInputs(scalar, control_ops, 100));
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Mon Apr 15 03:35:10 UTC 2024 - 96.9K bytes - Viewed (0) -
src/cmd/go/internal/test/test.go
// pass as meta-data file for "a" (emitted during the
// package "a" build) to the package "c" run action, so
// that it can be incorporated with "c"'s regular
// metadata. To do this, we add edges from each compile
// action to a "writeCoverMeta" action, then from the
// writeCoverMeta action to each run action. Updated
// graph:
//
//	build("a")  build("b")  build("c")
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 16 14:34:32 UTC 2024 - 71.9K bytes - Viewed (0) -
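The comment above describes rewiring the action graph so every compile action feeds a single writeCoverMeta action, which in turn feeds every run action. A tiny sketch of that fan-in/fan-out edge addition (an illustrative toy graph, not cmd/go's Action type):

```python
# Sketch of the edge rewiring described above: each compile action's
# coverage meta-data is routed through one writeCoverMeta action
# before every run action. Toy edge set, not cmd/go's structures.

builds = ["build(a)", "build(b)", "build(c)"]
runs = ["run(a)", "run(b)", "run(c)"]
meta = "writeCoverMeta"

edges = set()
for b in builds:
    edges.add((b, meta))   # each compile feeds the meta writer
for r in runs:
    edges.add((meta, r))   # the meta writer feeds each run
```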
src/runtime/mheap.go
extraPages := physPageSize / pageSize

// Find a big enough region first, but then only allocate the
// aligned portion. We can't just allocate and then free the
// edges because we need to account for scavenged memory, and
// that's difficult with alloc.
//
// Note that we skip updates to searchAddr here. It's OK if
// it's stale and higher than normal; it'll operate correctly,
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 22 22:31:00 UTC 2024 - 78K bytes - Viewed (0)
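The mheap snippet above uses a standard over-allocate-then-align trick: request enough extra pages that an aligned run of the desired size is guaranteed to fit inside the found region, then use only the aligned portion. A minimal sketch of the arithmetic (illustrative names, with the region finder stubbed out):

```python
# Sketch of the over-allocate-then-align trick from mheap.go above:
# search for `npages + align_pages - 1` pages so that an aligned run
# of `npages` pages must fit, then round the start up to alignment.
# `find_region` is a hypothetical stand-in for the page allocator.

def alloc_aligned(find_region, npages, align_pages):
    # Find a big enough region first, but only use the aligned portion;
    # the unaligned edges are left alone.
    base = find_region(npages + align_pages - 1)
    start = -(-base // align_pages) * align_pages  # round up to alignment
    return start
```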