Results 91 - 100 of 197 for Edges (0.04 sec)
tensorflow/cc/framework/gradients.cc
    auto const& pair = visited.insert(nout.node());
    if (pair.second) {
      queue.push_back(std::make_pair(nout.node(), static_cast<Node*>(nullptr)));
    }
  }
}
// BFS from nodes in 'inputs_' along out edges for the entire graph. Internal
// output nodes are recorded during the traversal. All nodes that are output
// nodes but not internal output nodes are considered the frontier of the
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Apr 13 05:57:22 UTC 2024 - 22K bytes - Viewed (0)
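The comment in this snippet describes a BFS from the input nodes along out edges, with output nodes that lead to further output nodes treated as "internal" and the remaining output nodes forming the frontier. A minimal Python sketch of one plausible reading of that idea (this is not TensorFlow's actual implementation; `succ`, `inputs`, and `outputs` are illustrative names):

```python
from collections import deque

def bfs_reachable(succ, starts):
    """All nodes reachable from `starts` by following out edges in `succ`."""
    seen = set(starts)
    queue = deque(starts)
    while queue:
        node = queue.popleft()
        for nxt in succ.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def output_frontier(succ, inputs, outputs):
    """Outputs reachable from `inputs` that do not lead on to any further
    output node; outputs that do are the 'internal' output nodes."""
    reachable = bfs_reachable(succ, inputs)
    frontier = set()
    for out in set(outputs) & reachable:
        downstream = bfs_reachable(succ, succ.get(out, ()))
        if not (downstream & set(outputs)):
            frontier.add(out)
    return frontier
```

For the chain a -> b -> c with outputs {b, c}, node b leads on to output c and is internal, so only c is on the frontier.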
subprojects/diagnostics/src/main/java/org/gradle/api/tasks/diagnostics/internal/insight/DependencyInsightReporter.java
    return current;
}

private Collection<DependencyEdge> toDependencyEdges(Collection<DependencyResult> dependencies) {
    List<DependencyEdge> edges = CollectionUtils.collect(dependencies, TO_EDGES);
    return DependencyResultSorter.sort(edges, versionSelectorScheme, versionComparator, versionParser);
}
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Mon Dec 11 13:37:56 UTC 2023 - 10.5K bytes - Viewed (0)
tensorflow/compiler/jit/rearrange_function_argument_pass_test.cc
ASSERT_EQ(f1_rewritten->signature().output_arg_size(), 1);
EXPECT_EQ(f1_rewritten->signature().output_arg(0).type(), DT_BOOL);
// Check node "if" input and output edges.
auto node_name_index = g->BuildNodeNameIndex();
const Node *if_node = node_name_index.at("if");
ASSERT_NE(if_node, nullptr);
const Node *input_node;
TF_CHECK_OK(if_node->input_node(1, &input_node));
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Feb 09 11:36:41 UTC 2024 - 10.5K bytes - Viewed (0)
android/guava/src/com/google/common/util/concurrent/CycleDetectingLockFactory.java
 *     locks---to each of the acquired locks, an edge from the soon-to-be-acquired lock is either
 *     verified or created.
 * <li>If a new edge needs to be created, the outgoing edges of the acquired locks are traversed
 *     to check for a cycle that reaches the lock to be acquired. If no cycle is detected, a new
 *     "safe" edge is created.
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Fri Dec 15 19:31:54 UTC 2023 - 35.9K bytes - Viewed (0)
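The Javadoc above describes the core of the cycle check: before creating an edge from the soon-to-be-acquired lock to an already-held lock, traverse the outgoing edges of the held lock; if that traversal can reach the lock being acquired, the new edge would close a cycle. A small Python sketch of that traversal, independent of Guava's actual classes (`out_edges` is an illustrative adjacency map):

```python
def would_create_cycle(out_edges, acquired_lock, new_lock):
    """Depth-first walk from `acquired_lock` along recorded edges; reaching
    `new_lock` means the edge new_lock -> acquired_lock would close a cycle
    (i.e. a potential deadlock in the lock acquisition order)."""
    stack = [acquired_lock]
    seen = set()
    while stack:
        lock = stack.pop()
        if lock == new_lock:
            return True
        if lock in seen:
            continue
        seen.add(lock)
        stack.extend(out_edges.get(lock, ()))
    return False
```

For example, if a thread once acquired B while holding A (recorded edge B -> A), a thread that holds B and tries to acquire A triggers the check and a cycle is detected.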
tensorflow/compiler/jit/encapsulate_xla_computations_pass.h
// We need to introduce this version to adapt to the output of gpu inference
// converter. The single argument overload version calls this function.
//
// When add_edges_to_output_of_downstream_nodes is true, the output edges of
// the xla_launch_node's immediate downstream nodes would be attached to the
// generated xla node. For example, if the original graph is
// StatefulPartitionedCall{_xla_compile_id=1} -> XlaClusterOutput -> NodeA
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Feb 22 06:59:07 UTC 2024 - 3.6K bytes - Viewed (0)
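The rewiring this header describes can be illustrated on a toy edge list: the outgoing edges of the launch node's immediate downstream nodes (the XlaClusterOutput in the example) get re-attached to the generated node. The sketch below is purely illustrative and is not the pass's real logic; `rewire_output_edges` and its parameters are hypothetical names:

```python
def rewire_output_edges(edges, launch_node, new_node):
    """Given edges as (src, dst) pairs, find the immediate downstream nodes
    of `launch_node` and attach *their* outgoing edges to `new_node`, so
    Call -> ClusterOut -> NodeA becomes new_node -> NodeA."""
    downstream = {dst for src, dst in edges if src == launch_node}
    rewired = []
    for src, dst in edges:
        if src == launch_node and dst in downstream:
            continue  # drop the edge into the intermediate output node
        if src in downstream:
            rewired.append((new_node, dst))  # re-attach to the new node
        else:
            rewired.append((src, dst))
    return rewired
```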
tensorflow/compiler/mlir/tf2xla/api/v2/cluster_tf.h
// and transforms the module in place to cluster the given ops for compilation
// that is compatible with the given device_type. The MLIR should be in the TF
// Executor Dialect for graph nodes and edges or be in TF Functional already.
// Individual Op inside a node should be the Tensorflow Functional Dialect. The
// output MLIR is in the TF Functional Dialect. Returns OkStatus if passed,
// otherwise an error.
//
// Inputs:
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Fri Feb 16 23:11:04 UTC 2024 - 2.9K bytes - Viewed (0)
subprojects/diagnostics/src/main/java/org/gradle/api/tasks/diagnostics/internal/graph/DependencyGraphsRenderer.java
        }
        renderer.completeChildren();
    }
}

private void doRender(final RenderableDependency node, boolean last, Set<Object> visited) {
    // Do a shallow render of any constraint edges, and do not mark the node as visited.
    if (node.getResolutionState() == RenderableDependency.ResolutionState.RESOLVED_CONSTRAINT) {
        renderNode(node, last, false, dependenciesRenderer);
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Mon Dec 11 13:37:56 UTC 2023 - 4.7K bytes - Viewed (0)
src/cmd/compile/internal/ssa/compile.go
{"tighten tuple selectors", "schedule"},
// remove critical edges before phi tighten, so that phi args get better placement
{"critical", "phi tighten"},
// don't layout blocks until critical edges have been removed
{"critical", "layout"},
// regalloc requires the removal of all critical edges
{"critical", "regalloc"},
// regalloc requires all the values in a block to be scheduled
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Apr 22 14:55:18 UTC 2024 - 18.6K bytes - Viewed (0)
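The pass-ordering constraints above all revolve around critical edges: an edge from a block with several successors to a block with several predecessors, which must be split (by inserting an empty block) before register allocation can place spills and moves on it. A minimal Python sketch of the splitting step on a toy CFG, not Go's actual `critical` pass (the `u->v` naming scheme for inserted blocks is invented for illustration):

```python
def split_critical_edges(preds, succs):
    """Insert an empty block on every critical edge u -> v, i.e. where u has
    more than one successor and v has more than one predecessor.
    `succs` maps block -> list of successors; `preds` is the reverse map.
    Returns the names of the inserted blocks."""
    new_blocks = []
    for u in list(succs):
        for i, v in enumerate(list(succs[u])):
            if len(succs[u]) > 1 and len(preds.get(v, [])) > 1:
                mid = f"{u}->{v}"          # fresh empty block on the edge
                succs[u][i] = mid          # u now jumps to the new block
                succs[mid] = [v]           # which falls through to v
                preds[v] = [mid if p == u else p for p in preds[v]]
                preds[mid] = [u]
                new_blocks.append(mid)
    return new_blocks
```

In a CFG where A branches to both B and D and D is also reached from B, the edge A -> D is critical and gets an empty block inserted; the edges A -> B and B -> D are left alone.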
tensorflow/cc/framework/scope.h
/// successful. Otherwise, return the error status.
// TODO(josh11b, keveman): Make this faster; right now it converts
// Graph->GraphDef->Graph. This cleans up the graph (e.g. adds
// edges from the source and to the sink node, resolves back edges
// by name), and makes sure the resulting graph is valid.
Status ToGraph(
    Graph* g, GraphConstructorOptions opts = GraphConstructorOptions{}) const;
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Sat Apr 13 09:08:33 UTC 2024 - 10.5K bytes - Viewed (0)
platforms/software/dependency-management/src/main/java/org/gradle/api/internal/artifacts/ivyservice/resolveengine/graph/builder/VirtualPlatformState.java
 * to resolve the platform itself. If the platform was declared as a dependency,
 * then the engine thinks that the platform module is unresolved. We need to
 * remember such edges, because in case a virtual platform gets defined, the error
 * is no longer valid and we can attach the target revision.
 *
 * @param edge the orphan edge
 */
void addOrphanEdge(EdgeState edge) {
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Mon Dec 11 13:37:56 UTC 2023 - 6.2K bytes - Viewed (0)