Results 91 - 100 of 174 for onStop (0.22 sec)
src/runtime/asm_mipsx.s
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon May 06 11:46:29 UTC 2024 - 26.3K bytes - Viewed (0) -
cni/README.md
This component accomplishes that in the following ways:

1. By installing a separate, very basic "CNI plugin" binary onto the node to forward low-level pod lifecycle events (CmdAdd/CmdDel/etc) from whatever node-level CNI subsystem is in use to this node agent for processing via socket.
1. By running as a node-level daemonset that:
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Fri May 03 19:29:42 UTC 2024 - 12.3K bytes - Viewed (0) -
src/runtime/signal_windows.go
gp.sigcode1 = info.exceptioninformation[1]
gp.sigpc = r.ip()

// Only push runtime·sigpanic if r.ip() != 0.
// If r.ip() == 0, probably panicked because of a
// call to a nil func. Not pushing that onto sp will
// make the trace look like a call to runtime·sigpanic instead.
// (Otherwise the trace will end at runtime·sigpanic and we
// won't get to see who faulted.)
// Also don't push a sigpanic frame if the faulting PC
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Oct 17 20:32:29 UTC 2023 - 14.5K bytes - Viewed (0) -
src/runtime/asm_riscv64.s
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Nov 09 13:57:06 UTC 2023 - 27K bytes - Viewed (0) -
src/runtime/asm_s390x.s
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Jan 25 09:18:28 UTC 2024 - 28.1K bytes - Viewed (0) -
tensorflow/compiler/mlir/tensorflow/transforms/convert_control_to_data_outputs.cc
IslandOp GetDummyConstant(OpBuilder builder, ShapedType const_type, Location loc) {
  DenseIntElementsAttr val = DenseIntElementsAttr::get(const_type, 1);
  auto const_op = builder.create<TF::ConstOp>(loc, val);
  auto const_island = CreateIsland(const_op, {}, builder);
  return const_island;
}
// Rewrites the while op with extra chaining operands and results. Uses a
Registered: Sun Jun 16 05:45:23 UTC 2024 - Last Modified: Thu Apr 25 16:01:03 UTC 2024 - 28.7K bytes - Viewed (0) -
src/cmd/compile/internal/ssagen/abi.go
// allocate stack space, but this seems like an unlikely scenario.
// Hence: mark these wrappers NOSPLIT.
//
// ABIInternal-to-ABI0 wrappers on the other hand will be taking
// things in registers and pushing them onto the stack prior to
// the ABI0 call, meaning that they will always need to allocate
// stack space. If the compiler marks them as NOSPLIT this seems
// as though it could lead to situations where the linker's
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 15 19:57:43 UTC 2024 - 13.8K bytes - Viewed (0) -
src/runtime/mgcwork.go
		newb.nobj = 0
		lfnodeValidate(&newb.node)
		if i == 0 {
			b = newb
		} else {
			putempty(newb)
		}
	}
}
return b
}

// putempty puts a workbuf onto the work.empty list.
// Upon entry this goroutine owns b. The lfstack.push relinquishes ownership.
//
//go:nowritebarrier
func putempty(b *workbuf) {
	b.checkempty()
	work.empty.push(&b.node)
}
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 25 19:53:03 UTC 2024 - 12.9K bytes - Viewed (0) -
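The snippet above shows the runtime's buffer free-list pattern: `putempty` asserts the buffer is really empty and pushes it onto `work.empty`, transferring ownership, while its counterpart pops or allocates. A simplified sketch of that pattern, with a mutex-guarded slice standing in for the runtime's lock-free `lfstack` (all names here are illustrative, not the runtime's):

```go
package main

import (
	"fmt"
	"sync"
)

// workbuf is a simplified stand-in for the runtime's workbuf: a fixed
// array of slots plus a count of how many are in use.
type workbuf struct {
	obj  [4]uintptr
	nobj int
}

// emptyList mimics work.empty: buffers available for reuse. The real
// runtime uses a lock-free stack; a mutex keeps this sketch simple.
type emptyList struct {
	mu   sync.Mutex
	bufs []*workbuf
}

// putempty checks the invariant the runtime documents (the buffer must
// actually be empty) and then pushes it, relinquishing ownership.
func (l *emptyList) putempty(b *workbuf) {
	if b.nobj != 0 {
		panic("putempty: workbuf is not empty")
	}
	l.mu.Lock()
	l.bufs = append(l.bufs, b)
	l.mu.Unlock()
}

// getempty pops a buffer for reuse, or allocates a fresh one when the
// list is empty.
func (l *emptyList) getempty() *workbuf {
	l.mu.Lock()
	defer l.mu.Unlock()
	if n := len(l.bufs); n > 0 {
		b := l.bufs[n-1]
		l.bufs = l.bufs[:n-1]
		return b
	}
	return new(workbuf)
}

func main() {
	var empty emptyList
	b := empty.getempty() // list starts empty, so this allocates
	empty.putempty(b)     // return it to the free list
	fmt.Println(empty.getempty() == b) // the same buffer is reused
}
```

The ownership hand-off is the key discipline: after `putempty` returns, the caller must not touch `b` again, since another goroutine may already have popped it.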
src/cmd/compile/internal/ssa/likelyadjust.go
if dominatedByCall[l.header.ID] {
	l.containsUnavoidableCall = true
	continue
}
callfreepath := false
tovisit := make([]*Block, 0, len(l.header.Succs))
// Push all non-loop non-exit successors of header onto toVisit.
for _, s := range l.header.Succs {
	nb := s.Block()
	// This corresponds to loop with zero iterations.
	if !l.iterationEnd(nb, b2l) {
		tovisit = append(tovisit, nb)
	}
}
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Oct 31 21:41:20 UTC 2022 - 15.4K bytes - Viewed (0) -
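The seeding-and-worklist pattern in that snippet — push only the qualifying successors of a header block, then drain the worklist — is a standard CFG traversal. A self-contained sketch under illustrative names (`Block`, `reachableWithout`, and the skip predicate are assumptions for this example, not the compiler's API):

```go
package main

import "fmt"

// Block is a minimal stand-in for an SSA basic block: an ID plus
// successor edges.
type Block struct {
	ID    int
	Succs []*Block
}

// reachableWithout walks the CFG starting from the successors of header,
// skipping any block for which skip reports true — mirroring how
// likelyadjust pushes only non-loop-exit successors onto its worklist.
func reachableWithout(header *Block, skip func(*Block) bool) []int {
	// Seed the worklist, filtering at push time just like the snippet.
	tovisit := make([]*Block, 0, len(header.Succs))
	for _, s := range header.Succs {
		if !skip(s) {
			tovisit = append(tovisit, s)
		}
	}
	seen := map[int]bool{header.ID: true}
	var order []int
	for len(tovisit) > 0 {
		b := tovisit[len(tovisit)-1] // pop (LIFO worklist)
		tovisit = tovisit[:len(tovisit)-1]
		if seen[b.ID] {
			continue
		}
		seen[b.ID] = true
		order = append(order, b.ID)
		for _, s := range b.Succs {
			if !skip(s) && !seen[s.ID] {
				tovisit = append(tovisit, s)
			}
		}
	}
	return order
}

func main() {
	// header -> {body, exit}; body -> {call}
	exit := &Block{ID: 2}
	call := &Block{ID: 3}
	body := &Block{ID: 1, Succs: []*Block{call}}
	header := &Block{ID: 0, Succs: []*Block{body, exit}}
	// Skip the exit block, as the compiler skips loop-iteration ends.
	order := reachableWithout(header, func(b *Block) bool { return b.ID == 2 })
	fmt.Println(order) // visits body then call, never exit
}
```

Filtering at push time rather than pop time keeps blocks the analysis must ignore (here, loop exits) out of the worklist entirely.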
test/chan/powser1.go
	d.nam = c
	return d
}

func mkdch2() *dch2 {
	d2 := new(dch2)
	d2[0] = mkdch()
	d2[1] = mkdch()
	return d2
}

// split reads a single demand channel and replicates its
// output onto two, which may be read at different rates.
// A process is created at first demand for a rat and dies
// after the rat has been sent to both outputs.
// When multiple generations of split exist, the newest
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Mar 25 22:22:20 UTC 2020 - 12.7K bytes - Viewed (0)
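The replication idea behind `split` — one input stream fanned out onto two output channels — can be shown in a much-reduced form. This sketch drops powser1.go's demand channels and per-value processes; a single goroutine sends each value to both outputs, so the slower reader sets the pace (the `split` here is an illustrative simplification, not the original's implementation):

```go
package main

import "fmt"

// split replicates one input stream onto two output channels, a
// simplified take on powser1.go's split (the original also handles
// per-channel demand so the two readers can run at different rates).
func split(in <-chan int) (<-chan int, <-chan int) {
	out0 := make(chan int)
	out1 := make(chan int)
	go func() {
		defer close(out0)
		defer close(out1)
		for v := range in {
			// Each unbuffered send blocks until its reader is
			// ready, so both outputs see every value in order.
			out0 <- v
			out1 <- v
		}
	}()
	return out0, out1
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()
	a, b := split(in)
	for v := range a {
		fmt.Println(v, <-b) // both outputs yield the same sequence
	}
}
```

The original's demand-driven design exists precisely to lift this limitation: with a per-value process and demand channels, the two readers can proceed at genuinely different rates instead of lock-stepping on the slower one.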