Results 1 - 10 of 73 for "spent" (0.05 sec)

  1. src/runtime/mgcscavenge.go

    	// some lower target to keep the scavenger working.
    	memoryLimitGoal atomic.Uint64
    
    	// assistTime is the time spent by the allocator scavenging in the last GC cycle.
    	//
    	// This is reset once a GC cycle ends.
    	assistTime atomic.Int64
    
    	// backgroundTime is the time spent by the background scavenger in the last GC cycle.
    	//
    	// This is reset once a GC cycle ends.
    	backgroundTime atomic.Int64
    }
    
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Wed May 08 17:48:45 UTC 2024
    - 52.3K bytes
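
    The two fields above follow an accumulate-and-reset pattern: workers add elapsed nanoseconds to an atomic counter during a GC cycle, and the totals are cleared once the cycle ends. A minimal sketch of that pattern, using hypothetical names rather than the runtime's actual types:

        package main

        import (
            "fmt"
            "sync/atomic"
            "time"
        )

        // cycleStats is an illustrative stand-in for the bookkeeping above:
        // time spent is added atomically during a cycle and zeroed at its end.
        type cycleStats struct {
            assistTime     atomic.Int64 // ns spent assisting in the current cycle
            backgroundTime atomic.Int64 // ns spent by the background worker in the current cycle
        }

        // addAssist records time spent by an assisting goroutine.
        func (s *cycleStats) addAssist(d time.Duration) { s.assistTime.Add(int64(d)) }

        // endCycle returns the per-cycle totals and resets them for the next cycle.
        func (s *cycleStats) endCycle() (assist, background time.Duration) {
            return time.Duration(s.assistTime.Swap(0)), time.Duration(s.backgroundTime.Swap(0))
        }

        func main() {
            var s cycleStats
            s.addAssist(2 * time.Millisecond)
            assist, background := s.endCycle()
            fmt.Println(assist, background) // prints "2ms 0s"
        }
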
  2. src/runtime/mgcpacer.go

    	// assistTime is the nanoseconds spent in mutator assists
    	// during this cycle. This is updated atomically, and must also
    	// be updated atomically even during a STW, because it is read
    	// by sysmon. Updates occur in bounded batches, since it is both
    	// written and read throughout the cycle.
    	assistTime atomic.Int64
    
    	// dedicatedMarkTime is the nanoseconds spent in dedicated mark workers
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Mon Mar 25 19:53:03 UTC 2024
    - 55.4K bytes
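
    The "bounded batches" in the comment above are a way to keep a counter that is written and read throughout the cycle reasonably fresh without paying for an atomic write on every small increment: each worker accumulates locally and publishes only when the batch reaches a bound. A hedged sketch of that idea with made-up names (not the pacer's actual code):

        package main

        import (
            "fmt"
            "sync/atomic"
            "time"
        )

        const batchBound = int64(100 * time.Microsecond) // illustrative batch bound

        // assistWorker accumulates elapsed time locally and flushes it to the
        // shared counter in bounded batches, so concurrent readers see a
        // reasonably fresh total without one contended write per measurement.
        type assistWorker struct {
            shared *atomic.Int64 // read concurrently, e.g. by a monitoring goroutine
            local  int64         // owned by this worker only
        }

        func (w *assistWorker) record(d time.Duration) {
            w.local += int64(d)
            if w.local >= batchBound {
                w.shared.Add(w.local)
                w.local = 0
            }
        }

        func main() {
            var total atomic.Int64
            w := assistWorker{shared: &total}
            for i := 0; i < 5; i++ {
                w.record(30 * time.Microsecond)
            }
            fmt.Println(time.Duration(total.Load())) // 120µs published; 30µs still local
        }
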
  3. src/runtime/mgcmark.go

    	enteredMarkAssistForTracing := false
    retry:
    	if gcCPULimiter.limiting() {
    		// If the CPU limiter is enabled, intentionally don't
    		// assist to reduce the amount of CPU time spent in the GC.
    		if enteredMarkAssistForTracing {
    			trace := traceAcquire()
    			if trace.ok() {
    				trace.GCMarkAssistDone()
    				// Set this *after* we trace the end to make sure
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Thu Apr 18 21:25:11 UTC 2024
    - 52.5K bytes
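
    The limiter referenced above throttles assists once the GC is using too much CPU. From outside the runtime, the closest observable signal is the family of GC CPU-time metrics that runtime/metrics exports under the /cpu/classes/gc/ prefix (which in current releases includes mark-assist time). A small sketch that enumerates and prints whatever metrics of that family exist:

        package main

        import (
            "fmt"
            "runtime"
            "runtime/metrics"
            "strings"
        )

        func main() {
            // Collect the names of the GC CPU-time metrics available in this Go version.
            var samples []metrics.Sample
            for _, d := range metrics.All() {
                if strings.HasPrefix(d.Name, "/cpu/classes/gc/") {
                    samples = append(samples, metrics.Sample{Name: d.Name})
                }
            }

            runtime.GC() // force a cycle so the counters are non-trivial
            metrics.Read(samples)

            for _, s := range samples {
                if s.Value.Kind() == metrics.KindFloat64 {
                    fmt.Printf("%-48s %.6f s\n", s.Name, s.Value.Float64())
                }
            }
        }
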
  4. src/runtime/mprof.go

    // SetBlockProfileRate controls the fraction of goroutine blocking events
    // that are reported in the blocking profile. The profiler aims to sample
    // an average of one blocking event per rate nanoseconds spent blocked.
    //
    // To include every blocking event in the profile, pass rate = 1.
    // To turn off profiling entirely, pass rate <= 0.
    func SetBlockProfileRate(rate int) {
    	var r int64
    	if rate <= 0 {
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Thu May 30 17:57:37 UTC 2024
    - 53.3K bytes
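
    A minimal end-to-end use of the API documented above: enable block profiling at rate = 1, create a little lock contention so there is something to record, and dump the profile in its debug text form:

        package main

        import (
            "os"
            "runtime"
            "runtime/pprof"
            "sync"
            "time"
        )

        func main() {
            runtime.SetBlockProfileRate(1)       // sample every blocking event
            defer runtime.SetBlockProfileRate(0) // rate <= 0 turns profiling off again

            // Block the main goroutine on a mutex for ~10ms.
            var mu sync.Mutex
            mu.Lock()
            go func() {
                time.Sleep(10 * time.Millisecond)
                mu.Unlock()
            }()
            mu.Lock() // this wait is recorded in the block profile
            mu.Unlock()

            // Write the accumulated block profile as human-readable text.
            pprof.Lookup("block").WriteTo(os.Stdout, 1)
        }
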
  5. src/runtime/mheap.go

    //
    // reclaim implements the page-reclaimer half of the sweeper.
    //
    // h.lock must NOT be held.
    func (h *mheap) reclaim(npage uintptr) {
    	// TODO(austin): Half of the time spent freeing spans is in
    	// locking/unlocking the heap (even with low contention). We
    	// could make the slow path here several times faster by
    	// batching heap frees.
    
    	// Bail early if there's no more reclaim work.
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Wed May 22 22:31:00 UTC 2024
    - 78K bytes
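
    The TODO above is about amortizing lock traffic: even a lightly contended lock costs something per acquire/release pair, so freeing spans one at a time pays that cost per span. The batching it suggests, in generic form (not the heap's actual code), looks like this:

        package main

        import (
            "fmt"
            "sync"
        )

        type freeList struct {
            mu    sync.Mutex
            spans []int
        }

        // freeEach locks once per item: the slow pattern the comment describes.
        func (f *freeList) freeEach(items []int) {
            for _, it := range items {
                f.mu.Lock()
                f.spans = append(f.spans, it)
                f.mu.Unlock()
            }
        }

        // freeBatch locks once per batch, amortizing the locking cost.
        func (f *freeList) freeBatch(items []int) {
            f.mu.Lock()
            f.spans = append(f.spans, items...)
            f.mu.Unlock()
        }

        func main() {
            var f freeList
            f.freeBatch([]int{1, 2, 3})
            fmt.Println(len(f.spans)) // 3
        }
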
  6. pkg/controller/nodelifecycle/node_lifecycle_controller.go

    	// can take a while if we're processing each node serially as well. So we
    	// process them with bounded concurrency instead, since most of the time is
    	// spent waiting on io.
    	workqueue.ParallelizeUntil(ctx, nc.nodeUpdateWorkerSize, len(nodes), updateNodeFunc)
    
    	nc.handleDisruption(ctx, zoneToNodeConditions, nodes)
    
    	return nil
    }
    
    Registered: Sat Jun 15 01:39:40 UTC 2024
    - Last Modified: Sat May 04 18:33:12 UTC 2024
    - 51.6K bytes
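
    ParallelizeUntil comes from k8s.io/client-go/util/workqueue: it runs a fixed number of workers over an indexed set of pieces, which suits the I/O-bound per-node updates described above. A minimal, self-contained use (the node names and worker count are made up for illustration):

        package main

        import (
            "context"
            "fmt"

            "k8s.io/client-go/util/workqueue"
        )

        func main() {
            nodes := []string{"node-a", "node-b", "node-c", "node-d"}

            // At most 2 workers run concurrently; each piece index selects one node.
            workqueue.ParallelizeUntil(context.Background(), 2, len(nodes), func(i int) {
                // The real controller does I/O-heavy node updates here.
                fmt.Println("updating", nodes[i])
            })
        }
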
  7. CHANGELOG/CHANGELOG-1.31.md

    - Add apiserver.latency.k8s.io/apf-queue-wait annotation to the audit log to record the time spent waiting in apf queue ([#123919](https://github.com/kubernetes/kubernetes/pull/123919), [@hakuna-matatah](https://github.com/hakuna-matatah)) [SIG API Machinery]
    Registered: Sat Jun 15 01:39:40 UTC 2024
    - Last Modified: Wed Jun 12 20:34:14 UTC 2024
    - 60.3K bytes
  8. src/runtime/pprof/pprof_test.go

    		for _, s := range p.Sample {
    			total += s.Value[i]
    		}
    		// Want d to be at least N*D, but give some wiggle-room to avoid
    		// a test flaking. Set an upper-bound proportional to the total
    		// wall time spent in blockMutexN. Generally speaking, the total
    		// contention time could be arbitrarily high when considering
    		// OS scheduler delays, or any other delays from the environment:
    Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Thu May 23 18:42:28 UTC 2024
    - 68.8K bytes
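
    The test sums one value column across every sample of a decoded profile. The standard library does this through an internal package, but the same data model is available in github.com/google/pprof/profile; a sketch that totals each sample type of a profile file given on the command line:

        package main

        import (
            "fmt"
            "log"
            "os"

            "github.com/google/pprof/profile"
        )

        func main() {
            f, err := os.Open(os.Args[1]) // e.g. a file written by pprof.Lookup("block").WriteTo(f, 0)
            if err != nil {
                log.Fatal(err)
            }
            defer f.Close()

            p, err := profile.Parse(f)
            if err != nil {
                log.Fatal(err)
            }

            // For each sample type (e.g. contentions, delay), sum that column
            // over all samples, as the test above does for a single index i.
            for i, st := range p.SampleType {
                var total int64
                for _, s := range p.Sample {
                    total += s.Value[i]
                }
                fmt.Printf("%s (%s): %d\n", st.Type, st.Unit, total)
            }
        }
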
  9. pilot/pkg/model/push_context.go

    	Push *PushContext
    
    	// Start represents the time a push was started. This represents the time of adding to the PushQueue.
    	// Note that this does not include time spent debouncing.
    	Start time.Time
    
    	// Reason represents the reason for requesting a push. This should only be a fixed set of values,
    Registered: Fri Jun 14 15:00:06 UTC 2024
    - Last Modified: Wed May 15 09:02:11 UTC 2024
    - 91.8K bytes
  10. platforms/documentation/docs/src/docs/userguide/optimizing-performance/configuration_cache.adoc

    Gradle then loads the task graph from the configuration cache, so that it can apply optimizations to the tasks, and then runs the execution phase as normal.
    Configuration time will still be spent the first time you run a particular set of tasks.
    However, you should see build performance improvement immediately because <<#config_cache:intro:performance_improvements,tasks will run in parallel>>.
    
    Registered: Wed Jun 12 18:38:38 UTC 2024
    - Last Modified: Fri Mar 29 16:24:12 UTC 2024
    - 71.1K bytes