Results 1 - 10 of 199 for spent (0.06 sec)

  1. internal/grid/benchmark_test.go

    					atomic.AddInt64(&lat, latency)
    				})
    				spent := time.Since(t)
    				if spent > 0 && n > 0 {
    					// Since we are benchmarking n parallel servers we need to multiply by n.
    					// This will give an estimate of the total ops/s.
    					latency := float64(atomic.LoadInt64(&lat)) / float64(time.Millisecond)
    					b.ReportMetric(float64(n)*float64(ops)/spent.Seconds(), "vops/s")
    					b.ReportMetric(latency/float64(ops), "ms/op")
    - Registered: Sun Jun 16 00:44:34 UTC 2024
    - Last Modified: Fri Jun 07 15:51:52 UTC 2024
    - 15.7K bytes
    - Viewed (0)
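
    The snippet above reports derived benchmark metrics from the total time spent. Below is a minimal, self-contained sketch of the same pattern using testing.B.ReportMetric; the benchOp helper and its numbers are illustrative and not taken from the file above.

    package bench_test

    import (
        "sync/atomic"
        "testing"
        "time"
    )

    // benchOp stands in for the operation under test; illustrative only.
    func benchOp() time.Duration {
        start := time.Now()
        time.Sleep(10 * time.Microsecond)
        return time.Since(start)
    }

    // BenchmarkOps accumulates per-op latency atomically, then derives
    // ops/s and ms/op from the total wall-clock time spent, as in the
    // snippet above.
    func BenchmarkOps(b *testing.B) {
        var lat int64
        start := time.Now()
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                atomic.AddInt64(&lat, int64(benchOp()))
            }
        })
        spent := time.Since(start)
        if spent > 0 && b.N > 0 {
            latencyMS := float64(atomic.LoadInt64(&lat)) / float64(time.Millisecond)
            b.ReportMetric(float64(b.N)/spent.Seconds(), "ops/s")
            b.ReportMetric(latencyMS/float64(b.N), "ms/op")
        }
    }
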
  2. src/runtime/metrics/doc.go

    		Estimated total CPU time goroutines spent performing GC
    		tasks to assist the GC and prevent it from falling behind the
    		application. This metric is an overestimate, and not directly
    		comparable to system CPU time measurements. Compare only with
    		other /cpu/classes metrics.
    
    	/cpu/classes/gc/mark/dedicated:cpu-seconds
    		Estimated total CPU time spent performing GC tasks on processors
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Wed May 22 22:58:43 UTC 2024
    - 20K bytes
    - Viewed (0)
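
    The /cpu/classes metrics documented above are exposed through the runtime/metrics package. A minimal sketch of reading two of them follows; the metric names come from the documentation above, and both are float64 cpu-seconds values.

    package main

    import (
        "fmt"
        "runtime/metrics"
    )

    func main() {
        // Sample the GC assist and dedicated-worker CPU-time estimates.
        // These are estimates and, per the documentation above, should only
        // be compared with other /cpu/classes metrics.
        samples := []metrics.Sample{
            {Name: "/cpu/classes/gc/mark/assist:cpu-seconds"},
            {Name: "/cpu/classes/gc/mark/dedicated:cpu-seconds"},
        }
        metrics.Read(samples)
        for _, s := range samples {
            if s.Value.Kind() == metrics.KindFloat64 {
                fmt.Printf("%s = %.3f\n", s.Name, s.Value.Float64())
            }
        }
    }
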
  3. src/cmd/trace/goroutines.go

    {{end}}
    </table>
    
    <h3 id="ranges">Special ranges</h3>
    
    The table below describes how much of the traced period each goroutine spent in
    certain special time ranges.
    If a goroutine has spent no time in any special time ranges, it is excluded from
    the table.
    For example, how much time it spent helping the GC. Note that these times do
    overlap with the times from the first table.
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Fri May 17 18:48:18 UTC 2024
    - 10.9K bytes
    - Viewed (0)
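
    The goroutine and special-range tables described above are rendered from an execution trace. A minimal sketch of collecting one with runtime/trace, so the resulting trace.out can be inspected with go tool trace trace.out; the sleep stands in for a real workload.

    package main

    import (
        "log"
        "os"
        "runtime/trace"
        "time"
    )

    func main() {
        f, err := os.Create("trace.out")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        if err := trace.Start(f); err != nil {
            log.Fatal(err)
        }
        defer trace.Stop()

        time.Sleep(100 * time.Millisecond) // stand-in for the traced workload
    }
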
  4. src/cmd/trace/regions.go

    {{end}}
    </table>
    
    <h3 id="ranges">Special ranges</h3>
    
    The table below describes how much of the traced period each goroutine spent in
    certain special time ranges.
    If a goroutine has spent no time in any special time ranges, it is excluded from
    the table.
    For example, how much time it spent helping the GC. Note that these times do
    overlap with the times from the first table.
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Fri May 17 18:48:18 UTC 2024
    - 14.3K bytes
    - Viewed (0)
  5. src/cmd/trace/pprof.go

    // IO wait, currently only network blocking event).
    func computePprofIO() computePprofFunc {
    	return makeComputePprofFunc(trace.GoWaiting, func(reason string) bool {
    		return reason == "network"
    	})
    }
    
    // computePprofBlock returns a computePprofFunc that generates blocking pprof-like profile
    // (time spent blocked on synchronization primitives).
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Fri May 17 18:48:18 UTC 2024
    - 10.1K bytes
    - Viewed (0)
  6. src/runtime/mgclimit.go

    // utilization without hurting throughput.
    //
    // Note that the bucket in the leaky bucket mechanism can never go negative,
    // so the GC never gets credit for a lot of CPU time spent without the GC
    // running. This is intentional, as an application that stays idle for, say,
    // an entire day, could build up enough credit to fail to prevent a death
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Mon Apr 22 22:07:41 UTC 2024
    - 17.3K bytes
    - Viewed (0)
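
    A rough sketch of the clamping behaviour the comment above describes, not the runtime's actual implementation: GC CPU time fills the bucket, mutator CPU time drains it, and the fill level is floored at zero so long idle stretches cannot accumulate credit. The type and field names here are invented for illustration.

    package main

    import "fmt"

    // bucketSketch models only the clamping described above.
    type bucketSketch struct {
        fill, capacity int64 // abstract CPU-time units
    }

    // accumulate adds GC CPU time and drains mutator CPU time, clamping the
    // fill level to the range [0, capacity].
    func (b *bucketSketch) accumulate(mutatorTime, gcTime int64) {
        b.fill += gcTime - mutatorTime
        if b.fill < 0 {
            b.fill = 0 // never negative: no stored credit for idle periods
        }
        if b.fill > b.capacity {
            b.fill = b.capacity // overflow: the limiter would engage here
        }
    }

    func main() {
        b := bucketSketch{capacity: 100}
        b.accumulate(1000, 0) // a long mutator-only stretch earns no credit
        b.accumulate(0, 60)   // GC work fills the bucket from zero, not from -1000
        fmt.Println(b.fill)   // 60
    }
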
  7. src/runtime/mstats.go

    	ScavengeTotalTime  int64
    
    	IdleTime int64 // Time Ps spent in _Pidle.
    	UserTime int64 // Time Ps spent in _Prunning or _Psyscall that's not any of the above.
    
    	TotalTime int64 // GOMAXPROCS * (monotonic wall clock time elapsed)
    }
    
    // accumulateGCPauseTime add dt*stwProcs to the GC CPU pause time stats. dt should be
    // the actual time spent paused, for orthogonality. maxProcs should be GOMAXPROCS,
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Mon Apr 08 21:03:13 UTC 2024
    - 34.2K bytes
    - Viewed (0)
  8. cmd/xl-storage-disk-id-check.go

    			select {
    			case <-dctx.Done():
    				if !timeout.Stop() {
    					<-timeout.C
    				}
    			case <-timeout.C:
    				spent := time.Since(started)
    				goOffline(fmt.Errorf("unable to write+read for %v", spent.Round(time.Millisecond)), spent)
    			}
    		}()
    
    		func() {
    			defer dcancel()
    
    			err := p.storage.WriteAll(ctx, minioMetaTmpBucket, fn, toWrite)
    			if err != nil {
    - Registered: Sun Jun 16 00:44:34 UTC 2024
    - Last Modified: Mon Jun 10 15:51:27 UTC 2024
    - 33.4K bytes
    - Viewed (0)
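
    The snippet above races a disk write against a watchdog timer and reports the time spent if the timer fires first. A stripped-down sketch of that pattern follows; watchOp and the simulated slow operation are illustrative, not MinIO code.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // watchOp runs op and, if it has not finished within d, reports how long
    // has been spent so far. Illustrative only.
    func watchOp(ctx context.Context, d time.Duration, op func(context.Context) error) error {
        ctx, cancel := context.WithCancel(ctx)
        defer cancel()

        started := time.Now()
        timeout := time.NewTimer(d)

        go func() {
            select {
            case <-ctx.Done():
                if !timeout.Stop() {
                    <-timeout.C // drain an already-fired timer
                }
            case <-timeout.C:
                spent := time.Since(started)
                fmt.Printf("still waiting after %v\n", spent.Round(time.Millisecond))
            }
        }()

        return op(ctx)
    }

    func main() {
        _ = watchOp(context.Background(), 50*time.Millisecond, func(ctx context.Context) error {
            time.Sleep(120 * time.Millisecond) // stand-in for a slow write+read
            return nil
        })
        time.Sleep(10 * time.Millisecond) // give the watchdog time to print
    }
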
  9. src/runtime/pprof/pprof.go

    //
    // # Block profile
    //
    // The block profile tracks time spent blocked on synchronization primitives,
    // such as [sync.Mutex], [sync.RWMutex], [sync.WaitGroup], [sync.Cond], and
    // channel send/receive/select.
    //
    // Stack traces correspond to the location that blocked (for example,
    // [sync.Mutex.Lock]).
    //
    // Sample values correspond to cumulative time spent blocked at that stack
    - Registered: Wed Jun 12 16:32:35 UTC 2024
    - Last Modified: Thu May 30 17:52:17 UTC 2024
    - 30.6K bytes
    - Viewed (0)
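
    A minimal sketch of enabling the block profile described above and writing it out, so the time spent blocked on a sync.Mutex appears in the output. The rate of 1 records every blocking event for demonstration; real programs usually sample more coarsely.

    package main

    import (
        "log"
        "os"
        "runtime"
        "runtime/pprof"
        "sync"
        "time"
    )

    func main() {
        // Record every blocking event for demonstration purposes.
        runtime.SetBlockProfileRate(1)

        var mu sync.Mutex
        mu.Lock()
        go func() {
            time.Sleep(50 * time.Millisecond)
            mu.Unlock()
        }()
        mu.Lock() // the time spent blocked here is attributed to this call site
        mu.Unlock()

        // Write the accumulated profile; inspect it with go tool pprof block.pprof.
        f, err := os.Create("block.pprof")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        if err := pprof.Lookup("block").WriteTo(f, 0); err != nil {
            log.Fatal(err)
        }
    }
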
  10. tensorflow/compiler/mlir/tf2xla/api/v1/compile_tf_graph.cc

            /* metric description */ "status" /* metric label */);
    
    auto* phase2_bridge_compilation_time = tsl::monitoring::Sampler<1>::New(
        {"/tensorflow/core/tf2xla/api/v1/phase2_compilation_time",
         "The wall-clock time spent on executing graphs in milliseconds.",
         "configuration"},
        // Power of 1.5 with bucket count 45 (> 23 hours)
        {tsl::monitoring::Buckets::Exponential(1, 1.5, 45)});
    
    - Registered: Sun Jun 16 05:45:23 UTC 2024
    - Last Modified: Wed Jun 12 22:19:26 UTC 2024
    - 14K bytes
    - Viewed (0)