Results 31 - 40 of 275 for Implementation (0.15 sec)
src/runtime/arena.go
// necessary in order to make new(T) a valid implementation of arenas. Such a property
// is desirable to allow for a trivial implementation. (It also avoids complexities
// that arise from synchronization with the GC when trying to set the arena chunks to
// fault while the GC is active.)
//
// The implementation works in layers. At the bottom, arenas are managed in chunks.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 08 17:44:56 UTC 2024 - 37.9K bytes - Viewed (0)
src/syscall/syscall_darwin.go
// but it is also input to mksyscall,
// which parses the //sys lines and generates system call stubs.
// Note that sometimes we use a lowercase //sys name and wrap
// it in our own nicer implementation, either here or in
// syscall_bsd.go or syscall_unix.go.
package syscall

import (
	"internal/abi"
	"unsafe"
)

func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err Errno)
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu May 23 01:16:50 UTC 2024 - 11K bytes - Viewed (0)
src/slices/zsortordered.go
// The algorithm is based on pattern-defeating quicksort (pdqsort), but without
// the optimizations from BlockQuicksort.
// pdqsort paper: https://arxiv.org/pdf/2106.05123.pdf
// C++ implementation: https://github.com/orlp/pdqsort
// Rust implementation: https://docs.rs/pdqsort/latest/pdqsort/
// limit is the number of allowed bad (very unbalanced) pivots before falling
// back to heapsort.
func pdqsortOrdered[E cmp.Ordered](data []E, a, b, limit int) {
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue May 23 23:33:29 UTC 2023 - 12.4K bytes - Viewed (0)
src/math/big/natdiv.go
starts with a blunt critique of Knuth's presentation (among others) and then presents a more detailed and easier to follow treatment of long division, including an implementation in Pascal. But the algorithm and implementation work entirely in terms of 3-by-2 division, which is much less useful on modern hardware than an algorithm using 2-by-1 division. The proofs are a bit too
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Mar 14 17:02:38 UTC 2024 - 34.4K bytes - Viewed (0)
src/internal/concurrent/hashtriemap.go
package concurrent

import (
	"internal/abi"
	"internal/goarch"
	"math/rand/v2"
	"sync"
	"sync/atomic"
	"unsafe"
)

// HashTrieMap is an implementation of a concurrent hash-trie. The implementation
// is designed around frequent loads, but offers decent performance for stores
// and deletes as well, especially if the map is larger. Its primary use-case is
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed May 22 16:01:55 UTC 2024 - 11.8K bytes - Viewed (0)
src/log/slog/doc.go
}

and you call it like this in main.go:

	Infof(slog.Default(), "hello, %s", "world")

then slog will report the source file as mylog.go, not main.go. A correct
implementation of Infof will obtain the source location (pc) and pass it to
NewRecord. The Infof function in the package-level example called "wrapping"
demonstrates how to do this.

# Working with Records
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Feb 15 14:35:48 UTC 2024 - 12.3K bytes - Viewed (0)
src/unsafe/unsafe.go
//
// Provided that T2 is no larger than T1 and that the two share an equivalent
// memory layout, this conversion allows reinterpreting data of one type as
// data of another type. An example is the implementation of
// math.Float64bits:
//
//	func Float64bits(f float64) uint64 {
//		return *(*uint64)(unsafe.Pointer(&f))
//	}
//
// (2) Conversion of a Pointer to a uintptr (but not back to Pointer).
//
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue May 21 19:45:20 UTC 2024 - 12.1K bytes - Viewed (0)
src/cmd/go/internal/cache/prog.go
// This is effectively the versioning mechanism.
can map[ProgCmd]bool

// fuzzDirCache is another Cache implementation to use for the FuzzDir
// method. In practice this is the default GOCACHE disk-based
// implementation.
//
// TODO(bradfitz): maybe this isn't ideal. But we'd need to extend the Cache
// interface and the fuzzing callers to be less disk-y to do more here.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Aug 14 19:23:25 UTC 2023 - 11.8K bytes - Viewed (0)
src/cmd/compile/internal/types/fmt.go
		fmt.Fprintf(f, "%%!%c(*types.Sym=%p)", verb, s)
	}
}

func (s *Sym) String() string { return sconv(s, 0, fmtGo) }

// See #16897 for details about performance implications
// before changing the implementation of sconv.
func sconv(s *Sym, verb rune, mode fmtMode) string {
	if verb == 'L' {
		panic("linksymfmt")
	}
	if s == nil {
		return "<S>"
	}
	q := pkgqual(s.Pkg, verb, mode)
	if q == "" {
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Sep 12 15:41:17 UTC 2023 - 15.7K bytes - Viewed (0)
src/runtime/mpallocbits.go
	s += uint(sys.OnesCount64(b[j/64] & ((1 << (j%64 + 1)) - 1)))
	return
}

// pallocBits is a bitmap that tracks page allocations for at most one
// palloc chunk.
//
// The precise representation is an implementation detail, but for the
// sake of documentation, 0s are free pages and 1s are allocated pages.
type pallocBits pageBits

// summarize returns a packed summary of the bitmap in pallocBits.
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Sat May 18 15:13:43 UTC 2024 - 12.5K bytes - Viewed (0)