Results 111 - 120 of 191 for worst (1.3 sec)
src/runtime/memclr_386.s
TEXT runtime·memclrNoHeapPointers(SB), NOSPLIT, $0-8
	MOVL	ptr+0(FP), DI
	MOVL	n+4(FP), BX
	XORL	AX, AX

	// MOVOU seems always faster than REP STOSL.
tail:
	// BSR+branch table make almost all memmove/memclr benchmarks worse. Not worth doing.
	TESTL	BX, BX
	JEQ	_0
	CMPL	BX, $2
	JBE	_1or2
	CMPL	BX, $4
	JB	_3
	JE	_4
	CMPL	BX, $8
	JBE	_5through8
	CMPL	BX, $16
	JBE	_9through16
#ifdef GO386_softfloat
	JMP	nosse2
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Sat Nov 06 10:24:44 UTC 2021 - 2.4K bytes - Viewed (0)
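The dispatch strategy in that snippet, a short linear chain of length compares rather than a BSR+branch table, can be sketched in Go. This is my hypothetical rendering (names like `clearSmall` are mine, not from the runtime); the real code is hand-written assembly:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// clearSmall mirrors the "tail" dispatch above: small lengths are handled by a
// linear chain of compares, and each case uses overlapping wide stores so it
// needs at most two writes.
func clearSmall(b []byte) bool {
	n := len(b)
	switch {
	case n == 0:
		// nothing to do (the assembly's _0)
	case n <= 2:
		b[0], b[n-1] = 0, 0 // _1or2
	case n == 3:
		b[0], b[1], b[2] = 0, 0, 0 // _3
	case n <= 8:
		// 4..8 bytes: two overlapping 4-byte stores (_4, _5through8)
		binary.LittleEndian.PutUint32(b[:4], 0)
		binary.LittleEndian.PutUint32(b[n-4:], 0)
	case n <= 16:
		// 9..16 bytes: two overlapping 8-byte stores (_9through16)
		binary.LittleEndian.PutUint64(b[:8], 0)
		binary.LittleEndian.PutUint64(b[n-8:], 0)
	default:
		return false // larger sizes go to the bulk-clearing loop in the real code
	}
	return true
}

func main() {
	b := []byte{1, 2, 3, 4, 5, 6, 7}
	fmt.Println(clearSmall(b), b) // true [0 0 0 0 0 0 0]
}
```

The overlapping stores are the key trick: covering 5..8 bytes with two 4-byte writes avoids any per-size branch table, which is exactly what the "BSR+branch table" comment says benchmarked worse.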
src/go/printer/comment.go
	//	/*
	//	 * Comment
	//	 * text here.
	//	 */
	// Should not happen, since it will not work well as a
	// doc comment, but if it does, just ignore:
	// reformatting it will only make the situation worse.
	return list
}
text = text[2 : len(text)-2] // cut /* and */
} else if strings.HasPrefix(list[0].Text, "//") {
	kind = "//"
	var b strings.Builder
	for _, c := range list {
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Tue Sep 27 07:35:19 UTC 2022 - 3.5K bytes - Viewed (0)
manifests/charts/UPDATING-CHARTS.md
eventually someone will want to customize every one of those fields. If all fields are exposed in `values.yaml`, we end up with a massive API that is also likely worse than just using the Kubernetes API directly. To avoid this, the project attempts to minimize additions to the `values.yaml` API where possible.
Registered: Fri Jun 14 15:00:06 UTC 2024 - Last Modified: Thu Jul 27 18:28:55 UTC 2023 - 3.2K bytes - Viewed (0)
pkg/volume/util/nested_volumes.go
// grouped. For example, the following strings are sorted in this exact order:
//   /dir/nested, /dir/nested-vol, /dir/nested.vol, /dir/nested/double, /dir/nested2
// The issue is a bit worse for Windows paths, since the \'s value is higher than /'s:
//   \dir\nested, \dir\nested-vol, \dir\nested.vol, \dir\nested2, \dir\nested\double
Registered: Sat Jun 15 01:39:40 UTC 2024 - Last Modified: Tue Oct 18 12:19:17 UTC 2022 - 4.1K bytes - Viewed (0)
platforms/core-configuration/model-core/src/main/java/org/gradle/model/internal/method/WeaklyTypeReferencingMethod.java
public int hashCode() {
    if (cachedHashCode != -1) {
        return cachedHashCode;
    }
    // there's a risk, for some methods, that the hash is always
    // recomputed but it won't be worse than before
    cachedHashCode = new HashCodeBuilder()
        .append(declaringType)
        .append(returnType)
        .append(name)
        .append(paramTypes)
Registered: Wed Jun 12 18:38:38 UTC 2024 - Last Modified: Thu Sep 28 09:51:04 UTC 2023 - 6K bytes - Viewed (0)
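The pattern in that Java snippet, compute once and cache behind a sentinel value, tolerates a racy recompute because the hash is deterministic: the worst case is doing the work twice, "no worse than before". A hypothetical single-goroutine Go sketch of the same idea (type and field names are mine, not Gradle's; a concurrent Go version would need sync/atomic or sync.Once to avoid a data race, which Go does not tolerate the way Java's memory model does for 32-bit writes):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// method is a hypothetical stand-in for Gradle's WeaklyTypeReferencingMethod.
type method struct {
	declaringType, returnType, name string
	cachedHash                      uint64 // 0 means "not yet computed", like the -1 sentinel
}

// hashCode computes the FNV-1a hash of the method's identity once, then
// returns the cached value on later calls. Recomputing after a missed cache
// hit always yields the same value, so the cache is purely an optimization.
func (m *method) hashCode() uint64 {
	if m.cachedHash != 0 {
		return m.cachedHash
	}
	h := fnv.New64a()
	h.Write([]byte(m.declaringType))
	h.Write([]byte(m.returnType))
	h.Write([]byte(m.name))
	m.cachedHash = h.Sum64()
	return m.cachedHash
}

func main() {
	m := &method{declaringType: "Widget", returnType: "int", name: "size"}
	fmt.Println(m.hashCode() == m.hashCode()) // true: second call returns the cached value
}
```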
internal/http/dial_linux.go
// since Linux 4.11.
_ = syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, unix.TCP_FASTOPEN_CONNECT, 1)

// Enable TCP quick ACK, John Nagle says
// "Set TCP_QUICKACK. If you find a case where that makes things worse, let me know."
_ = syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, unix.TCP_QUICKACK, 1)

// Enable keep-alive
{
	_ = unix.SetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_KEEPALIVE, 1)
Registered: Sun Jun 16 00:44:34 UTC 2024 - Last Modified: Wed May 22 23:07:14 UTC 2024 - 4.8K bytes - Viewed (3)
guava/src/com/google/common/collect/Platform.java
}

/** Equivalent to Arrays.copyOfRange(source, from, to, arrayOfType.getClass()). */
/*
 * Arrays are a mess from a nullness perspective, and Class instances for object-array types are
 * even worse. For now, we just suppress and move on with our lives.
 *
 * - https://github.com/jspecify/jspecify/issues/65
 *
 * - https://github.com/jspecify/jdk/commit/71d826792b8c7ef95d492c50a274deab938f2552
 */
/*
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Thu Feb 22 21:19:52 UTC 2024 - 5.1K bytes - Viewed (0)
android/guava/src/com/google/common/collect/Platform.java
}

/** Equivalent to Arrays.copyOfRange(source, from, to, arrayOfType.getClass()). */
/*
 * Arrays are a mess from a nullness perspective, and Class instances for object-array types are
 * even worse. For now, we just suppress and move on with our lives.
 *
 * - https://github.com/jspecify/jspecify/issues/65
 *
 * - https://github.com/jspecify/jdk/commit/71d826792b8c7ef95d492c50a274deab938f2552
 */
/*
Registered: Wed Jun 12 16:38:11 UTC 2024 - Last Modified: Thu Feb 22 21:19:52 UTC 2024 - 4.9K bytes - Viewed (0)
src/runtime/memmove_386.s
// 128 because that is the maximum SSE register load (loading all data
// into registers lets us ignore copy direction).
tail:
	// BSR+branch table make almost all memmove/memclr benchmarks worse. Not worth doing.
	TESTL	BX, BX
	JEQ	move_0
	CMPL	BX, $2
	JBE	move_1or2
	CMPL	BX, $4
	JB	move_3
	JE	move_4
	CMPL	BX, $8
	JBE	move_5through8
	CMPL	BX, $16
	JBE	move_9through16
#ifdef GO386_softfloat
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Sat Nov 06 10:24:44 UTC 2021 - 4.4K bytes - Viewed (0)
src/math/j1.go
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Apr 11 16:34:30 UTC 2022 - 13.3K bytes - Viewed (0)