Results 1 - 10 of 18 for VLD1 (0.03 sec)
src/crypto/aes/asm_arm64.s
	MOVD	src+24(FP), R12
	VLD1	(R12), [V0.B16]
	CMP	$12, R9
	BLT	enc128
	BEQ	enc196
enc256:
	VLD1.P	32(R10), [V1.B16, V2.B16]
	AESE	V1.B16, V0.B16
	AESMC	V0.B16, V0.B16
	AESE	V2.B16, V0.B16
	AESMC	V0.B16, V0.B16
enc196:
	VLD1.P	32(R10), [V3.B16, V4.B16]
	AESE	V3.B16, V0.B16
	AESMC	V0.B16, V0.B16
	AESE	V4.B16, V0.B16
	AESMC	V0.B16, V0.B16
enc128:
	VLD1.P	64(R10), [V5.B16, V6.B16, V7.B16, V8.B16]
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 04 17:29:44 UTC 2024 - 6.9K bytes - Viewed (0) -
src/crypto/aes/gcm_arm64.s
	VPMULL	T0.D1, T2.D1, ACCM.Q1
	mulRound(B1)
	VLD1.P	32(aut), [B2.B16, B3.B16]
	mulRound(B2)
	mulRound(B3)
	VLD1.P	32(aut), [B4.B16, B5.B16]
	mulRound(B4)
	mulRound(B5)
	VLD1.P	32(aut), [B6.B16, B7.B16]
	mulRound(B6)
	mulRound(B7)
	MOVD	pTblSave, pTbl
	reduce()
	B	octetsLoop
startSinglesLoop:
	ADD	$14*16, pTbl
	VLD1.P	(pTbl), [T1.B16, T2.B16]
singlesLoop:
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 04 17:29:44 UTC 2024 - 21.5K bytes - Viewed (0) -
src/crypto/sha256/sha256block_arm64.s
	VLD1	(R0), [V0.S4, V1.S4]                     // load h(a,b,c,d,e,f,g,h)
	VLD1.P	64(R2), [V16.S4, V17.S4, V18.S4, V19.S4]
	VLD1.P	64(R2), [V20.S4, V21.S4, V22.S4, V23.S4]
	VLD1.P	64(R2), [V24.S4, V25.S4, V26.S4, V27.S4]
	VLD1	(R2), [V28.S4, V29.S4, V30.S4, V31.S4]   // load 64*4 bytes K constants (K0-K63)
blockloop:
	VLD1.P	16(R1), [V4.B16]                         // load 16 bytes of message
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 04 17:29:44 UTC 2024 - 5.7K bytes - Viewed (0) -
src/crypto/subtle/xor_arm64.s
TEXT ·xorBytes(SB), NOSPLIT|NOFRAME, $0
	MOVD	dst+0(FP), R0
	MOVD	a+8(FP), R1
	MOVD	b+16(FP), R2
	MOVD	n+24(FP), R3
	CMP	$64, R3
	BLT	tail
loop_64:
	VLD1.P	64(R1), [V0.B16, V1.B16, V2.B16, V3.B16]
	VLD1.P	64(R2), [V4.B16, V5.B16, V6.B16, V7.B16]
	VEOR	V0.B16, V4.B16, V4.B16
	VEOR	V1.B16, V5.B16, V5.B16
	VEOR	V2.B16, V6.B16, V6.B16
	VEOR	V3.B16, V7.B16, V7.B16
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Aug 17 18:47:33 UTC 2022 - 1.5K bytes - Viewed (0) -
src/crypto/sha1/sha1block_arm64.s
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 04 17:29:44 UTC 2024 - 3.5K bytes - Viewed (0) -
src/crypto/sha512/sha512block_arm64.s
	// long enough to prefetch
	PRFM	(R3), PLDL3KEEP
	// load digest
	VLD1	(R0), [V8.D2, V9.D2, V10.D2, V11.D2]
loop:
	// load digest in V0-V3, keeping the original in V8-V11
	VMOV	V8.B16, V0.B16
	VMOV	V9.B16, V1.B16
	VMOV	V10.B16, V2.B16
	VMOV	V11.B16, V3.B16
	// load message data in V12-V19
	VLD1.P	64(R1), [V12.D2, V13.D2, V14.D2, V15.D2]
	VLD1.P	64(R1), [V16.D2, V17.D2, V18.D2, V19.D2]
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Mar 04 17:29:44 UTC 2024 - 5K bytes - Viewed (0) -
src/cmd/asm/internal/asm/testdata/arm64enc.s
	VLD1.P	(R19)(R4), [V24.B8, V25.B8]            // 78a2c40c
	VLD1.P	(R20)(R8), [V7.H8, V8.H8, V9.H8]       // 8766c84c
	VLD1.P	32(R30), [V5.B8, V6.B8, V7.B8, V8.B8]  // c523df0c
	VLD1	(R19), V14.B[15]                       // 6e1e404d
	VLD1	(R29), V0.H[1]                         // a04b400d
	VLD1	(R27), V2.S[0]                         // 6283400d
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Mon Jul 24 01:11:41 UTC 2023 - 43.9K bytes - Viewed (0) -
src/runtime/asm_arm64.s
	VLD1.P	4(R0), V2.S[2]
less_than_4:
	TBZ	$1, R2, less_than_2
	VLD1.P	2(R0), V2.H[6]
less_than_2:
	TBZ	$0, R2, done
	VLD1	(R0), V2.B[14]
done:
	AESE	V0.B16, V2.B16
	AESMC	V2.B16, V2.B16
	AESE	V0.B16, V2.B16
	AESMC	V2.B16, V2.B16
	AESE	V0.B16, V2.B16
	AESMC	V2.B16, V2.B16
	VMOV	V2.D[0], R0
	RET
aes0:
	VMOV	V0.D[0], R0
	RET
aes16:
	VLD1	(R0), [V2.B16]
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Sat May 11 20:38:24 UTC 2024 - 43.4K bytes - Viewed (0) -
src/internal/bytealg/equal_arm64.s
	BEQ	one
	CMP	$16, R2       // handle specially if length < 16
	BLO	tail
	BIC	$0x3f, R2, R3
	CBZ	R3, chunk16
	// work with 64-byte chunks
	ADD	R3, R0, R6    // end of chunks
chunk64_loop:
	VLD1.P	(R0), [V0.D2, V1.D2, V2.D2, V3.D2]
	VLD1.P	(R1), [V4.D2, V5.D2, V6.D2, V7.D2]
	VCMEQ	V0.D2, V4.D2, V8.D2
	VCMEQ	V1.D2, V5.D2, V9.D2
	VCMEQ	V2.D2, V6.D2, V10.D2
	VCMEQ	V3.D2, V7.D2, V11.D2
	VAND	V8.B16, V9.B16, V8.B16
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Wed Jan 24 16:07:25 UTC 2024 - 2.5K bytes - Viewed (0) -
src/internal/bytealg/indexbyte_arm64.s
	AND	$0x1f, R2, R10
	BEQ	loop
	// Input string is not 32-byte aligned. We calculate the
	// syndrome value for the aligned 32-byte block containing
	// the first bytes and mask off the irrelevant part.
	VLD1.P	(R3), [V1.B16, V2.B16]
	SUB	$0x20, R9, R4
	ADDS	R4, R2, R2
	VCMEQ	V0.B16, V1.B16, V3.B16
	VCMEQ	V0.B16, V2.B16, V4.B16
	VAND	V5.B16, V3.B16, V3.B16
	VAND	V5.B16, V4.B16, V4.B16
Registered: Wed Jun 12 16:32:35 UTC 2024 - Last Modified: Thu Nov 08 20:52:47 UTC 2018 - 3.3K bytes - Viewed (0)