Fast 24-bit array -> 32-bit array conversion? - c

Quick Summary:
I have an array of 24-bit values. Any suggestion on how to quickly expand the individual 24-bit array elements into 32-bit elements?
Details:
I'm processing incoming video frames in realtime using Pixel Shaders in DirectX 10. A stumbling block is that my frames are coming in from the capture hardware with 24-bit pixels (either as YUV or RGB images), but DX10 takes 32-bit pixel textures. So, I have to expand the 24-bit values to 32-bits before I can load them into the GPU.
I really don't care what I set the remaining 8 bits to, or where the incoming 24-bits are in that 32-bit value - I can fix all that in a pixel shader. But I need to do the conversion from 24-bit to 32-bit really quickly.
I'm not terribly familiar with SIMD SSE operations, but from my cursory glance it doesn't look like I can do the expansion using them, given my reads and writes aren't the same size. Any suggestions? Or am I stuck sequentially massaging this data set?
This feels so very silly - I'm using the pixel shaders for parallelism, but I have to do a sequential per-pixel operation before that. I must be missing something obvious...

The code below should be pretty fast. It copies 4 pixels in each iteration, using only 32-bit read/write instructions. The source and destination pointers should be aligned to 32 bits.
uint32_t *src = ...;
uint32_t *dst = ...;
for (int i=0; i<num_pixels; i+=4) {
uint32_t sa = src[0];
uint32_t sb = src[1];
uint32_t sc = src[2];
dst[i+0] = sa;
dst[i+1] = (sa>>24) | (sb<<8);
dst[i+2] = (sb>>16) | (sc<<16);
dst[i+3] = sc>>8;
src += 3;
}
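Note that the loop above assumes num_pixels is a multiple of 4; otherwise the last iteration over-reads the source and over-writes the destination. A minimal sketch of the same idea with a scalar tail for the leftover pixels (the function name and the little-endian layout are my assumptions, not part of the answer):

```c
#include <stdint.h>
#include <stddef.h>

/* Expand num_pixels packed 24-bit pixels into 32-bit slots.
   src8 should be 4-byte aligned, as the answer requires.
   The 4-at-a-time path leaves trash in the top byte (like the answer's
   loop); the scalar tail writes a clean zero top byte. */
static void conv24to32(const uint8_t *src8, uint32_t *dst, size_t num_pixels)
{
    const uint32_t *src = (const uint32_t *)src8;
    size_t n4 = num_pixels / 4 * 4;           /* pixels handled 4 at a time */
    for (size_t i = 0; i < n4; i += 4) {
        uint32_t sa = src[0], sb = src[1], sc = src[2];
        dst[i + 0] = sa;
        dst[i + 1] = (sa >> 24) | (sb << 8);
        dst[i + 2] = (sb >> 16) | (sc << 16);
        dst[i + 3] = sc >> 8;
        src += 3;
    }
    const uint8_t *tail = (const uint8_t *)src;
    for (size_t i = n4; i < num_pixels; ++i, tail += 3)  /* 0-3 leftovers */
        dst[i] = (uint32_t)tail[0] | ((uint32_t)tail[1] << 8)
               | ((uint32_t)tail[2] << 16);
}
```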
Edit:
Here is a way to do this using the SSSE3 instructions PSHUFB and PALIGNR. The code is written with compiler intrinsics, but it shouldn't be hard to translate to assembly if needed. It copies 16 pixels per iteration. The source and destination pointers must be aligned to 16 bytes, or it will fault. If they aren't aligned, you can make it work by replacing _mm_load_si128 with _mm_loadu_si128 and _mm_store_si128 with _mm_storeu_si128, but this will be slower.
#include <emmintrin.h>
#include <tmmintrin.h>
__m128i *src = ...;
__m128i *dst = ...;
__m128i mask = _mm_setr_epi8(0,1,2,-1, 3,4,5,-1, 6,7,8,-1, 9,10,11,-1);
for (int i=0; i<num_pixels; i+=16) {
__m128i sa = _mm_load_si128(src);
__m128i sb = _mm_load_si128(src+1);
__m128i sc = _mm_load_si128(src+2);
__m128i val = _mm_shuffle_epi8(sa, mask);
_mm_store_si128(dst, val);
val = _mm_shuffle_epi8(_mm_alignr_epi8(sb, sa, 12), mask);
_mm_store_si128(dst+1, val);
val = _mm_shuffle_epi8(_mm_alignr_epi8(sc, sb, 8), mask);
_mm_store_si128(dst+2, val);
val = _mm_shuffle_epi8(_mm_alignr_epi8(sc, sc, 4), mask);
_mm_store_si128(dst+3, val);
src += 3;
dst += 4;
}
SSSE3 (not to be confused with SSE3) will require a relatively new processor: Core 2 or newer, and I believe AMD doesn't support it yet. Performing this with SSE2 instructions only will take a lot more operations, and may not be worth it.
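Since SSSE3 isn't universal, it's worth a runtime check before taking this path. A sketch using GCC/Clang's <cpuid.h> (the helper name is mine; MSVC users would call the __cpuid intrinsic instead):

```c
#include <stdbool.h>
#if defined(__i386__) || defined(__x86_64__)
#include <cpuid.h>
#endif

/* Returns true if CPUID leaf 1 reports SSSE3 support (ECX bit 9). */
static bool has_ssse3(void)
{
#if defined(__i386__) || defined(__x86_64__)
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;
    return (ecx & (1u << 9)) != 0;  /* CPUID.01H:ECX.SSSE3[bit 9] */
#else
    return false;                   /* not an x86 CPU */
#endif
}
```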

SSSE3 is awesome, but for those who can't use it for whatever reason, here's the conversion in x86 assembler, hand-optimized by yours truly. For completeness, I give the conversion in both directions: RGB32->RGB24 and RGB24->RGB32.
Note that interjay's C code leaves trash in the MSB (the alpha channel) of the destination pixels. This might not matter in some applications, but it matters in mine, hence my RGB24->RGB32 code forces the MSB to zero. Similarly, my RGB32->RGB24 code ignores the MSB; this avoids garbage output if the source data has a non-zero alpha channel. These features cost almost nothing in terms of performance, as verified by benchmarks.
For RGB32->RGB24 I was able to beat the VC++ optimizer by about 20%. For RGB24->RGB32 the gain was insignificant. Benchmarking was done on an i5 2500K. I omit the benchmarking code here, but if anyone wants it I'll provide it. The most important optimization was bumping the source pointer as soon as possible (see the ASAP comment). My best guess is that this increases parallelism by allowing the instruction pipeline to prefetch sooner. Other than that I just reordered some instructions to reduce dependencies and overlap memory accesses with bit-bashing.
void ConvRGB32ToRGB24(const UINT *Src, UINT *Dst, UINT Pixels)
{
#if !USE_ASM
for (UINT i = 0; i < Pixels; i += 4) {
UINT sa = Src[i + 0] & 0xffffff;
UINT sb = Src[i + 1] & 0xffffff;
UINT sc = Src[i + 2] & 0xffffff;
UINT sd = Src[i + 3];
Dst[0] = sa | (sb << 24);
Dst[1] = (sb >> 8) | (sc << 16);
Dst[2] = (sc >> 16) | (sd << 8);
Dst += 3;
}
#else
__asm {
mov ecx, Pixels
shr ecx, 2 // 4 pixels at once
jz ConvRGB32ToRGB24_$2
mov esi, Src
mov edi, Dst
ConvRGB32ToRGB24_$1:
mov ebx, [esi + 4] // sb
and ebx, 0ffffffh // sb & 0xffffff
mov eax, [esi + 0] // sa
and eax, 0ffffffh // sa & 0xffffff
mov edx, ebx // copy sb
shl ebx, 24 // sb << 24
or eax, ebx // sa | (sb << 24)
mov [edi + 0], eax // Dst[0]
shr edx, 8 // sb >> 8
mov eax, [esi + 8] // sc
and eax, 0ffffffh // sc & 0xffffff
mov ebx, eax // copy sc
shl eax, 16 // sc << 16
or eax, edx // (sb >> 8) | (sc << 16)
mov [edi + 4], eax // Dst[1]
shr ebx, 16 // sc >> 16
mov eax, [esi + 12] // sd
add esi, 16 // Src += 4 (ASAP)
shl eax, 8 // sd << 8
or eax, ebx // (sc >> 16) | (sd << 8)
mov [edi + 8], eax // Dst[2]
add edi, 12 // Dst += 3
dec ecx
jnz SHORT ConvRGB32ToRGB24_$1
ConvRGB32ToRGB24_$2:
}
#endif
}
void ConvRGB24ToRGB32(const UINT *Src, UINT *Dst, UINT Pixels)
{
#if !USE_ASM
for (UINT i = 0; i < Pixels; i += 4) {
UINT sa = Src[0];
UINT sb = Src[1];
UINT sc = Src[2];
Dst[i + 0] = sa & 0xffffff;
Dst[i + 1] = ((sa >> 24) | (sb << 8)) & 0xffffff;
Dst[i + 2] = ((sb >> 16) | (sc << 16)) & 0xffffff;
Dst[i + 3] = sc >> 8;
Src += 3;
}
#else
__asm {
mov ecx, Pixels
shr ecx, 2 // 4 pixels at once
jz SHORT ConvRGB24ToRGB32_$2
mov esi, Src
mov edi, Dst
push ebp
ConvRGB24ToRGB32_$1:
mov ebx, [esi + 4] // sb
mov edx, ebx // copy sb
mov eax, [esi + 0] // sa
mov ebp, eax // copy sa
and ebx, 0ffffh // sb & 0xffff
shl ebx, 8 // (sb & 0xffff) << 8
and eax, 0ffffffh // sa & 0xffffff
mov [edi + 0], eax // Dst[0]
shr ebp, 24 // sa >> 24
or ebx, ebp // (sa >> 24) | ((sb & 0xffff) << 8)
mov [edi + 4], ebx // Dst[1]
shr edx, 16 // sb >> 16
mov eax, [esi + 8] // sc
add esi, 12 // Src += 3 (ASAP)
mov ebx, eax // copy sc
and eax, 0ffh // sc & 0xff
shl eax, 16 // (sc & 0xff) << 16
or eax, edx // (sb >> 16) | ((sc & 0xff) << 16)
mov [edi + 8], eax // Dst[2]
shr ebx, 8 // sc >> 8
mov [edi + 12], ebx // Dst[3]
add edi, 16 // Dst += 4
dec ecx
jnz SHORT ConvRGB24ToRGB32_$1
pop ebp
ConvRGB24ToRGB32_$2:
}
#endif
}
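To see the zero-alpha guarantee concretely, the two C fallback paths (USE_ASM == 0) round-trip exactly: RGB32 -> RGB24 -> RGB32 preserves the low 24 bits of every pixel and zeroes the alpha byte. This harness is mine, with UINT assumed to be unsigned int:

```c
#include <stdint.h>

typedef unsigned int UINT;

/* C fallback from the answer: pack 4 RGB32 pixels into 3 dwords,
   ignoring the alpha byte of each source pixel. */
static void ConvRGB32ToRGB24(const UINT *Src, UINT *Dst, UINT Pixels)
{
    for (UINT i = 0; i < Pixels; i += 4) {
        UINT sa = Src[i + 0] & 0xffffff;
        UINT sb = Src[i + 1] & 0xffffff;
        UINT sc = Src[i + 2] & 0xffffff;
        UINT sd = Src[i + 3];
        Dst[0] = sa | (sb << 24);
        Dst[1] = (sb >> 8) | (sc << 16);
        Dst[2] = (sc >> 16) | (sd << 8);
        Dst += 3;
    }
}

/* C fallback from the answer: unpack 3 dwords into 4 RGB32 pixels,
   forcing the alpha byte of each destination pixel to zero. */
static void ConvRGB24ToRGB32(const UINT *Src, UINT *Dst, UINT Pixels)
{
    for (UINT i = 0; i < Pixels; i += 4) {
        UINT sa = Src[0];
        UINT sb = Src[1];
        UINT sc = Src[2];
        Dst[i + 0] = sa & 0xffffff;
        Dst[i + 1] = ((sa >> 24) | (sb << 8)) & 0xffffff;
        Dst[i + 2] = ((sb >> 16) | (sc << 16)) & 0xffffff;
        Dst[i + 3] = sc >> 8;
        Src += 3;
    }
}
```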
And while we're at it, here are the same conversions in actual SSSE3 assembly. This only works if you have an assembler (FASM is free) and a CPU that supports SSSE3 (likely, but it's better to check). Note that the intrinsics don't necessarily produce something this efficient; it depends entirely on the tools you use and the platform you're compiling for. Here, it's straightforward: what you see is what you get. This code generates the same output as the x86 code above, and it's about 1.5x faster (on an i5 2500K).
format MS COFF
section '.text' code readable executable
public _ConvRGB32ToRGB24SSE3
; ebp + 8 Src (*RGB32, 16-byte aligned)
; ebp + 12 Dst (*RGB24, 16-byte aligned)
; ebp + 16 Pixels
_ConvRGB32ToRGB24SSE3:
push ebp
mov ebp, esp
mov eax, [ebp + 8]
mov edx, [ebp + 12]
mov ecx, [ebp + 16]
shr ecx, 4
jz done1
movupd xmm7, [mask1]
top1:
movupd xmm0, [eax + 0] ; sa = Src[0]
pshufb xmm0, xmm7 ; sa = _mm_shuffle_epi8(sa, mask)
movupd xmm1, [eax + 16] ; sb = Src[1]
pshufb xmm1, xmm7 ; sb = _mm_shuffle_epi8(sb, mask)
movupd xmm2, xmm1 ; sb1 = sb
pslldq xmm1, 12 ; sb = _mm_slli_si128(sb, 12)
por xmm0, xmm1 ; sa = _mm_or_si128(sa, sb)
movupd [edx + 0], xmm0 ; Dst[0] = sa
psrldq xmm2, 4 ; sb1 = _mm_srli_si128(sb1, 4)
movupd xmm0, [eax + 32] ; sc = Src[2]
pshufb xmm0, xmm7 ; sc = _mm_shuffle_epi8(sc, mask)
movupd xmm1, xmm0 ; sc1 = sc
pslldq xmm0, 8 ; sc = _mm_slli_si128(sc, 8)
por xmm0, xmm2 ; sc = _mm_or_si128(sb1, sc)
movupd [edx + 16], xmm0 ; Dst[1] = sc
psrldq xmm1, 8 ; sc1 = _mm_srli_si128(sc1, 8)
movupd xmm0, [eax + 48] ; sd = Src[3]
pshufb xmm0, xmm7 ; sd = _mm_shuffle_epi8(sd, mask)
pslldq xmm0, 4 ; sd = _mm_slli_si128(sd, 4)
por xmm0, xmm1 ; sd = _mm_or_si128(sc1, sd)
movupd [edx + 32], xmm0 ; Dst[2] = sd
add eax, 64
add edx, 48
dec ecx
jnz top1
done1:
pop ebp
ret
public _ConvRGB24ToRGB32SSE3
; ebp + 8 Src (*RGB24, 16-byte aligned)
; ebp + 12 Dst (*RGB32, 16-byte aligned)
; ebp + 16 Pixels
_ConvRGB24ToRGB32SSE3:
push ebp
mov ebp, esp
mov eax, [ebp + 8]
mov edx, [ebp + 12]
mov ecx, [ebp + 16]
shr ecx, 4
jz done2
movupd xmm7, [mask2]
top2:
movupd xmm0, [eax + 0] ; sa = Src[0]
movupd xmm1, [eax + 16] ; sb = Src[1]
movupd xmm2, [eax + 32] ; sc = Src[2]
movupd xmm3, xmm0 ; sa1 = sa
pshufb xmm0, xmm7 ; sa = _mm_shuffle_epi8(sa, mask)
movupd [edx], xmm0 ; Dst[0] = sa
movupd xmm4, xmm1 ; sb1 = sb
palignr xmm1, xmm3, 12 ; sb = _mm_alignr_epi8(sb, sa1, 12)
pshufb xmm1, xmm7 ; sb = _mm_shuffle_epi8(sb, mask);
movupd [edx + 16], xmm1 ; Dst[1] = sb
movupd xmm3, xmm2 ; sc1 = sc
palignr xmm2, xmm4, 8 ; sc = _mm_alignr_epi8(sc, sb1, 8)
pshufb xmm2, xmm7 ; sc = _mm_shuffle_epi8(sc, mask)
movupd [edx + 32], xmm2 ; Dst[2] = sc
palignr xmm3, xmm3, 4 ; sc1 = _mm_alignr_epi8(sc1, sc1, 4)
pshufb xmm3, xmm7 ; sc1 = _mm_shuffle_epi8(sc1, mask)
movupd [edx + 48], xmm3 ; Dst[3] = sc1
add eax, 48
add edx, 64
dec ecx
jnz top2
done2:
pop ebp
ret
section '.data' data readable writeable align 16
label mask1 dqword
db 0,1,2,4, 5,6,8,9, 10,12,13,14, -1,-1,-1,-1
label mask2 dqword
db 0,1,2,-1, 3,4,5,-1, 6,7,8,-1, 9,10,11,-1

The different input/output sizes are not a barrier to using SIMD, just a speed bump. You would need to chunk the data so that you read and write in full SIMD words (16 bytes).
In this case, you would read 3 SIMD words (48 bytes == 16 rgb pixels), do the expansion, then write 4 SIMD words.
I'm just saying you can use SIMD, I'm not saying you should. The middle bit, the expansion, is still tricky since you have non-uniform shift sizes in different parts of the word.

SSE 4.1 assembly:
PINSRD XMM0, DWORD PTR[ESI], 0
PINSRD XMM0, DWORD PTR[ESI+3], 1
PINSRD XMM0, DWORD PTR[ESI+6], 2
PINSRD XMM0, DWORD PTR[ESI+9], 3
PSLLD XMM0, 8
PSRLD XMM0, 8
MOVNTDQ [EDI], XMM0
add ESI, 12
add EDI, 16
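For reference, here is what that snippet computes, in plain C: four overlapping little-endian 32-bit loads at byte offsets 0, 3, 6 and 9, each masked to its low 24 bits by the PSLLD/PSRLD pair. This scalar version is mine, for illustration only; unlike the overlapping loads, it reads exactly 12 bytes, so no padding is needed:

```c
#include <stdint.h>

/* Scalar reference for the SSE4.1 snippet: expand four packed 24-bit
   pixels into four 32-bit values with a zeroed top byte. */
static void conv4_ref(const uint8_t *src, uint32_t *dst)
{
    for (int i = 0; i < 4; ++i) {
        const uint8_t *p = src + 3 * i;  /* pixel i starts at byte 3*i */
        dst[i] = (uint32_t)p[0] | ((uint32_t)p[1] << 8)
               | ((uint32_t)p[2] << 16);
    }
}
```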

Related

Kernel falls into a boot loop at STI instruction

I am writing an x86_64 kernel for an exam, and it seems to reboot every time it runs an STI instruction, which in my code happens on every boot. I have set up the GDT, IDT and ICWs, and masked all IRQs except IRQ1, which is for my keyboard input. The kernel runs perfectly without STI, except that keyboard input doesn't work.
Here is my bootloader:
main.asm
global start
extern long_mode_start
bits 32
section .text
start:
mov esp, stack_top
call check_multiboot
call check_cpuid
call check_long_mode
call setup_page_tables
call enable_paging
lgdt [gdt64.pointer]
jmp gdt64.code_segment:long_mode_start
hlt
check_multiboot:
cmp eax, 0x36d76289
jne .no_multiboot
ret
.no_multiboot:
mov al, "M"
jmp error
check_cpuid:
pushfd
pop eax
mov ecx, eax
xor eax, 1 << 21
push eax
popfd
pushfd
pop eax
push ecx
popfd
cmp eax, ecx
je .no_cpuid
ret
.no_cpuid:
mov al, "C"
jmp error
check_long_mode:
mov eax, 0x80000000
cpuid
cmp eax, 0x80000001
jb .no_long_mode
mov eax, 0x80000001
cpuid
test edx, 1 << 29
jz .no_long_mode
ret
.no_long_mode:
mov al, "L"
jmp error
setup_page_tables:
mov eax, page_table_l3
or eax, 0b11 ; present, writable
mov [page_table_l4], eax
mov eax, page_table_l2
or eax, 0b11 ; present, writable
mov [page_table_l3], eax
mov ecx, 0 ; counter
.loop:
mov eax, 0x200000 ; 2MiB
mul ecx
or eax, 0b10000011 ; present, writable, huge page
mov [page_table_l2 + ecx * 8], eax
inc ecx ; increment counter
cmp ecx, 512 ; checks if the whole table is mapped
jne .loop ; if not, continue
ret
enable_paging:
; pass page table location to cpu
mov eax, page_table_l4
mov cr3, eax
; enable PAE
mov eax, cr4
or eax, 1 << 5
mov cr4, eax
; enable long mode
mov ecx, 0xC0000080
rdmsr
or eax, 1 << 8
wrmsr
; enable paging
mov eax, cr0
or eax, 1 << 31
mov cr0, eax
ret
error:
; print "ERR: X" where X is the error code
mov dword [0xb8000], 0x4f524f45
mov dword [0xb8004], 0x4f3a4f52
mov dword [0xb8008], 0x4f204f20
mov byte [0xb800a], al
hlt
section .bss
align 4096
page_table_l4:
resb 4096
page_table_l3:
resb 4096
page_table_l2:
resb 4096
stack_bottom:
resb 4096 * 4
stack_top:
section .rodata
gdt64:
dq 0 ; zero entry
.code_segment: equ $ - gdt64
dq (1 << 43) | (1 << 44) | (1 << 47) | (1 << 53) ; code segment
.pointer:
dw $ - gdt64 - 1 ; length
dq gdt64 ; address
main64.asm
global long_mode_start
global load_gdt
global load_idt
global keyboard_handler
global ioport_in
global ioport_out
global enable_interrupts
extern main
extern handle_keyboard_interrupt
section .text
bits 64
long_mode_start:
; load null into all data segment registers
mov ax, 0
mov ss, ax
mov ds, ax
mov es, ax
mov fs, ax
mov gs, ax
call main
hlt
bits 32
load_idt:
mov edx, [esp + 4]
lidt [edx]
ret
keyboard_handler:
pushad
cld
call handle_keyboard_interrupt
popad
iretd
ioport_in:
mov edx, [esp + 4]
in al, dx
ret
ioport_out:
mov edx, [esp + 4]
mov eax, [esp + 8]
out dx, al
ret
bits 16
enable_interrupts:
sti
ret
And here is my kernel:
main.c
#include "io/print.h"
#include "io/input.h"
void print_prompt(){
print_str("> ");
}
void kernel_main() {
print_clear();
print_set_color(PRINT_COLOR_YELLOW, PRINT_COLOR_BLACK);
print_str("Welcome to vgOS v0.1!!");
print_newline();
print_newline();
print_prompt();
}
int main(){
kernel_main();
init_idt();
enable_interrupts();
init_kb();
print_str("here");
print_newline();
while(1);
return 0;
}
input.h
#pragma once
#include <stdint.h>
#define IDT_SIZE 256
#define KERNEL_CODE_SEGMENT_OFFSET 0x8
#define IDT_INTERRUPT_GATE_64BIT 0x0e
#define PIC1_COMMAND_PORT 0x20
#define PIC1_DATA_PORT 0x21
#define PIC2_COMMAND_PORT 0xA0
#define PIC2_DATA_PORT 0xA1
#define KEYBOARD_DATA_PORT 0x60
#define KEYBOARD_STATUS_PORT 0x64
extern void load_gdt();
extern void load_idt(unsigned int idt_address);
extern void keyboard_handler();
extern char ioport_in(unsigned short port);
extern void ioport_out(unsigned short port, unsigned char data);
extern void enable_interrupts();
struct IDTPointer{
uint16_t limit;
unsigned long long base;
} __attribute__((packed));
struct IDTEntry{
uint16_t offset_1; // Offset bits 0-15
uint16_t selector; // Code segment selector
uint8_t ist; // Interrupt Stack Table offset
uint32_t zero;
uint8_t type_attr; // Gate, type, dpl and p fields
uint16_t offset_2; // Offset bits 16-31
uint32_t offset_3; // Offset bits 32-63
} __attribute__((packed));
void init_idt();
void init_kb();
input.c
#include "input.h"
#include "print.h"
// Declare IDT
struct IDTEntry IDT[IDT_SIZE];
void init_idt(){
// Set IDT keyboard entry
uint64_t offset = (uint64_t)keyboard_handler;
IDT[0x21].offset_1 = offset & 0x000000000000FFFF;
IDT[0x21].selector = KERNEL_CODE_SEGMENT_OFFSET;
IDT[0x21].ist = 0xE; // Set gate type to 'Interrupt'
IDT[0x21].zero = 0; // 0 for testing purposes
IDT[0x21].type_attr = IDT_INTERRUPT_GATE_64BIT;
IDT[0x21].offset_2 = (offset & 0x00000000FFFF0000) >> 16;
IDT[0x21].offset_3 = (offset & 0xFFFFFFFF00000000) >> 32;
// Setup ICWs
// ICW1
ioport_out(PIC1_COMMAND_PORT, 0x11);
ioport_out(PIC2_COMMAND_PORT, 0x11);
// ICW2
ioport_out(PIC1_DATA_PORT, 0x20);
ioport_out(PIC2_DATA_PORT, 0x28);
// ICW3
ioport_out(PIC1_DATA_PORT, 0x4);
ioport_out(PIC2_DATA_PORT, 0x2);
// ICW4
ioport_out(PIC1_DATA_PORT, 0x01);
ioport_out(PIC2_DATA_PORT, 0x01);
// Mask all interrupts
ioport_out(PIC1_DATA_PORT, 0xff);
ioport_out(PIC2_DATA_PORT, 0xff);
// Load IDT data structure
struct IDTPointer idt_ptr;
idt_ptr.limit = (sizeof(struct IDTEntry) * IDT_SIZE) - 1;
idt_ptr.base = (unsigned long long)(&IDT);
load_idt(&idt_ptr);
}
void init_kb(){
// 0xFD = 1111 1101 - Unmask IRQ1
ioport_out(PIC1_DATA_PORT, 0xFD);
}
void handle_keyboard_interrupt(){
ioport_out(PIC1_COMMAND_PORT, 0x20);
unsigned char status = ioport_in(KEYBOARD_STATUS_PORT);
if(status & 0x1){
char keycode = ioport_in(KEYBOARD_DATA_PORT);
if(keycode < 0) return;
print_char(keycode);
}
}

Understanding Clang's optimization when pointer is zero

In short: try switching the foos pointer from 0 to 1 here:
godbolt - compiler explorer link - what is happening?
I was surprised at how many instructions came out of clang when I compiled the following C code, and I noticed that it only happens when the pointer foos is zero (x86-64 clang 12.0.1 with -O2 or -O3).
#include <stdint.h>
typedef uint8_t u8;
typedef uint32_t u32;
typedef struct {
u32 x;
u32 y;
}Foo;
u32 count = 500;
int main()
{
u8 *foos = (u8 *)0;
u32 element_size = 8;
u32 offset = 0;
for(u32 i=0;i<count;i++)
{
u32 *p = (u32 *)(foos + element_size*i);
*p = i;
}
return 0;
}
This is the output when the pointer is zero.
main: # #main
mov r8d, dword ptr [rip + count]
test r8, r8
je .LBB0_6
lea rcx, [r8 - 1]
mov eax, r8d
and eax, 3
cmp rcx, 3
jae .LBB0_7
xor ecx, ecx
jmp .LBB0_3
.LBB0_7:
and r8d, -4
mov esi, 16
xor ecx, ecx
.LBB0_8: # =>This Inner Loop Header: Depth=1
lea edi, [rsi - 16]
and edi, -32
mov dword ptr [rdi], ecx
lea edi, [rsi - 8]
and edi, -24
lea edx, [rcx + 1]
mov dword ptr [rdi], edx
mov edx, esi
and edx, -16
lea edi, [rcx + 2]
mov dword ptr [rdx], edi
lea edx, [rsi + 8]
and edx, -8
lea edi, [rcx + 3]
mov dword ptr [rdx], edi
add rcx, 4
add rsi, 32
cmp r8, rcx
jne .LBB0_8
.LBB0_3:
test rax, rax
je .LBB0_6
lea rdx, [8*rcx]
.LBB0_5: # =>This Inner Loop Header: Depth=1
mov esi, edx
and esi, -8
mov dword ptr [rsi], ecx
add rdx, 8
add ecx, 1
add rax, -1
jne .LBB0_5
.LBB0_6:
xor eax, eax
ret
count:
.long 500 # 0x1f4
Can you please help me understand what is happening here? I don't know assembly very well. The AND with 3 suggests to me that there's some alignment branching. The top part of .LBB0_8 looks very strange to me...
This is loop unrolling.
The code first checks if count is greater than 3, and if so, branches to LBB0_7, which sets up loop variables and drops into the loop at LBB0_8. This loop does 4 steps per iteration, as long as there are still 4 or more to do. Afterwards it falls through to the "slow path" at LBB0_3/LBB0_5 that just does one step per iteration.
That slow path is also very similar to what you get when you compile the code with a non-zero value for that pointer.
As for why this happens, I don't know. Initially I thought the compiler proves that a NULL dereference will happen inside the loop and optimizes based on that, but usually that's akin to replacing the loop contents with __builtin_unreachable();, which causes it to throw out the loop entirely. I still can't rule it out, but I've seen compilers throw out provably-UB code many times, so it seems unlikely that UB alone causes this.
Then I was thinking maybe the fact that 0 requires no additional calculation, but all it'd have to change was mov esi, 16 to mov esi, 17, so it'd have the same amount of instructions.
What's also interesting is that on x86_64, it generates a loop with 4 steps per iteration, whereas on arm64 it generates one with 2 steps per iteration.
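The structure described above can be written out by hand in C: a 4-per-iteration fast path that runs while at least 4 elements remain, followed by a one-at-a-time remainder loop. This is a sketch with made-up names mirroring the question's stride-8 stores, not clang's actual output:

```c
#include <stdint.h>
#include <stddef.h>

/* Writes i into the first 4 bytes of each 8-byte element: 4 elements
   per iteration on the fast path, then a scalar remainder loop. */
static void store_indices(uint8_t *foos, uint32_t count)
{
    uint32_t i = 0;
    for (; i + 4 <= count; i += 4) {          /* unrolled fast path */
        *(uint32_t *)(foos + 8 * (size_t)(i + 0)) = i + 0;
        *(uint32_t *)(foos + 8 * (size_t)(i + 1)) = i + 1;
        *(uint32_t *)(foos + 8 * (size_t)(i + 2)) = i + 2;
        *(uint32_t *)(foos + 8 * (size_t)(i + 3)) = i + 3;
    }
    for (; i < count; ++i)                    /* scalar remainder */
        *(uint32_t *)(foos + 8 * (size_t)i) = i;
}
```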

Division performance for a x32 ELF on a x64 OS

In the following example, running a 32-bit ELF on a 64-bit architecture is faster and I don't understand why. I tried two examples: one using a division, the other a multiplication. The multiplication performs as expected; however, the division performance is surprising.
We see on the assembly that the compiler is calling _alldiv which emulates a 64-bit division on a 32-bit architecture, so it must be slower than simply using the assembly instruction idiv. So I don't understand the results I got:
My setup is: Windows 10 x64, Visual Studio 2019
To time the code I use Measure-Command { .\out.exe }:
Multiplication
32-bit ELF: 3360 ms
64-bit ELF: 1469 ms
Division
32-bit ELF: 7383 ms
64-bit ELF: 8567 ms
Code
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
#include <Windows.h>
volatile int64_t m = 32;
volatile int64_t n = 12;
volatile int64_t result;
int main(void)
{
for (size_t i = 0; i < (1 << 30); i++)
{
# ifdef DIVISION
result = m / n;
# else
result = m * n;
# endif
m += 1;
n += 3;
}
}
64-bit disassembly (division)
for (size_t i = 0; i < (1 << 30); i++)
00007FF60DA81000 mov r8d,40000000h
00007FF60DA81006 nop word ptr [rax+rax]
{
result = m / n;
00007FF60DA81010 mov rcx,qword ptr [n (07FF60DA83038h)]
00007FF60DA81017 mov rax,qword ptr [m (07FF60DA83040h)]
00007FF60DA8101E cqo
00007FF60DA81020 idiv rax,rcx
00007FF60DA81023 mov qword ptr [result (07FF60DA83648h)],rax
m += 1;
00007FF60DA8102A mov rax,qword ptr [m (07FF60DA83040h)]
00007FF60DA81031 inc rax
00007FF60DA81034 mov qword ptr [m (07FF60DA83040h)],rax
n += 3;
00007FF60DA8103B mov rax,qword ptr [n (07FF60DA83038h)]
00007FF60DA81042 add rax,3
00007FF60DA81046 mov qword ptr [n (07FF60DA83038h)],rax
00007FF60DA8104D sub r8,1
00007FF60DA81051 jne main+10h (07FF60DA81010h)
}
}
32-bit disassembly (division)
for (size_t i = 0; i < (1 << 30); i++)
00A41002 mov edi,40000000h
00A41007 nop word ptr [eax+eax]
{
result = m / n;
00A41010 mov edx,dword ptr [n (0A43018h)]
00A41016 mov eax,dword ptr ds:[00A4301Ch]
00A4101B mov esi,dword ptr [m (0A43020h)]
00A41021 mov ecx,dword ptr ds:[0A43024h]
00A41027 push eax
00A41028 push edx
00A41029 push ecx
00A4102A push esi
00A4102B call _alldiv (0A41CD0h)
00A41030 mov dword ptr [result (0A433A0h)],eax
00A41035 mov dword ptr ds:[0A433A4h],edx
m += 1;
00A4103B mov eax,dword ptr [m (0A43020h)]
00A41040 mov ecx,dword ptr ds:[0A43024h]
00A41046 add eax,1
00A41049 mov dword ptr [m (0A43020h)],eax
00A4104E adc ecx,0
00A41051 mov dword ptr ds:[0A43024h],ecx
n += 3;
00A41057 mov eax,dword ptr [n (0A43018h)]
00A4105C mov ecx,dword ptr ds:[0A4301Ch]
00A41062 add eax,3
00A41065 mov dword ptr [n (0A43018h)],eax
00A4106A adc ecx,0
00A4106D mov dword ptr ds:[0A4301Ch],ecx
00A41073 sub edi,1
00A41076 jne main+10h (0A41010h)
}
}
Edit
To investigate further as Chris Dodd, I have slightly modified my code as follow:
volatile int64_t m = 32000000000;
volatile int64_t n = 12000000000;
volatile int64_t result;
This time I have these results:
Division
32-bit ELF: 22407 ms
64-bit ELF: 17812 ms
If you look at instruction timings for x86 processors, it turns out that on recent Intel processors a 64-bit divide is 3-4x as expensive as a 32-bit divide. And if you look at the internals of _alldiv (link in the comments above), you'll see that for your values, which always fit in 32 bits, it uses a single 32-bit divide...
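The trick _alldiv exploits can be sketched in C: test at runtime whether both operands fit in 32 bits and, if so, use the much cheaper 32-bit divide. The function name is mine, and this sketch only takes the fast path for non-negative operands; the real _alldiv handles sign adjustment as well:

```c
#include <stdint.h>

/* 64-bit signed divide that falls back to 32-bit division when both
   operands happen to fit in 32 bits (the common case in the benchmark). */
static int64_t div64(int64_t a, int64_t b)
{
    if (a >= 0 && b > 0 && a <= UINT32_MAX && b <= UINT32_MAX)
        return (uint32_t)a / (uint32_t)b;   /* cheap 32-bit divide */
    return a / b;                           /* full 64-bit divide */
}
```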

Seeking maximum bitmap (aka bit array) performance with C/Intel assembly

Following on from my two previous questions, How to improve memory performance/data locality of 64-bit C/intel assembly program and Using C/Intel assembly, what is the fastest way to test if a 128-byte memory block contains all zeros?, I have further reduced the running time of the test program mentioned in these questions from 150 seconds down to 62 seconds, as I will describe below.
The 64-bit program has five 4 GB lookup tables (bytevecM, bytevecD, bytevecC, bytevecL, bytevecX). To reduce the (huge) number of cache misses, analysed in my last question, I added five 4 MB bitmaps, one per lookup table.
Here is the original inner loop:
psz = (size_t*)&bytevecM[(unsigned int)m7 & 0xffffff80];
if (psz[0] == 0 && psz[1] == 0
&& psz[2] == 0 && psz[3] == 0
&& psz[4] == 0 && psz[5] == 0
&& psz[6] == 0 && psz[7] == 0
&& psz[8] == 0 && psz[9] == 0
&& psz[10] == 0 && psz[11] == 0
&& psz[12] == 0 && psz[13] == 0
&& psz[14] == 0 && psz[15] == 0) continue;
// ... rinse and repeat for bytevecD, bytevecC, bytevecL, bytevecX
// expensive inner loop that scans 128 byte chunks from the 4 GB lookup tables...
The idea behind this simple "pre-check" was to avoid the expensive inner loop if all 128 bytes were zero. However, profiling showed that this pre-check was the primary bottleneck due to huge numbers of cache misses, as discussed last time. So I created a 4 MB bitmap to do the pre-check. (BTW, around 36% of 128-byte blocks are zero, not 98% as I mistakenly reported last time).
Here is the code I used to create a 4 MB bitmap from a 4 GB lookup table:
// Last chunk index (bitmap size=((LAST_CHUNK_IDX+1)>>3)=4,194,304 bytes)
#define LAST_CHUNK_IDX 33554431
void make_bitmap(
const unsigned char* bytevec, // in: byte vector
unsigned char* bitvec // out: bitmap
)
{
unsigned int uu;
unsigned int ucnt = 0;
unsigned int byte;
unsigned int bit;
const size_t* psz;
for (uu = 0; uu <= LAST_CHUNK_IDX; ++uu)
{
psz = (size_t*)&bytevec[uu << 7];
if (psz[0] == 0 && psz[1] == 0
&& psz[2] == 0 && psz[3] == 0
&& psz[4] == 0 && psz[5] == 0
&& psz[6] == 0 && psz[7] == 0
&& psz[8] == 0 && psz[9] == 0
&& psz[10] == 0 && psz[11] == 0
&& psz[12] == 0 && psz[13] == 0
&& psz[14] == 0 && psz[15] == 0) continue;
++ucnt;
byte = uu >> 3;
bit = (uu & 7);
bitvec[byte] |= (1 << bit);
}
printf("ucnt=%u hits from %u\n", ucnt, LAST_CHUNK_IDX+1);
}
Suggestions for a better way to do this are welcome.
With the bitmaps created via the function above, I then changed the "pre-check" to use the 4 MB bitmaps, instead of the 4 GB lookup tables, like so:
if ( (bitvecM[m7 >> 10] & (1 << ((m7 >> 7) & 7))) == 0 ) continue;
// ... rinse and repeat for bitvecD, bitvecC, bitvecL, bitvecX
// expensive inner loop that scans 128 byte chunks from the 4 GB lookup tables...
This was "successful" in that the running time was reduced from 150 seconds down to 62 seconds in the simple single-threaded case. However, VTune still reports some pretty big numbers, as shown below.
I profiled a more realistic test with eight simultaneous threads running across different ranges. The VTune output of the inner loop check for zero blocks is shown below:
> m7 = (unsigned int)( (m6 ^ q7) * H_PRIME );
> if ( (bitvecM[m7 >> 10] & (1 << ((m7 >> 7) & 7))) == 0 ) continue;
0x1400025c7 Block 15:
mov eax, r15d 1.058s
mov edx, ebx 0.109s
xor eax, ecx 0.777s
imul eax, eax, 0xf4243 1.088s
mov r9d, eax 3.369s
shr eax, 0x7 0.123s
and eax, 0x7 1.306s
movzx ecx, al 1.319s
mov eax, r9d 0.156s
shr rax, 0xa 0.248s
shl edx, cl 1.321s
test byte ptr [rax+r10*1], dl 1.832s
jz 0x140007670 2.037s
> d7 = (unsigned int)( (s6.m128i_i32[0] ^ q7) * H_PRIME );
> if ( (bitvecD[d7 >> 10] & (1 << ((d7 >> 7) & 7))) == 0 ) continue;
0x1400025f3 Block 16:
mov eax, dword ptr [rsp+0x30] 104.983s
mov edx, ebx 1.663s
xor eax, r15d 0.062s
imul eax, eax, 0xf4243 0.513s
mov edi, eax 1.172s
shr eax, 0x7 0.140s
and eax, 0x7 0.062s
movzx ecx, al 0.575s
mov eax, edi 0.689s
shr rax, 0xa 0.016s
shl edx, cl 0.108s
test byte ptr [rax+r11*1], dl 1.591s
jz 0x140007670 1.087s
> c7 = (unsigned int)( (s6.m128i_i32[1] ^ q7) * H_PRIME );
> if ( (bitvecC[c7 >> 10] & (1 << ((c7 >> 7) & 7))) == 0 ) continue;
0x14000261f Block 17:
mov eax, dword ptr [rsp+0x34] 75.863s
mov edx, 0x1 1.097s
xor eax, r15d 0.031s
imul eax, eax, 0xf4243 0.265s
mov ebx, eax 0.512s
shr eax, 0x7 0.016s
and eax, 0x7 0.233s
movzx ecx, al 0.233s
mov eax, ebx 0.279s
shl edx, cl 0.109s
mov rcx, qword ptr [rsp+0x58] 0.652s
shr rax, 0xa 0.171s
movzx ecx, byte ptr [rax+rcx*1] 0.126s
test cl, dl 77.918s
jz 0x140007667
> l7 = (unsigned int)( (s6.m128i_i32[2] ^ q7) * H_PRIME );
> if ( (bitvecL[l7 >> 10] & (1 << ((l7 >> 7) & 7))) == 0 ) continue;
0x140002655 Block 18:
mov eax, dword ptr [rsp+0x38] 0.980s
mov edx, 0x1 0.794s
xor eax, r15d 0.062s
imul eax, eax, 0xf4243 0.187s
mov r11d, eax 0.278s
shr eax, 0x7 0.062s
and eax, 0x7 0.218s
movzx ecx, al 0.218s
mov eax, r11d 0.186s
shl edx, cl 0.031s
mov rcx, qword ptr [rsp+0x50] 0.373s
shr rax, 0xa 0.233s
movzx ecx, byte ptr [rax+rcx*1] 0.047s
test cl, dl 55.060s
jz 0x14000765e
In addition to that, large amounts of time were (confusingly to me) attributed to this line:
> for (q6 = 1; q6 < 128; ++q6) {
0x1400075a1 Block 779:
inc edx 0.124s
mov dword ptr [rsp+0x10], edx
cmp edx, 0x80 0.031s
jl 0x140002574
mov ecx, dword ptr [rsp+0x4]
mov ebx, dword ptr [rsp+0x48]
...
0x140007575 Block 772:
mov edx, dword ptr [rsp+0x10] 0.699s
...
0x14000765e Block 789 (note: jz in l7 section above jumps here if zero):
mov edx, dword ptr [rsp+0x10] 1.169s
jmp 0x14000757e 0.791s
0x140007667 Block 790 (note: jz in c7 section above jumps here if zero):
mov edx, dword ptr [rsp+0x10] 2.261s
jmp 0x140007583 1.461s
0x140007670 Block 791 (note: jz in m7/d7 section above jumps here if zero):
mov edx, dword ptr [rsp+0x10] 108.355s
jmp 0x140007588 6.922s
I don't fully understand the big numbers in the VTune output above. If anyone can shed more light on these numbers, I'm all ears.
It seems to me that my five 4 MB bitmaps are bigger than my Core i7 3770 processor can fit into its 8 MB L3 cache, leading to many cache misses (though far fewer than before). If my CPU had a 30 MB L3 cache (as the upcoming Ivy Bridge-E has), I speculate that this program would run a lot faster because all five bitmaps would comfortably fit into the L3 cache. Is that right?
Further to that, since the code to test the bitmaps, namely:
m7 = (unsigned int)( (m6 ^ q7) * H_PRIME );
(bitvecM[m7 >> 10] & (1 << ((m7 >> 7) & 7))) == 0
now appears five times in the inner loop, any suggestions for speeding up this code are very welcome.
Within the core bits of the loop, using the _bittest() MSVC intrinsic for the bitmap check combines the shl/test combo the compiler creates into a single instruction with (on SandyBridge) no latency/throughput penalty, i.e. it should shave a few cycles off.
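For reference, the bitmap pre-check factored into a helper; the returned expression is exactly the computation that a BT instruction (or MSVC's _bittest() intrinsic) performs. The helper name is mine; the bit layout matches the question's make_bitmap:

```c
#include <stdint.h>

/* Returns 1 if the 128-byte chunk containing hash is marked nonzero:
   bit (hash>>7) of the bitmap, i.e. byte (hash>>10), bit ((hash>>7)&7). */
static int chunk_maybe_nonzero(const uint8_t *bitvec, uint32_t hash)
{
    uint32_t chunk = hash >> 7;              /* 128-byte chunk index */
    return (bitvec[chunk >> 3] >> (chunk & 7)) & 1;
}
```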
Beyond that, I can only think of calculating the bitmaps by map-reducing the 128-byte blocks via recursive POR, as a variation on your zero testing that might be worth benchmarking:
for (int i = 0; i < MAX_IDX; i++) {
__m128i v[8];
__m128i* ptr = ...[i << ...];
v[0] = _mm_load_si128(ptr + 0);
v[1] = _mm_load_si128(ptr + 1);
v[2] = _mm_load_si128(ptr + 2);
v[3] = _mm_load_si128(ptr + 3);
v[4] = _mm_load_si128(ptr + 4);
v[5] = _mm_load_si128(ptr + 5);
v[6] = _mm_load_si128(ptr + 6);
v[7] = _mm_load_si128(ptr + 7);
v[0] = _mm_or_si128(v[0], v[1]);
v[2] = _mm_or_si128(v[2], v[3]);
v[4] = _mm_or_si128(v[4], v[5]);
v[6] = _mm_or_si128(v[6], v[7]);
v[0] = _mm_or_si128(v[0], v[2]);
v[2] = _mm_or_si128(v[4], v[6]);
v[0] = _mm_or_si128(v[0], v[2]);
if (_mm_movemask_epi8(_mm_cmpeq_epi8(_mm_setzero_si128(), v[0])) != 0xFFFF) {
// not all zero: PCMPEQB sets 0xFF only for zero bytes, so any mask
// short of the full 0xFFFF means some byte was nonzero
}
...
}
At this point, the pure load / accumulate-OR / extract-mask approach might be better than a tight loop of SSE4.1 PTEST because there's no flags dependency and no branches.
For the 128-byte buffer, do the comparisons with larger integers.
unsigned char cbuf[128];
const unsigned long long *lbuf = (const unsigned long long *)cbuf; // view the buffer as 64-bit words
size_t i;
for (i = 0; i < 128 / sizeof(*lbuf); i++) {
if (lbuf[i]) return false; // something is nonzero
}
return true; // all zero

How to optimize C-code with SSE-intrinsics for packed 32x32 => 64-bit multiplies, and unpacking the halves of those results for (Galois Fields)

I've been struggling for a while with the performance of the network coding in an application I'm developing (see Optimzing SSE-code, Improving performance of network coding-encoding and OpenCL distribution). Now I'm quite close to achieve acceptable performance. This is the current state of the innermost loop (which is where >99% of the execution time is being spent):
while(elementIterations-- >0)
{
unsigned int firstMessageField = *(currentMessageGaloisFieldsArray++);
unsigned int secondMessageField = *(currentMessageGaloisFieldsArray++);
__m128i valuesToMultiply = _mm_set_epi32(0, secondMessageField, 0, firstMessageField);
__m128i mulitpliedHalves = _mm_mul_epu32(valuesToMultiply, fragmentCoefficentVector);
}
Do you have any suggestions on how to further optimize this? I understand that it's hard to do without more context but any help is appreciated!
Now that I'm awake, here's my answer:
In your original code, the bottleneck is almost certainly _mm_set_epi32. This single intrinsic gets compiled into this mess in your assembly:
633415EC xor edi,edi
633415EE movd xmm3,edi
...
633415F6 xor ebx,ebx
633415F8 movd xmm4,edi
633415FC movd xmm5,ebx
63341600 movd xmm0,esi
...
6334160B punpckldq xmm5,xmm3
6334160F punpckldq xmm0,xmm4
...
63341618 punpckldq xmm0,xmm5
What is this? 9 instructions?!?!?! Pure overhead...
Another place that seems odd is that the compiler didn't merge the adds and loads:
movdqa xmm3,xmmword ptr [ecx-10h]
paddq xmm0,xmm3
should have been merged into:
paddq xmm0,xmmword ptr [ecx-10h]
I'm not sure if the compiler went brain-dead, or if it actually had a legitimate reason to do that... Anyways, it's a small thing compared to the _mm_set_epi32.
Disclaimer: The code I will present from here on violates strict-aliasing. But non-standard compliant methods are often needed to achieve maximum performance.
Solution 1: No Vectorization
The loop is actually simpler than it looks. Since there isn't a lot of arithmetic, it might be better to just not vectorize:
// Test Data
unsigned __int32 fragmentCoefficentVector = 1000000000;
__declspec(align(16)) int currentMessageGaloisFieldsArray_[8] = {10,11,12,13,14,15,16,17};
int *currentMessageGaloisFieldsArray = currentMessageGaloisFieldsArray_;
__m128i currentUnModdedGaloisFieldFragments_[8];
__m128i *currentUnModdedGaloisFieldFragments = currentUnModdedGaloisFieldFragments_;
memset(currentUnModdedGaloisFieldFragments,0,8 * sizeof(__m128i));
int elementIterations = 4;
// The Loop
while (elementIterations > 0){
elementIterations -= 1;
// Default 32 x 32 -> 64-bit multiply code
unsigned __int64 r0 = currentMessageGaloisFieldsArray[0] * (unsigned __int64)fragmentCoefficentVector;
unsigned __int64 r1 = currentMessageGaloisFieldsArray[1] * (unsigned __int64)fragmentCoefficentVector;
// Use this for Visual Studio. VS doesn't know how to optimize 32 x 32 -> 64-bit multiply
// unsigned __int64 r0 = __emulu(currentMessageGaloisFieldsArray[0], fragmentCoefficentVector);
// unsigned __int64 r1 = __emulu(currentMessageGaloisFieldsArray[1], fragmentCoefficentVector);
((__int64*)currentUnModdedGaloisFieldFragments)[0] += r0 & 0x00000000ffffffff;
((__int64*)currentUnModdedGaloisFieldFragments)[1] += r0 >> 32;
((__int64*)currentUnModdedGaloisFieldFragments)[2] += r1 & 0x00000000ffffffff;
((__int64*)currentUnModdedGaloisFieldFragments)[3] += r1 >> 32;
currentMessageGaloisFieldsArray += 2;
currentUnModdedGaloisFieldFragments += 2;
}
Which compiles to this on x64:
$LL4#main:
mov ecx, DWORD PTR [rbx]
mov rax, r11
add r9, 32 ; 00000020H
add rbx, 8
mul rcx
mov ecx, DWORD PTR [rbx-4]
mov r8, rax
mov rax, r11
mul rcx
mov ecx, r8d
shr r8, 32 ; 00000020H
add QWORD PTR [r9-48], rcx
add QWORD PTR [r9-40], r8
mov ecx, eax
shr rax, 32 ; 00000020H
add QWORD PTR [r9-24], rax
add QWORD PTR [r9-32], rcx
dec r10
jne SHORT $LL4#main
and this on x86:
$LL4#main:
mov eax, DWORD PTR [esi]
mul DWORD PTR _fragmentCoefficentVector$[esp+224]
mov ebx, eax
mov eax, DWORD PTR [esi+4]
mov DWORD PTR _r0$31463[esp+228], edx
mul DWORD PTR _fragmentCoefficentVector$[esp+224]
add DWORD PTR [ecx-16], ebx
mov ebx, DWORD PTR _r0$31463[esp+228]
adc DWORD PTR [ecx-12], edi
add DWORD PTR [ecx-8], ebx
adc DWORD PTR [ecx-4], edi
add DWORD PTR [ecx], eax
adc DWORD PTR [ecx+4], edi
add DWORD PTR [ecx+8], edx
adc DWORD PTR [ecx+12], edi
add esi, 8
add ecx, 32 ; 00000020H
dec DWORD PTR tv150[esp+224]
jne SHORT $LL4#main
It's possible that both of these are already faster than your original (SSE) code... On x64, unrolling it will make it even better.
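The low/high split that both listings implement is exact: accumulating r & 0xffffffff and r >> 32 into separate 64-bit sums loses nothing, since the two halves always reconstruct the full product. A minimal scalar sketch of that step (the helper name is invented here):

```c
#include <stdint.h>

/* The low/high split used in Solution 1, on plain integers:
 * a full 32x32 -> 64-bit product is accumulated as two separate
 * 64-bit sums of its low and high 32-bit halves. */
static void accumulate_halves(uint32_t field, uint32_t coeff,
                              uint64_t *lo_acc, uint64_t *hi_acc)
{
    uint64_t r = (uint64_t)field * coeff; /* full 64-bit product */
    *lo_acc += r & 0xffffffffu;           /* low 32 bits of the product */
    *hi_acc += r >> 32;                   /* high 32 bits of the product */
}
```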
Solution 2: SSE2 Integer Shuffle
This solution assumes allZero is really all zeros. It unrolls the loop to 2 iterations:
// Test Data
__m128i allZero = _mm_setzero_si128();
__m128i fragmentCoefficentVector = _mm_set1_epi32(1000000000);
__declspec(align(16)) int currentMessageGaloisFieldsArray_[8] = {10,11,12,13,14,15,16,17};
int *currentMessageGaloisFieldsArray = currentMessageGaloisFieldsArray_;
__m128i currentUnModdedGaloisFieldFragments_[8];
__m128i *currentUnModdedGaloisFieldFragments = currentUnModdedGaloisFieldFragments_;
memset(currentUnModdedGaloisFieldFragments,0,8 * sizeof(__m128i));
int elementIterations = 4;
// The Loop
while(elementIterations > 1){
elementIterations -= 2;
// Load 4 elements. If needed use unaligned load instead.
// messageField = {a, b, c, d}
__m128i messageField = _mm_load_si128((__m128i*)currentMessageGaloisFieldsArray);
// Get into this form:
// values0 = {a, x, b, x}
// values1 = {c, x, d, x}
__m128i values0 = _mm_shuffle_epi32(messageField,216);
__m128i values1 = _mm_shuffle_epi32(messageField,114);
// Multiply by "fragmentCoefficentVector"
values0 = _mm_mul_epu32(values0, fragmentCoefficentVector);
values1 = _mm_mul_epu32(values1, fragmentCoefficentVector);
__m128i halves0 = _mm_unpacklo_epi32(values0, allZero);
__m128i halves1 = _mm_unpackhi_epi32(values0, allZero);
__m128i halves2 = _mm_unpacklo_epi32(values1, allZero);
__m128i halves3 = _mm_unpackhi_epi32(values1, allZero);
halves0 = _mm_add_epi64(halves0, currentUnModdedGaloisFieldFragments[0]);
halves1 = _mm_add_epi64(halves1, currentUnModdedGaloisFieldFragments[1]);
halves2 = _mm_add_epi64(halves2, currentUnModdedGaloisFieldFragments[2]);
halves3 = _mm_add_epi64(halves3, currentUnModdedGaloisFieldFragments[3]);
currentUnModdedGaloisFieldFragments[0] = halves0;
currentUnModdedGaloisFieldFragments[1] = halves1;
currentUnModdedGaloisFieldFragments[2] = halves2;
currentUnModdedGaloisFieldFragments[3] = halves3;
currentMessageGaloisFieldsArray += 4;
currentUnModdedGaloisFieldFragments += 4;
}
which gets compiled to this on x86 (x64 isn't too different):
$LL4#main:
movdqa xmm1, XMMWORD PTR [esi]
pshufd xmm0, xmm1, 216 ; 000000d8H
pmuludq xmm0, xmm3
movdqa xmm4, xmm0
punpckhdq xmm0, xmm2
paddq xmm0, XMMWORD PTR [eax-16]
pshufd xmm1, xmm1, 114 ; 00000072H
movdqa XMMWORD PTR [eax-16], xmm0
pmuludq xmm1, xmm3
movdqa xmm0, xmm1
punpckldq xmm4, xmm2
paddq xmm4, XMMWORD PTR [eax-32]
punpckldq xmm0, xmm2
paddq xmm0, XMMWORD PTR [eax]
punpckhdq xmm1, xmm2
paddq xmm1, XMMWORD PTR [eax+16]
movdqa XMMWORD PTR [eax-32], xmm4
movdqa XMMWORD PTR [eax], xmm0
movdqa XMMWORD PTR [eax+16], xmm1
add esi, 16 ; 00000010H
add eax, 64 ; 00000040H
dec ecx
jne SHORT $LL4#main
Only slightly longer than the non-vectorized version for two iterations. This uses very few registers, so you can further unroll this even on x86.
Explanations:
As Paul R mentioned, unrolling to two iterations allows you to combine the initial load into one SSE load. This also has the benefit of getting your data into the SSE registers.
Since the data starts off in the SSE registers, _mm_set_epi32 (which gets compiled into about ~9 instructions in your original code) can be replaced with a single _mm_shuffle_epi32.
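For reference, the two shuffle immediates used above decode as _MM_SHUFFLE values: 216 is _MM_SHUFFLE(3,1,2,0) and 114 is _MM_SHUFFLE(1,3,0,2), which place {a,b} and {c,d} into the even lanes that PMULUDQ reads. A small sketch (the helper name decode_shuffles is invented here) that makes the lane movement visible:

```c
#include <emmintrin.h>
#include <stdint.h>

/* Decode the two shuffle immediates used above.
 * 216 == _MM_SHUFFLE(3,1,2,0): {a,b,c,d} -> {a,c,b,d}
 * 114 == _MM_SHUFFLE(1,3,0,2): {a,b,c,d} -> {c,a,d,b}
 * In both results, the lanes that PMULUDQ reads (0 and 2) hold the
 * two values to be multiplied: a,b in the first, c,d in the second. */
static void decode_shuffles(const int32_t in[4], int32_t out0[4], int32_t out1[4])
{
    __m128i v = _mm_loadu_si128((const __m128i *)in);
    _mm_storeu_si128((__m128i *)out0, _mm_shuffle_epi32(v, 216));
    _mm_storeu_si128((__m128i *)out1, _mm_shuffle_epi32(v, 114));
}
```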
I suggest you unroll your loop by a factor of 2 so that you can load 4 messageField values using one _mm_load_XXX, and then unpack these four values into two vector pairs and process them as per the current loop. That way you won't have a lot of messy code being generated by the compiler for _mm_set_epi32 and all your loads and stores will be 128 bit SSE loads/stores. This will also give the compiler more opportunity to schedule instructions optimally within the loop.

Resources