addl instruction does something different? [duplicate] - c

When reviewing gdb output and looking at the disassembly, I can usually find an instruction with a hard-coded value that tells me whether the registers are being loaded right to left or vice versa.
Usually something like the following:
sub rsp, 16
or
sub 16, rsp
But other times, no values like above are visible.
All I see are calls like the following:
(gdb) disassemble
Dump of assembler code for function main:
0x0000000100000f54 <main+4>: mov %rdi,%r15
0x0000000100000f59 <main+9>: mov %rsi,%r14
0x0000000100000f60 <main+16>: mov %rdx,%r13
0x0000000100000f67 <main+23>: mov %ecx,%r12d
End of assembler dump.
How does one determine if values are processed left to right or vice versa?

Normally, GNU tools use AT&T syntax. You can tell that it is AT&T syntax by the presence of little symbols, like the $ preceding literals and the % preceding registers. For example, this instruction:
sub $16, %rax
is obviously using AT&T syntax. It subtracts 16 from the value in the rax register, and stores the result back in rax.
In AT&T syntax, the destination operand is on the right:
insn source, destination # AT&T syntax
There is also Intel syntax. This is ubiquitous on Windows platforms, and usually also available as an option for GNU/Linux tools. Intel syntax is unadorned, e.g.:
sub rax, 16
which is the same as the AT&T instruction above—it subtracts 16 from the value in the rax register, and stores the result back in the rax register.
In Intel syntax, the destination operand is always on the left:
insn destination, source ; Intel syntax
To be absolutely certain of which version you've got, you'd need to check your disassembler/debugger settings to see what syntax it is configured to use, but it's usually easy to tell at a glance: the symbolic adornments are a dead give-away for AT&T syntax.
Summary:
If the registers have a % prefix → AT&T syntax → src, dst order.
Otherwise, unadorned registers → Intel syntax → dst, src order.
If you've somehow ended up looking at code that doesn't use any registers (???), another good heuristic clue is that Intel syntax will prepend size specifiers (like DWORD, QWORD, and BYTE) to the associated operand, whereas AT&T syntax will append a suffix (l, q, b, etc.) to the instruction mnemonic itself.
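For example, here is the same subtraction written in both dialects (a minimal illustration, not taken from the disassembly above):
sub $16, %rsp      # AT&T: source first, destination (%rsp) last
sub rsp, 16        ; Intel: destination (rsp) first, source last
If you'd rather not guess from the adornments, gdb can be told which dialect to print with set disassembly-flavor intel or set disassembly-flavor att before running disassemble.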

Related

"movl" instruction in assembly

I am learning assembly and reading "Computer Systems: A Programmer's Perspective". In Practice Problem 3.3, it says movl %eax,%rdx will generate an error. The answer key says movl %eax,%dx: destination operand incorrect size. I am not sure if this is a typo or not, but my question is: is movl %eax,%rdx a legal instruction? I think it is moving the 32 bits in %eax with zero extension to %rdx, which will not be generated as movzql since
"an instruction generating a 4-byte value with a register as the destination will fill the upper 4 bytes with zeros" (from the book).
I tried to write some C code to generate it, but I always get movslq %eax, %rdx (GCC 4.8.5 -Og). I am completely confused.
The GNU assembler doesn't accept movl %eax,%rdx. It also doesn't make sense for the encoding, since mov must have a single operand size (using a prefix byte if needed), not two different sized operands.
The effect you want is achieved by movl %eax, %edx since writes to a 32-bit register always zero-extend into the corresponding 64-bit register. See Why do x86-64 instructions on 32-bit registers zero the upper part of the full 64-bit register?.
movzlq %eax, %rdx might make logical sense, but it's not supported since it would be redundant.
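As a minimal sketch of that point (assuming AT&T syntax on a 64-bit target), the plain 32-bit move already gives you the zero-extension, and the separate movslq mnemonic is the sign-extending conversion GCC chose in the question above:
movl   %eax, %edx    # 32-bit move; the upper 32 bits of %rdx are implicitly zeroed
movslq %eax, %rdx    # sign-extend 32-bit %eax into 64-bit %rdx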

What is the lea instruction doing in this piece of code? [duplicate]

I was trying to understand how Address Computation Instruction works, especially with leaq command. Then I get confused when I see examples using leaq to do arithmetic computation. For example, the following C code,
long m12(long x) {
return x*12;
}
In assembly,
leaq (%rdi, %rdi, 2), %rax
salq $2, %rax
If my understanding is right, leaq should move whatever address (%rdi, %rdi, 2), which should be 2*%rdi+%rdi, evaluates to into %rax. What confuses me is: since the value x is stored in %rdi, which is just a memory address, why does multiplying %rdi by 3 and then left-shifting this memory address by 2 equal x times 12? Isn't it that when we multiply %rdi by 3, we jump to another memory address which does not hold the value x?
lea (see Intel's instruction-set manual entry) is a shift-and-add instruction that uses memory-operand syntax and machine encoding. This explains the name, but it's not the only thing it's good for. It never actually accesses memory, so it's like using & in C.
See for example How to multiply a register by 37 using only 2 consecutive leal instructions in x86?
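For instance, one way to get a multiply by 37 out of two LEAs (a sketch of the idea behind that link, not necessarily the exact sequence given there; 37 = 9*4 + 1, and both the 9 and the +1 fit the base + index*scale pattern):
leaq (%rdi,%rdi,8), %rax    # rax = rdi + rdi*8 = rdi*9
leaq (%rdi,%rax,4), %rax    # rax = rdi + rax*4 = rdi*37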
In C, it's like uintptr_t foo = (uintptr_t) &arr[idx]. Note the & to give you arr + idx (scaling for the object size of arr since this is C not asm). In C, this would be abuse of the language syntax and types, but in x86 assembly pointers and integers are the same thing. Everything is just bytes, and it's up to the program to put instructions in the right order to get useful results.
Effective address is a technical term in x86: it means the "offset" part of a seg:off logical address, especially when a base_reg + index*scale + displacement calculation was needed. e.g. the rax + (rcx<<2) in a %gs:(%rax,%rcx,4) addressing mode. (But EA still applies to %rdi for stosb, or the absolute displacement for movabs load/store, or other cases without a ModRM addr mode). Its use in this context doesn't mean it must be a valid / useful memory address, it's telling you that the calculation doesn't involve the segment base so it's not calculating a linear address. (Adding the seg base would make it unusable for actual address math in a non-flat memory model.)
The original designer / architect of 8086's instruction set (Stephen Morse) might or might not have had pointer math in mind as the main use-case, but modern compilers think of it as just another option for doing arithmetic on pointers / integers, and so should humans.
(Note that 16-bit addressing modes don't include shifts, just [BP|BX] + [SI|DI] + disp8/disp16, so LEA wasn't as useful for non-pointer math before 386. See this Q&A for more about 32/64-bit addressing modes, although that answer uses Intel syntax like [rax + rdi*4] instead of the AT&T syntax used in this question. x86 machine code is the same regardless of what syntax you use to create it.)
Maybe the 8086 architects did simply want to expose the address-calculation hardware for arbitrary uses because they could do it without using a lot of extra transistors. The decoder already has to be able to decode addressing modes, and other parts of the CPU have to be able to do address calculations. Putting the result in a register instead of using it with a segment-register value for memory access doesn't take many extra transistors. Ross Ridge confirms that LEA on original 8086 reuses the CPU's effective-address decoding and calculation hardware.
Note that most modern CPUs run LEA on the same ALUs as normal add and shift instructions. They have dedicated AGUs (address-generation units), but only use them for actual memory operands. In-order Atom is one exception; LEA runs earlier in the pipeline than the ALUs: inputs have to be ready sooner, but outputs are also ready sooner. Out-of-order execution CPUs (all modern x86) don't want LEA to interfere with actual loads/stores so they run it on an ALU.
lea has good latency and throughput, but not as good throughput as add or mov r32, imm32 on most CPUs, so only use lea when you can save an instruction with it instead of add. (See Agner Fog's x86 microarch guide and asm optimization manual and https://uops.info/)
Ice Lake improved on that for Intel, now able to run LEA on all four ALU ports.
Rules for which kinds of LEA are "complex", running on fewer of the ports that can handle it, vary by microarchitecture. e.g. 3-component (two + operations) is the slower case on SnB-family, having a scaled index is the lower-throughput case on Ice Lake. Alder Lake E-cores (Gracemont) are 4/clock, but 1/clock when there's an index at all, and 2-cycle latency when there's an index and displacement (whether or not there's a base reg). Zen is slower when there's a scaled index or 3 components. (2c latency and 2/clock down from 1c and 4/clock).
The internal implementation is irrelevant, but it's a safe bet that decoding the operands to LEA shares transistors with decoding addressing modes for any other instruction. (So there is hardware reuse / sharing even on modern CPUs that don't execute lea on an AGU.) Any other way of exposing a multi-input shift-and-add instruction would have taken a special encoding for the operands.
So 386 got a shift-and-add ALU instruction for "free" when it extended the addressing modes to include scaled-index, and being able to use any register in an addressing mode made LEA much easier to use for non-pointers, too.
x86-64 got cheap access to the program counter (instead of needing to read what call pushed) "for free" via LEA because it added the RIP-relative addressing mode, making access to static data significantly cheaper in x86-64 position-independent code than in 32-bit PIC. (RIP-relative does need special support in the ALUs that handle LEA, as well as the separate AGUs that handle actual load/store addresses. But no new instruction was needed.)
It's just as good for arbitrary arithmetic as for pointers, so it's a mistake to think of it as being intended for pointers these days. It's not an "abuse" or "trick" to use it for non-pointers, because everything's an integer in assembly language. It has lower throughput than add, but it's cheap enough to use almost all the time when it saves even one instruction. But it can save up to three instructions:
;; Intel syntax.
lea eax, [rdi + rsi*4 - 8] ; 3 cycle latency on Intel SnB-family
; 2-component LEA is only 1c latency
;;; without LEA:
mov eax, esi ; maybe 0 cycle latency, otherwise 1
shl eax, 2 ; 1 cycle latency
add eax, edi ; 1 cycle latency
sub eax, 8 ; 1 cycle latency
On some AMD CPUs, even a complex LEA is only 2 cycle latency, but the 4-instruction sequence would be 4 cycle latency from esi being ready to the final eax being ready. Either way, this saves 3 uops for the front-end to decode and issue, uops that would otherwise take up space in the reorder buffer all the way until retirement.
lea has several major benefits, especially in 32/64-bit code where addressing modes can use any register and can shift:
non-destructive: output in a register that isn't one of the inputs. It's sometimes useful as just a copy-and-add like lea 1(%rdi), %eax or lea (%rdx, %rbp), %ecx.
can do 3 or 4 operations in one instruction (see above).
Math without modifying EFLAGS can be handy after a test before a cmovcc (see the sketch after this list), or maybe in an add-with-carry loop on CPUs with partial-flag stalls.
x86-64: position independent code can use a RIP-relative LEA to get a pointer to static data.
7-byte lea foo(%rip), %rdi is slightly larger and slower than mov $foo, %edi (5 bytes), so prefer mov r32, imm32 in position-dependent code on OSes where symbols are in the low 32 bits of virtual address space, like Linux. You may need to disable the default PIE setting in gcc to use this.
In 32-bit code, mov edi, OFFSET symbol is similarly shorter and faster than lea edi, [symbol]. (Leave out the OFFSET in NASM syntax.) RIP-relative isn't available and addresses fit in a 32-bit immediate, so there's no reason to consider lea instead of mov r32, imm32 if you need to get static symbol addresses into registers.
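As an example of the flags-preserving point above (a minimal sketch in AT&T syntax; the register choices are arbitrary):
test  %edi, %edi       # sets ZF according to %edi
lea   1(%rsi), %eax    # eax = esi + 1, EFLAGS left untouched
cmove %edx, %eax       # if %edi was zero, take %edx instead; the test result survived the lea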
Other than RIP-relative LEA in x86-64 mode, all of these apply equally to calculating pointers vs. calculating non-pointer integer add / shifts.
See also the x86 tag wiki for assembly guides / manuals, and performance info.
Operand-size vs. address-size for x86-64 lea
See also Which 2's complement integer operations can be used without zeroing high bits in the inputs, if only the low part of the result is wanted?. 64-bit address size and 32-bit operand size is the most compact encoding (no extra prefixes), so prefer lea (%rdx, %rbp), %ecx when possible instead of 64-bit lea (%rdx, %rbp), %rcx or 32-bit lea (%edx, %ebp), %ecx.
x86-64 lea (%edx, %ebp), %ecx is always a waste of an address-size prefix vs. lea (%rdx, %rbp), %ecx, but 64-bit address / operand size is obviously required for doing 64-bit math. (Agner Fog's objconv disassembler even warns about useless address-size prefixes on LEA with a 32-bit operand-size.)
Except maybe on Ryzen, where Agner Fog reports that 32-bit operand size lea in 64-bit mode has an extra cycle of latency. I don't know if overriding the address-size to 32-bit can speed up LEA in 64-bit mode if you need it to truncate to 32-bit.
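To make the size difference concrete (a rough sketch; the byte counts assume no displacement and are worth double-checking against your assembler):
lea (%rdx, %rbp), %ecx    # 3 bytes: opcode + ModRM + SIB
lea (%rdx, %rbp), %rcx    # 4 bytes: REX.W prefix for 64-bit operand size
lea (%edx, %ebp), %ecx    # 4 bytes: 0x67 address-size prefix, yet the same 32-bit result as the first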
This question is a near-duplicate of the very-highly-voted What's the purpose of the LEA instruction?, but most of the answers explain it in terms of address calculation on actual pointer data. That's only one use.
leaq doesn't have to operate on memory addresses: it computes an address but doesn't actually read from the result, so until a mov or the like tries to use it, it's just an esoteric way to add one number plus 1, 2, 4 or 8 times another number (or the same number in this case). It's frequently "abused"† for mathematical purposes, as you see. 2*%rdi+%rdi is just 3 * %rdi, so it's computing x * 3 without involving the multiplier unit on the CPU.
Similarly, left shifting, for integers, doubles the value for every bit shifted (every zero added to the right), thanks to the way binary numbers work (the same way in decimal numbers, adding zeroes on the right multiplies by 10).
So this is abusing the leaq instruction to accomplish multiplication by 3, then shifting the result to achieve a further multiplication by 4, for a final result of multiplying by 12 without ever actually using a multiply instruction (which the compiler presumably believes would run more slowly, and for all I know it could be right; second-guessing the compiler is usually a losing game).
†: To be clear, it's not abuse in the sense of misuse, just using it in a way that doesn't clearly align with the implied purpose you'd expect from its name. It's 100% okay to use it this way.
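For comparison, the single-instruction alternative the compiler avoided would be the three-operand, immediate form of imul (a hedged sketch in AT&T syntax):
imulq $12, %rdi, %rax    # rax = rdi * 12 in one instruction, typically a bit higher latency than the lea + shift pair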
LEA is for calculating the address; it doesn't dereference the memory address.
It should be much more readable in Intel syntax:
m12(long):
lea rax, [rdi+rdi*2]
sal rax, 2
ret
So the first line is equivalent to rax = rdi*3
Then the left shift is to multiply rax by 4, which results in rdi*3*4 = rdi*12

Multiplication of corresponding values in an array

I want to write an x86 program that multiplies corresponding elements of 2 arrays (array1[0]*array2[0] and so on till 5 elements) and stores the results in a third array. I don't even know where to start. Any help is greatly appreciated.
The first thing you'll want to get is an assembler. I'm personally a big fan of NASM: in my opinion it has a very clean and concise syntax, and it's also what I started on, so that's what I'll use for this answer.
Other than NASM you have:
GAS
This is the GNU assembler. Unlike NASM, there are versions for many architectures, so the directives and general way of working stay about the same if you switch architectures; only the instructions change. GAS does however have the unfortunate downside of being somewhat unfriendly for people who want to use Intel syntax.
FASM
This is the Flat Assembler, it is an assembler written in Assembly. Like NASM it's unfriendly to people who want to use AT&T syntax. It has a few rough edges but some people seem to prefer it for DOS applications (especially because there's a DOS port of it) and bare metal work.
Now you might be reading 'AT&T syntax' and 'Intel syntax' and wondering what's meant by that. These are dialects of x86 assembly: they both assemble to the same machine code but reflect slightly different ways of thinking about each instruction. AT&T syntax tends to be more verbose whereas Intel syntax tends to be more minimal; however, certain parts of AT&T syntax have nicer operand orderings than Intel syntax. A good demonstration of the difference is the mov instruction:
AT&T syntax:
movl (0x10), %eax
This means: get the long value (1 dword, aka 4 bytes) at memory address 0x10 and put it in the register eax. Take note of the fact that:
The mov is suffixed with the operand length.
The memory address is surrounded in parentheses (you can think of them like a pointer dereference in C)
The register is prefixed with %
The instruction moves the left operand into the right operand
Intel Syntax:
mov eax, [0x10]
Take note of the fact that:
We do not need to suffix the instruction with the operand size; the assembler infers it. There are situations where it can't, in which case we specify the size next to the address (see the example after this list).
The register is not prefixed
Square brackets are used to address memory
The second operand is moved into the first operand
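For instance, when neither operand is a register the assembler cannot infer a width, so NASM makes you spell it out (a minimal sketch, not part of the program below):
mov dword [0x10], 1    ; store the 32-bit value 1 at address 0x10; 'dword' is required here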
I will be using Intel syntax for this answer.
Once you've installed NASM on your machine you'll want a simple build script (when you start writing bigger programs use a Makefile or some other proper build system, but for now this will do):
nasm -f elf arrays.asm
ld -o arrays arrays.o -melf_i386
rm arrays.o
echo
echo " Done building, the file 'arrays' is your executable"
Remember to chmod +x the script or you won't be able to execute it.
Now for the code along with some comments explaining what everything means:
global _start ; The linker will be looking for this entrypoint, so we need to make it public
section .data ; We're going on to describe our data here
array_length equ 5 ; This is effectively a macro and isn't actually being stored in memory
array1 dd 1,4,1,5,9 ; dd means declare dwords
array2 dd 2,6,5,3,5
sys_exit equ 1
section .bss ; Data that isn't initialised with any particular value
array3 resd 5 ; Leave us 5 dword sized spaces
section .text
_start:
xor ecx,ecx ; index = 0 to start
; In a Linux static executable, registers are initialized to 0 so you could leave this out if you're never going to link this as a dynamic executable.
_multiply_loop:
mov eax, [array1+ecx*4] ; move the value at the given memory address into eax
; We calculate the address we need by first taking ecx (which tells us which
; item we want) multiplying it by 4 (i.e: 4 bytes/1 dword) and then adding it
; to our array's start address to determine the address of the given item
imul eax, dword [array2+ecx*4] ; This performs a 32-bit integer multiply
mov dword [array3+ecx*4], eax ; Move our result to array3
inc ecx ; Increment ecx
; While ecx is a general purpose register the convention is to use it for
; counting hence the 'c'
cmp ecx, array_length ; Compare the value in ecx with our array_length
jb _multiply_loop ; Restart the loop as long as ecx is still below array_length
; If the loop has concluded the instruction pointer will continue
_exit:
mov eax, sys_exit ; The system call we want
; ebx is already equal to 0, ebx contains the exit status
mov ebp, esp ; Prepare the stack before jumping into the system
sysenter ; Call the Linux kernel and tell it that our program has concluded
If you wanted the full 64-bit result of the 32-bit multiply, use one-operand mul. But normally you only want a result that's the same width as the inputs, in which case imul is most efficient and easiest to use. See links in the x86 tag wiki for docs and tutorials.
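A hedged sketch of what that would look like inside the loop above (only the multiply line changes; note that one-operand mul also clobbers edx):
mov eax, [array1+ecx*4]       ; load the first factor into eax
mul dword [array2+ecx*4]      ; one-operand mul: EDX:EAX = full 64-bit product
; you would then have to store both eax and edx if you really want all 64 bits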
You'll notice that this program has no output. I'm not going to cover writing the algorithm to print numbers because we'd be here all day; that's an exercise for the reader (or see this Q&A).
However, in the meantime we can run our program in gdbtui and inspect the data. Use your build script to build, then open your program with the command gdbtui arrays. You'll want to enter these commands:
layout asm
break _exit
run
print (int[5])array3
And GDB will display the results.
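For the example data above (1,4,1,5,9 multiplied element-wise by 2,6,5,3,5), the products are 2, 24, 5, 15, 45, so the print command should show array3 holding {2, 24, 5, 15, 45} if everything worked.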

What does an lea instruction after a ret instruction mean?

I found x86 lea instructions in an executable file made using clang and gcc.
The lea instructions are after the ret instruction as shown below.
0x???????? <func>
...
pop %ebx
pop %ebp
ret
lea 0x0(%esi,%eiz,1),%esi
lea 0x0(%edi,%eiz,1),%edi
0x???????? <next_func>
...
What are these lea instructions used for? There is no jmp instruction to the lea instructions.
My environment is Ubuntu 12.04 32-bit and gcc 4.6.3.
It's probably not anything; it's just padding to let the next function start at an address that's probably a multiple of at least 8 (and quite possibly 16).
Depending on the rest of the code, it's possible that it's actually a table. Some implementations of a switch statement, for example, use a constant table that's often stored in the code segment (even though, strictly speaking, it's more like data than code).
The first is a lot more likely though. As an aside, such space is often filled with 0xCC instead. That is int3, the single-byte debug-break instruction, so if some undefined behavior results in attempting to execute that code, it immediately stops execution and breaks to the debugger (if available).
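If you want to see where this padding comes from, look at the assembly gcc emits before each function (a rough sketch; the exact directive and the filler chosen vary by gcc and gas version):
.p2align 4,,15    # ask the assembler to align the next label to 16 bytes
                  # in a code section, gas fills the gap with NOP-like instructions,
                  # such as the lea ...,%esi / lea ...,%edi forms shown above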

Resources