On some compilers (I'm mostly interested in GCC/clang), void *jmp = &&label gives you the address of a label that you can later jump to with goto *jmp. For an interpreter with pointer tagging, I want to align those labels to the next 8-byte boundary, so that every label value ends in 0b…000. Usually you can align the next instruction with inline assembly, e.g. __asm__ volatile(".p2align 8"); pads the next instruction to a 2^8-byte boundary with NOPs. However, because the interpreter won't have code between the jump blocks, the assembly gets optimized out (or something – even with volatile it doesn't appear in the output).
void test() {
    void *next = &&LABEL_A;
    goto *next;
    /* won't get reached – optimized away? */
    __asm__ volatile(".p2align 8");
LABEL_A:
    goto *next;
}
On Godbolt, the output shows that the compiler doesn't align LABEL_A. However, when we change the code to
void test() {
    unsigned long foo = (unsigned long)&&LABEL;
    void *next = &&LABEL_A;
    goto *next;
LABEL:
    __asm__ volatile(".p2align 8");
LABEL_A:
    goto *next;
}
The output shows that LABEL_A gets correctly padded – presumably because LABEL and the statements after it can't be optimized away. However, this seems extremely fragile and doesn't work with -O3. Is there a better way?
I tried several __attribute__((aligned(8), used)) attributes and volatile values/accesses between the blocks, but only the LABEL reference prevented the code elimination.
For GCC, you should use the -falign-labels=8 or the weaker -falign-jumps=8 options.
You could put them in a function attribute: __attribute__((optimize("-falign-labels=8"))).
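For example, a minimal sketch (assuming GCC; the toy dispatch table and the interp() function are made up for illustration) of applying the per-function option so every label in the dispatch loop lands on an 8-byte boundary:

/* Minimal sketch, assuming GCC: the optimize attribute applies
   -falign-labels=8 to just this function, so both labels below
   start on 8-byte boundaries and their low 3 bits are free for tags. */
__attribute__((optimize("-falign-labels=8")))
void interp(const unsigned char *prog)
{
    static void *dispatch[] = { &&OP_NOP, &&OP_HALT };

OP_NOP:
    goto *dispatch[*prog++];

OP_HALT:
    return;
}

Since the optimize attribute only applies to this one function, the rest of the translation unit keeps its default label alignment.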
Related
I'm writing a real-time DSP processing library.
My intention is to give it the flexibility to define the input block size in samples (blockSize), while also having the best possible performance in the case of sample-by-sample processing, that is, a block size of one sample.
I think I have to use the volatile keyword when defining the loop variable, since the data processing will be using pointers to inputs/outputs.
This leads me to a question:
Will the gcc compiler optimize this code
int blockSize = 1;
for (volatile int i=0; i<blockSize; i++)
{
foo()
}
or
//.h
#define BLOCKSIZE 1
//.c
for (volatile int i=0; i<BLOCKSIZE; i++)
{
foo()
}
to be the same as simply calling the body of the loop:
foo()
?
Thx
I think I have to use the volatile keyword when defining the loop variable, since the data processing will be using pointers to inputs/outputs.
No, that doesn't make any sense. Only the input/output hardware registers themselves should be volatile. Pointers to them should be declared as pointer-to-volatile data, i.e. volatile uint8_t*. There is no need to make the pointer itself volatile, i.e. uint8_t* volatile // wrong.
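To illustrate the difference, a minimal sketch (the register address and the names drain_rx/UART_DATA are hypothetical, purely for illustration):

#include <stdint.h>

/* Hypothetical memory-mapped register address, for illustration only. */
#define UART_DATA ((volatile uint8_t *)0x4000C000u)

void drain_rx(void)
{
    volatile uint8_t *rx = UART_DATA;  /* pointer to volatile data: every *rx is a real hardware read */
    uint8_t last = 0;

    for (int i = 0; i < 8; i++)
        last = *rx;                    /* not folded into one read, because the pointed-to data is volatile */

    (void)last;
}

/* By contrast, uint8_t *volatile p; only makes the pointer object itself
   volatile, which is almost never what you want for I/O registers. */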
As things stand now, you force the compiler to create a variable i and increase it, which will likely block loop unrolling optimizations.
Trying your code on gcc x86 with -O3, this is exactly what happens. No matter the size of BLOCKSIZE, it still generates the loop, because of volatile. If I drop volatile, it completely unrolls the loop up to BLOCKSIZE == 7 and replaces it with a series of function calls. Beyond that it creates a loop (but keeps the iterator in a register instead of RAM).
x86 example:
for (int i=0; i<5; i++)
{
foo();
}
gives
call foo
call foo
call foo
call foo
call foo
But
for (volatile int i=0; i<5; i++)
{
foo();
}
gives the far less efficient
mov DWORD PTR [rsp+12], 0
mov eax, DWORD PTR [rsp+12]
cmp eax, 4
jg .L2
.L3:
call foo
mov eax, DWORD PTR [rsp+12]
add eax, 1
mov DWORD PTR [rsp+12], eax
mov eax, DWORD PTR [rsp+12]
cmp eax, 4
jle .L3
.L2:
For further study of the correct use of volatile in embedded systems, please see:
How to access a hardware register from firmware?
Using volatile in embedded C development
Since the loop variable is volatile, the compiler shouldn't optimize the loop away. The compiler cannot know whether i will be 1 when the condition is evaluated, so it has to keep the loop.
From the compiler's point of view, the loop can run an indeterminate number of times until the condition is satisfied.
If you access hardware registers somewhere, then those should be declared volatile instead, which would make more sense to the reader and also allows the compiler to apply appropriate optimizations where possible.
The volatile keyword tells the compiler that the variable is prone to side effects, i.e. it can be changed by something that is not visible to the compiler.
Because of that, volatile variables have to be read before every use and saved to their permanent storage location after every modification.
In your example the loop cannot be optimized away, as the variable i can be changed during the loop (for example, some interrupt routine could set it back to zero, so the loop would have to be executed again).
The answer to your question is: if the compiler can determine that the loop executes exactly once every time it is entered, then it can eliminate the loop.
Normally the optimization phase unrolls loops based on how the iterations relate to one another. This makes the loop body several times bigger in exchange for avoiding the backward jumps (which normally cause a bubble in the pipeline, depending on the CPU type), but not so much bigger that you start losing cache hits, so it is a bit complicated, but the gains are huge. If your loop always executes exactly once, that is normally because the test you wrote is always true (a tautology) or always false (an impossibility), so the test can be eliminated; that makes the jump back unnecessary, and so there is no loop anymore.
int blockSize = 1;
for (volatile int i=0; i<blockSize; i++)
{
foo(); // you missed a semicolon here.
}
In your case, the variable is assigned a value that is never touched again, so the first thing the compiler is going to do is replace all uses of your variable by the literal you assigned to it (lacking context, I assume blockSize is a local automatic variable that is not changed anywhere else). Your code changes into:
for (volatile int i=0; i<1; i++)
{
foo();
}
The next step is that volatile is not necessary, as its scope is the body of the loop, where it is not used, so the loop can be replaced by a sequence of code like the following:
do {
foo();
} while (0);
hmmm.... this code can be replaced by this code:
foo();
The compiler analyses each data set by building the graph of dependencies between data and variables. When a variable is not needed anymore, assigning a value to it is not necessary (if it is not used later in the program or its lifetime ends), so that code is eliminated. If you make your compiler compile a for loop that counts from 1 to 2^64 and then stops, and you compile it with optimization, you will see the loop being thrown away, and you will get the false idea that your processor is capable of counting from 1 to 2^64 in less than a second. But that is not true: 2^64 is still far too big a number to count up to in less than a second. And that is not a fixed one-pass loop like yours; but the data calculations done in the program are of no use, so the compiler eliminates them.
Just test the following program (in this case it is not a test of a one-pass loop, but of 2^64-1 executions):
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    uint64_t low = 0UL;
    uint64_t high = ~0UL;
    uint64_t data = 0; // this data is updated in the loop body.

    printf("counting from %lu to %lu\n", low, high);
    alarm(10); /* security break after 10 seconds */
    for (uint64_t i = low; i < high; i++) {
#if 0
        printf("data = %lu\n", data = i ); // either here...
#else
        data = i; // or here...
#endif
    }
    return 0;
}
(You can change the #if 0 to #if 1 to see how the optimizer doesn't eliminate the loop when you need to print the results; you will see that the program is essentially the same, except for the call to printf with the result of the assignment.)
Just compile it with/without optimization:
$ cc -O0 pru.c -o pru_noopt
$ cc -O2 pru.c -o pru_optim
and then run it under time:
$ time pru_noopt
counting from 0 to 18446744073709551615
Alarm clock
real 0m10,005s
user 0m9,848s
sys 0m0,000s
while running the optimized version gives:
$ time pru_optim
counting from 0 to 18446744073709551615
real 0m0,002s
user 0m0,002s
sys 0m0,002s
(Impossible: not even the best computer can count one by one up to that number in less than 2 milliseconds, so the loop must have been eliminated.) You can check this in the assembler code. As the updated value of data is not used after the assignment, the loop body can be eliminated, and so can its 2^64-1 executions.
Now add the following line after the loop:
printf("data = %lu\n", data);
You will see that then, even with the -O3 option, the loop is left untouched, because the value left by all the assignments is used after the loop.
(I preferred not to show the assembler code and stay at the high level, but you can have a look at the assembler output and see the actual generated code.)
I have an OS project that I am working on, and I am trying to use inline assembly to call code that I have read from the disk in C.
I have already tried reading the code and executing it with the assembly call instruction, using inline assembly.
void driveLoop() {
    uint16_t sectors = 31;
    uint16_t sector = 0;
    uint16_t basesector = 40000;
    uint32_t i = 40031;
    uint16_t code[sectors][256];
    int x = 0;

    while (x == 0) {
        read(i);
        for (int p = 0; p < 256; p++) {
            if (readOut[p] == 0) {
            } else {
                x = 1;
                //kprint_int(i);
            }
        }
        i++;
    }

    kprint("Found sector!\n");
    kprint("Loading OS into memory...\n");

    for (sector = 0; sector < sectors; sector++) {
        read(basesector + sector);
        for (int p = 0; p < 256; p++) {
            code[sector][p] = readOut[p];
        }
    }

    kprint("Done loading.\n");
    kprint("Attempting to call...\n");

    asm volatile("call (%0)" : : "r" (&code));
}
When the inline assembly is called I expect it to run the code from the sectors I read from the "disk" (this is in a VM, because its a hobby OS). What it does instead is it just hangs.
I probably don't fully understand how variables, arrays, and assembly work, so if you could fill me in, that would be nice.
EDIT: The data I am reading from the disk is a binary file that was added
to the disk image file with
cat kernel.bin >> disk.img
and the kernel.bin is compiled with
i686-elf-ld -o kernel.bin -Ttext 0x4C4B40 *insert .o files here* --oformat binary
What it does instead is it just hangs.
Run your OS inside BOCHS so you can use BOCHS's built-in debugger to see exactly where it's stuck.
Being able to debug lockups, including with interrupts disabled, is probably very useful...
asm volatile("call (%0)" : : "r" (&code)); is unsafe because of missing clobbers.
But even worse than that it will load a new EIP value from the first 4 bytes of the array, instead of setting EIP to that address. (Unless the data you're loading is an array of pointers, not actual machine code?)
You have the %0 in parentheses, so it's an addressing mode. The assembler will warn you about an indirect call without *, but will assemble it like call *(%eax), with EAX = the address of code[0][0]. You actually want a call *%eax or whatever register the compiler chooses, register-indirect not memory-indirect.
&code and code are both just a pointer to the start of the array; &code doesn't create an anonymous pointer object storing the address of another address. &code takes the address of the array as a whole. code in this context "decays" to a pointer to the first object.
https://gcc.gnu.org/wiki/DontUseInlineAsm (for this).
You can get the compiler to emit a call instruction by casting the pointer to a function pointer.
__builtin___clear_cache(&code[0][0], &code[30][255]); // don't optimize away stores into the buffer
void (*fptr)(void) = (void*)code; // casting to void* instead of the actual target type is simpler
fptr();
That will compile (with optimization enabled) to something like lea 16(%esp), %eax / call *%eax, for 32-bit x86, because your code[][] buffer is an array on the stack.
Or to have it emit a jmp instead, do it at the end of a void function, or return funcptr(); in a non-void function, so the compiler can optimize the call/ret into a jmp tailcall.
If it doesn't return, you can declare it with __attribute__((noreturn)).
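A rough sketch of the tailcall variant (assuming GCC; it uses __builtin_unreachable() rather than the noreturn attribute, and jump_to_loaded_code is a made-up name):

/* Sketch, assuming GCC: the call is in tail position and nothing follows it,
   so the compiler is free to turn the indirect call into a plain jmp. */
void jump_to_loaded_code(void *entry_bytes)
{
    void (*entry)(void) = (void (*)(void))entry_bytes;
    entry();                  /* last thing the function does */
    __builtin_unreachable();  /* we never expect control to come back here */
}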
Make sure the memory page / segment is executable. (Your uint16_t code[]; is a local, so gcc will allocate it on the stack. This might not be what you want. The size is a compile-time constant so you could make it static, but if you do that for other arrays in other sibling functions (not parent or child), then you lose out on the ability to reuse a big chunk of stack memory for different arrays.)
This is much better than your unsafe inline asm. (You forgot a "memory" clobber, so nothing tells the compiler that your asm actually reads the pointed-to memory). Also, you forgot to declare any register clobbers; presumably the block of code you loaded will have clobbered some registers if it returns, unless it's written to save/restore everything.
In GNU C you do need to use __builtin___clear_cache when casting a data pointer to a function pointer. On x86 it doesn't actually clear any cache, it's telling the compiler that the stores to that memory are not dead because it's going to be read by execution. See How does __builtin___clear_cache work?
Without that, gcc could optimize away the copying into uint16_t code[sectors][256]; because it looks like a dead store. (Just like with your current inline asm which only asks for the pointer in a register.)
As a bonus, this part of your OS becomes portable to other architectures, including ones like ARM without coherent instruction caches, where the builtin expands to actual instructions. (On x86 it purely affects the optimizer.)
read(basesector+sector);
It would probably be a good idea for your read function to take a destination pointer to read into, so you don't need to bounce data through your readOut buffer.
Also, I don't see why you'd want to declare your code as a 2D array; sectors are an artifact of how you're doing your disk I/O, not relevant to using the code after it's loaded. The sector-at-a-time thing should only be in the code for the loop that loads the data, not visible in other parts of your program.
char code[sectors * 512]; would be good.
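A rough sketch of how that could look (read_sector() is a hypothetical variant of your read() that takes a destination pointer; making the buffer's memory executable is still your OS's job):

/* Sketch only: read_sector(lba, dst) is assumed to copy one 512-byte sector
   into dst instead of the global readOut buffer. */
void read_sector(unsigned int lba, void *dst);  /* hypothetical */

enum { OS_SECTORS = 31 };
static unsigned char os_image[OS_SECTORS * 512];

void load_and_run(unsigned int basesector)
{
    for (unsigned int s = 0; s < OS_SECTORS; s++)
        read_sector(basesector + s, &os_image[s * 512]);

    /* Tell the compiler the stored bytes will be executed (a no-op on x86). */
    __builtin___clear_cache((char *)os_image, (char *)os_image + sizeof os_image);

    void (*entry)(void) = (void (*)(void))os_image;
    entry();
}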
I have to change the designated section of function_b so that it changes the stack in such a way that the program prints:
Executing function_a
Executing function_b
Finished!
At this point it also prints Executed function_b in between Executing function_b and Finished!.
I have the following code and I have to fill something in, in the part where it says // ... insert code here
#include <stdio.h>

void function_b(void){
    char buffer[4];
    // ... insert code here
    fprintf(stdout, "Executing function_b\n");
}

void function_a(void) {
    int beacon = 0x0b1c2d3;
    fprintf(stdout, "Executing function_a\n");
    function_b();
    fprintf(stdout, "Executed function_b\n");
}

int main(void) {
    function_a();
    fprintf(stdout, "Finished!\n");
    return 0;
}
I am using Ubuntu Linux with the gcc compiler. I compile the program with the following options: -g -fno-stack-protector -fno-omit-frame-pointer. I am using an Intel processor.
Here is a solution. It is not exactly stable across environments, but it works for me on an x86_64 processor on Windows/MinGW64.
It may not work for you out of the box, but still, you might want to use a similar approach.
void function_b(void) {
    char buffer[4];
    buffer[0] = 0xa1; // part 1
    buffer[1] = 0xb2;
    buffer[2] = 0xc3;
    buffer[3] = 0x04;
    register int * rsp asm ("rsp"); // part 2
    register size_t r10 asm ("r10");
    r10 = 0;
    while (*rsp != 0x04c3b2a1) {rsp++; r10++;} // part 3
    while (*rsp != 0x00b1c2d3) rsp++; // part 4
    rsp -= r10; // part 5
    rsp = (int *) ((size_t) rsp & ~0xF); // part 6
    fprintf(stdout, "Executing function_b\n");
}
The trick is that each of function_a and function_b has only one local variable, and we can find the address of that variable just by searching around in the memory.
First, we put a signature in the buffer, let it be the 4-byte integer 0x04c3b2a1 (remember that x86_64 is little-endian).
After that, we declare two variables to represent the registers: rsp is the stack pointer, and r10 is just some unused register.
This allows us to avoid asm statements later in the code, while still being able to use the registers directly.
It is important that these variables don't actually take stack memory; they are references to the processor registers themselves.
After that, we move the stack pointer in 4-byte increments (since the size of int is 4 bytes) until we get to the buffer. We have to remember the offset from the stack pointer to the first variable here, and we use r10 to store it.
Next, we want to know how far in the stack are the instances of function_b and function_a. A good approximation is how far are buffer and beacon, so we now search for beacon.
After that, we have to go back from beacon, the first variable of function_a, to the start of function_a's whole stack frame.
We do that by subtracting the value stored in r10.
Finally, here comes a weirder bit.
At least on my configuration, the stack happens to be 16-byte aligned, and while the buffer array is aligned to the left of a 16-byte block, the beacon variable is aligned to the right of such a block.
Or is it something with a similar effect and a different explanation?..
Anyway, we just clear the last four bits of the stack pointer to make it 16-byte aligned again.
The 32-bit GCC doesn't align anything for me, so you might want to skip or alter this line.
When working on a solution, I found the following macro useful:
#ifdef DEBUG
#define show_sp() \
    do { \
        register void * rsp asm ("rsp"); \
        fprintf(stdout, "stack pointer is %p\n", (void *) rsp); \
    } while (0);
#else
#define show_sp() do{}while(0);
#endif
After this, when you insert a show_sp(); in your code and compile with -DDEBUG, it prints the value of the stack pointer at the respective moment.
When compiling without -DDEBUG, the macro just compiles to an empty statement.
Of course, other variables and registers can be printed in a similar way.
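For example (a sketch; it assumes the macro above and compiling with -DDEBUG):

/* Sketch: dropping show_sp() into the exercise code to watch the stack pointer. */
void function_b(void) {
    char buffer[4];
    show_sp();   /* prints the stack pointer on entry to function_b */
    /* ... insert code here ... */
    fprintf(stdout, "Executing function_b\n");
}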
OK, let's assume that the epilogue (i.e. the code at the } line) of function_a and of function_b is the same.
Even though function_a and function_b are not symmetric, we can assume this because they have the same signature (no parameters, no return value), the same calling convention, and the same size of local variables (4 bytes: int beacon = 0x0b1c2d3 vs char buffer[4]), and with optimization both locals must be dropped because they are unused. But we must not use additional local variables in function_b, so as not to break this assumption. The most problematic point here is whether function_a or function_b uses non-volatile registers (and as a result saves them in the prologue and restores them in the epilogue) – but it looks like there is no room for that here.
So my code below is based on this assumption – epilogueA == epilogueB (the solution of #Gassa is really based on it as well).
We also need to state very clearly that function_a and function_b must not be inlined. This is very important – without it, any solution is impossible. So I allowed myself to add a noinline attribute to function_a and function_b. Note: this is not a code change but the addition of an attribute, which the author of this task implicitly implies but does not clearly state. I don't know how to mark a function as noinline in GCC, but in CL __declspec(noinline) is used for this.
The code below is written for the CL compiler, which has the following intrinsic function:
void * _AddressOfReturnAddress();
but I think GCC must have an analogue of this function as well. I also use
void* _ReturnAddress();
However, _ReturnAddress() == *(void**)_AddressOfReturnAddress() really holds, so we could use _AddressOfReturnAddress() alone; simply using _ReturnAddress() makes the source code (but not the binary – that is equal) smaller and more readable.
The code below works for both x86 and x64, and it works (tested) at any optimization level.
Even though I use two static variables, the code is thread safe – we really can call main from multiple threads concurrently, and call it multiple times, and everything will still work correctly (only, of course, under the assumption stated at the beginning: epilogueA == epilogueB).
I hope the comments in the code are sufficiently self-explanatory.
__declspec(noinline) void function_b(void){
    char buffer[4];
    buffer[0] = 0;

    static void *IPa, *IPb;

    // save the IPa address
    _InterlockedCompareExchangePointer(&IPa, _ReturnAddress(), 0);

    if (_ReturnAddress() == IPa)
    {
        // we were called from function_a
        function_b();
        // <-- IPb
        if (_ReturnAddress() == IPa)
        {
            // we were called from function_a; change the return address so we return to IPb instead of IPa
            *(void**)_AddressOfReturnAddress() = IPb;
            return;
        }
        // we are on the stack of function_a here.
        // we should really be at point IPa
        // and execute fprintf(stdout, "Executed function_b\n"); + '}' (epilogueA)
        // but we will execute fprintf(stdout, "Executing function_b\n"); + '}' (epilogueB)
        // assume that epilogueA == epilogueB
    }
    else
    {
        // we were called from function_b
        IPb = _ReturnAddress();
        return;
    }

    fprintf(stdout, "Executing function_b\n");
    // epilogueB
}

__declspec(noinline) void function_a(void) {
    int beacon = 0x0b1c2d3;
    fprintf(stdout, "Executing function_a\n");
    function_b();
    // <-- IPa
    fprintf(stdout, "Executed function_b\n");
    // epilogueA
}

int main(void) {
    function_a();
    fprintf(stdout, "Finished!\n");
    return 0;
}
I just ran into an interesting phenomenon with the msp430f5529 (TI LaunchPad). After trying different approaches I was able to find a solution, but I don't understand what is going on here.
This code is part of a timer interrupt service routine (ISR). The special function register (SFR) TA0IV is supposed to hold the value of the interrupt number that triggered the ISR.
1 unsigned int index;
2
3 index = TA0IV; // Gives wrong value: 19874
4 index = *((volatile unsigned int *) TA0IV_); // Correct value: 4
TA0IV is defined with macros here:
5 #define sfrw_(x,x_) volatile __MSPGCC_PERIPHERAL__ unsigned int x __asm__("__" #x)
6 #define sfrw(x,x_) extern sfrw_(x,x_)
7 #define TA0IV_ 0x036E /* Timer0_A5 Interrupt Vector Word */
8 sfrw(TA0IV, TA0IV_);
What does this part of the first macro on line 5 do?
asm("__" #x)
Why is there no "x_" on the right hand side in the macro on line 5?
Last and most important question: why does the typecast on line 4 work as expected, but the plain access on line 3 doesn't?
BTW I use gcc-4.7.0.
Edit: More info
9 #define __MSPGCC_PERIPHERAL__ __attribute__((__d16__))
1) The # is a preprocessor "stringify" operator. You can see the impact of this using the -E compiler switch. Google "c stringify" for details.
2) Couldn't say. It isn't required that all parameters get used, and apparently whoever wrote this decided they didn't need it.
3) I'll take a shot at this one, but since I don't have all the source code or the hardware and can't experiment, I probably won't get it quite right. Maybe close enough for what you need though.
The first thing to understand is what the asm bit is doing. Normally (ok, sometimes) when you declare a variable (foo), the compiler assigns its own 'internal' name to the variable (ie _foo). However, when interfacing with asm modules (or other languages), sometimes you need to be able to specify the exact name to use, not allowing the compiler to mangle it in any fashion. That's what this asm is doing (see Asm Labels). So when you brush aside all the #define nonsense, what you've got is:
extern volatile __MSPGCC_PERIPHERAL__ unsigned int TA0IV __asm__("__TA0IV");
Since the definition you have posted is "extern," presumably somewhere (not shown), there's a symbol named __TA0IV that's getting defined. And since accessing it isn't working right, it appears that it is getting MIS-defined.
With the caveat that I HAVEN'T TRIED THIS, I would find this to be somewhat more readable:
#define TA0IV_ 0x036E
inline int ReadInterruptNumber()
{
int retval;
asm volatile("movl (%c1), %0": "=rm" (retval) : "i" (TA0IV_));
return retval;
}
FWIW.
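Alternatively, the cast from line 4 of the question can simply be wrapped in a macro of your own (a sketch; TA0IV_REG and read_ta0iv are made-up names):

/* Sketch: treat the numeric address as a pointer to a volatile 16-bit SFR,
   which is what the working line 4 does by hand. */
#define TA0IV_    0x036E
#define TA0IV_REG (*(volatile unsigned int *)TA0IV_)

unsigned int read_ta0iv(void)
{
    return TA0IV_REG;  /* one real read of the interrupt vector register */
}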
I'm writing a program where a constant is needed but the value for the constant will be determined during run time. I have an array of op codes from which I want to randomly select one and _emit it into the program's code. Here is an example:
unsigned char opcodes[] = {
    0x60, // pushad
    0x61, // popad
    0x90  // nop
};
int random_byte = rand() % sizeof(opcodes);
__asm _emit opcodes[random_byte]; // optimal goal, but invalid
However, it seems _emit can only take a constant value. E.g, this is valid:
switch(random_byte) {
case 2:
__asm _emit 0x90
break;
}
But this becomes unwieldy if the opcodes array grows to any considerable length, and also essentially eliminates the worth of the array since it would have to be expressed in a less attractive manner.
Is there any way to neatly code this to facilitate the growth of the opcodes array? I've tried other approaches like:
#define OP_0 0x60
#define OP_1 0x61
#define OP_2 0x90
#define DO_EMIT(n) __asm _emit OP_##n
// ...
unsigned char abyte = opcodes[random_byte];
DO_EMIT(abyte)
In this case, the translation comes out as OP_abyte, so it would need a call like DO_EMIT(2), which forces me back to the switch statement and enumerating every element in the array.
It is also quite possible that I have an entirely invalid approach here. Helpful feedback is appreciated.
I'm not sure what compiler/assembler you are using, but you could do what you're after in GCC using a label. At the asm site, you'd write it as:
asm (
"target_opcode: \n"
".byte 0x90\n" ); /* Placeholder byte */
...and at the place where you want to modify that code, you'd use:
extern volatile unsigned char target_opcode[];
int random_byte = rand() % sizeof(opcodes);
target_opcode[0] = opcodes[random_byte];
Perhaps you can translate this into your compiler's dialect of asm.
Note that all the usual caveats about self-modifying code apply: the code segment might not be writeable, and you may have to flush the I-cache before executing the modified code.
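For instance, right after patching you could add something along these lines (a sketch; on x86 the builtin emits no cache instructions, it just tells the compiler the bytes will be executed):

/* Sketch: patch the placeholder byte, then tell the compiler (and, on targets
   that need it, the hardware) that these bytes are about to be executed. */
target_opcode[0] = opcodes[random_byte];
__builtin___clear_cache((char *)target_opcode, (char *)target_opcode + 1);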
You won't be able to do any randomness in the C preprocessor AFAIK. The closest you could get is generating the random value outside. For instance:
cpp -DRND_VAL=$RANDOM ...
(possibly with a modulus to maintain the value within a range), at least in UNIX-based systems. Then, you can use the definition value, that will be essentially random.
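For example, a sketch (assuming MSVC's __asm _emit as in the question, and that RND_VAL is injected by the build command above; the preprocessor substitutes it before the inline assembler sees the line):

/* Sketch: RND_VAL is expanded by the preprocessor, so _emit sees a constant. */
#ifndef RND_VAL
#define RND_VAL 0x90   /* fallback: nop */
#endif

void emit_random_op(void)
{
    __asm _emit RND_VAL
}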
How about
char operation[4]; // is it really only 1 byte all the time?
operation[0] = random_whatever();
operation[1] = 0xC3; // RET
void (*func)() = &operation[0];
func();
Note that in this example you'd need to add a RET instruction to the buffer, so that in the end you end up at the right instruction after calling func().
Using _emit to put opcodes into your program's code at runtime is kind of like compiling the program you're running while the program is running.
You should describe your end goal rather than just your idea of using _emit at runtime – there might be a better way to accomplish what you want. Maybe you can write your opcodes to a regular data array and somehow make that bit of memory executable. That might be a little tricky due to security considerations, but it can be done.
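A sketch of that last idea (Windows-specific, since __asm _emit implies MSVC; run_opcode and its two-byte stub are made up for illustration):

#include <string.h>
#include <windows.h>

/* Sketch: copy the chosen byte plus a ret into an executable page and call it,
   instead of emitting it into the program's own code at compile time.
   Note: the byte sequence must leave the stack balanced before the ret
   (a lone pushad, for example, would not). */
void run_opcode(unsigned char op)
{
    unsigned char stub[] = { op, 0xC3 };   /* opcode followed by ret */
    void *page = VirtualAlloc(NULL, sizeof stub, MEM_COMMIT | MEM_RESERVE,
                              PAGE_EXECUTE_READWRITE);
    if (page == NULL)
        return;
    memcpy(page, stub, sizeof stub);
    ((void (*)(void))page)();              /* execute the stub */
    VirtualFree(page, 0, MEM_RELEASE);
}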