Simple question. The function asm in C is used to do inline assembly in your code. But what does it return? Is it the conventional eax, and if not, what does it return?
__asm__ itself does not return a value. The C standard does not define how __asm__ should handle a return value, so the behavior may differ between compilers. You stated that the Visual Studio example is valid, but Visual Studio uses __asm; __asm__ is used at least by GCC.
Visual Studio
To get the result in a C program, you can leave the return value in EAX in the assembly code and fall off the end of the function. The caller will receive the contents of EAX as the return value. This is supported even with optimization enabled, even if the compiler decides to inline the function containing the __asm{} block.
It avoids a store/reload you'd otherwise get from moving the value to a C variable in the asm and returning that C variable, because MSVC inline asm syntax doesn't support inputs/outputs in registers (except for this return-value case).
Visual Studio 2015 documentation:
int power2( int num, int power )
{
    __asm
    {
        mov eax, num    ; Get first argument
        mov ecx, power  ; Get second argument
        shl eax, cl     ; EAX = EAX * ( 2 to the power of CL )
    }
    // Return with result in EAX
    // by falling off the end of a non-void function
}
clang -fasm-blocks supports the same inline-asm syntax, but does not support falling off the end of a non-void function as a way of returning the value that an asm{} block left in EAX/RAX. Beware of that if porting MSVC inline asm to clang: it will break horribly when compiled with optimization enabled (function inlining).
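A sketch of a more portable arrangement under the same asm-block syntax (assuming MSVC or clang -fasm-blocks; power2_explicit is just an illustrative name): store the result into a C variable and return it explicitly, accepting the possible extra store/reload.

int power2_explicit( int num, int power )
{
    int result;
    __asm
    {
        mov eax, num     ; Get first argument
        mov ecx, power   ; Get second argument
        shl eax, cl      ; EAX = num << power
        mov result, eax  ; Store to the C variable
    }
    return result;       // explicit return: safe under inlining, also with clang
}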
GCC
The GCC inline assembly HOWTO does not contain a similar example. You can't use an implicit return as in Visual Studio, but fortunately you don't need to, because GNU C inline asm syntax allows specifying outputs in registers. No hack is needed to avoid a store/reload of an output value.
The HOWTO shows that you can store the result in a C variable inside the assembly block, and return the value of that variable after the assembly block has ended. You can even use "=r"(var) to let the compiler pick its choice of register, in case EAX isn't the most convenient after inlining.
An example of an (inefficient) string-copy function, returning the value of dest:
static inline char * strcpy(char * dest, const char *src)
{
    int d0, d1, d2;
    __asm__ __volatile__(
        "1:\tlodsb\n\t"
        "stosb\n\t"
        "testb %%al,%%al\n\t"
        "jne 1b"
        : "=&S" (d0), "=&D" (d1), "=&a" (d2)
        : "0" (src), "1" (dest)
        : "memory");
    return dest;
}
(Note that dest isn't actually an output from the inline asm statement. The matching constraint for the dummy output operands tells the compiler the inline asm destroyed that copy of the variable so it needs to preserve it across the asm statement on its own somehow.)
If you omit a return statement in a non-void function, you get a warning like warning: no return statement in function returning non-void [-Wreturn-type], and with optimization enabled recent GCC/clang won't even emit a ret; they assume this path of execution is never taken (because that would be UB). It doesn't matter whether the function contains an asm statement or not.
It's unlikely; per the C99 spec, under J.5 Common extensions (J.5.10 The asm keyword):
The asm keyword may be used to insert assembly language directly into
the translator output (6.8). The most common implementation is via a statement of the form:
asm ( character-string-literal );
So it's unlikely that an implementor is going to come up with an approach that both inserts the assembly language into the translator output and also generates some additional intermediary linking code to wire a particular register as a return result.
It's a keyword, not a function.
E.g. GCC uses "=r"-type output constraints to give your assembly write access to a C variable; but it's up to you to make sure the result ends up in the right place.
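As a minimal sketch of that idea (x86 GNU C assumed; add_one is just an illustrative name): the "=r" output constraint gives the asm write access to the C variable result, and it is your template that has to put the value there.

int add_one(int x) {
    int result;
    __asm__ ("lea 1(%1), %0"      /* compute x+1 into whatever register holds result */
             : "=r" (result)      /* output: compiler picks the register */
             : "r" (x));          /* input: x in a register */
    return result;
}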
Related
Recently I read the code of a public C library and found the function definition below:
void* block_alloc(void** block, size_t* len, size_t type_size)
{
    return malloc(type_size);
    (void)block;
    (void)len;
}
I wonder whether execution will ever arrive at the statements after the return. If not, what's the purpose of these 2 statements that cast some data to void?
As Basil notes, the (void) statements are likely intended to silence compiler warnings about the unused parameters. But you can move the (void) statements before the return to make them less confusing, with the same effect.
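For illustration, that rearrangement would look like this (same observable effect, but the casts now clearly execute, and do nothing, before the return):

#include <stdlib.h>

void* block_alloc(void** block, size_t* len, size_t type_size)
{
    (void)block;   /* explicitly ignore the unused parameters */
    (void)len;
    return malloc(type_size);
}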
In fact, there's yet another way to achieve the same effect, without resorting to any extra statements. It's supported by many compilers already today, although it's not officially in the C standard before C2X:
void* block_alloc(void**, size_t*, size_t type_size)
{
    return malloc(type_size);
}
If you don't name the parameters, typical compilers don't expect you to be using them.
First, these statements appearing in the block after a return will never be executed.
Check by reading some C standard like n1570.
Second, on some compilers (perhaps GCC 10 invoked as gcc -Wall -Wextra) the useless statements might avoid some warnings.
In my opinion, coding these statements before the return won't change the machine code emitted by an optimizing compiler (use gcc -Wall -Wextra -O2 -fverbose-asm -S to check and emit the assembler code) and makes the C source code more understandable.
GCC provides, as an extension, the variable attribute __attribute__((unused)).
Perhaps in your software block_alloc is assigned to some important function pointer (whose signature is fixed).
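A sketch of that extension applied to the question's function (assuming the signature must stay unchanged, e.g. because it is used through a function pointer):

#include <stdlib.h>

void* block_alloc(void** block __attribute__((unused)),
                  size_t* len  __attribute__((unused)),
                  size_t type_size)
{
    return malloc(type_size);
}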
It is used to silence the warnings. Some programming standards require all the parameters to be used in the function body, and their static analyzers will not pass the code without it.
It is added after the return to prevent generation of extra code in some circumstances:
int foo(volatile unsigned x)
{
    (void)x;
    return 0;
}

int foo1(volatile unsigned x)
{
    return 0;
    (void)x;
}
foo:
        mov     DWORD PTR [rsp-4], edi
        mov     eax, DWORD PTR [rsp-4]
        xor     eax, eax
        ret
foo1:
        mov     DWORD PTR [rsp-4], edi
        xor     eax, eax
        ret
I am using the bsf x86-64 instruction found on page 210 of Intel's developer manual found here. Essentially, if a least significant 1 bit is found, its bit index is stored in the destination operand.
Furthermore, the ZF flag is set to 1 if the source operand is all 0s; otherwise, the ZF flag is cleared.
I am compiling my C code with inline x86-64 assembly instructions. I have defined a C function which invokes the bsf instruction:
uint64_t bitScanForward(T_bitboard b) {
    __asm__(
        "bsf %rcx,%rax\n"
        "leave\n"
        "ret\n"
    );
}
and also another C function which checks the status of the ZF bit in the flags register:
uint64_t isZFSet() {
    printf("\n"); // <- This is another problem I am having (see below)...
    __asm__(
        "jz true\n"
        "movq $0,%rax\n"   //return false
        "jmp end\n"
        "true:\n"
        "movq $1,%rax\n"   //return true
        "end:\n"
        "leave\n"
        "ret\n"
    );
}
I have tested these and found that the ZF flag is always cleared even when the bsf command is applied to the number zero, seemingly going against the specification.
//Calling function...
//Do stuff...
bitScanForward(0ULL);//ULL is 64 bit on my machine
if(isZFSet()){ //ZF flag *should* be set here but its not
    printf("ZF flag is set\n");
}
//More stuff...
I suspect the ZF flag is being cleared somewhere between leaving one set of inline instructions and entering the next.
How can I ensure that the flag in the above code is set as specified in the documentation? (I don't want to change much of my code or design)
My "other problem" is that if I dont include the printf statement in the isZFFlagSet, the function seemingly doesnt execute. Totally bizarre. Can anyone explain why?
You are treating an aggressively optimizing C compiler as if it were a macro assembler. That just plain isn't going to work. To get GCC to emit correct code in the presence of assembly inserts, you have to annotate the inserts with complete information about the registers and memory regions that are affected by the assembly code, and you have to use ancillary C statements to mesh them with the surrounding code. Even then, there are things the assembly insert cannot do at all. I urge you to scrap this entire mess and instead use the __builtin_ctzll intrinsic, as suggested in the comments on the question.
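For reference, a minimal sketch of the intrinsic approach (note that, like BSF's destination, the result of __builtin_ctzll is undefined for a zero input, so the caller still has to handle b == 0):

#include <stdint.h>

uint64_t bitScanForward(uint64_t b) {
    return (uint64_t)__builtin_ctzll(b);   // compiles to bsf/tzcnt; no inline asm needed
}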
Now, to specifics. Your first function is incorrect because GCC does not support use of leave or ret inside an assembly insert. (More generally, assembly inserts may not alter the stack pointer, and may only jump to designated labels within the same function.) The correct way to use bsf from a GCC-style assembly insert is with "extended asm" with input and output operands:
uint64_t bitScanForward(uint64_t b) {
    uint64_t ret;
    asm ("bsf %1, %0" : "=r" (ret) : "r" (b));
    return ret;
}
You must declare a C variable to receive the output of the operation, and explicitly return that variable; having bsf write to %rax would not work (unlike how it was in old MSVC). BSF accepts any two registers as operands, so there is no need to use constraints more specific than r.
Your second function is incorrect because you didn't tell GCC that the condition codes were meaningful after bitScanForward, and because GCC does not support using the condition-code register as an input to an assembly insert. In order to read the ZF output from bsf you must do so within the same assembly insert that invoked bsf:
uint64_t countTrailingZeroes(uint64_t b) {
    uint64_t ret;
    asm ("bsf %1, %0\n\t"
         "cmove %2, %0"
         : "=&r" (ret)
         : "r" (b), "rm" ((uint64_t)64));  // 64-bit source so cmove operand sizes match
    return ret;
}
This requires special care -- see how the constraint on operand 0 is now =&r instead of just =r? Without that, GCC is liable to think it can put operand 2 in the same register as operand 0.
Alternatively, you can specify that ZF is an output, which is supported (see the "flag output operands" section of the manual) and then supply a default value from C:
uint64_t countTrailingZeroes(uint64_t b) {
    uint64_t ret;
    int zf;
    asm ("bsf %2, %0"
         : "=r" (ret), "=@ccz" (zf) : "r" (b));
    if (zf) ret = 64;
    return ret;
}
This is basically to perform a byte swap on buffers while transferring a message buffer. This statement left me puzzled (because of my unfamiliarity with embedded assembly code in C). This is a PowerPC instruction:
#define ASMSWAP32(dest_addr,data) __asm__ volatile ("stwbrx %0, 0, %1" : : "r" (data), "r" (dest_addr))
Besides being unsafe because of a bug, this macro is also less efficient than what the compiler will generate for you.
stwbrx = store word byte-reversed. The x stands for indexed.
You don't need inline asm for this in GNU C, where you can use __builtin_bswap32 and let the compiler emit this instruction for you.
void swapstore_asm(int a, int *p) {
    ASMSWAP32(p, a);
}

void swapstore_c(int a, int *p) {
    *p = __builtin_bswap32(a);
}
Compiled with gcc4.8.5 -O3 -mregnames, we get identical code from both functions (Godbolt compiler explorer):
swapstore_asm:
        stwbrx %r3, 0, %r4
        blr
swapstore_c:
        stwbrx %r3,0,%r4
        blr
But with a more complicated address (storing to p[off], where off is an integer function arg), the compiler knows how to use both register inputs, while your macro forces the compiler to have the address in a single register:
void swapstore_offset(int a, int *p, int off) {
    p[off] = __builtin_bswap32(a);
}
swapstore_offset:
        slwi %r5,%r5,2        # *4 = sizeof(int)
        stwbrx %r3,%r4,%r5    # use an indexed addressing mode, with both registers non-zero
        blr
swapstore_offset_asm:
        slwi %r5,%r5,2
        add %r4,%r4,%r5       # extra instruction forced by using the macro
        stwbrx %r3, 0, %r4
        blr
BTW, if you're having trouble understanding GNU C inline asm templates, looking at the compiler's asm output can be a useful way to see what gets substituted in. See How to remove "noise" from GCC/clang assembly output? for more about reading compiler asm output.
Also note that this macro is buggy: it's missing a "memory" clobber for the store. And yes, you still need that with asm volatile. The compiler doesn't assume that *dest_addr is modified unless you tell it, so it could hoist a non-volatile load of *dest_addr ahead of this insn, or more likely to be a real problem, sink a store after it. (e.g. if you zeroed a buffer before storing to it with this, the compiler might actually zero after this instruction.)
Instead of a "memory" clobber (and also leaving out volatile), you could tell the compiler which memory location you modify with a =m" (*dest_addr) operand, either as a dummy operand or with a constraint on the addressing mode so you could use it as reg+reg. (IDK PPC well enough to know what "=m" usually expands to.)
In most cases this bug won't bite you, but it's still a bug. Upgrading your compiler version or using link-time optimization could maybe make your program buggy with no source-level changes.
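A sketch of the "=m" approach mentioned above (assuming GNU C and <stdint.h>; the ASMSWAP32_FIXED name is just for illustration):

#include <stdint.h>

/* Dummy "=m" output tells GCC this memory is written, so no "memory"
 * clobber or volatile is needed; dead stores can still be optimized away. */
#define ASMSWAP32_FIXED(dest_addr, data)        \
    __asm__ ("stwbrx %1, 0, %2"                 \
             : "=m" (*(uint32_t *)(dest_addr))  \
             : "r" (data), "r" (dest_addr))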
This kind of thing is why https://gcc.gnu.org/wiki/DontUseInlineAsm
See also https://stackoverflow.com/tags/inline-assembly/info.
#define ASMSWAP32(dest_addr,data) ...
This part should be clear
__asm__ volatile ( ... : : "r" (data), "r" (dest_addr))
This is the actual inline assembly:
Two values are passed to the assembly code; no value is returned from the assembly code (that's what the empty output section between the colons after the assembly template means).
Both parameters are passed in registers ("r"). The expression %0 will be replaced by the register that contains the value of data while the expression %1 will be replaced by the register that contains the value of dest_addr (which will be a pointer in this case).
The volatile here means that the assembly code must actually be executed and may not be optimized away, even though the compiler can see no outputs from it; it does not by itself pin the asm to an exact position relative to surrounding non-volatile code.
So if you use the following code in the C source:
ASMSWAP32(&a, b);
... the following assembler code will be generated:
# write the address of a to register 5 (for example)
...
# write the value of b to register 6
...
stwbrx 6, 0, 5
So the first argument of the stwbrx instruction is the value of b and the last argument is the address of a.
stwbrx x, 0, y
This instruction writes the value in register x to the address stored in register y; however, it writes the value in "reverse endian" (on a big-endian CPU it writes the value "little endian").
The following code:
uint32 a;
ASMSWAP32(&a, 0x12345678);
... should therefore result in a = 0x78563412.
I was looking at the documentation on the Atmel website and I came across this example where they explain some issues with reordering.
Here's the example code:
#define cli() __asm volatile( "cli" ::: "memory" )
#define sei() __asm volatile( "sei" ::: "memory" )
unsigned int ivar;
void test2( unsigned int val )
{
    val = 65535U / val;
    cli();
    ivar = val;
    sei();
}
In this example, they're implementing a critical region-like mechanism. The cli instruction disables interrupts and the sei instruction enables them. Normally, I would save the interrupt state and restore to that state, but I digress...
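For reference, a sketch of that save/restore pattern (assuming AVR GNU C, <avr/io.h> for SREG, and the ivar / cli() definitions above; avr-libc's <util/atomic.h> ATOMIC_BLOCK(ATOMIC_RESTORESTATE) does the same thing more conveniently):

#include <avr/io.h>

void test2_restore( unsigned int val )
{
    val = 65535U / val;
    unsigned char sreg = SREG;            /* save SREG, including the I (interrupt) bit */
    cli();                                /* enter the critical region */
    ivar = val;
    __asm volatile( "" ::: "memory" );    /* keep the store inside the region */
    SREG = sreg;                          /* restore the previous interrupt state */
}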
The problem which they note is that, with optimization enabled, the division on the first line actually gets moved to after the cli instruction. This can cause some issues when you're trying to stay inside the critical region for the shortest amount of time possible.
How come this is possible if the cli() MACRO expands to inline asm which explicitly clobbers the memory? How is the compiler free to move things before or after this statement?
Also, I modified the code to include memory barriers before every statement in the form of __asm volatile("" ::: "memory"); and it doesn't seem to change anything.
I also removed the memory clobber from the cli() and sei() MACROs, and the generated code was identical.
Of course, if I declare the test2 function argument as volatile, there is no reordering, which I assume to be because volatile statements can't be reordered with respect to other volatile statements (which the inline asm technically is). Is my assumption correct?
Can volatile accesses be reordered with respect to volatile inline asm?
Can non-volatile accesses be reordered with respect to volatile inline asm?
What's weird is that Atmel claims they need the memory clobber just to enforce the ordering of volatile accesses with respect to the asm. That doesn't make any sense to me.
If the compiler barrier isn't the proper solution for this, then how could I go about preventing any outside code from "leaking" into the critical region?
If anyone could shed some light, I'd appreciate it.
Thanks
How come this is possible if the cli() MACRO expands to inline asm which explicitly clobbers the memory? How is the compiler free to move things before or after this statement?
This is due to implementation details of avr-gcc: The compiler's support library, libgcc, provides many functions written in assembly for performance; including functions for integer division like __udivmodhi4. Not all of these functions clobber all of the callee-used registers as specified by the avr-gcc ABI. In particular, __udivmodhi4 does not clobber the Z register.
avr-gcc makes use of this as follows: on machines without a 16-bit division instruction, like AVR, GCC would normally issue a library call instead of generating code for it inline. avr-gcc, however, pretends that the architecture does have such a division instruction and models it as having an effect on processor registers just like the library call. Finally, after all code analyses and optimizations, the avr backend prints this instruction as [R]CALL __udivmodhi4. Let's call this a transparent call, i.e. a call which the compiler analysis does not see.
Example
int div (int a, int b, volatile const __flash char *z)
{
    int ab;
    (void) *z;
    asm volatile ("" : "+r" (a));
    ab = a / b;
    asm volatile ("" : "+r" (ab));
    (void) *z;
    return ab;
}
Compile this with avr-gcc -S -Os -mmcu=atmega8 ... to get assembly file *.s:
div:
        movw r30,r20
        lpm r18,Z
        rcall __divmodhi4
        movw r24,r22
        lpm r18,Z
        ret
Explanation
(void) *z reads one byte from flash, and in order to use the lpm instruction, the address must be in the Z register, accomplished by movw r30,r20. After reading via lpm, the compiler issues rcall __divmodhi4 to perform signed 16-bit division. If this were an ordinary (non-transparent) call, the compiler would know nothing about the internal workings of the callee, but as the avr backend models the call by hand, the compiler knows that the instruction sequence does not change Z and hence may use Z again after the call without any further ado. This allows for better code generation due to less register pressure; in particular, z need not be saved / restored around the division.
The asm just serves to order the code: It is volatile and hence must not be reordered against the volatile read *z. And the asm must not be reordered against the division because the asm changes a and ab – at least that's what we are pretending and telling the compiler by means of the constraints. (These variables are not actually changed, but that does not matter here.)
Also, I modified the code to include memory barriers before every statement in the form of __asm volatile("" ::: "memory"); and it doesn't seem to change anything.
The division does not touch memory (it's a transparent call without memory clobber) hence the compiler machinery may reorder it against memory clobber / accesses.
If you need a specific order, then you'll have to introduce artificial dependencies like in my example above.
In order to tell apart ordinary calls from transparent ones, you can dump the generated assembly in the .s file by means of -save-temps -dp, where -dp prints insn names:
void func0 (void);

int func1 (int a, int b)
{
    return a / b;
}

void func2 (void)
{
    func0();
}
Every call that's neither call_insn nor call_value_insn is a transparent call, *divmodhi4_call in this case:
func1:
        rcall __divmodhi4     ; 17 [c=0 l=1] *divmodhi4_call
        movw r24,r22          ; 18 [c=4 l=1] *movhi/0
        ret                   ; 23 [c=0 l=1] return
func2:
        rjmp func0            ; 5 [c=0 l=1] call_insn/3
I remember seeing a way to use extended gcc inline assembly to read a register value and store it into a C variable.
I cannot though for the life of me remember how to form the asm statement.
Editor's note: this way of using a local register-asm variable is now documented by GCC as "not supported". It still usually happens to work on GCC, but breaks with clang. (This wording in the documentation was added after this answer was posted, I think.)
The global fixed-register variable version has a large performance cost for 32-bit x86, which only has 7 GP-integer registers (not counting the stack pointer). This would reduce that to 6. Only consider this if you have a global variable that all of your code uses heavily.
Going in a different direction than other answers so far, since I'm not sure what you want.
GCC Manual § 5.40 Variables in Specified Registers
register int *foo asm ("a5");
Here a5 is the name of the register which should be used…
Naturally the register name is cpu-dependent, but this is not a problem, since specific registers are most often useful with explicit assembler instructions (see Extended Asm). Both of these things generally require that you conditionalize your program according to cpu type.
Defining such a register variable does not reserve the register; it remains available for other uses in places where flow control determines the variable's value is not live.
GCC Manual § 3.18 Options for Code Generation Conventions
-ffixed-reg
Treat the register named reg as a fixed register; generated code should never refer to it (except perhaps as a stack pointer, frame pointer or in some other fixed role).
This can replicate Richard's answer in a simpler way,
int main() {
    register int i asm("ebx");
    return i + 1;
}
although this is rather meaningless, as you have no idea what's in the ebx register.
If you combine these two, compiling with gcc -ffixed-ebx,
#include <stdio.h>

register int counter asm("ebx");

void check(int n) {
    if (!(n % 2 && n % 3 && n % 5)) counter++;
}

int main() {
    int i;
    counter = 0;
    for (i = 1; i <= 100; i++) check(i);
    printf("%d Hamming numbers between 1 and 100\n", counter);
    return 0;
}
you can ensure that a C variable always resides in a register for speedy access and also will not get clobbered by other compiler-generated code. (Handily, ebx is callee-saved under usual x86 calling conventions, so even if it gets clobbered by calls to other functions compiled without -ffixed-*, it should get restored too.)
On the other hand, this definitely isn't portable, and usually isn't a performance benefit either, as you're restricting the compiler's freedom.
Here is a way to get ebx:
int main()
{
    int i;
    asm("\t movl %%ebx,%0" : "=r"(i));
    return i + 1;
}
The result:
main:
        subl    $4, %esp
#APP
        movl %ebx,%eax
#NO_APP
        incl    %eax
        addl    $4, %esp
        ret
Edit:
The "=r"(i) is an output constraint, telling the compiler that the first output (%0) is a register that should be placed in the variable "i". At this optimization level (-O5) the variable i never gets stored to memory, but is held in the eax register, which also happens to be the return value register.
I don't know about gcc, but in VS this is how:
int data = 0;
__asm
{
    mov ebx, 30
    mov data, ebx
}
cout << data;
Essentially, I moved the data in ebx to your variable data.
This will move the stack pointer register into the sp variable.
intptr_t sp;
asm ("movl %%esp, %0" : "=r" (sp) );
Just replace 'esp' with the actual register you are interested in (but make sure not to lose the %%) and 'sp' with your variable.
From the GCC docs itself: http://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
#include <stdio.h>

void gav(){
    //rgv_t argv = get();
    register unsigned long long i asm("rax");
    register unsigned long long ii asm("rbx");
    printf("I`m gav - first arguman is: %s - 2th arguman is: %s\n", (char *)i, (char *)ii);
}

int main(void)
{
    char *test = "I`m main";
    char *test1 = "I`m main2";
    printf("0x%llx\n", (unsigned long long)&gav);
    asm("call %P0" : :"i"((unsigned long long)&gav), "a"(test), "b"(test1));
    return 0;
}
You can't know what value compiler-generated code will have stored in any register when your inline asm statement runs, so the value is usually meaningless, and you'd be much better off using a debugger to look at register values when stopped at a breakpoint.
That being said, if you're going to do this strange task, you might as well do it efficiently.
On some targets (like x86) you can use specific-register output constraints to tell the compiler which register an output will be in. Use a specific-register output constraint with an empty asm template (zero instructions) to tell the compiler that your asm statement doesn't care about that register value on input, but afterward the given C variable will be in that register.
#include <stdint.h>

int foo() {
    uint64_t rax_value;                // type width determines register size
    asm("" : "=a"(rax_value));         // =letter determines which register (or partial reg)

    uint32_t ebx_value;
    asm("" : "=b"(ebx_value));

    uint16_t si_value;
    asm("" : "=S"(si_value) );

    uint8_t sil_value;                 // x86-64 required to use the low 8 of a reg other than a-d
    // With -m32: error: unsupported size for integer register
    asm("# Hi mom, my output constraint picked %0" : "=S"(sil_value) );

    return sil_value + ebx_value;
}
Compiled with clang5.0 on Godbolt for x86-64. Notice that the 2 unused output values are optimized away: there are no #APP / #NO_APP compiler-generated asm-comment pairs for them (those markers switch the assembler out of / into fast-parsing mode, or at least used to if that's no longer a thing). This is because I didn't use asm volatile, and they have an output operand so they're not implicitly volatile.
foo():                                  # @foo()
# BB#0:
        push    rbx
        #APP
        #NO_APP
        #DEBUG_VALUE: foo:ebx_value <- %EBX
        #APP
        # Hi mom, my output constraint picked %sil
        #NO_APP
        #DEBUG_VALUE: foo:sil_value <- %SIL
        movzx   eax, sil
        add     eax, ebx
        pop     rbx
        ret
                                        # -- End function
        # DW_AT_GNU_pubnames
        # DW_AT_external
Notice the compiler-generated code to add two outputs together, directly from the registers specified. Also notice the push/pop of RBX, because RBX is a call-preserved register in the x86-64 System V calling convention. (And basically all 32 and 64-bit x86 calling conventions). But we've told the compiler that our asm statement writes a value there. (Using an empty asm statement is kind of a hack; there's no syntax to directly tell the compiler we just want to read a register, because like I said you don't know what the compiler was doing with the registers when your asm statement is inserted.)
The compiler will treat your asm statement as if it actually wrote that register, so if it needs the value for later, it will have copied it to another register (or spilled to memory) when your asm statement "runs".
The other x86 register constraints are b (bl/bx/ebx/rbx), c (.../rcx), d (.../rdx), S (sil/si/esi/rsi), D (.../rdi). There is no specific constraint for bpl/bp/ebp/rbp, even though it's not special in functions without a frame pointer. (Maybe because using it would make your code not compile with -fno-omit-frame-pointer.)
You can use register uint64_t rbp_var asm ("rbp"), in which case asm("" : "=r" (rbp_var)); guarantees that the "=r" constraint will pick rbp. Similarly for r8-r15, which don't have any explicit constraints either. On some architectures, like ARM, asm-register variables are the only way to specify which register you want for asm input/output constraints. (And note that asm constraints are the only supported use of register asm variables; there's no guarantee that the variable's value will be in that register any other time.)
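A minimal sketch of that technique (x86-64 GNU C assumed; read_r15 is just an illustrative name), pinning an "=r" output to R15, which has no dedicated constraint letter:

#include <stdint.h>

uint64_t read_r15(void) {
    register uint64_t r15_var asm("r15");   // register-asm variable
    asm("" : "=r" (r15_var));               // "=r" is guaranteed to pick r15 here
    return r15_var;
}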
There's nothing to stop the compiler from placing these asm statements anywhere it wants within a function (or parent functions after inlining). So you have no control over where you're sampling the value of a register. asm volatile may avoid some reordering, but maybe only with respect to other volatile accesses. You could check the compiler-generated asm to see if you got what you wanted, but beware that it might have been by chance and could break later.
You can place an asm statement in the dependency chain for something else to control where the compiler places it. Use a "+rm" constraint to tell the compiler it modifies some other variable which is actually used for something that doesn't optimize away.
uint32_t ebx_value;
asm("" : "=b"(ebx_value), "+rm"(some_used_variable) );
where some_used_variable might be a return value from one function, and (after some processing) passed as an arg to another function. Or computed in a loop, and will be returned as the function's return value. In that case, the asm statement is guaranteed to come at some point after the end of the loop, and before any code that depends on the later value of that variable.
This will defeat optimizations like constant-propagation for that variable, though. https://gcc.gnu.org/wiki/DontUseInlineAsm. The compiler can't assume anything about the output value; it doesn't check that the asm statement has zero instructions.
This doesn't work for some registers that gcc won't let you use as output operands or clobbers, e.g. the stack pointer.
Reading the value into a C variable might make sense for a stack pointer, though, if your program does something special with stacks.
As an alternative to inline asm, there's __builtin_frame_address(0) to get a stack address. (But IIRC, that causes the function to make a full stack frame, even when -fomit-frame-pointer is enabled, like it is by default on x86.)
Still, in many functions that's nearly free (and making a stack frame can be good for code-size, because of smaller addressing modes for RBP-relative than RSP-relative access to local variables).
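A minimal sketch of the builtin alternative (GNU C): it prints an address within the current function's stack frame, with no inline asm involved.

#include <stdio.h>

int main(void) {
    void *fp = __builtin_frame_address(0);   // address of the current frame
    printf("frame address: %p\n", fp);
    return 0;
}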
Using a mov instruction in an asm statement would of course work, too.
Isn't this what you are looking for?
Syntax:
asm ("fsinx %1,%0" : "=f" (result) : "f" (angle));