I'm trying my hand at assembly in order to use vector operations, which I've never really used before, and I'm admittedly having a bit of trouble grasping some of the syntax.
The relevant code is below.
uint16_t asdf[4];
asdf[0] = 1;
asdf[1] = 2;
asdf[2] = 3;
asdf[3] = 4;
uint16_t other = 3;
__asm__("movq %0, %%mm0"
:
: "m" (asdf));
__asm__("pcmpeqw %0, %%mm0"
:
: "r" (other));
__asm__("movq %%mm0, %0" : "=m" (asdf));
printf("%u %u %u %u\n", asdf[0], asdf[1], asdf[2], asdf[3]);
In this simple example, I'm trying to do a 16-bit compare of "3" to each element in the array. I would hope that the output would be "0 0 65535 0". But it won't even assemble.
The first assembly instruction gives me the following error:
error: memory input 0 is not directly addressable
The second instruction gives me a different error:
Error: suffix or operands invalid for `pcmpeqw'
Any help would be appreciated.
You can't use registers directly in gcc asm statements and expect them to match up with anything in other asm statements -- the optimizer moves things around. Instead, you need to declare variables of the appropriate type and use constraints to force those variables into the right kind of register for the instruction(s) you are using.
The relevant constraints for MMX/SSE are x for xmm registers and y for mmx registers. For your example, you can do:
#include <stdint.h>
#include <stdio.h>
typedef union xmmreg {
    uint8_t b[16];
    uint16_t w[8];
    uint32_t d[4];
    uint64_t q[2];
} xmmreg;

int main() {
    xmmreg v1, v2;
    v1.w[0] = 1;
    v1.w[1] = 2;
    v1.w[2] = 3;
    v1.w[3] = 4;
    v2.w[0] = v2.w[1] = v2.w[2] = v2.w[3] = 3;
    asm("pcmpeqw %1,%0" : "+x"(v1) : "x"(v2));
    printf("%u %u %u %u\n", v1.w[0], v1.w[1], v1.w[2], v1.w[3]);
}
Note that you need to explicitly replicate the 3 across all the relevant elements of the second vector.
From the Intel reference manual:
PCMPEQW mm, mm/m64 Compare packed words in mm/m64 and mm for equality.
PCMPEQW xmm1, xmm2/m128 Compare packed words in xmm2/m128 and xmm1 for equality.
Your pcmpeqw uses an "r" (general-purpose) register, which is wrong: only mm registers and m64 memory operands are allowed.
valter
The code above failed when expanding the asm(); it never even tried to assemble anything. In this case, you are trying to use the zeroth argument (%0), but you didn't give any.
Check out the GCC Inline assembler HOWTO, or read the relevant chapter of your local GCC documentation.
He's right: the optimizer is changing register contents. Switching to intrinsics and using volatile to keep things a little more in place might help.
Related
I need to measure time based on the Time Stamp Counter (TSC) for some reasons. To read the TSC, I am using the code below:
#include <stdio.h>
#include <inttypes.h>
inline volatile uint32_t RDTSC32() {
    register uint32_t TSC asm("eax");
    asm volatile (".byte 15, 49" : : : "eax", "edx");
    return TSC;
}

inline volatile uint64_t RDTSC64() {
    register uint64_t TSC asm("rax");
    asm volatile (".byte 15, 49" : : : "rax", "rdx");
    return TSC;
}

int main() {
    while (1) {
        printf("%" PRIu64 "\n", RDTSC64());
    }
}
When I've tested it, it works fine, except for one thing: when the counter reaches its maximum value (some value higher than 4,256,448,731 in my environment), it gets reset to 0 and keeps going.
In this situation, is there any way to see how many times TSC has been reset?
For example, the code below does not print a correct time difference:
#include <stdio.h>
int main() {
    long long start, end;
    start = RDTSC64();
    // long long works to do
    end = RDTSC64();
    printf("%lld \n", end - start);
}
The time stamp counter is always 64-bit; see this statement on the Wikipedia page:
The Time Stamp Counter (TSC) is a 64-bit register present on all x86 processors since the Pentium.
For some reason you're getting a truncated value with only 32 bits, which is why it wraps. The 64-bit value will need 146 years of continuous counting at 4 GHz to wrap.
Your code seems to want to use both eax and edx to hold the two 32 bit halves, as expected. Something must go wrong when moving the values to the single C variable. I believe the snippet you're using is for GCC; perhaps that's no longer your compiler?
Inspect the generated assembly, and check the compiler docs for a proper intrinsic function instead. This question has some good answers, with compiler-specific assembly.
My requirement is to set the EDI register using a variable with inline assembly. I wrote the following snippet, but it fails to compile.
uint32_t value = 0;
__asm__ __volatile__("mov %1,%%edi \n\t"
: "=D"
: "ir" (value)
:
);
Errors I get are
cyg_functions.cpp(544): error: expected a "("
: "ir" (value)
^
cyg_functions.cpp(544): internal error: null pointer
: "ir" (value)
Edit
I guess I wasn't clear on the problem specification. Let's say my requirement is as follows.
There are two int variables val and result.
I need to
Set the value of variable val into %%edi, clobbering whatever is in there already
Multiply %%edi value by 2
Set %%edi value back to result variable
How can this be stated with inline assembly? Though this is not exactly my requirement answer to this (specifically the 1st step) would solve my problem. I need the intermediate to be specifically in EDI register.
I have read your comments, and the requirements here still make no sense to me. However, making sense is not a requirement. Such being the case:
int main(int argc, char *argv[])
{
int res;
int value = argc;
asm ("shl $1, %[res]" /* Take the value in res (aka EDI) and shift
it left by 1. */
: [res] "=D" (res) /* On exit from the asm, the register EDI
will contain the value for "res". The
existing value of res will be overwritten. */
: "0" (value)); /* Take the contents of "value" and put it
in the same place as parameter #0. */
return res;
}
This may be easier to understand if you read it from the bottom up.
I use various double machine word types, like e.g. (u)int128_t on x86_64 and (u)int64_t on i386, ARM etc. in GCC.
I am looking for a correct/portable/clean way of accessing and manipulating the individual actual machine words (mostly in assembler). E.g. on 32bit machines I want to directly access the high/low 32bit part of an int64_t which gcc uses internally, without using stupid error-prone code like below. Similarly for the "native" 128bit types I want to access the 64b parts gcc is using (not for the below example as "add" is simple enough, but generally).
Consider the 32bit ASM path in the following code to add two int128_t together (which may be "native" to gcc, "native" to the machine or "half native" to the machine); it's horrendous and hard to maintain (and slower).
#define BITS 64
#if defined(USENATIVE)
// USE "NATIVE" 128bit GCC TYPE
typedef __int128_t int128_t;
typedef __uint128_t uint128_t;
typedef int128_t I128;
#define HIGH(x) x
#define HIGHVALUE(x) ((uint64_t)(x >> BITS))
#define LOW(x) x
#define LOWVALUE(x) (x & UMYINTMAX)
#else
typedef struct I128 {
int64_t high;
uint64_t low;
} I128;
#define HIGH(x) x.high
#define HIGHVALUE(x) x.high
#define LOW(x) x.low
#define LOWVALUE(x) x.low
#endif
#define HIGHHIGH(x) (HIGHVALUE(x) >> (BITS / 2))
#define HIGHLOW(x) (HIGHVALUE(x) & 0xFFFFFFFF)
#define LOWHIGH(x) (LOWVALUE(x) >> (BITS / 2))
#define LOWLOW(x) (LOWVALUE(x) & 0xFFFFFFFF)
inline I128 I128add(I128 a, const I128 b) {
#if defined(USENATIVE)
    return a + b;
#elif defined(USEASM) && defined(X86_64)
    __asm(
        "ADD %[blo], %[alo]\n"
        "ADC %[bhi], %[ahi]"
        : [alo] "+g" (a.low), [ahi] "+g" (a.high)
        : [blo] "g" (b.low), [bhi] "g" (b.high)
        : "cc"
    );
    return a;
#elif defined(USEASM) && defined(X86_32)
    // SLOWER DUE TO ALL THE CRAP
    int32_t ahihi = HIGHHIGH(a), bhihi = HIGHHIGH(b);
    uint32_t ahilo = HIGHLOW(a), bhilo = HIGHLOW(b);
    uint32_t alohi = LOWHIGH(a), blohi = LOWHIGH(b);
    uint32_t alolo = LOWLOW(a), blolo = LOWLOW(b);
    __asm(
        "ADD %[blolo], %[alolo]\n"
        "ADC %[blohi], %[alohi]\n"
        "ADC %[bhilo], %[ahilo]\n"
        "ADC %[bhihi], %[ahihi]\n"
        : [alolo] "+r" (alolo), [alohi] "+r" (alohi), [ahilo] "+r" (ahilo), [ahihi] "+r" (ahihi)
        : [blolo] "g" (blolo), [blohi] "g" (blohi), [bhilo] "g" (bhilo), [bhihi] "g" (bhihi)
        : "cc"
    );
    a.high = ((int64_t)ahihi << (BITS / 2)) + ahilo;
    a.low = ((uint64_t)alohi << (BITS / 2)) + alolo;
    return a;
#else
    // this seems faster than adding to a directly
    I128 r = {a.high + b.high, a.low + b.low};
    // check for overflow of low 64 bits, add carry to high
    // avoid conditionals
    r.high += r.low < a.low || r.low < b.low;
    return r;
#endif
}
Please note that I don't use C/ASM much, in fact this is my first attempt at inline ASM. Being used to Java/C#/JS/PHP etc. means that something very obvious to a routine C dev may not be apparent to me (besides the obvious insecure quirkiness in code style ;)). Also all this may be called something else entirely, because I had a very hard time finding anything online regarding the subject (non-native speaker as well).
Thanks a lot!
Edit 1
After much digging I have found the following theoretical solution, which works, but is unnecessarily slow (slower than the much longer gcc output!) because it forces everything to memory, while I am looking for a generic solution (reg/mem/possibly imm). I have also found that if you use an "r" constraint on e.g. a 64-bit int on a 32-bit machine, gcc will actually put both halves in 2 registers (e.g. eax and ebx). The problem is not being able to reliably access the second part. I am sure there is some hidden operand modifier that's just hard to find to tell gcc I want to access that second part.
uint32_t t1, t2;
__asm(
"MOV %[blo], %[t1]\n"
"MOV 4+%[blo], %[t2]\n"
"ADD %[t1], %[alo]\n"
"ADC %[t2], 4+%[alo]\n"
"MOV %[bhi], %[t1]\n"
"MOV 4+%[bhi], %[t2]\n"
"ADC %[t1], %[ahi]\n"
"ADC %[t2], 4+%[ahi]\n"
: [alo] "+o" (a.low), [ahi] "+o" (a.high), [t1] "=&r" (t1), [t2] "=&r" (t2)
: [blo] "o" (b.low), [bhi] "o" (b.high)
: "cc"
);
return a;
I have heard that this site isn't really for "code review", but since this is an interesting aspect to me, I think I can offer some advice:
In the 32-bit version, you could do the HIGHHIGH and co. with some clever overlaying of arrays of int/uint, instead of shifting and anding. Using a union is one way to do this, the other being pointer "magic". Since the assembler part of the code isn't particularly portable in the first place, using non-portable code in form of type punning casts or non-portable unions isn't really that big a deal.
Edit: relying on position within the words may also be fine. So for example, just passing in the address of the input and the output in registers, and then using (%0+4) and (%1+4) to do the remaining parts is definitely an option.
It will of course get more interesting if you have to do this for multiply and divide... I'm not sure I'd like to go there...
The following piece of code was given to us from our instructor so we could measure some algorithms performance:
#include <stdio.h>
#include <unistd.h>
static unsigned cyc_hi = 0, cyc_lo = 0;
static void access_counter(unsigned *hi, unsigned *lo) {
    asm("rdtsc; movl %%edx,%0; movl %%eax,%1"
        : "=r" (*hi), "=r" (*lo)
        : /* No input */
        : "%edx", "%eax");
}

void start_counter() {
    access_counter(&cyc_hi, &cyc_lo);
}

double get_counter() {
    unsigned ncyc_hi, ncyc_lo, hi, lo, borrow;
    double result;
    access_counter(&ncyc_hi, &ncyc_lo);
    lo = ncyc_lo - cyc_lo;
    borrow = lo > ncyc_lo;
    hi = ncyc_hi - cyc_hi - borrow;
    result = (double) hi * (1 << 30) * 4 + lo;
    return result;
}
However, I need this code to be portable to machines with different CPU frequencies. For that, I'm trying to calculate the CPU frequency of the machine where the code is being run like this:
int main(void)
{
    double c1, c2;
    start_counter();
    c1 = get_counter();
    sleep(1);
    c2 = get_counter();
    printf("CPU Frequency: %.1f MHz\n", (c2-c1)/1E6);
    printf("CPU Frequency: %.1f GHz\n", (c2-c1)/1E9);
    return 0;
}
The problem is that the result is always 0 and I can't understand why. I'm running Linux (Arch) as guest on VMware.
On a friend's machine (MacBook) it is working to some extent; I mean, the result is bigger than 0, but variable, because the CPU frequency is not fixed (we tried to fix it, but for some reason we were not able to). He has a different machine running Linux (Ubuntu) as host, and it also reports 0. This rules out the problem being the virtual machine, which is what I thought the issue was at first.
Any ideas why this is happening and how can I fix it?
Okay, since the other answer wasn't helpful, I'll try to explain in more detail. The problem is that a modern CPU can execute instructions out of order. Your code starts out as something like:
rdtsc
push 1
call sleep
rdtsc
Modern CPUs do not necessarily execute instructions in their original order though. Despite your original order, the CPU is (mostly) free to execute that just like:
rdtsc
rdtsc
push 1
call sleep
In this case, it's clear why the difference between the two rdtscs would be (at least very close to) 0. To prevent that, you need to execute an instruction that the CPU will never rearrange to execute out of order. The most common instruction to use for that is CPUID. The other answer I linked should (if memory serves) start roughly from there, about the steps necessary to use CPUID correctly/effectively for this task.
Of course, it's possible that Tim Post was right, and you're also seeing problems because of a virtual machine. Nonetheless, as it stands right now, there's no guarantee that your code will work correctly even on real hardware.
Edit: as to why the code would work: well, first of all, the fact that instructions can be executed out of order doesn't guarantee that they will be. Second, it's possible that (at least some implementations of) sleep contain serializing instructions that prevent rdtsc from being rearranged around it, while others don't (or may contain them, but only execute them under specific (but unspecified) circumstances).
What you're left with is behavior that could change with almost any re-compilation, or even just between one run and the next. It could produce extremely accurate results dozens of times in a row, then fail for some (almost) completely unexplainable reason (e.g., something that happened in some other process entirely).
I can't say for certain what exactly is wrong with your code, but you're doing quite a bit of unnecessary work for such a simple instruction. I recommend you simplify your rdtsc code substantially. You don't need to do 64-bit carry math yourself, and you don't need to store the result as a double. You don't need separate outputs in your inline asm; you can tell GCC to use eax and edx.
Here is a greatly simplified version of this code:
#include <stdint.h>
uint64_t rdtsc() {
    uint64_t ret;
#if __WORDSIZE == 64
    asm ("rdtsc; shl $32, %%rdx; or %%rdx, %%rax;"
         : "=A"(ret)
         : /* no input */
         : "%edx"
        );
#else
    asm ("rdtsc"
         : "=A"(ret)
        );
#endif
    return ret;
}
Also, you should consider printing the values you get out of this, so you can see whether you're getting 0s or something else.
As for VMWare, take a look at the time keeping spec (PDF Link), as well as this thread. TSC instructions are (depending on the guest OS):
Passed directly to the real hardware (PV guest)
Count cycles while the VM is executing on the host processor (Windows / etc)
Note the qualifier in #2: while the VM is executing on the host processor. The same phenomenon goes for Xen as well, if I recall correctly. In essence, you can expect the code to work as expected on a paravirtualized guest. If emulated, it's entirely unreasonable to expect hardware-like consistency.
You forgot to use volatile in your asm statement, so you're telling the compiler that the asm statement produces the same output every time, like a pure function. (volatile is only implicit for asm statements with no outputs.)
This explains why you're getting exactly zero: the compiler optimized end-start to 0 at compile time, through CSE (common-subexpression elimination).
See my answer on Get CPU cycle count? for the __rdtsc() intrinsic, and #Mysticial's answer there has working GNU C inline asm, which I'll quote here:
// prefer using the __rdtsc() intrinsic instead of inline asm at all.
uint64_t rdtsc(){
unsigned int lo,hi;
__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
return ((uint64_t)hi << 32) | lo;
}
This works correctly and efficiently for 32 and 64-bit code.
Hmmm, I'm not positive, but I suspect the problem may be inside this line:
result = (double) hi * (1 << 30) * 4 + lo;
I'm suspicious whether you can safely carry out such huge multiplications in an "unsigned"... isn't that often a 32-bit number? The fact that you couldn't safely multiply by 2^32 and had to append it as an extra "* 4" on the 2^30 already hints at this possibility. You might need to convert each sub-component, hi and lo, to a double (instead of a single conversion at the very end) and do the multiplication using the two doubles.
I would like my C function to efficiently compute the high 64 bits of the product of two 64 bit signed ints. I know how to do this in x86-64 assembly, with imulq and pulling the result out of %rdx. But I'm at a loss for how to write this in C at all, let alone coax the compiler to do it efficiently.
Does anyone have any suggestions for writing this in C? This is performance sensitive, so "manual methods" (like Russian Peasant, or bignum libraries) are out.
This dorky inline assembly function I wrote works and is roughly the codegen I'm after:
static long mull_hi(long inp1, long inp2) {
long output = -1;
__asm__("movq %[inp1], %%rax;"
"imulq %[inp2];"
"movq %%rdx, %[output];"
: [output] "=r" (output)
: [inp1] "r" (inp1), [inp2] "r" (inp2)
:"%rax", "%rdx");
return output;
}
If you're using a relatively recent GCC on x86_64:
int64_t mulHi(int64_t x, int64_t y) {
return (int64_t)((__int128_t)x*y >> 64);
}
At -O1 and higher, this compiles to what you want:
_mulHi:
0000000000000000 movq %rsi,%rax
0000000000000003 imulq %rdi
0000000000000006 movq %rdx,%rax
0000000000000009 ret
I believe that clang and VC++ also have support for the __int128_t type, so this should also work on those platforms, with the usual caveats about trying it yourself.
The general answer is that x * y can be broken down into (a + b) * (c + d), where a and c are the high order parts.
First, expand to ac + ad + bc + bd
Now, you multiply the terms as 32 bit numbers stored as long long (or better yet, uint64_t), and you just remember that when you multiplied a higher order number, you need to scale by 32 bits. Then you do the adds, remembering to detect carry. Keep track of the sign. Naturally, you need to do the adds in pieces.
For code implementing the above, see my other answer.
With regard to your assembly solution, don't hard-code the mov instructions! Let the compiler do it for you. Here's a modified version of your code:
static long mull_hi(long inp1, long inp2) {
long output;
__asm__("imulq %2"
: "=d" (output)
: "a" (inp1), "r" (inp2));
return output;
}
Helpful reference: Machine Constraints
Since you did a pretty good job solving your own problem with the machine code, I figured you deserved some help with the portable version. I would leave an #ifdef in so that you just use the assembly when on gnu on x86.
Anyway, here is an implementation based on my general answer. I'm pretty sure this is correct, but no guarantees, I just banged this out last night. You probably should get rid of the statics positive_result[] and result_negative - those are just artefacts of my unit test.
#include <stdlib.h>
#include <stdio.h>
// stdint.h doesn't help much here because we need to call llabs()
typedef unsigned long long uint64_t;
typedef signed long long int64_t;
#define B32 0xffffffffUL
static uint64_t positive_result[2]; // used for testing
static int result_negative; // used for testing
static void mixed(uint64_t *result, uint64_t innerTerm)
{
    // the high part of innerTerm is actually the easy part
    result[1] += innerTerm >> 32;
    // the low order a*d might carry out of the low order result
    uint64_t was = result[0];
    result[0] += (innerTerm & B32) << 32;
    if (result[0] < was) // carry!
        ++result[1];
}

static uint64_t negate(uint64_t *result)
{
    uint64_t t = result[0] = ~result[0];
    result[1] = ~result[1];
    if (++result[0] < t)
        ++result[1];
    return result[1];
}

uint64_t higherMul(int64_t sx, int64_t sy)
{
    uint64_t x, y, result[2] = { 0 }, a, b, c, d;
    x = (uint64_t)llabs(sx);
    y = (uint64_t)llabs(sy);
    a = x >> 32;
    b = x & B32;
    c = y >> 32;
    d = y & B32;
    // the highest and lowest order terms are easy
    result[1] = a * c;
    result[0] = b * d;
    // now have the mixed terms ad + bc to worry about
    mixed(result, a * d);
    mixed(result, b * c);
    // now deal with the sign
    positive_result[0] = result[0];
    positive_result[1] = result[1];
    result_negative = sx < 0 ^ sy < 0;
    return result_negative ? negate(result) : result[1];
}
Wait, you have a perfectly good, optimized assembly solution already working for this, and you want to back it out and try to write it in an environment that doesn't support 128 bit math? I'm not following.

As you're obviously aware, this operation is a single instruction on x86-64. Obviously nothing you do is going to make it work any better.

If you really want portable C, you'll need to do something like DigitalRoss's code above and hope that your optimizer figures out what you're doing.

If you need architecture portability but are willing to limit yourself to gcc platforms, there are __int128_t (and __uint128_t) types in the compiler intrinsics that will do what you want.