clock_gettime doesn't work on macOS Sierra anymore. I'm pretty sure I had this compiling correctly before Xcode 8 came out, and now I'm stuck on what to do to get it to compile again.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
int main(void) {
    struct timespec time1, time2;
    clock_gettime(CLOCK_MONOTONIC, &time1);
    // Some code I am trying to work out the performance of...
    clock_gettime(CLOCK_MONOTONIC, &time2);
    // Diff both fields: tv_nsec alone wraps around every second.
    printf("Time Taken: %ld\n",
           (time2.tv_sec - time1.tv_sec) * 1000000000L + (time2.tv_nsec - time1.tv_nsec));
    return 0;
}
Code like this simply fails to compile. I've been told the Sierra timing library has changed. I get a compiler error for CLOCK_MONOTONIC not being defined and a warning for the implicit declaration of clock_gettime; if I define CLOCK_MONOTONIC to something arbitrary, the warning effectively becomes an error, because the build then fails at the linking stage.
Does anyone know of a fix or workaround to get the code compiling and executing?
I don't think CLOCK_MONOTONIC has been available on macOS until very recently, if at all.
I believe what you want is probably mach_absolute_time() plus a conversion, as documented here; this appears to be monotonic, and absolute (which are two different things as you probably know).
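For example, here's a minimal sketch of the conversion (assuming only the mach_absolute_time() / mach_timebase_info() API from the docs linked above; the ticks-to-nanoseconds ratio is what mach_timebase_info reports):

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

int main(void) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);  /* numer/denom convert ticks to nanoseconds */

    uint64_t start = mach_absolute_time();
    /* ...code to measure... */
    uint64_t end = mach_absolute_time();

    uint64_t elapsed_ns = (end - start) * tb.numer / tb.denom;
    printf("Time Taken: %llu ns\n", (unsigned long long)elapsed_ns);
    return 0;
}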
Other useful hints are to be found at the following related (but I think not duplicate) questions:
clock_gettime alternative in Mac OS X (but deals with a different clock type)
Monotonic clock on OSX (not C but objective C)
By chance, I found out about the existence of the clock_gettime() function for Linux systems. Since I'm looking for a way to measure the execution time of a function, I tried it with MinGW GCC 8.2.0 on a 64-bit Windows 10 machine:
#include <time.h>
#include <stdio.h>
int main() {
    struct timespec tstart, tend;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tstart);
    for (int i = 0; i < 100000; ++i);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tend);
    printf("It takes %li nanoseconds for 100,000 empty iterations.\n",
           tend.tv_nsec - tstart.tv_nsec);
    return 0;
}
This code snippet compiles without warnings/errors, and there are no runtime failures (at least not written to stdout).
Output:
It takes 0 nanoseconds for 100,000 empty iterations.
Which I don't believe is true.
Can you spot the flaw?
One more thing:
According to the N1570 Committee draft (April 12, 2011) of the ISO/IEC 9899:201x, shouldn't timespec_get() take the role of clock_gettime() instead?
First of all, your code is querying two different clocks (CLOCK_THREAD_CPUTIME_ID for tstart and CLOCK_PROCESS_CPUTIME_ID for tend), so it makes no sense to compare the two values. Secondly, you're only looking at the tv_nsec field of the struct timespec filled in by clock_gettime(); since tv_nsec wraps around every second, the difference can be wrong even when you query the same clock both times, unless you also account for tv_sec. Also, your compiler could be optimizing the empty for loop away, but that's impossible to say without looking at the generated binary; I would find it unlikely unless you were compiling with -O1 or -O2 (see here for example, the loop is eliminated only with -O2).
Furthermore, Windows is not POSIX compliant at all, and MinGW can only emulate the behavior of clock_gettime() to some extent, so I wouldn't really trust it to return precise values. It seems to be okay in mingw-w64, looking at the source code, but I don't know if that's the version you're using. Even though a struct timespec object describes times with nanosecond resolution, the available resolution is system dependent and may even be greater than 1 second; you might want to check what clock_getres() says.
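For what it's worth, here's a minimal sketch of how I'd take the measurement: same clock for both samples, return values checked, the difference computed from both fields, and clock_getres() consulted first (whether CLOCK_PROCESS_CPUTIME_ID means much under MinGW is a separate question):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res, tstart, tend;

    /* Report the clock's actual granularity first. */
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) == 0)
        printf("Resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);

    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tstart) != 0) {
        perror("clock_gettime");  /* the clock may simply be unsupported */
        return 1;
    }
    for (int i = 0; i < 100000; ++i);  /* may still be optimized away */
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tend) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* Use both fields: tv_nsec alone wraps around every second. */
    long long ns = (tend.tv_sec - tstart.tv_sec) * 1000000000LL
                 + (tend.tv_nsec - tstart.tv_nsec);
    printf("It takes %lld nanoseconds for 100,000 empty iterations.\n", ns);
    return 0;
}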
According to the N1570 Committee draft (April 12, 2011) of the ISO/IEC 9899:201x, shouldn't timespec_get() take the role of clock_gettime() instead?
The C standard does not say anything about which function should take over the role of which other. The timespec_get() function definitely does not have the same semantics as clock_gettime(): it only works on "calendar time" (which should match what clock_gettime() reports for CLOCK_REALTIME).
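To make the correspondence concrete, a small sketch (C11 for timespec_get(), POSIX for clock_gettime(); on a conforming system both lines should print roughly the same wall-clock time):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;

    /* C11: TIME_UTC is the only time base the standard defines. */
    if (timespec_get(&ts, TIME_UTC) == TIME_UTC)
        printf("timespec_get:  %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);

#ifdef CLOCK_REALTIME
    /* The POSIX near-equivalent of the above. */
    if (clock_gettime(CLOCK_REALTIME, &ts) == 0)
        printf("clock_gettime: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
#endif
    return 0;
}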
That loop should get optimized out to nothing at all, so with a low-resolution clock (resolution is not necessarily individual nanoseconds; the clock may advance in much larger units, which clock_getres() should be able to tell you), 0 is a plausible result. But you have a few other bugs in your code, like mixing CLOCK_THREAD_CPUTIME_ID with CLOCK_PROCESS_CPUTIME_ID and not checking the return value of clock_gettime() (it might be telling you these clocks aren't supported).
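If you actually want the loop to survive optimization, one common trick (a sketch, not the only way) is to give its work an observable effect, for example through a volatile accumulator:

#include <stdio.h>

int main(void) {
    volatile long sink = 0;   /* volatile: each access must really happen */
    for (int i = 0; i < 100000; ++i)
        sink += i;            /* the loop now has observable effects */
    printf("sink = %ld\n", sink);
    return 0;
}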
For the heck of it, I decided to see whether a program I started writing on an Amiga many years ago, and developed much further on other machines since, would still compile and run on an Amiga. I originally used Lattice C because that's what I had used before, but the 68881 support in Lattice is VERY buggy, so I decided to try gcc. I think the most recent version of gcc for the Amiga is 2.7.0 (so I can't upgrade). It has worked rather well except for one bug in the 68881 support: when multiplying any negative number by zero, the result is always:
1.:00000
when printed out (the colon is NOT a typo). BTW, if you set x to zero and then print it out, it's 0.00000 like it should be.
Here's a sample program that reproduces the bug. It doesn't matter which variable is 0 and which is negative, and if the non-zero value is positive, it works fine.
#include <stdio.h>
#include <math.h>
int main(void)
{
    float x, a, b;

    a = -10.0;
    b = 0.0;
    x = a * b;
    printf("%f\n", x);
    return 0;
}
and it's compiled with: gcc -o tt -m68020 -m68881 tt.c -lm
Taking out -m68881 makes it work fine (but of course, it no longer uses the FPU).
Taking out -lm and/or math.h makes no difference.
Does anyone know of a bug fix or workaround? Maybe a gcc command-line argument? (I'd rather not have to do UGLY things like "if ((a<0)&&(b==0))".)
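If it comes to that, the UGLY check can at least be hidden in one helper. Just a sketch that papers over the bug rather than fixing it (and note it turns a negative zero into a plain 0.0):

#include <stdio.h>

/* Workaround sketch: skip the FPU multiply entirely when either
   factor is zero, since that's the only case that misbehaves here. */
static float safe_mul(float a, float b)
{
    if (a == 0.0 || b == 0.0)
        return 0.0;
    return a * b;
}

int main(void)
{
    printf("%f\n", safe_mul(-10.0, 0.0));  /* prints 0.000000 */
    return 0;
}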
BTW, since I don't have a working Amiga anymore, I've had to use an emulator. If you want to see what I've been doing on this project (using Lattice C version), you can view my video at:
https://www.youtube.com/watch?v=x8O-qYQvP4M
(Amiga part starts at 10:07)
Thanks for any help.
This isn't exactly an answer, but a revelation that the problem is rather complicated (more so than a simple bug with gcc). Here's the info:
If I set the Amiga emulator to emulate a 68020 or 68030 with a 68881 or 68882, INSTEAD of a 68040 using the 68040's internal FPU, it doesn't produce the 1.:00000 (in other words, it works). That could mean the emulator is to blame for not emulating the 68040's FPU correctly, though I imagine the 68040's FPU is meant to be compatible with the 68881/68882. (I don't know if there's a performance hit in setting the emulator to 68020/30 with 68881/2; I have the emulator set to run as fast as possible on the host machine instead of at the speed of a real 680xx.)
However, if I compile with the Amiga gcc's -noixemul option, the code works correctly in every combination of CPU and FPU. That would indicate a problem with the Amiga version of gcc, really with the part of the gcc system that tries to emulate UNIX on the Amiga (which is what ixemul.library does). So it might not be gcc's fault (compiled on some other system that uses a 68040 it would probably work), but rather the fault of the people who ported gcc to the Amiga.
So you might say "problem solved, just use -noixemul"; well, not so fast... Although the simple test program doesn't crash, my bigger program that exposed this problem crashes on exit (recoverable GURU meditation), but only when compiled with -noixemul (perhaps it's trying to close a library that was never opened, I don't know). This is why I didn't use -noixemul even though I wanted to.
So it's not exactly solved, but I would say it's not likely to be a non-Amiga gcc bug.
This is my first time posting a question here, so be kind :).
I have a problem porting a Linux application to Windows (Windows 7, 64-bit) using Cygwin. I had to rewrite some code that is not supported under Windows and doesn't exist in Cygwin (like adjtimex), and I have managed to compile the code, but when I build the application's .exe file (using makefiles) I get a link error:
gcc -Wl,-Map,ppsi.map2 -o ppsi ppsi.o -lrt -lc
ppsi.o: In function `win_time_adjust':
ppsi/time-win/win-time.c:74: undefined reference to `adjtime'
ppsi/time-win/win-time.c:74:(.text+0x1905): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `adjtime'
The adjtime function is declared in <sys/time.h>, which is included in the code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <time.h>
#include <sys/time.h> /* timex.h not available on windows so we use sys/time.h*/
#include <ppsi/ppsi.h>
Here is what the relevant part of the code looks like:
static int win_time_adjust(struct pp_instance *ppi, long offset_ns, long freq_ppb)
{
    struct timeval t;
    int ret;

    t.tv_sec = 0;
    t.tv_usec = 0;
    if (offset_ns) {
        t.tv_sec = offset_ns / 1000000000;           /* whole seconds */
        t.tv_usec = (offset_ns % 1000000000) / 1000; /* remainder, in microseconds */
    }
    ret = adjtime(&t, NULL);
    pp_diag(ppi, time, 1, "%s: %li %li\n", __func__, offset_ns, freq_ppb);
    return ret;
}
I have encountered a similar problem before and solved it by linking the missing library when compiling the application. I tried the same here and linked libc with -lc (since adjtime is supposed to be there), but got the same error. I'm unsure why adjtime is undefined, and I wonder if there is some Cygwin library I should link instead?
I use Cygwin (gcc version 4.9.2) when compiling.
From my understanding, adjtime is a BSD-ism (it first appeared in 4.3BSD) that later crawled its way into the Linux world. It's certainly not documented by any of the C or POSIX standards.
Even those which are documented by the C and POSIX standards often aren't implemented correctly in the Microsoft world, so it wouldn't surprise me that this function doesn't exist.
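If all you need for now is a clean link, one option is a build-time fallback stub (a sketch; HAVE_ADJTIME is a hypothetical configure macro, not something Cygwin defines for you), so win_time_adjust at least fails gracefully at runtime:

#include <errno.h>
#include <sys/time.h>

#ifndef HAVE_ADJTIME  /* hypothetical flag: define it on platforms with a real adjtime() */
int adjtime(const struct timeval *delta, struct timeval *olddelta)
{
    (void)delta;
    (void)olddelta;
    errno = ENOSYS;   /* tell the caller clock slewing is unavailable */
    return -1;
}
#endif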
I'm very interested to hear more about this project of yours. Though a simplistic wrapper over some implementation-defined functionality can be written for the purposes of portability, I can only imagine highly niche use cases, and there are better alternatives for those:
If you need a large amount of good, secure entropy to seed your pseudo-random number generator, you should use something provided by a cryptographic library.
If you want to conduct performance tests, you should learn to use a profiler rather than rolling your own profiling code. Also, every "performance test" should be conducted as though you were a scientist observing it for the very first time; it's generally best not to let tests from previous projects guide optimisation for future projects. Instead, let the profile of the current project (on the current hardware) guide optimisation of the current project, and you'll be cheering before you know it!
So far I am trying to compile Wolfenstein: Enemy Territory to be x86_64 native. After dealing with ASM instructions, amd64-specific processor registers and other strange stuff, I eventually got three instances of this beautiful GCC compile error:
url.c:xxxx:xx error: expected identifier or '(' before numeric constant
sigaction( SIGALRM, &sigact, NULL );
^
If I type "sigaction" onto Google, almost every single link is visited by me, so any help would be greeeeaaatly appreciated.
#include <signal.h>
is present.
#define _POSIX_SOURCE
#define _XOPEN_SOURCE
#define _POSIX_C_SOURCE
are present, too. Using string search, I can see no "overwrite" of the sigaction() structure/function, so I guess that is not the problem. Because the source file is ~4,000 lines long, I won't paste the whole code here (but the original is available there). I'm compiling with the following flags (the original Makefile(s) only had the -m32 flag, which, of course, I removed):
gcc --std=c99 -D_XOPEN_SOURCE
I still get this same error. It is getting beyond my understanding, so this is why I'm asking you: how do I get rid of this error?
From what I've read, I think it is related to POSIX declarations not being visible in strict ANSI mode (which would prevent sigaction() from compiling).
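For comparison, here's a minimal test case that does compile for me with gcc -std=c99 on glibc; the only obvious difference from my situation is that the feature-test macro is given an explicit value (all my #defines above are empty):

#define _POSIX_C_SOURCE 200809L  /* must come before any #include */
#include <signal.h>
#include <stdio.h>
#include <string.h>

static void on_alarm(int sig)
{
    (void)sig;  /* nothing to do; we only care that this compiles */
}

int main(void)
{
    struct sigaction sigact;

    memset(&sigact, 0, sizeof sigact);
    sigact.sa_handler = on_alarm;
    sigemptyset(&sigact.sa_mask);

    if (sigaction(SIGALRM, &sigact, NULL) != 0) {
        perror("sigaction");
        return 1;
    }
    puts("sigaction registered");
    return 0;
}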
Also, I'm running Ubuntu 14.04 64-bit (because it came preinstalled and I've been too lazy to install another distro). Enemy Territory compiles with scons (my version is 2.3.0) and my GCC version is 4.8.2-19.
Thanks in advance.
I was given the source code of an MS-DOS program built back in 1992 so that I could modify it. I have the EXE file and it runs fine, but I need to modify and recompile the source code, which needs the headers below to compile.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <dos.h>
#include <dir.h>
#include <alloc.h>
#include <ctype.h>
#include <string.h>
#include <mem.h>
#include <values.h>
Does anyone know what was used, and are there any modern compilers that can handle this? I tried Visual Studio 2010 and GCC "out of the box", but both fail because some headers are missing (dir.h, alloc.h, mem.h, values.h).
It might be more interesting to ask what function declarations, type declarations, global variable declarations and macros it needs to have. The particular arrangement of those things into headers isn't very interesting, as long as they are all there.
So comment out the offending #includes and let the compiler complain about the bits it is missing. Then you know what you're looking for.
You could try the Open Watcom compiler, which is one of the few relatively up-to-date compilers that builds 16-bit DOS executables. Other than finding an old MS or Borland compiler (or whatever was originally used), that's probably the easiest route.
If you want to rebuild for a different platform instead of rebuilding for DOS again, you'll likely have to make a lot of changes to the program itself. That may be worthwhile, but may be a lot of work and have a lot of surprise headaches.
There's Turbo C++ 1.01, not so modern though, which appears to have all these header files as well. I still occasionally use it.
You might try using DJGPP. According to the documentation, it may have the headers you need.
a) Remove all the header files.
b) Try a compile.
c) Look up which header file the undefined function/type is in.
d) Add that header file.
e) Repeat.
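Once you know what's missing, much of it can be collected in a small compatibility header mapping the Borland-specific headers onto standard ones (a sketch; borland_compat.h is hypothetical, it assumes the program only uses the common subset, and dos.h/dir.h have no portable counterparts):

/* borland_compat.h -- hypothetical shim for porting off Borland/Turbo C.
   Include this in place of alloc.h, mem.h and values.h. */
#ifndef BORLAND_COMPAT_H
#define BORLAND_COMPAT_H

#include <stdlib.h>   /* alloc.h:  malloc(), calloc(), free() */
#include <string.h>   /* mem.h:    memcpy(), memset(), memmove() */
#include <limits.h>   /* values.h: MAXINT and friends */
#include <float.h>    /* values.h: MAXDOUBLE and friends */

#define movmem(src, dst, n)  memmove((dst), (src), (n))  /* Borland arg order */
#define MAXINT    INT_MAX
#define MAXDOUBLE DBL_MAX

/* dir.h (findfirst/findnext) and dos.h (interrupts, BIOS/segment access)
   are DOS-specific and have to be rewritten for the target platform. */

#endif /* BORLAND_COMPAT_H */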