I'm doing some exercise projects from a C book, and one of them asks me to write a program that uses the clock function from the C library to measure how long it takes qsort to sort an array that has been reversed from a sorted state. So I wrote the code below:
/*
* Write a program that uses the clock function to measure how long it takes qsort to sort
* an array of 1000 integers that are originally in reverse order. Run the program for arrays
* of 10000 and 100000 integers as well.
*/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define SIZE 10000
int compf(const void *, const void *);
int main(void)
{
    int arr[SIZE];
    clock_t start_clock, end_clock;

    for (int i = 0; i < SIZE; ++i) {
        arr[i] = SIZE - i;
    }

    start_clock = clock();
    qsort(arr, SIZE, sizeof(arr[0]), compf);
    end_clock = clock();

    printf("start_clock: %ld\nend_clock: %ld\n", start_clock, end_clock);
    printf("Measured seconds used: %g\n", (end_clock - start_clock) / (double)CLOCKS_PER_SEC);
    return EXIT_SUCCESS;
}

int compf(const void *p, const void *q)
{
    return *(int *)p - *(int *)q;
}
But running the program gives me the results below:
start_clock: 0
end_clock: 0
Measured seconds used: 0
How can it be that my system used 0 clock ticks to sort an array? What am I doing wrong?
I'm using the GCC included in mingw-w64, which is x86_64-8.1.0-release-win32-seh-rt_v6-rev0.
Also, I'm compiling with the arguments -g -Wall -Wextra -pedantic -std=c99 -D__USE_MINGW_ANSI_STDIO given to gcc.exe.
Three possible answers to your issue:
What is clock_t? Is it just a normal data type like int, or is it some sort of struct? Make sure you are using it in a way that matches its data type.
What is this running on? On most microcontrollers, for instance, if the clock isn't already running you need to start it first; if you read it without starting it, it will just be 0 at all times, since the clock is not moving.
Is your code so fast that it isn't registering? Is it actually taking 0 seconds (rounded down) to run? One full second is a very long time in the computing world; you can run millions of lines of code in less than a second. Make sure your timing mechanism can resolve small durations (i.e., that it can register a single microsecond), or make your code run long enough to register at your clock's resolution, as in the sketch below.
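For the third point, a minimal sketch of one common fix, assuming nothing beyond the standard library: repeat the sort enough times that the total comfortably exceeds clock()'s resolution, then divide by the repetition count. REPS is a made-up knob; the comparison function is also rewritten to avoid the signed overflow that a - b can cause on arbitrary input.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define SIZE 10000
#define REPS 1000

static int compf(const void *p, const void *q)
{
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);   /* avoids overflow of a - b */
}

int main(void)
{
    static int arr[SIZE], work[SIZE];   /* static keeps large arrays off the stack */

    for (int i = 0; i < SIZE; ++i)
        arr[i] = SIZE - i;

    clock_t start = clock();
    for (int r = 0; r < REPS; ++r) {
        memcpy(work, arr, sizeof arr);            /* restore reverse order before each sort */
        qsort(work, SIZE, sizeof work[0], compf);
    }
    clock_t end = clock();

    printf("Average seconds per sort: %g\n",
           (end - start) / (double)CLOCKS_PER_SEC / REPS);
    return EXIT_SUCCESS;
}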
I'm new to programming the STM32F Discovery board. I followed the instructions here and managed to get the blinky LED working.
But now I'm trying to play an audio tone, for which I have borrowed code from here. In my Makefile I have included CFLAGS += -lm, which is where I understood arm_sin_f32 to be defined.
This is the code for main.c:
#define USE_STDPERIPH_DRIVER
#include "stm32f4xx.h"
#define ARM_MATH_CM4
#include <arm_math.h>
#include <math.h>
#include "speaker.h"
//Quick hack, approximately 1ms delay
void ms_delay(int ms)
{
    while (ms-- > 0) {
        volatile int x = 5971;
        while (x-- > 0)
            __asm("nop");
    }
}
volatile uint32_t msTicks = 0;
// SysTick Handler (every time the interrupt occurs, this is called)
void SysTick_Handler(void){ msTicks++; }
// initialize the system tick
void InitSystick(void){
    SystemCoreClockUpdate();
    // division occurs in terms of seconds... divide by 1000 to get ms, for example
    if (SysTick_Config(SystemCoreClock / 10000)) { while (1); } // update every 0.0001 s, i.e. 10 kHz
}
//Flash orange LED at about 1hz
int main(void)
{
    SystemInit();
    InitSystick();
    init_speaker();

    int16_t audio_sample;
    int loudness = 250;
    float audio_freq = 440;

    audio_sample = (int16_t) (loudness * arm_sin_f32(audio_freq*msTicks/10000));
    send_to_speaker(audio_sample);
}
But when trying to run make I get the following error:
main.c:42: undefined reference to `arm_sin_f32'
By using -lm, you're linking against libc's math library, which for floating point provides you with:
https://www.gnu.org/software/libc/manual/html_node/Trig-Functions.html
Function: double sin (double x)
Function: float sinf (float x)
Function: long double sinl (long double x)
Function: _FloatN sinfN (_FloatN x)
Function: _FloatNx sinfNx (_FloatNx x)
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
These functions return the sine of x, where x is given in radians. The return value is in the range -1 to 1.
You'll want to use sinf as you're using a float.
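For example, the sample line from the question could become the following (a sketch, not tested on the board; note that sinf, like arm_sin_f32, expects its argument in radians):

audio_sample = (int16_t)(loudness * sinf(audio_freq * msTicks / 10000.0f));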
If you'd like to use arm_sin_f32, then you should link to CMSIS's dsp library.
https://www.keil.com/pack/doc/CMSIS/DSP/html/group__sin.html
float32_t arm_sin_f32 (float32_t x)
Fast approximation to the trigonometric sine function for floating-point
data.
You should link to the appropriate precompiled library as detailed in: CMSIS DSP Software Library
The latest version of CMSIS at this moment is available at:
https://github.com/ARM-software/CMSIS_5
I don't think you should simply copy the C files, as that will 'pollute' your own project and make updating hard.
Simply download the latest release and add to your makefile:
CMSISPATH = "C:/path/to/cmsis/top/directory"
CFLAGS += -I$(CMSISPATH)/CMSIS/DSP/Include
LDFLAGS += -L$(CMSISPATH)/CMSIS/Lib/GCC/ -larm_cortexM4lf_math
First of all, arm_sin_32 does not exist; arm_sin_f32 does, for example, and there are several other variants as well. You need to add the appropriate C file from the CMSIS to your project, for example: CMSIS/DSP/Source/FastMathFunctions/arm_sin_f32.c
I would suggest not using the copy shipped with Keil, as it is probably outdated; just download the most current version of the CMSIS from GitHub.
The arm_... functions are not part of the m library.
Do not use nops for the delay: they are instantly flushed from the pipeline without being executed, and are meant only for padding. A delay driven by the SysTick counter you already set up is a better fit; see the sketch below.
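A minimal sketch of such a delay, assuming the SysTick_Handler from the question increments msTicks at 10 kHz as configured (tick_delay_ms is a made-up name):

// Sketch: delay using the SysTick counter instead of a nop loop.
void tick_delay_ms(int ms)
{
    uint32_t start = msTicks;
    uint32_t ticks = (uint32_t)ms * 10;   // 10 ticks per ms at 10 kHz
    while ((msTicks - start) < ticks)     // wraparound-safe comparison
        __asm("wfi");                     // sleep until the next interrupt
}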
I need a 'good' way to initialize the pseudo-random number generator in C++. I've found an article that states:
In order to generate random-like numbers, srand is usually initialized to some distinctive value, like those related with the execution time. For example, the value returned by the function time (declared in header ctime) is different each second, which is distinctive enough for most randoming needs.
Unixtime isn't distinctive enough for my application. What's a better way to initialize this? Bonus points if it's portable, but the code will primarily be running on Linux hosts.
I was thinking of doing some pid/unixtime math to get an int, or possibly reading data from /dev/urandom.
Thanks!
EDIT
Yes, I am actually starting my application multiple times a second and I've run into collisions.
This is what I've used for small command line programs that can be run frequently (multiple times a second):
unsigned long seed = mix(clock(), time(NULL), getpid());
Where mix is:
// Robert Jenkins' 96 bit Mix Function
unsigned long mix(unsigned long a, unsigned long b, unsigned long c)
{
    a=a-b;  a=a-c;  a=a^(c >> 13);
    b=b-c;  b=b-a;  b=b^(a << 8);
    c=c-a;  c=c-b;  c=c^(b >> 13);
    a=a-b;  a=a-c;  a=a^(c >> 12);
    b=b-c;  b=b-a;  b=b^(a << 16);
    c=c-a;  c=c-b;  c=c^(b >> 5);
    a=a-b;  a=a-c;  a=a^(c >> 3);
    b=b-c;  b=b-a;  b=b^(a << 10);
    c=c-a;  c=c-b;  c=c^(b >> 15);
    return c;
}
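A self-contained version of the same idea, for reference (a sketch: it assumes a POSIX system for getpid and the mix() definition above):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>   /* getpid(); POSIX-only */

unsigned long mix(unsigned long a, unsigned long b, unsigned long c);  /* as defined above */

int main(void)
{
    unsigned long seed = mix(clock(), time(NULL), getpid());
    srand(seed);
    printf("seed: %lu, first value: %d\n", seed, rand());
    return 0;
}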
The best answer is to use <random>. If you are using a pre-C++11 version, you can look at the Boost random number library.
But if we are talking about rand() and srand(),
the simplest way is just to use time():
#include <ctime>
#include <cstdlib>

int main()
{
    srand(time(nullptr));
    ...
}
Be sure to do this at the beginning of your program, and not every time you call rand()!
Side Note:
NOTE: There is a discussion in the comments below about this being insecure (which is true, but ultimately not relevant; read on). So an alternative is to seed from the random device /dev/random (or some other secure real(er) random number generator). BUT: don't let this lull you into a false sense of security. This is rand() we are using. Even if you seed it with a brilliantly generated seed, it is still predictable (given any one value, you can predict the full sequence of next values). It is only useful for generating "pseudo" random values.
If you want "secure", you should probably be using <random> (though I would do some more reading on a security-informed site). See this answer as a starting point: https://stackoverflow.com/a/29190957/14065
Secondary note: Using the random device actually solves the issues with starting multiple copies per second better than my original suggestion below (just not the security issue).
Back to the original story:
Every time you start up, time() will return a unique value (unless you start the application multiple times a second). On 32-bit systems, it will only repeat every 68 years or so.
I know you don't think time is unique enough but I find that hard to believe. But I have been known to be wrong.
If you are starting a lot of copies of your application simultaneously you could use a timer with a finer resolution. But then you run the risk of a shorter time period before the value repeats.
OK, so if you really are starting multiple applications a second, then use a finer grain on the timer:
#include <cstdlib>
#include <sys/time.h>

int main()
{
    struct timeval time;
    gettimeofday(&time, NULL);

    // A second has 1 000 000 microseconds.
    // Assuming you do not need quite that accuracy
    // (and do not assume the system clock has it either):
    srand((time.tv_sec * 1000) + (time.tv_usec / 1000));

    // The trouble here is that the seed will repeat every
    // 24 days or so.
    // If you use 100 (rather than 1000), the seed repeats every 248 days.
    // Do not make the MISTAKE of using just tv_usec:
    // that would mean your seed repeats every second.
}
If you need a better random number generator, don't use libc's rand. Instead, just use something like /dev/random or /dev/urandom directly (read an int straight from it, for example), as in the sketch below.
The only real benefit of libc's rand is that, given a seed, it is predictable, which helps with debugging.
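A minimal sketch of reading directly from the device (assuming a Unix-like system; read_urandom is a made-up helper name):

#include <stdio.h>

/* Read one unsigned int directly from /dev/urandom.  Returns 0 on success. */
int read_urandom(unsigned int *out)
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f)
        return -1;
    size_t n = fread(out, sizeof *out, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}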
On Windows:
srand(GetTickCount());
provides a better seed than time(), since it is in milliseconds.
C++11 random_device
If you need reasonable quality then you should not be using rand() in the first place; you should use the <random> library. It provides lots of great functionality: a variety of engines for different quality/size/performance trade-offs, re-entrancy, and pre-defined distributions so you don't end up getting them wrong. It may even provide easy access to non-deterministic random data (e.g., /dev/random), depending on your implementation.
#include <random>
#include <iostream>

int main() {
    std::random_device r;
    std::seed_seq seed{r(), r(), r(), r(), r(), r(), r(), r()};
    std::mt19937 eng(seed);
    std::uniform_int_distribution<> dist{1, 100};

    for (int i = 0; i < 50; ++i)
        std::cout << dist(eng) << '\n';
}
eng is a source of randomness, here a built-in implementation of the Mersenne Twister. We seed it using random_device, which in any decent implementation will be a non-deterministic RNG, and seed_seq to combine more than 32 bits of random data. For example, in libc++ random_device accesses /dev/urandom by default (though you can give it another file to access instead).
Next we create a distribution such that, given a source of randomness, repeated calls to the distribution will produce a uniform distribution of ints from 1 to 100. Then we proceed to using the distribution repeatedly and printing the results.
The best way is to use another pseudorandom number generator.
The Mersenne Twister (and Wichmann-Hill) is my recommendation.
http://en.wikipedia.org/wiki/Mersenne_twister
I suggest you look at the unix_random.c file in the Mozilla code (I guess it is under mozilla/security/freebl/ ...); it should be in the freebl library.
It uses system call info (like pwd, netstat, ...) to generate noise for the random number, and it is written to support most platforms (which can gain me bonus points :D).
The real question you must ask yourself is what quality of randomness you need.
libc's random is an LCG (linear congruential generator); the quality of its randomness will be low no matter what input you provide to srand.
If you simply need to make sure that different instances will have different initializations, you can mix the process id (getpid), the thread id, and a timer. Mix the results with xor. The entropy should be sufficient for most applications.
Example:
#include <cstdlib>
#include <sys/timeb.h>
#include <unistd.h>
#include <pthread.h>

struct timeb tp;
ftime(&tp);

srand(static_cast<unsigned int>(getpid()) ^
      static_cast<unsigned int>(pthread_self()) ^
      static_cast<unsigned int>(tp.millitm));
For better random quality, use /dev/urandom. You can make the above code portable by using boost::thread and boost::date_time.
The C++11 version of the top-voted post by Jonathan Wright:
#include <ctime>
#include <random>
#include <thread>
...
const auto time_seed = static_cast<size_t>(std::time(0));
const auto clock_seed = static_cast<size_t>(std::clock());
const size_t pid_seed =
std::hash<std::thread::id>()(std::this_thread::get_id());
std::seed_seq seed_value { time_seed, clock_seed, pid_seed };
...
// E.g. seeding an engine with the above seed.
std::mt19937 gen;
gen.seed(seed_value);
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("%ld\n", (long)tv.tv_usec);
    return 0;
}
tv.tv_usec is in microseconds. This should be an acceptable seed.
As long as your program is only running on Linux (and your program is an ELF executable), you are guaranteed that the kernel provides your process with a unique random seed in the ELF aux vector. The kernel gives you 16 random bytes, different for each process, which you can get with getauxval(AT_RANDOM). To use these for srand, use just an int's worth of them, like so:
#include <sys/auxv.h>

void initrand(void)
{
    unsigned int *seed;

    seed = (unsigned int *)getauxval(AT_RANDOM);
    srand(*seed);
}
It may be possible that this also translates to other ELF-based systems. I'm not sure what aux values are implemented on systems other than Linux.
Suppose you have a function with a signature like:
int foo(char *p);
An excellent source of entropy for a random seed is a hash of the following:
Full result of clock_gettime (seconds and nanoseconds) without throwing away the low bits - they're the most valuable.
The value of p, cast to uintptr_t.
The address of p, cast to uintptr_t.
At least the third, and possibly also the second, derive entropy from the system's ASLR, if available (the initial stack address, and thus current stack address, is somewhat random).
I would also avoid using rand/srand entirely, both for the sake of not touching global state, and so you can have more control over the PRNG that's used. But the above procedure is a good (and fairly portable) way to get some decent entropy without a lot of work, regardless of what PRNG you use.
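A sketch of that procedure under stated assumptions (seed_from_entropy and the mixing constants are made up for illustration; clock_gettime is POSIX):

#include <stdint.h>
#include <time.h>

/* Hash the timestamp, the value of p, and the address of p into a seed. */
unsigned int seed_from_entropy(char *p)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    uintptr_t value = (uintptr_t)p;    /* entropy from where p points */
    uintptr_t addr  = (uintptr_t)&p;   /* entropy from the current stack address (ASLR) */

    uint64_t h = (uint64_t)ts.tv_sec ^ ((uint64_t)ts.tv_nsec << 20);
    h ^= (uint64_t)value * 0x9E3779B97F4A7C15ull;  /* multiplicative mixing */
    h ^= (uint64_t)addr;
    h ^= h >> 33;                                  /* fold high bits down */
    return (unsigned int)h;
}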
For those using Visual Studio here's yet another way:
#include "stdafx.h"
#include <time.h>
#include <windows.h>
const __int64 DELTA_EPOCH_IN_MICROSECS= 11644473600000000;
struct timezone2
{
__int32 tz_minuteswest; /* minutes W of Greenwich */
bool tz_dsttime; /* type of dst correction */
};
struct timeval2 {
__int32 tv_sec; /* seconds */
__int32 tv_usec; /* microseconds */
};
int gettimeofday(struct timeval2 *tv/*in*/, struct timezone2 *tz/*in*/)
{
FILETIME ft;
__int64 tmpres = 0;
TIME_ZONE_INFORMATION tz_winapi;
int rez = 0;
ZeroMemory(&ft, sizeof(ft));
ZeroMemory(&tz_winapi, sizeof(tz_winapi));
GetSystemTimeAsFileTime(&ft);
tmpres = ft.dwHighDateTime;
tmpres <<= 32;
tmpres |= ft.dwLowDateTime;
/*converting file time to unix epoch*/
tmpres /= 10; /*convert into microseconds*/
tmpres -= DELTA_EPOCH_IN_MICROSECS;
tv->tv_sec = (__int32)(tmpres * 0.000001);
tv->tv_usec = (tmpres % 1000000);
//_tzset(),don't work properly, so we use GetTimeZoneInformation
rez = GetTimeZoneInformation(&tz_winapi);
tz->tz_dsttime = (rez == 2) ? true : false;
tz->tz_minuteswest = tz_winapi.Bias + ((rez == 2) ? tz_winapi.DaylightBias : 0);
return 0;
}
int main(int argc, char** argv) {
struct timeval2 tv;
struct timezone2 tz;
ZeroMemory(&tv, sizeof(tv));
ZeroMemory(&tz, sizeof(tz));
gettimeofday(&tv, &tz);
unsigned long seed = tv.tv_sec ^ (tv.tv_usec << 12);
srand(seed);
}
Maybe a bit of overkill, but it works well for quick intervals. The gettimeofday function was found here.
Edit: upon further investigation, rand_s might be a good alternative for Visual Studio. It's not just a safe rand(); it's totally different and doesn't use the seed from srand. I had presumed it was almost identical to rand, just "safer".
To use rand_s, just don't forget to #define _CRT_RAND_S before stdlib.h is included, as in the sketch below.
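A minimal sketch of that (rand_s is Microsoft's CRT function; it returns 0 on success):

#define _CRT_RAND_S   /* must come before the stdlib.h include */
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    unsigned int value;
    if (rand_s(&value) == 0)
        printf("%u\n", value);
    return 0;
}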
Assuming that the randomness of srand() + rand() is enough for your purposes, the trick is in selecting the best seed for srand. time(NULL) is a good starting point, but you'll run into problems if you start more than one instance of the program within the same second. Adding the pid (process id) is an improvement as different instances will get different pids. I would multiply the pid by a factor to spread them more.
But let's say you are using this for some embedded device and you have several in the same network. If they are all powered at once and you are launching the several instances of your program automatically at boot time, they may still get the same time and pid and all the devices will generate the same sequence of "random" numbers. In that case, you may want to add some unique identifier of each device (like the CPU serial number).
The proposed initialization would then be:
srand(time(NULL) + 1000 * getpid() + (uint) getCpuSerialNumber());
In a Linux machine (at least in the Raspberry Pi where I tested this), you can implement the following function to get the CPU Serial Number:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Gets the CPU Serial Number as a 64 bit unsigned int. Returns 0 if not found.
uint64_t getCpuSerialNumber() {
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        return 0;
    }

    char line[256];
    uint64_t serial = 0;
    while (fgets(line, 256, f)) {
        if (strncmp(line, "Serial", 6) == 0) {
            serial = strtoull(strchr(line, ':') + 2, NULL, 16);
        }
    }
    fclose(f);
    return serial;
}
Include the <ctime> and <cstdlib> headers at the top of your program, and write:
srand(time(NULL));
in your program before your first call to rand(). Here is an example of a program that prints a random number between one and ten:
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int main()
{
    // Initialize srand
    srand(time(NULL));

    // Create the random number
    int n = rand() % 10 + 1;

    // Print the number
    cout << n << endl;

    // The main function is an int, so it must return a value
    return 0;
}
I recently wrote a little curses game, and since all it needs is a timer mechanism and a curses implementation, the idea of trying to build it for DOS came kind of naturally. Curses is provided by PDCurses for DOS.
Timing is already different between POSIX and Win32, so I defined this interface:
#ifndef CSNAKE_TICKER_H
#define CSNAKE_TICKER_H
void ticker_init(void);
void ticker_done(void);
void ticker_start(int msec);
void ticker_stop(void);
void ticker_wait(void);
#endif
The game calls ticker_init() and ticker_done() once, ticker_start() with a millisecond interval as soon as it needs ticks, and ticker_wait() in its main loop to wait for the next tick.
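For illustration, usage looks roughly like this (a hypothetical loop; the actual game logic differs, and the header name is assumed):

/* Hypothetical main loop built on the ticker interface above. */
#include "csnake_ticker.h"   /* assumed header name */

int main(void)
{
    int running = 1;

    ticker_init();
    ticker_start(100);       /* tick every 100 ms */
    while (running)
    {
        ticker_wait();       /* block until the next tick */
        /* update game state, draw, read input;
         * set running = 0 to quit */
    }
    ticker_stop();
    ticker_done();
    return 0;
}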
Using the same implementation on DOS as the one for POSIX platforms, based on setitimer(), didn't work. One reason was that the C library coming with djgpp doesn't implement waitsig(). So I created a new implementation of my interface for DOS:
#undef __STRICT_ANSI__
#include <time.h>

uclock_t tick;
uclock_t nextTick;
uclock_t tickTime;

void
ticker_init(void)
{
}

void
ticker_done(void)
{
}

void
ticker_start(int msec)
{
    tickTime = msec * UCLOCKS_PER_SEC / 1000;
    tick = uclock();
    nextTick = tick + tickTime;
}

void
ticker_stop()
{
}

void
ticker_wait(void)
{
    while ((tick = uclock()) < nextTick);
    nextTick = tick + tickTime;
}
This works like a charm in DOSBox (I don't have a real DOS system right now). But my concern is: is busy waiting really the best I can do on this platform? I'd like a solution that allows the CPU to at least save some energy.
For reference, here's the whole source.
OK, I think I can finally answer my own question (thanks Wyzard for the helpful comment!).
The obvious solution, as there doesn't seem to be any library call doing this, is putting a hlt in inline assembly. Unfortunately, this crashed my program. Looking for the reason, I found it is because the default DPMI server used runs the program in ring 3; hlt is reserved for ring 0. So to use it, you have to modify the loader stub to load a DPMI server that runs your program in ring 0. See below.
Browsing through the docs, I came across __dpmi_yield(). If we are running in a multitasking environment (Win 3.x or 9x ...), there will already be a DPMI server provided by the operating system, and of course, in that case we want to give up our time slice while waiting rather than attempt the privileged hlt.
So, putting it all together, the source for DOS now looks like this:
#undef __STRICT_ANSI__
#include <time.h>
#include <dpmi.h>
#include <errno.h>

static uclock_t nextTick;
static uclock_t tickTime;
static int haveYield;

void
ticker_init(void)
{
    errno = 0;
    __dpmi_yield();
    haveYield = errno ? 0 : 1;
}

void
ticker_done(void)
{
}

void
ticker_start(int msec)
{
    tickTime = msec * UCLOCKS_PER_SEC / 1000;
    nextTick = uclock() + tickTime;
}

void
ticker_stop()
{
}

void
ticker_wait(void)
{
    if (haveYield)
    {
        while (uclock() < nextTick) __dpmi_yield();
    }
    else
    {
        while (uclock() < nextTick) __asm__ volatile ("hlt");
    }

    nextTick += tickTime;
}
In order for this to work on plain DOS, the loader stub in the compiled executable must be modified like this:
<path to>/stubedit bin/csnake.exe dpmi=CWSDPR0.EXE
CWSDPR0.EXE is a dpmi server running all code in ring 0.
Still to be tested is whether yielding will mess with the timing when running under Win 3.x / 9x. Maybe the time slices are too long; I will have to check that. Update: It works great in Windows 95 with the code above.
The usage of the hlt instruction breaks compatibility with DOSBox 0.74 in a weird way: the program seems to hang forever when trying to do a blocking getch() through PDCurses. This doesn't happen on a real MS-DOS 6.22 in VirtualBox, however. Update: This is a bug in DOSBox 0.74 that is fixed in the current SVN tree.
Given those findings, I assume this is the best way to wait "nicely" in a DOS program.
Update: It's possible to do even better by checking all available methods and picking the best one. I found a DOS idle call that should be considered as well. The strategy:
If yield is supported, use this (we are running in a multitasking environment)
If idle is supported, use this. Optionally, if we're in ring-0, do a hlt each time before calling idle, because idle is documented to return immediately when no other program is ready to run.
Otherwise, in ring-0 just use plain hlt instructions.
Busy-waiting as a last resort.
Here's a little example program (DJGPP) that tests for all possibilities:
#include <stdio.h>
#include <dpmi.h>
#include <errno.h>
static unsigned int ring;
static int
haveDosidle(void)
{
    __dpmi_regs regs;
    regs.x.ax = 0x1680;
    __dpmi_int(0x28, &regs);
    return regs.h.al ? 0 : 1;
}
int main()
{
    puts("checking idle methods:");

    fputs("yield (int 0x2f 0x1680): ", stdout);
    errno = 0;
    __dpmi_yield();
    if (errno)
    {
        puts("not supported.");
    }
    else
    {
        puts("supported.");
    }

    fputs("idle (int 0x28 0x1680): ", stdout);
    if (!haveDosidle())
    {
        puts("not supported.");
    }
    else
    {
        puts("supported.");
    }

    fputs("ring-0 HLT instruction: ", stdout);
    __asm__ ("mov %%cs, %0\n\t"
             "and $3, %0" : "=r" (ring));
    if (ring)
    {
        printf("not supported. (running in ring-%u)\n", ring);
    }
    else
    {
        puts("supported. (running in ring-0)");
    }
}
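Based on those checks, a combined ticker_wait() might look roughly like this (a sketch: it assumes the detection results are stored in haveYield, haveIdle, and ring, and reuses nextTick and tickTime from the implementation above):

static void
dosidle(void)
{
    __dpmi_regs regs;
    regs.x.ax = 0x1680;
    __dpmi_int(0x28, &regs);   /* DOS idle call */
}

void
ticker_wait(void)
{
    while (uclock() < nextTick)
    {
        if (haveYield)
        {
            __dpmi_yield();                      /* multitasking host */
        }
        else if (haveIdle)
        {
            if (!ring) __asm__ volatile ("hlt"); /* in ring-0, halt first */
            dosidle();                           /* returns immediately if nothing else runs */
        }
        else if (!ring)
        {
            __asm__ volatile ("hlt");            /* plain ring-0 halt */
        }
        /* else: busy-wait as a last resort */
    }
    nextTick += tickTime;
}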
The code in my github repo reflects the changes.
I want to see how much CPU time is taken by my C program, so I wrote:
#include <stdio.h>
#include <stdlib.h>
#include "memory.h"
#include "memory_debug.h"
#include <sys/times.h>
#include <unistd.h>
int (*deallocate_ptr)(memContainer *,void*);
void (*merge_ptr)(node *);
void* (*allocate_ptr)(memContainer *,unsigned long size);
memContainer* (*init_ptr)(unsigned long );
diagStruct* (*diagnose_ptr)(memContainer *);
void (*finalize_ptr)(memContainer *);
void (*printNode_ptr)(node *n);
void (*printContainer_ptr)(memContainer *c);
void info(memContainer *c)
{
    struct tms *t;
    t = malloc(sizeof(struct tms));
    times(t);
    printf("user : %ld\nsystem : %ld\n", (long)t->tms_utime, (long)t->tms_stime);
    diagnose_ptr(c);
    printf("\n");
    return;
}
but when I invoke this function I get 0 user time and 0 system time, even if I write:
for (i = 0; i < 100000; ++i)
    for (j = 0; j < 10; ++j)
    {}
info(c);
what am I doing wrong?
The compiler probably optimizes your for loops away, since they do nothing. Try incrementing a volatile variable, as in the sketch below.
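A minimal sketch of that fix (assuming a POSIX system for times()):

#include <stdio.h>
#include <sys/times.h>

int main(void)
{
    volatile long sink = 0;   /* volatile: the compiler must perform each increment */
    for (long i = 0; i < 100000000L; ++i)
        ++sink;

    struct tms t;
    times(&t);
    printf("user: %ld system: %ld\n", (long)t.tms_utime, (long)t.tms_stime);
    return 0;
}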
If you only want to know the time, try running time ./app; it will print the CPU time, wall-clock time, etc. of the executed app.
The code could simply write to a volatile variable at the start, put your 'work' in a function (in a separate file), then read the volatile after the 'work' and print something involving it.
Or do some simple calculation with part of the calculation buried in a function, or using a function return.
What platform (Operating system & Compiler) are you using?
I don't know what platform you are running on, but there are a few useful questions on Stack Overflow about higher-precision system clocks. "High precision timing in userspace in Linux" has several useful links and references.
Timing Methods in C++ Under Linux looked useful.
The below demo program outputs nonzero times:
#include <stdio.h>
#include <stdlib.h>
#include <sys/times.h>
#include <unistd.h>
#include <iostream>

using namespace std;

int main()
{
    int x = 0;
    for (int i = 0; i < 1 << 30; i++)
        x++;

    struct tms t;
    times(&t);

    cout << t.tms_utime << endl;
    cout << t.tms_stime << endl;

    return x;
}
Output:
275
1