Using threads in C to create an asynchronous timer

I am not even sure if threads are a way to go with what I am trying to accomplish, but my intuition is telling me so.
I am implementing simple input in a while loop, character by character. If the time between character inputs is greater than 2 seconds, a timeout should occur. The timeout is currently just a printf in the main function.
This is my code:
typedef struct {
    clock_t startTime;
} timerStruct;

void *TimerThread(void *arg) {
    timerStruct timerThreadArgument = *((timerStruct *) arg);
    clock_t differenceTime;
    while (1) {
        differenceTime = clock() - timerThreadArgument.startTime;
        int millis = differenceTime * 1000 / CLOCKS_PER_SEC;
        if (millis >= TIMEOUT_TIME) {
            return (void *) 2;
        }
    }
}

int main() {
    pthread_t timerThreadId;
    void *threadReturn;
    char inputChar;
    printf("Input characters one by one or paste the string:\n");
    while (1) {
        timerStruct *timerThreadArgument = malloc(sizeof(*timerThreadArgument));
        timerThreadArgument->startTime = clock();
        pthread_create(&timerThreadId, NULL, TimerThread, timerThreadArgument);
        pthread_join(timerThreadId, &threadReturn);
        if ((int) threadReturn == 2) {
            printf("Timeout!\n");
        }
        scanf(" %c", &inputChar);
    }
}
The problem is, since I am using the pthread_join function, it blocks the main thread from executing and asking for input. I am aware why this is happening. If I don't use the pthread_join function, I am not going to be able to return the data from the thread, which is important because I wish to break the while loop if the timeout has occurred.
If anyone has an idea on how I could approach this, please share it.
Thank you in advance for your time.

There are a number of problems with what you are attempting. One is that stdio (i.e. scanf) involves buffering, so you won't get a single character until a whole line is available. The other is that what you have implemented is more akin to sleep() than to a timeout.
In order to have direct control over tty input, you need to inform the device driver in the kernel of your desire. The standard interface to this is tcgetattr/tcsetattr, which gives rather fine-grained control of the tty. The (n)curses library is a little coarser, but much, much easier to use.
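For a concrete starting point, here is a minimal sketch of the termios route, assuming a POSIX tty on standard input. It switches off canonical mode and uses VMIN/VTIME so that read() itself returns after two seconds of silence; no timer thread is required. Treat it as an outline rather than production code (signal handling and error checks are omitted):
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, raw;
    tcgetattr(STDIN_FILENO, &saved);            // remember the current settings
    raw = saved;
    raw.c_lflag &= ~(ICANON | ECHO);            // character-at-a-time input, no echo
    raw.c_cc[VMIN]  = 0;                        // read() may return with no data...
    raw.c_cc[VTIME] = 20;                       // ...after 2.0 s (VTIME is in tenths)
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    char c;
    while (1) {
        ssize_t n = read(STDIN_FILENO, &c, 1);
        if (n == 1) {
            printf("got '%c'\n", c);
        } else {                                // 0 bytes read: the timeout expired
            printf("Timeout!\n");
            break;
        }
    }

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);   // restore the original settings
    return 0;
}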
There are a lot of advantages to designing your system as a set of asynchronous services which provide events to one another. In this, your keyboard handling might occupy a thread sending "Key Events" to some sort of dispatcher; and maybe another sending "Alarm Events" around to track timeouts. This isn't a particularly easy pattern though. Its value only comes to the fore in relatively large systems.
Some newer programming languages, Go for instance, have constructs specifically geared to this sort of architecture. You might find it worthwhile to look at it, at least to get a flavour of how tidily it can be accomplished.

Related

Stream buffering issue

The mod_rewrite documentation states that it is a strict requirement to disable in(out)put buffering in a rewrite program.
Keeping that in mind, I've written a simple program (I know it lacks an EOF check, but this is not an issue and it saves one condition check per loop):
#include <stdio.h>
#include <stdlib.h>

int main ( void )
{
    setvbuf(stdin, NULL, _IONBF, 0);
    setvbuf(stdout, NULL, _IONBF, 0);
    int character;
    while ( 42 )
    {
        character = getchar();
        if ( character == '-' )
        {
            character = '_';
        }
        putchar(character);
    }
    return 0;
}
After making some measurements I was shocked - it was over 9,000 times slower than the demo Perl script provided by the documentation:
#!/usr/bin/perl
$| = 1; # Turn off I/O buffering
while (<STDIN>) {
    s/-/_/g; # Replace dashes with underscores
    print $_;
}
Now I have two related questions:
Question 1. I believe that the streams may be line buffered since Apache sends a new line after each path. Am I correct? Switching my program to
setvbuf(stdin,NULL,_IOLBF,4200);
setvbuf(stdout,NULL,_IOLBF,4200);
makes it twice as fast as the Perl one. This should not hurt Apache's performance, should it?
Question 2. How can one write a program in C which uses unbuffered streams (like the Perl one) and performs as fast as the Perl one?
Question 1: You would have to look at the code. It could be line buffered, it could be using fflush at the end of each request (or block of requests), or it could be using write calls with a larger buffer. In any case, it won't be doing per-character I/O which is what your program is doing.
Question 2: I suspect the main issue is on output. If you were to assemble the entire result in a buffer and write that out as one call, then you would be faster. However, that just means you are doing the line buffering instead of having the library take care of it for you. The key is that with no buffering, each output call results in a system call - that is very high overhead. In theory, the same concept holds true on input but I'm not sure the implementation wouldn't notice the available characters and buffer them in any case. Same workaround though - read a larger buffer and then take it apart yourself.
Personally, I'd avoid all the setvbuf stuff and just do an fflush at the end of each request.
When writing to a terminal, stdout is flushed after every line. This way you can always see the output right away. When writing to a file or, as in your case a pipe, this automatic flush is disabled. Usually in those cases performance is more important.
This causes problems when processes have to interact with each other. One program writes something. It's not sent instantly but stored in a buffer. Second program waits for that data. First program waits for more data from second program resulting in a deadlock.
To avoid this, you need to flush all pending output before waiting for additional input. A simple fflush(stdout) before every read operation should be enough. This is actually what $| = 1 does in Perl. Nothing needs to be done with stdin.
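As a rough sketch of that advice (keeping the same dash-to-underscore filter as above and leaving stdio's default buffering in place), flushing before blocking on the next request might look like this:
#include <stdio.h>

int main(void)
{
    char line[4096];
    // Default stdio buffering stays enabled; we only flush at request boundaries.
    while (fgets(line, sizeof line, stdin) != NULL) {
        for (char *p = line; *p; p++) {
            if (*p == '-')
                *p = '_';
        }
        fputs(line, stdout);
        fflush(stdout);   // push the reply out before blocking on the next read
    }
    return 0;
}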
If performance is critical and you only need to operate on single bytes, read and write data in big chunks using unbuffered read/write. For example:
#include <unistd.h>

int main() {
    char buf[1024];
    while (1) {
        ssize_t len = read(0, buf, sizeof(buf));
        if (len <= 0)          // EOF or error: stop instead of spinning forever
            break;
        for (int i = 0; i < len; i++) {
            if (buf[i] == '-') {
                buf[i] = '_';
            }
        }
        write(1, buf, len);
    }
    return 0;
}

Execution Flow - Possible to enter function twice before exiting?

I observed something in a log file that I cannot explain:
All code in project is ANSI C, 32bit exe running on Windows 7 64bit
I have a worker function similar to this one, running in a single-threaded program, using no recursion. During debugging, logging was included as shown:
// This function is called from an event handler
// triggered by a UI timer similar in concept to
// C# `Timer.OnTick` or C++ Timer::OnTick,
// with the tick period set to a shorter duration
// than this worker function sometimes requires.
int LoadState(int state)
{
    WriteToLog("Entering ->"); // first call in
    //...
    // Some additional code - varies in execution time, but typically ~100ms.
    //...
    WriteToLog("Leaving <-");  // second to last call out
    return 0;
}
The function above is simplified from our actual code but is sufficient for illustrating the issue.
Occasionally we have seen log entries such as this (log excerpt not reproduced here), where the time/date stamp is on the left, then the message, and the last field is the duration in clock() ticks between calls to the logging function. The logging indicates that the function was entered twice in a row before exiting.
Without recursion, and in a single threaded program, how is it (or is it) possible that execution flow can enter a function twice before the first call was completed?
EDIT: (to show top call of logging function)
int WriteToLog(char* str)
{
    FILE* log;
    char *tmStr;
    ssize_t size;
    char pn[MAX_PATHNAME_LEN];
    char path[MAX_PATHNAME_LEN], base[50], ext[5];
    char LocationKeep[MAX_PATHNAME_LEN];
    static unsigned long long index = 0;
    if(str)
    {
        if(FileExists(LOGFILE, &size))
        {
            strcpy(pn, LOGFILE);
            ManageLogs(pn, LOGSIZE);
            tmStr = calloc(25, sizeof(char));
            log = fopen(LOGFILE, "a+");
            if (log == NULL)
            {
                free(tmStr);
                return -1;
            }
            //fprintf(log, "%10llu %s: %s - %d\n", index++, GetTimeString(tmStr), str, GetClockCycles());
            fprintf(log, "%s: %s - %d\n", GetTimeString(tmStr), str, GetClockCycles());
            //fprintf(log, "%s: %s\n", GetTimeString(tmStr), str);
            fclose(log);
            free(tmStr);
        }
        else
        {
            strcpy(LocationKeep, LOGFILE);
            GetFileParts(LocationKeep, path, base, ext);
            CheckAndOrCreateDirectories(path);
            tmStr = calloc(25, sizeof(char));
            log = fopen(LOGFILE, "a+");
            if (log == NULL)
            {
                free(tmStr);
                return -1;
            }
            fprintf(log, "%s: %s - %d\n", GetTimeString(tmStr), str, GetClockCycles());
            //fprintf(log, "%s: %s\n", GetTimeString(tmStr), str);
            fclose(log);
            free(tmStr);
        }
    }
    return 0;
}
I asked the question wondering at the time if there was some obscure part of the C standard that allowed execution flow to enter a function more than once without first exiting (given that multi-threading or recursion was not present)
Your comments, I believe, have clearly answered the question. Borrowing from what @Oli Charlesworth said in one comment, he sums it up pretty well:
If the code is truly single-threaded, and the log-function is truly sane, and there's truly no other piece of code that can be outputting to the log, then obviously this can't happen (UB notwithstanding).
But since the actual log files (which I could not post for proprietary reasons) have on several occasions demonstrated this pattern, one of the conditions @Oli Charlesworth listed is not actually true for our software. My best guess at this point, given that the logging function is sane and is the only input to the file, is to consider the alternate context/fiber possibility suggested by @jxh:
"Primary thread only" can mean multiple things. The library could still possibly use <ucontext.h> on POSIX, or Fibers on Windows.
So, I will post this same question to the supplier of my environment, specifically if their UI Timers are run in such a way as to allow parallel calls due to a fiber or thread.
If any are interested, I will also update this answer with their response.
Edit to show conclusion:
As it turns out, the cause of the double entry of execution flow into a function was implicit recursion. That is, while the worker function did not reference itself explicitly, it was designated as the event handler for two separate event generators. That, coupled with a call to Process System Events (a function available in our environment forcing events in the queue to be processed now) can (and did) result in recursive execution flow into the event handler function. Here is a quote from a person who has expertise in the relationship between UI timers and system events in our environment:
"Timer events being nested" does equate to execution flow entering a function twice before leaving. Basically, it's the same thing as basic recursion: while you're inside one function, you call that same function. The only difference between this case and basic recursion is that the recursion call is implicit (via ProcessSystemEvents) and not explicit. But the end result is the same."

how to print to console graphically in C on unix

So I'm familiar with printf and the like, but I need to update a single line on the screen without having multiple lines scrolling. I have found libraries to do this in Windows (conio.h) but not in Unix. I need to be able to run this in Cygwin, but any Unix examples would be welcome.
I found the following Stack Overflow article, but I don't think it quite closes the question for my needs.
Thanks for your help
It depends on whether you're trying to do a text-mode full-screen application (in which case ncurses is probably what you want) or just want to update a single line in-place (e.g., you want to update an "X percent done" indicator from 1 to 100, with all the output appearing on the same line so when X percent done is printed, it prints "over" the previous X-1 percent done indicator). In the latter case, you can write code that's relatively portable, and considerably simpler as well. For example, something like this:
#include <windows.h> // Used only for "Sleep" in our simulated work load
#include <stdio.h>

void do_work() {
    // Simulated work load. Just waste some time:
    Sleep(100);
}

int main() {
    for (int i = 0; i < 100; i++) {
        char buffer[82];
        sprintf(buffer, "%d percent done", i + 1);
        printf("\r%-79s", buffer);
        fflush(stdout);   // no newline is printed, so force the line to appear
        do_work();
    }
    return 0;
}
You want Ncurses for this. It's a library which allows you to edit any character on the screen.
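If you do go the ncurses route, a minimal sketch (link with -lncurses; the percent-done counter and the 100 ms delay are just stand-ins for real work) might look like this:
#include <ncurses.h>

int main(void) {
    initscr();                 // enter curses mode
    curs_set(0);               // hide the cursor
    for (int i = 1; i <= 100; i++) {
        mvprintw(0, 0, "%3d percent done", i);  // overwrite row 0, column 0
        refresh();             // push the change to the terminal
        napms(100);            // simulated work (milliseconds)
    }
    endwin();                  // restore the terminal
    return 0;
}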

Asynchronous I/O in C using the Windows API: which method to use, and why does my code execute synchronously?

I have a C application which generates a lot of output and for which speed is critical. The program is basically a loop over a large (8-12GB) binary input file which must be read sequentially. In each iteration the read bytes are processed and output is generated and written to multiple files, but never to multiple files at the same time. So if you are at the point where output is generated and there are 4 output files you write to either file 0 or 1 or 2, or 3. At the end of the iteration I now write the output using fwrite(), thereby waiting for the write operation to finish. The total number of output operations is large, up to 4 million per file, and output size of files ranges from 100mb to 3.5GB. The program runs on a basic multicore processor.
I want to write output in a separate thread and I know this can be done with
Asynchronous I/O
Creating threads
I/O completion ports
I have 2 types of questions, namely conceptual and code-specific.
Conceptual Question
What would be the best approach? Note that the application should be portable to Linux; however, I don't see how that is very important for my choice between 1-3, since I would write a wrapper around anything kernel/API specific. For me the most important criterion is speed. I have read that option 1 is not that likely to increase the performance of the program and that the kernel in any case creates new threads for the I/O operation, so then why not use option (2) immediately, with the advantage that it seems easier to program (also since I did not succeed with option (1), see the code issues below).
Note that I read https://stackoverflow.com/questions/3689759/how-can-i-run-a-specific-function-of-thread-asynchronously-in-c-c, but I don't see a motivation for what to use based on the nature of the application. So I hope somebody could give me some advice on what would be best in my situation. Also, from the book "Windows System Programming" by Johnson M. Hart, I know that the recommendation is to use threads, mainly because of the simplicity. However, will it also be fastest?
Code Question
This question involves the attempts I have made so far to make asynchronous I/O work. I understand it's a big piece of code, so it's not that easy to look into. In any case I would really appreciate any attempt.
To decrease execution time, I try to write the output in the background using the WINAPI, via CreateFile() with FILE_FLAG_OVERLAPPED and an OVERLAPPED structure. I have created a sample program in which I try to get this to work. However, I encountered 2 problems:
The file is only opened in overlapped mode when I delete an already existing file (I have tried using CreateFile in different modes (CREATE_ALWAYS, CREATE_NEW, OPEN_EXISTING), but this does not help).
Only the first WriteFile is executed asynchronously; the remaining WriteFile calls are synchronous. For this problem I already consulted http://support.microsoft.com/kb/156932. It seems that the problem I have is related to the fact that "any write operation to a file that extends its length will be synchronous". I've already tried to solve this by increasing the file size/valid data size (commented region in the code), but I still do not get it to work. I'm aware that to get the most out of asynchronous I/O I should probably CreateFile with FILE_FLAG_NO_BUFFERING, but I cannot get that to work either.
Please note that the program creates a file of about 120 MB in the path of execution. Also note that the "not ok" print statements are not desirable; I would like to see "can do work in background" appear on my screen... What goes wrong here?
#include <windows.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ASYNC // remove this definition to run synchronously (i.e. using fwrite)

#ifdef ASYNC
struct _OVERLAPPED *pOverlapped;
HANDLE *pEventH;
HANDLE *pFile;
#else
FILE *pFile;
#endif

#define DIM_X 100
#define DIM_Y 150000

#define _PRINTERROR(msgs)\
    {printf("file: %s, line: %d, %s",__FILE__,__LINE__,msgs);\
    fflush(stdout);\
    return 0;} \

#define _PRINTF(msgs)\
    {printf(msgs);\
    fflush(stdout);} \

#define _START_TIMER \
    time_t time1,time2; \
    clock_t clock1; \
    time(&time1); \
    printf("start time: %s",ctime(&time1)); \
    fflush(stdout);

#define _END_TIMER\
    time(&time2);\
    clock1 = clock();\
    printf("end time: %s",ctime(&time2));\
    printf("elapsed processor time: %.2f\n",(((float)clock1)/CLOCKS_PER_SEC));\
    fflush(stdout);

double aio_dat[DIM_Y] = {0};

double do_compute(double A, double B, int arr_len);

int main()
{
    _START_TIMER;
    const char *pName = "test1.bin";
    DWORD dwBytesToWrite;
    BOOL bErrorFlag = FALSE;
    int j=0;
    int i=0;
    int fOverlapped=0;
#ifdef ASYNC
    // create / open the file
    pFile=CreateFile(pName,
                     GENERIC_WRITE,          // open for writing
                     0,                      // share write access
                     NULL,                   // default security
                     CREATE_ALWAYS,          // create new/overwrite existing
                     FILE_FLAG_OVERLAPPED,   // | FILE_FLAG_NO_BUFFERING, // overlapped file
                     NULL);                  // no attr. template
    // check whether file opening was ok
    if(pFile==INVALID_HANDLE_VALUE){
        printf("%x\n",GetLastError());
        _PRINTERROR("file not opened properly\n");
    }

    // make the overlapped structure
    pOverlapped = calloc(1,sizeof(struct _OVERLAPPED));
    pOverlapped->Offset = 0;
    pOverlapped->OffsetHigh = 0;

    // put event handle in overlapped structure
    if(!(pOverlapped->hEvent = CreateEvent(NULL,TRUE,FALSE,NULL))){
        printf("%x\n",GetLastError());
        _PRINTERROR("error in createevent\n");
    }
#else
    pFile = fopen(pName,"wb");
#endif

    // create some output
    for(j=0;j<DIM_Y;j++){
        aio_dat[j] = do_compute(i, j, DIM_X);
    }

    // determine how many bytes should be written
    dwBytesToWrite = (DWORD)sizeof(aio_dat);

    for(i=0;i<DIM_X;i++){ // do this DIM_X times
#ifdef ASYNC
        //if(i>0){
        //    SetFilePointer(pFile,dwBytesToWrite,NULL,FILE_CURRENT);
        //    if(!(SetEndOfFile(pFile))){
        //        printf("%i\n",pFile);
        //        _PRINTERROR("error in set end of file\n");
        //    }
        //    SetFilePointer(pFile,-dwBytesToWrite,NULL,FILE_CURRENT);
        //}

        // write the bytes
        if(!(bErrorFlag = WriteFile(pFile,aio_dat,dwBytesToWrite,NULL,pOverlapped))){
            // check whether io pending or some other error
            if(GetLastError()!=ERROR_IO_PENDING){
                printf("lasterror: %x\n",GetLastError());
                _PRINTERROR("error while writing file\n");
            }
            else{
                fOverlapped=1;
            }
        }
        else{
            // if you get here output got immediately written; bad!
            fOverlapped=0;
        }

        if(fOverlapped){
            // do background, this msgs is what I want to see
            for(j=0;j<DIM_Y;j++){
                aio_dat[j] = do_compute(i, j, DIM_X);
            }
            for(j=0;j<DIM_Y;j++){
                aio_dat[j] = do_compute(i, j, DIM_X);
            }
            _PRINTF("can do work in background\n");
        }
        else{
            // not overlapped, this message is bad
            _PRINTF("not ok\n");
        }

        // wait to continue
        if((WaitForSingleObject(pOverlapped->hEvent,INFINITE))!=WAIT_OBJECT_0){
            _PRINTERROR("waiting did not succeed\n");
        }

        // reset event structure
        if(!(ResetEvent(pOverlapped->hEvent))){
            printf("%x\n",GetLastError());
            _PRINTERROR("error in resetevent\n");
        }

        pOverlapped->Offset+=dwBytesToWrite;
#else
        fwrite(aio_dat,sizeof(double),DIM_Y,pFile);
        for(j=0;j<DIM_Y;j++){
            aio_dat[j] = do_compute(i, j, DIM_X);
        }
        for(j=0;j<DIM_Y;j++){
            aio_dat[j] = do_compute(i, j, DIM_X);
        }
#endif
    }

#ifdef ASYNC
    CloseHandle(pFile);
    free(pOverlapped);
#else
    fclose(pFile);
#endif
    _END_TIMER;
    return 1;
}

double do_compute(double A, double B, int arr_len)
{
    int i;
    double res = 0;
    double *xA = malloc(arr_len * sizeof(double));
    double *xB = malloc(arr_len * sizeof(double));
    if ( !xA || !xB )
        abort();
    for (i = 0; i < arr_len; i++) {
        xA[i] = sin(A);
        xB[i] = cos(B);
        res = res + xA[i]*xA[i];
    }
    free(xA);
    free(xB);
    return res;
}
Useful links
http://software.intel.com/sites/products/documentation/studio/composer/en-us/2011/compiler_c/cref_cls/common/cppref_asynchioC_aio_read_write_eg.htm
http://www.ibm.com/developerworks/linux/library/l-async/?ca=dgr-lnxw02aUsingPOISIXAIOAPI
http://www.flounder.com/asynchexplorer.htm#Asynchronous%20I/O
I know this is a big question and I would like to thank everybody in advance who takes the trouble reading it and perhaps even respond!
You should be able to get this to work using the OVERLAPPED structure.
You're on the right track: the system is preventing you from writing asynchronously because every WriteFile extends the size of the file. However, you're doing the file-size extension wrong. Simply calling SetEndOfFile will not actually reserve space in the MFT. Use the SetFileValidData function. This will allocate clusters for your file (note that they will contain whatever garbage the disk had there) and you should be able to execute WriteFile and your computation in parallel.
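A rough, untested sketch of that pre-extension step is below; hFile is a placeholder for the overlapped handle from the question, and keep in mind that SetFileValidData only succeeds when the process holds the SE_MANAGE_VOLUME_NAME privilege (typically administrator rights):
// Hypothetical pre-extension step (error handling abbreviated).
// hFile: handle opened with GENERIC_WRITE and FILE_FLAG_OVERLAPPED, as in the question.
LARGE_INTEGER totalSize;
totalSize.QuadPart = (LONGLONG)DIM_X * sizeof(aio_dat);   // final size of the output file

if (!SetFilePointerEx(hFile, totalSize, NULL, FILE_BEGIN) || !SetEndOfFile(hFile)) {
    // could not extend the file
}
if (!SetFileValidData(hFile, totalSize.QuadPart)) {
    // usually means the SE_MANAGE_VOLUME_NAME privilege is missing
}
With this done before the write loop, each WriteFile stays within the valid data length and no longer has to extend the file synchronously.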
I would stay away from FILE_FLAG_NO_BUFFERING. You're after more performance with parallelism I presume? Don't prevent the cache from doing its job.
Another option that you did not consider is a memory mapped file. Those are available on Windows and Linux. There is a handy Boost abstraction that you could use.
With a memory mapped file, every thread in your process could write its output to the file on its own time, assuming that the record sizes are known and each thread has its own output area.
The operating system will take care of writing the mapped pages to disk when needed, when it gets around to it, or when you close the file. Now that I think about it, some operating systems may require that you call msync to guarantee it.
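For what it's worth, a bare-bones Windows sketch of that idea might look like the following; the file name, size, and record layout are placeholders and all error handling is omitted, so take it as an outline only:
#include <windows.h>
#include <string.h>

int main(void)
{
    // Hypothetical sketch: map a fixed-size output file and memcpy records into it.
    LARGE_INTEGER sz;
    sz.QuadPart = 1LL << 27;                               // 128 MB output, known up front

    HANDLE hFile = CreateFile("out.bin", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE hMap  = CreateFileMapping(hFile, NULL, PAGE_READWRITE,
                                     sz.HighPart, sz.LowPart, NULL); // also extends the file
    unsigned char *view = MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, 0);

    // Each writer copies into its own, pre-computed region of the view.
    memcpy(view + 0, "record", 6);

    FlushViewOfFile(view, 0);   // optional: ask the OS to start writing dirty pages now
    UnmapViewOfFile(view);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}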
I don't see why you would want to write asynchronously. Doing things in parallel does not make them faster in all cases. If you write two files at the same time to the same disk, it will almost always be a lot slower. If that is the case, just write them one after another.
If you have some fancy drive like an SSD or a virtual RAM drive, parallel writing could be faster. You have to create a file at its full size up front and then do your parallel magic.
Asynchronous writing is nice, but is done by any OS anyway. The potential gain for you is that you can do things other than writing to disk, like displaying a progress bar. This is where multi-threading can help you.
So imho you should use serial writing or parallel writing to multiple disks.
hth

How to use /dev/random or urandom in C?

I want to use /dev/random or /dev/urandom in C. How can I do it? I don't know how to handle them in C; if someone knows, please tell me how. Thank you.
In general, it's a better idea to avoid opening files to get random data, because of how many points of failure there are in the procedure.
On recent Linux distributions, the getrandom system call can be used to get crypto-secure random numbers, and it cannot fail if GRND_RANDOM is not specified as a flag and the read amount is at most 256 bytes.
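For example, a minimal sketch using getrandom (assuming glibc 2.25 or later, which exposes it through <sys/random.h>; older systems would need syscall(SYS_getrandom, ...)) could look like this:
#include <sys/random.h>
#include <stdio.h>

int main(void)
{
    unsigned char buf[32];
    // With no GRND_RANDOM flag and a request of at most 256 bytes,
    // this cannot return a short read (barring very old kernels without the syscall).
    if (getrandom(buf, sizeof buf, 0) != (ssize_t)sizeof buf) {
        perror("getrandom");
        return 1;
    }
    for (size_t i = 0; i < sizeof buf; i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}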
As of October 2017, OpenBSD, Darwin and Linux (with -lbsd) now all have an implementation of arc4random that is crypto-secure and that cannot fail. That makes it a very attractive option:
char myRandomData[50];
arc4random_buf(myRandomData, sizeof myRandomData); // done!
Otherwise, you can use the random devices as if they were files. You read from them and you get random data. I'm using open/read here, but fopen/fread would work just as well.
int randomData = open("/dev/urandom", O_RDONLY);
if (randomData < 0)
{
    // something went wrong
}
else
{
    char myRandomData[50];
    ssize_t result = read(randomData, myRandomData, sizeof myRandomData);
    if (result < 0)
    {
        // something went wrong
    }
}
You may read many more random bytes before closing the file descriptor. /dev/urandom never blocks and always fills in as many bytes as you've requested, unless the system call is interrupted by a signal. It is considered cryptographically secure and should be your go-to random device.
/dev/random is more finicky. On most platforms, it can return fewer bytes than you've asked for and it can block if not enough bytes are available. This makes the error handling story more complex:
int randomData = open("/dev/random", O_RDONLY);
if (randomData < 0)
{
    // something went wrong
}
else
{
    char myRandomData[50];
    size_t randomDataLen = 0;
    while (randomDataLen < sizeof myRandomData)
    {
        ssize_t result = read(randomData, myRandomData + randomDataLen, (sizeof myRandomData) - randomDataLen);
        if (result < 0)
        {
            // something went wrong
        }
        randomDataLen += result;
    }
    close(randomData);
}
There are other accurate answers above. I needed to use a FILE* stream, though. Here's what I did...
int byte_count = 64;
char data[64];
FILE *fp = fopen("/dev/urandom", "r");
if (fp == NULL) {
    // handle error: couldn't open the device
}
fread(data, 1, byte_count, fp);
fclose(fp);
Just open the file for reading and then read data. In C++11 you may wish to use std::random_device which provides cross-platform access to such devices.
zneak is 100% correct. It's also very common to read, at startup, a buffer of random numbers that is slightly larger than what you'll need. You can then populate an array in memory, or write them to your own file for later re-use.
A typical implementation of the above:
typedef struct prandom {
    struct prandom *prev;
    int64_t number;
    struct prandom *next;
} prandom_t;
This becomes more or less like a tape that just advances which can be magically replenished by another thread as needed. There are a lot of services that provide large file dumps of nothing but random numbers that are generated with much stronger generators such as:
Radioactive decay
Optical behavior (photons hitting a semi transparent mirror)
Atmospheric noise (not as strong as the above)
Farms of intoxicated monkeys typing on keyboards and moving mice (kidding)
Don't use 'pre-packaged' entropy for cryptographic seeds, in case that doesn't go without saying. Those sets are fine for simulations, not fine at all for generating keys and such.
If you are not concerned with quality and need a lot of numbers for something like a Monte Carlo simulation, it's much better to have them available in a way that will not cause read() to block.
However, remember, the randomness of a number is as deterministic as the complexity involved in generating it. /dev/random and /dev/urandom are convenient, but not as strong as using a HRNG (or downloading a large dump from a HRNG). Also worth noting that /dev/random refills via entropy, so it can block for quite a while depending on circumstances.
zneak's answer covers it simply; however, the reality is more complicated than that. For example, you need to consider whether /dev/{u}random really is the random number device in the first place. Such a scenario may occur if your machine has been compromised and the devices replaced with symlinks to /dev/zero or a sparse file. If this happens, the random stream is now completely predictable.
The simplest way (at least on Linux and FreeBSD) is to perform an ioctl call on the device that will only succeed if the device is a random generator:
int data;
int result = ioctl(fd, RNDGETENTCNT, &data);
// Upon success data now contains amount of entropy available in bits
If this is performed before the first read of the random device, then there's a fair bet that you've got the random device. So @zneak's answer can be extended to:
int randomData = open("/dev/random", O_RDONLY);
int entropy;
int result = ioctl(randomData, RNDGETENTCNT, &entropy);
if (result < 0) {
    // Error - /dev/random isn't actually a random device (ioctl failed)
    return;
}
if (entropy < sizeof(int) * 8) {
    // Error - there aren't enough bits of entropy in the random device to fill the buffer
    return;
}

int myRandomInteger;
size_t randomDataLen = 0;
while (randomDataLen < sizeof myRandomInteger)
{
    ssize_t result = read(randomData, ((char*)&myRandomInteger) + randomDataLen, (sizeof myRandomInteger) - randomDataLen);
    if (result < 0)
    {
        // error, unable to read /dev/random
    }
    randomDataLen += result;
}
close(randomData);
The Insane Coding blog covered this and other pitfalls not so long ago; I strongly recommend reading the entire article. I have to give credit to them, as that is where this solution was pulled from.
Edited to add (2014-07-25)...
Coincidentally, I read last night that as part of the LibReSSL effort, Linux appears to be getting a getrandom() syscall. As at the time of writing, there's no word on when it will be available in a general kernel release. However, this would be the preferred interface for getting cryptographically secure random data, as it removes all the pitfalls that access via files brings. See also the LibReSSL possible implementation.
