I would like to XOR a very big file (~50 GB).
More precisely, I would like to do so by XORing each 32-byte block of a plaintext file (because of a lack of memory) with the key 3847611839, creating (block after block) a new cipher file.
Thank you for any help!
This sounded like fun, and doesn't sound like a homework assignment.
I don't have a previously XOR-encrypted file to try with, but if you convert one back and forth, there's no diff.
That's what I tried, at least. Enjoy! :) This XORs every 4 bytes with 0xE555E5BF, which I presume is what you wanted.
Here's bloxor.c
// bloxor.c - by Peter Boström 2009, public domain, use as you see fit. :)
#include <stdio.h>
unsigned int xormask = 0xE555E5BF; // 3847611839 in decimal.
int main(int argc, char *argv[])
{
    printf("%x\n", xormask);

    if (argc < 3)
    {
        printf("usage: bloxor 'file' 'outfile'\n");
        return -1;
    }

    FILE *in = fopen(argv[1], "rb");
    if (in == NULL)
    {
        printf("Cannot open: %s", argv[1]);
        return -1;
    }

    FILE *out = fopen(argv[2], "wb");
    if (out == NULL)
    {
        fclose(in);
        printf("unable to open '%s' for writing.", argv[2]);
        return -1;
    }

    char buffer[1024]; //presuming 1024 is a good block size, I dunno...
    size_t count;
    while ((count = fread(buffer, 1, sizeof(buffer), in)) > 0)
    {
        size_t i;
        size_t end = count / 4;
        if (count % 4)
            ++end;
        for (i = 0; i < end; ++i)
        {
            //assumes buffer is suitably aligned for unsigned int; true for a
            //stack array on common platforms, though not guaranteed by the standard
            ((unsigned int *)buffer)[i] ^= xormask;
        }
        if (fwrite(buffer, 1, count, out) != count)
        {
            fclose(in);
            fclose(out);
            printf("cannot write, disk full?\n");
            return -1;
        }
    }
    fclose(in);
    fclose(out);
    return 0;
}
As starblue mentioned in a comment, "Be aware that this is at best obfuscation, not encryption". And it's probably not even obfuscation.
One property of XOR is that (Y xor 0) == Y. What this means for your algorithm is that for any place in your very big file where there is a run of zeros (which seems pretty likely given the size of the file), your key will show up in the cipher file. Plain as day.
Another nice feature of XOR-encrypted data is that if someone has both the plaintext and the ciphertext, XOR'ing those together nets you an output with the key repeated over and over. If the person knows that the two files are a plaintext/ciphertext pair, they've learned the key, which is bad if the key is used for more than one encryption. If the attacker isn't sure whether the plaintext and ciphertext are related, they have a pretty good idea after this, since the key is a repeated pattern in the output. None of this is a problem with a one-time pad, because each bit of the key is used only once, so no one learns anything new from this attack.
A lot of people make the mistake of assuming that because a one-time pad is provably unbreakable, an XOR encryption might be OK 'if done well', since the fundamental operation performed is the same. The difference is that a one-time pad uses each random bit of the key exactly once. So, among other things, if the plaintext has a run of zeros, nothing is learned about the key, unlike with a simple fixed-key XOR cipher.
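To see both problems at once, here's a tiny self-contained demonstration (the buffers are made up for illustration): with an all-zero plaintext the ciphertext is just the key, and XOR'ing any plaintext/ciphertext pair back together prints the key over and over.

#include <stdio.h>

int main(void)
{
    unsigned char plain[8] = {0};                    // a run of zeros
    unsigned char key[4] = {0xE5, 0x55, 0xE5, 0xBF}; // the fixed key from above
    unsigned char cipher[8];
    int i;

    for (i = 0; i < 8; ++i)
        cipher[i] = plain[i] ^ key[i % 4];

    // plaintext XOR ciphertext recovers the key, repeated over and over
    for (i = 0; i < 8; ++i)
        printf("%02x ", plain[i] ^ cipher[i]);
    printf("\n");   // prints: e5 55 e5 bf e5 55 e5 bf
    return 0;
}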
As Bruce Schneier said: "There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files."
An XOR cipher is barely kid sister proof - if even that.
You need to craft a solution around a streaming architecture: you read the input file as a stream, modify it, and write the result to the output file.
This way, you don't have to read the whole file at once.
If your question is how to do it without using extra space on disk, I would just read chunks in multiples of 32 bytes (as big as you can manage), process each chunk in memory, then write it back out. You should be able to use the ftell and fseek functions to do that (assuming your long type is large enough for a ~50 GB offset, of course).
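A minimal sketch of that in-place approach, assuming the file was opened with "rb+" and a hypothetical process() callback that applies the XOR (error handling omitted for brevity):

#include <stdio.h>

void process_in_place(FILE *f, void (*process)(unsigned char *, size_t))
{
    unsigned char chunk[4096];  // a multiple of 32 bytes
    size_t n;
    long pos = ftell(f);

    while ((n = fread(chunk, 1, sizeof chunk, f)) > 0)
    {
        process(chunk, n);        // e.g. XOR each byte with the key
        fseek(f, pos, SEEK_SET);  // jump back to where this chunk began
        fwrite(chunk, 1, n, f);
        fflush(f);                // required between a write and the next read on "rb+"
        pos = ftell(f);
    }
}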
It may be faster to memory-map the file if you can spare that much of your address space (and your OS supports it), but I'd try the easiest solution first.
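If you do go the memory-mapping route, a minimal sketch using POSIX mmap might look like this (my assumptions: a 64-bit address space, a file size that fits in size_t, and ignoring any trailing bytes that don't fill a whole 4-byte word):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int xor_mmap(const char *path, uint32_t key)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }

    uint32_t *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    size_t i;
    for (i = 0; i < (size_t)st.st_size / 4; ++i)
        p[i] ^= key;    // changes reach the file because of MAP_SHARED

    munmap(p, st.st_size);
    close(fd);
    return 0;
}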
Of course, if space isn't a problem, just read the chunks in and write them to a new file, something like the following (pseudo-code):
open infile
open outfile
while not end of infile:
    read chunk from infile
    change chunk
    write chunk to outfile
close outfile
close infile
This sort of read/process/write is pretty basic stuff. If you have more complicated requirements, you should update your question with them.
I have this function :
int cipher_file(char *file_path, uint8_t *key, int key_size){
    FILE *file;
    size_t read_char_count, wrote_char_count;
    char *block = malloc(16*sizeof(uint8_t));

    if ( !(file = fopen(file_path, "rb+")) ) {
        return EXIT_FAILURE;
    }
    while( ( read_char_count = fread(block, 1, 16*sizeof(uint8_t), file) ) > 0 ) {
        block = cipher_block(block, key, key_size);
        fseek(file, -(long)read_char_count, SEEK_CUR);
        wrote_char_count = fwrite(block, 1, 16*sizeof(uint8_t), file);
        fflush(file); /* required between a write and the next read on an update stream */
    }
    free(block);
    fclose(file);
    return EXIT_SUCCESS;
}
(I know ECB mode is not safe btw)
It takes a file, breaks it down into blocks of 128 bits, ciphers them using AES, and writes them back to the file, effectively replacing the plaintext with ciphertext.
I also wrote a function decipher_file() to decipher the file.
The issue is that if the file size is not a multiple of 128 bits, at the end fread() only partially replaces the content of "block" (which is 16 bytes long) with the successfully read characters, leaving a bunch of garbage from the previous ciphered block.
When deciphering, since decipher_file() normally has no way of knowing the size of the original file, it deciphers all the content, including the garbage characters, and writes it back to the file.
I also tried re-initializing "block" with zeros on each round but, unsurprisingly, they were added to the file too, which can be very problematic.
So my question is, is there a way (like a function) to signify where the file ends, or tell fwrite() to stop writing?
You can't use a special character because the encrypted data might end up looking like it. It would not be an elegant solution anyway.
There are multiple solutions:
Prefix the file with the decrypted content length. That's very clean and easy to implement (a sketch follows this list).
Use a cipher mode that retains length information. ECB does not; use a padding scheme, or a mode that preserves the length, such as counter (CTR) mode.
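A minimal sketch of the first option (the fixed 8-byte little-endian header is my assumption; any unambiguous encoding works):

#include <stdint.h>
#include <stdio.h>

// Write the plaintext length as a fixed 8-byte header before the ciphertext.
int write_length_header(FILE *out, uint64_t plaintext_len)
{
    unsigned char hdr[8];
    int i;
    for (i = 0; i < 8; ++i)
        hdr[i] = (unsigned char)(plaintext_len >> (8 * i));
    return fwrite(hdr, 1, 8, out) == 8 ? 0 : -1;
}

// Read it back; after deciphering, keep only this many bytes.
int read_length_header(FILE *in, uint64_t *plaintext_len)
{
    unsigned char hdr[8];
    int i;
    if (fread(hdr, 1, 8, in) != 8)
        return -1;
    *plaintext_len = 0;
    for (i = 0; i < 8; ++i)
        *plaintext_len |= (uint64_t)hdr[i] << (8 * i);
    return 0;
}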
For the class CS50, I have to read in JPEG files byte by byte from a memory card in order to look at the header information. The code compiles fine, but whenever I execute it, it returns a "segmentation fault (core dumped)" message.
Edit: Okay, now I know why I have to use an "unsigned char" instead of "int*". Can someone tell me how I can store information into files within scope for this particular code? Right now, I am trying to store information outside of an if() condition, and I don't think the fread function is actually accessing the "image" file I opened.
#include <stdio.h>
#include <string.h>
#include <math.h>

FILE *image = NULL;

int main(int argc, char *argv[])
{
    FILE *infile = fopen("card.raw", "r");
    if (infile == NULL)
    {
        printf("Could not open.\n");
        fclose(infile);
        return 1;
    }
    unsigned char storage[512];
    int number = 0;
    int b = floor((number) / 100);
    int c = floor(((number) - (b * 100)) / 10);
    int d = floor(((number) - (b * 100) - (c * 10)));
    int writing = 0;
    char string[5];
    char *extension = ".jpg";
    while (fread(&storage, sizeof(storage), 1, infile))
    {
        if (storage == NULL)
        {
            break;
        }
        if (storage[0] == 0xff && storage[1] == 0xd8 && storage[2] == 0xff)
        {
            if (storage[3] == 0xe0 || storage[3] == 0xe1)
            {
                if (image != NULL)
                {
                    fclose(image);
                }
                sprintf(string, "%d%d%d%s", b, c, d, extension);
                image = fopen(string, "w");
                number++;
                writing = 1;
                if (writing == 1 && storage != NULL)
                {
                    fwrite(storage, sizeof(storage), 1, image);
                }
            }
        }
        if (writing == 1 && storage != NULL)
        {
            fwrite(storage, sizeof(storage), 1, image);
        }
        if (storage == NULL)
        {
            fclose(image);
        }
    }
    fclose(image);
    fclose(infile);
    return 0;
}
This is the problem set just in case my explanation is not clear.
recover
In anticipation of this problem set, I spent the past several days snapping photos of people I know, all of which were saved by my
digital camera as JPEGs on a 1GB CompactFlash (CF) card. (It’s
possible I actually spent the past several days on Facebook instead.)
Unfortunately, I’m not very good with computers, and I somehow deleted
them all! Thankfully, in the computer world, "deleted" tends not to
mean "deleted" so much as "forgotten." My computer insists that the CF
card is now blank, but I’m pretty sure it’s lying to me.
Write in ~/Dropbox/pset4/jpg/recover.c a program that recovers these photos.
Ummm.
Okay, here’s the thing. Even though JPEGs are more complicated than BMPs, JPEGs have "signatures," patterns of bytes that distinguish
them from other file formats. In fact, most JPEGs begin with one of
two sequences of bytes. Specifically, the first four bytes of most
JPEGs are either
0xff 0xd8 0xff 0xe0 or 0xff 0xd8 0xff 0xe1
from first byte to fourth byte, left to right. Odds are, if you find one of these patterns of bytes on a disk known to store photos
(e.g., my CF card), they demark the start of a JPEG. (To be sure, you
might encounter these patterns on some disk purely by chance, so data
recovery isn’t an exact science.)
Fortunately, digital cameras tend to store photographs contiguously on CF cards, whereby each photo is stored immediately
after the previously taken photo. Accordingly, the start of a JPEG
usually demarks the end of another. However, digital cameras generally
initialize CF cards with a FAT file system whose "block size" is 512
bytes (B). The implication is that these cameras only write to those
cards in units of 512 B. A photo that’s 1 MB (i.e.,
1,048,576 B) thus takes up 1048576 ÷ 512 = 2048 "blocks" on a CF card. But so does a photo that’s, say, one byte smaller (i.e.,
1,048,575 B)! The wasted space on disk is called "slack space."
Forensic investigators often look at slack space for remnants of
suspicious data.
The implication of all these details is that you, the investigator, can probably write a program that iterates over a copy
of my CF card, looking for JPEGs' signatures. Each time you find a
signature, you can open a new file for writing and start filling that
file with bytes from my CF card, closing that file only once you
encounter another signature. Moreover, rather than read my CF card’s
bytes one at a time, you can read 512 of them at a time into a buffer
for efficiency’s sake. Thanks to FAT, you can trust that JPEGs'
signatures will be "block-aligned." That is, you need only look for
those signatures in a block’s first four bytes.
Realize, of course, that JPEGs can span contiguous blocks. Otherwise, no JPEG could be larger than 512 B. But the last byte of a
JPEG might not fall at the very end of a block. Recall the possibility
of slack space. But not to worry. Because this CF card was brand-new
when I started snapping photos, odds are it’d been "zeroed" (i.e.,
filled with 0s) by the manufacturer, in which case any slack space
will be filled with 0s. It’s okay if those trailing 0s end up in the
JPEGs you recover; they should still be viewable.
Now, I only have one CF card, but there are a whole lot of you! And so I’ve gone ahead and created a "forensic image" of the card,
storing its contents, byte after byte, in a file called card.raw . So
that you don’t waste time iterating over millions of 0s unnecessarily,
I’ve only imaged the first few megabytes of the CF card. But you
should ultimately find that the image contains 16 JPEGs. As usual, you
can open the file programmatically with
fopen, as in the below.
FILE* file = fopen("card.raw", "r");
Notice, incidentally, that ~/Dropbox/pset4/jpg contains only recover.c, but it’s devoid of any code. (We leave it to you to decide
how to implement and compile recover!) For simplicity, you should
hard-code "card.raw" in your program; your program need not accept any
command-line arguments. When executed, though, your program should
recover every one of the JPEGs from card.raw, storing each as a
separate file in your current working directory. Your program should
number the files it outputs by naming each one ###.jpg, where ### is a three-digit decimal number from 000 on up. (Befriend sprintf.) You need not try to recover the JPEGs' original names. To
check whether the JPEGs your program spit out are correct, simply double-click and take a look! If each photo appears intact,
your operation was likely a success!
Odds are, though, the JPEGs that the first draft of your code spits out won’t be correct. (If you open them up and don’t see
anything, they’re probably not correct!) Execute the command below to
delete all JPEGs in your current working directory.
rm *.jpg
If you’d rather not be prompted to confirm each deletion, execute the command below instead.
rm -f *.jpg
Just be careful with that -f switch, as it "forces" deletion without prompting you.
int* storage[512];
This defines an array of 512 pointers to int; the pointers themselves get storage, but nothing they point to is reserved, and it is not the buffer of bytes you want.
I suspect you just want
int storage[512];
After this, storage is an array of 512 ints (its name decays to a pointer when you pass it around), and the storage for them actually exists. Though I still think you don't want this: you need 'bytes', not ints. The nearest thing C has is unsigned char. So the final declaration is:
unsigned char storage[512];
Why? Because fread() reads into consecutive bytes. If you read into ints, then you will read sizeof(int) bytes (typically 4) into each int.
There are a number of problems in your program. The first is that you have not opened the file in binary mode.
The second is that you are doing unnecessary pointer arithmetic. Why not:
char buffer[BUFFERSIZE];
....
if (buffer[ii] == WHATEVER)
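Putting both answers together, a minimal sketch of the corrected read loop (the 512-byte buffer matches the pset's block size; the signature test is just the fragment from the question):

#include <stdio.h>

int main(void)
{
    FILE *infile = fopen("card.raw", "rb");   // "rb": binary mode
    if (infile == NULL)
        return 1;

    unsigned char storage[512];               // bytes, not ints
    while (fread(storage, 1, sizeof storage, infile) == sizeof storage)
    {
        if (storage[0] == 0xff && storage[1] == 0xd8 && storage[2] == 0xff)
        {
            // ... open the next ###.jpg and start writing blocks to it ...
        }
    }
    fclose(infile);
    return 0;
}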
The idea behind this program is simply to access the RAM and dump its data to a txt file.
Later I'll convert the txt file to jpeg and hopefully it will be readable.
However, when I try to read from the RAM using NEW[], it takes waaaaaay too long to actually copy all the values into the file.
Isn't it supposed to be really fast? I mean, I save pictures every day and it doesn't even take a second.
Is there some other method I can use to dump memory to a file?
#include <stdio.h>
#include <stdlib.h>
#include <hw/pci.h>
#include <hw/inout.h>
#include <sys/mman.h>

main()
{
    FILE *fp;
    fp = fopen("test.txt", "w+d");
    int NumberOfPciCards = 3;

    struct pci_dev_info info[NumberOfPciCards];
    void *PciDeviceHandler1, *PciDeviceHandler2, *PciDeviceHandler3;
    uint32_t *Buffer;
    int *BusNumb; //int Buffer;
    uint32_t counter = 0;
    int i;
    int r;
    int y;
    volatile uint32_t *NEW, *NEW2;
    uintptr_t iobase;
    volatile uint32_t *regbase;

    NEW = (uint32_t *)malloc(sizeof(uint32_t));
    NEW2 = (uint32_t *)malloc(sizeof(uint32_t));
    Buffer = (uint32_t *)malloc(sizeof(uint32_t));
    BusNumb = (int *)malloc(sizeof(int));

    printf("\n 1");

    for (r = 0; r < NumberOfPciCards; r++)
    {
        memset(&info[r], 0, sizeof(info[r]));
    }

    printf("\n 2");

    //Here the attach takes place.
    for (r = 0; r < NumberOfPciCards; r++)
    {
        (pci_attach(r) < 0) ? FuncPrint(1, r) : FuncPrint(0, r);
    }

    printf("\n 3");

    info[0].VendorId = 0x8086;  //Won't be using this one
    info[0].DeviceId = 0x3582;  //Or this one
    info[1].VendorId = 0x10B5;  //Will only be using this one (PLX 9054 chip)
    info[1].DeviceId = 0x9054;  //Also PLX 9054
    info[2].VendorId = 0x8086;  //Not used
    info[2].DeviceId = 0x24cb;  //Not used

    printf("\n 4");

    //I attached the device, gave it a handler, and set some settings.
    if ((PciDeviceHandler1 = pci_attach_device(0, PCI_SHARE | PCI_INIT_ALL, 0, &info[1])) == 0)
    {
        perror("pci_attach_device fail");
        exit(EXIT_FAILURE);
    }

    //This just prints out some details of the card.
    for (i = 0; i < 6; i++)
    {
        if (info[1].BaseAddressSize[i] > 0)
            printf("Aperture %d: "
                   "Base 0x%llx Length %d bytes Type %s\n", i,
                   PCI_IS_MEM(info[1].CpuBaseAddress[i]) ? PCI_MEM_ADDR(info[1].CpuBaseAddress[i]) : PCI_IO_ADDR(info[1].CpuBaseAddress[i]),
                   info[1].BaseAddressSize[i], PCI_IS_MEM(info[1].CpuBaseAddress[i]) ? "MEM" : "IO");
    }
    printf("\nEnd of device random info dump---\n");
    printf("\nNEW's address: %d\n", *(int *)NEW);

    //Not sure if this is a legitimate way of memory allocation, but I can't seem to read the RAM any other way.
    NEW = mmap_device_memory(NULL, info[1].BaseAddressSize[3], PROT_READ | PROT_WRITE | PROT_NOCACHE, 0, info[1].CpuBaseAddress[3]);

    //Here is where things are starting to get messy and REALLY long, just running through all the RAM and dumping it.
    //Is there some other way I can dump the data in the RAM into a file?
    while (counter != info[1].BaseAddressSize[3])
    {
        fprintf(fp, "%x", NEW[counter]);
        counter++;
    }
    fclose(fp);
    printf("0x%x", *Buffer);
}
A few issues that I can see:
You are writing blocks of 4 bytes - that's quite inefficient. The stream buffering in the C library may help with that to a degree, but using larger blocks would still be more efficient.
Even worse, you are writing out the memory dump in hexadecimal notation rather than the bytes themselves. That conversion is very CPU-intensive, not to mention that the size of the output is essentially doubled. You would be better off writing raw binary data using e.g. fwrite() (see the sketch below).
Depending on the specifics of your system (is this on QNX?), reading from I/O-mapped memory may be slower than reading directly from physical memory, especially if your PCI device has to act as a relay. What exactly is it that you are doing?
In any case I would suggest using a profiler to actually find out where your program is spending most of its time. Even a rudimentary system monitor would allow you to determine if your program is CPU-bound or I/O-bound.
As it is, "waaaaaay too long" is hardly a valid measurement. How much data is being copied? How long does it take? Where is the output file located?
P.S.: I also have some concerns w.r.t. what you are trying to do, but that is slightly off-topic for this question...
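For reference, a minimal sketch of the fwrite() approach from the second point above (region and region_words are stand-ins for NEW and the aperture size from the question):

#include <stdint.h>
#include <stdio.h>

int dump_region(const char *path, const volatile uint32_t *region, size_t region_words)
{
    FILE *fp = fopen(path, "wb");   // binary mode, not text
    if (fp == NULL)
        return -1;

    // one bulk write instead of one fprintf() per 4-byte word
    size_t written = fwrite((const void *)region, sizeof(uint32_t), region_words, fp);
    fclose(fp);
    return written == region_words ? 0 : -1;
}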
For fastest speed: write the data in binary form and use the open()/write()/close() APIs. Since your data is already available in a contiguous block of (virtual) memory, it is a waste to copy it to a temporary buffer (as the fwrite(), fprintf(), etc. APIs do).
The code using write() will be similar to:
#include <fcntl.h>    // open()
#include <sys/stat.h> // S_IRWXU
#include <unistd.h>   // write(), close()
int fd = open("filename.bin", O_RDWR | O_CREAT, S_IRWXU);
write(fd, (void *)NEW, 4 * info[1].BaseAddressSize[3]);
close(fd);
You will need to add error handling and make sure that the buffer size is specified correctly.
To reiterate, you get the speed-up from:
avoiding the conversion from binary to ASCII (as pointed out by others above)
avoiding many calls to libc
reducing the number of system-calls (from inside libc)
eliminating the overhead of copying data to a temporary buffer inside fwrite()/fprintf() and related functions (buffering would be useful if your data arrived in small chunks, including the case of converting to ASCII in 4-byte units)
I intentionally ignore commenting on other parts of your code as it is apparently not intended to be production quality yet and your question is focused on how to speed up writing data to a file.
There are a bunch of ways describing how to use various methods to print out lines of a text file on this site:
Posix-style,
reading IP addresses,
Fixed line length.
They all seem to be tailored to a specific example.
It would be great to have the Clearest and Most Concise and Easiest way to simply: print each line of any text file to the screen. Preferably with detailed explanations of what each line does.
Points for brevity and clarity.
#include <stdio.h>
static void cat(FILE *fp)
{
char buffer[4096];
size_t nbytes;
while ((nbytes = fread(buffer, sizeof(char), sizeof(buffer), fp)) != 0)
fwrite(buffer, sizeof(char), nbytes, stdout);
}
int main(int argc, char **argv)
{
FILE *fp;
const char *file;
while ((file = *++argv) != 0)
{
if ((fp = fopen(file, "r")) != 0)
{
cat(fp);
fclose(fp);
}
}
return(0);
}
The cat() function is not strictly necessary, but I'd rather use it. The main program steps through each command line argument and opens the named file. If it succeeds, it calls the cat() function to print its contents. Since the call to fopen() does not specify "rb", it is opened as a text file. If the file is not opened, this code silently ignores the issue. If no files are specified, nothing is printed at all.
The cat() function simply reads blocks of text up to 4096 bytes at a time, and writes them to standard output ('the screen'). It stops when there's no more to read.
If you want to extend the code to read standard input when no file is specified, then you can use:
if (argc == 1)
cat(stdin);
else
{
...while loop as now...
}
which is one of the reasons for having the cat() function written as shown.
This code does not pay direct attention to newlines — or lines of any sort. If you want to process it formally one line at a time, then you can do several things:
static void cat(FILE *fp)
{
char buffer[4096];
while (fgets(buffer, sizeof(buffer), fp) != 0)
fputs(buffer, stdout);
}
This will read and write one line at a time. If any line is longer than 4095 bytes, it will read the line in two or more operations and write it in the same number of operations. Note that this assumes a text file in a way that the version using fread() and fwrite() does not. On POSIX systems, the version with fread() and fwrite() will handle arbitrary binary files with null bytes ('\0') in the data, but the version using fgets() and fputs() will not. Both the versions so far are strictly standard C (any version of the standard) as they don't use any platform-specific extensions; they are about as portable as code can be.
Alternatively again, if you have the POSIX 2008 getline() function, you can use that, but you need #include <stdlib.h> too (because you end up having to release the memory it allocates):
static void cat(FILE *fp)
{
char *buffer = 0;
size_t buflen = 0;
while (getline(&buffer, &buflen, fp) != -1)
fputs(buffer, stdout);
free(buffer);
}
This version, too, will not handle binary data (meaning data with null bytes in it). It could be upgraded to do so, of course:
static void cat(FILE *fp)
{
char *buffer = 0;
size_t buflen = 0;
ssize_t nbytes;
while ((nbytes = getline(&buffer, &buflen, fp)) != -1)
fwrite(buffer, sizeof(char), nbytes, stdout);
free(buffer);
}
The getline() function reports how many bytes it read (there's a null byte after that), but the fwrite() function is the only one that takes a stream of arbitrary bytes and writes them all to the given stream.
Well, here is a very short solution I eventually made. I imagine there is something fundamentally wrong with it, otherwise it would have been suggested, but I figured I would post it here and hope someone tears it apart:
#include <stdio.h>

int main(void)
{
    FILE *MyFile;
    int c;

    MyFile = fopen("C:\\YourFile.txt", "r"); /* the backslash must be escaped in a C string */
    c = fgetc(MyFile);
    while (c != EOF)
    {
        printf("%c", c);
        c = fgetc(MyFile);
    }
    fclose(MyFile);
    return 0;
}
@Dlinet, you are trying to learn some useful lessons on how to organize a program. I won't post code because there is already a really excellent answer; I cannot possibly improve upon it. But I would like to recommend a book to you.
The book is called Software Tools in Pascal. The language is Pascal, not C, but for reading the book this will cause no serious hardship. They start out implementing simple tools like the one in this example (which on UNIX is called cat) and they move on to more advanced stuff. Not only do they teach great lessons on how to organize this sort of program, they also cover language design issues. (There are problems in Pascal that really vex them, and if you know C you will realize that C doesn't have those problems.)
The book is out of print now, but I found it to be hugely valuable when I was learning to write code. The so-called "left corner design" methodology serves me well to this day.
I encourage you to find a used copy on Amazon or wherever. Amazon has used copies starting at $0.02 plus $4 shipping.
http://www.amazon.com/Software-Tools-Pascal-Brian-Kernighan/dp/0201103427
It would be an educational exercise to study the programs in this book and implement them in C. Any Linux system already has more-powerful and fully-debugged versions of these programs, but it would not be a waste of your time to work through this book and learn how to write this stuff.
Alternatively you could install FreePascal on your computer and use it to run the programs from the book.
Good luck and may you always enjoy software development!
If you want something prebaked, there's cat on POSIX systems.
If you want to write it yourself, here's the basic layout:
Check to make sure the file name, permissions, and path are valid
Read until the newline separator in a loop ('\n' on Unix, "\r\n" on Windows/DOS)
Check for errors; if one occurs, print the error and abort.
Print the line to the screen.
Repeat
The point is, there isn't really a specific way to do it. Just read, then write, and repeat. With some error checking, you've got cat all over again.
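If you do roll your own, a minimal sketch of those steps might look like this (standard C, plus POSIX perror() for the error report):

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2)                          // validate the argument list
    {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "r");
    if (fp == NULL)                         // name/permissions/path check
    {
        perror(argv[1]);
        return 1;
    }

    char line[4096];
    while (fgets(line, sizeof line, fp))    // read up to each newline
        fputs(line, stdout);                // print the line to the screen

    if (ferror(fp))                         // check for a read error
        perror(argv[1]);
    fclose(fp);
    return 0;
}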
I want to use /dev/random or /dev/urandom in C. How can I do it? I don't know how to handle them in C; if someone knows, please tell me how. Thank you.
In general, it's a better idea to avoid opening files to get random data, because of how many points of failure there are in the procedure.
On recent Linux distributions, the getrandom system call can be used to get crypto-secure random numbers, and it cannot fail if GRND_RANDOM is not specified as a flag and the read amount is at most 256 bytes.
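A minimal sketch of that approach (assuming glibc 2.25 or later, which declares getrandom() in <sys/random.h>; older systems would have to go through syscall()):

#include <stdio.h>
#include <sys/random.h>

int main(void)
{
    unsigned char buf[32];

    // No GRND_RANDOM flag and a request of at most 256 bytes:
    // this cannot return a short read once the pool is initialized.
    if (getrandom(buf, sizeof buf, 0) != (ssize_t)sizeof buf)
        return 1;

    size_t i;
    for (i = 0; i < sizeof buf; ++i)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}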
As of October 2017, OpenBSD, Darwin and Linux (with -lbsd) now all have an implementation of arc4random that is crypto-secure and that cannot fail. That makes it a very attractive option:
char myRandomData[50];
arc4random_buf(myRandomData, sizeof myRandomData); // done!
Otherwise, you can use the random devices as if they were files. You read from them and you get random data. I'm using open/read here, but fopen/fread would work just as well.
#include <fcntl.h>   // open()
#include <unistd.h>  // read(), close()

int randomData = open("/dev/urandom", O_RDONLY);
if (randomData < 0)
{
    // something went wrong
}
else
{
    char myRandomData[50];
    ssize_t result = read(randomData, myRandomData, sizeof myRandomData);
    if (result < 0)
    {
        // something went wrong
    }
}
You may read many more random bytes before closing the file descriptor. /dev/urandom never blocks and always fills in as many bytes as you've requested, unless the system call is interrupted by a signal. It is considered cryptographically secure and should be your go-to random device.
/dev/random is more finicky. On most platforms, it can return fewer bytes than you've asked for and it can block if not enough bytes are available. This makes the error handling story more complex:
int randomData = open("/dev/random", O_RDONLY);
if (randomData < 0)
{
    // something went wrong
}
else
{
    char myRandomData[50];
    size_t randomDataLen = 0;
    while (randomDataLen < sizeof myRandomData)
    {
        ssize_t result = read(randomData, myRandomData + randomDataLen, (sizeof myRandomData) - randomDataLen);
        if (result < 0)
        {
            // something went wrong
        }
        randomDataLen += result;
    }
    close(randomData);
}
There are other accurate answers above. I needed to use a FILE* stream, though. Here's what I did...
int byte_count = 64;
char data[64];
FILE *fp;
fp = fopen("/dev/urandom", "r");
fread(&data, 1, byte_count, fp);
fclose(fp);
Just open the file for reading and then read data. In C++11 you may wish to use std::random_device which provides cross-platform access to such devices.
Zneak is 100% correct. It's also very common to read a buffer of random numbers that is slightly larger than what you'll need on startup. You can then populate an array in memory, or write them to your own file for later re-use.
A typical implementation of the above:
typedef struct prandom {
    struct prandom *prev;
    int64_t number;
    struct prandom *next;
} prandom_t;
This becomes more or less like a tape that just advances, and which can be magically replenished by another thread as needed. There are a lot of services that provide large file dumps of nothing but random numbers, generated with much stronger generators such as:
Radioactive decay
Optical behavior (photons hitting a semi transparent mirror)
Atmospheric noise (not as strong as the above)
Farms of intoxicated monkeys typing on keyboards and moving mice (kidding)
Don't use 'pre-packaged' entropy for cryptographic seeds, in case that doesn't go without saying. Those sets are fine for simulations, but not fine at all for generating keys and such.
If you're not concerned with quality and you need a lot of numbers for something like a Monte Carlo simulation, it's much better to have them available in a way that will not cause read() to block.
However, remember that the randomness of a number is only as strong as the process that generated it. /dev/random and /dev/urandom are convenient, but not as strong as using an HRNG (or downloading a large dump from an HRNG). Also worth noting: /dev/random refills via entropy, so it can block for quite a while depending on circumstances.
zneak's answer covers it simply; however, the reality is more complicated than that. For example, you need to consider whether /dev/{u}random really is the random number device in the first place. Such a scenario may occur if your machine has been compromised and the devices replaced with symlinks to /dev/zero or a sparse file. If this happens, the random stream is now completely predictable.
The simplest way (at least on Linux and FreeBSD) is to perform an ioctl call on the device that will only succeed if the device is a random generator:
#include <sys/ioctl.h>     // ioctl()
#include <linux/random.h>  // RNDGETENTCNT (Linux; FreeBSD declares it elsewhere)

int data;
int result = ioctl(fd, RNDGETENTCNT, &data);
// Upon success, data now contains the amount of entropy available, in bits
If this is performed before the first read of the random device, then there's a fair bet that you've got the random device. So zneak's answer can be extended to be:
int randomData = open("/dev/random", O_RDONLY);
int entropy;
int result = ioctl(randomData, RNDGETENTCNT, &entropy);

if (result < 0) {
    // Error - /dev/random isn't actually a random device
    return;
}

if (entropy < sizeof(int) * 8) {
    // Error - there aren't enough bits of entropy in the random device to fill the buffer
    return;
}
int myRandomInteger;
size_t randomDataLen = 0;
while (randomDataLen < sizeof myRandomInteger)
{
    ssize_t result = read(randomData, ((char *)&myRandomInteger) + randomDataLen, (sizeof myRandomInteger) - randomDataLen);
    if (result < 0)
    {
        // error, unable to read /dev/random
    }
    randomDataLen += result;
}
close(randomData);
The Insane Coding blog covered this, and other pitfalls, not so long ago; I strongly recommend reading the entire article. I have to give credit to them, as this is where this solution was pulled from.
Edited to add (2014-07-25)...
Coincidentally, I read last night that as part of the LibReSSL effort, Linux appears to be getting a getrandom() syscall. As of the time of writing, there's no word on when it will be available in a general kernel release. However, this would be the preferred interface for getting cryptographically secure random data, as it removes all the pitfalls that access via files brings. See also the LibReSSL possible implementation.