I'm trying to generate big random numbers for a public/private key pair, and I have a problem choosing the initial seed to generate a random 256-bit private key on the client side.
As you may know, we shouldn't use the rand or srand functions in C because their output is easy to predict.
How can I generate a random seed suitable for generating a random 256-bit private key?
I use GMP's linear congruential algorithm to generate random numbers in C.
On Unix systems, you can read from the /dev/random and /dev/urandom files to get "random" byte sequences. These sequences are derived from the system's entropy pool.
See this post for more details about their differences.
#include <unistd.h> // read
#include <fcntl.h>  // open
#include <stdio.h>  // printf

int main(void)
{
    int fd;
    unsigned int seed;

    fd = open("/dev/urandom", O_RDONLY);
    read(fd, &seed, sizeof seed);
    printf("%u\n", seed);
    // Then you can use srand with your new random seed
    return (0);
}
Note: Don't forget to check for errors after open and read, and to close fd after use.
Related
I had a network security class at my university, and there was a challenge to find a secret number.
Here is the code
#include <stdlib.h>
#include <time.h>
#include <stdio.h>
void init() {
    setbuf(stdin, NULL);
    setbuf(stdout, NULL);
}

int main() {
    init();
    srand(time(0));
    int secret = 0;
    puts("Your secret: ");
    scanf("%d", &secret);
    if (secret == rand()) {
        system("/bin/sh");
    } else {
        puts("failed");
    }
}
I actually could not understand my professor's explanation.
Can anyone explain what this code does, and how I can find the secret number?
Technically you shouldn't be able to find the "secret" number - that's the whole point of the exercise: the "secret number" is generated using a PRNG seeded with a more-or-less unpredictable set of bits coming from the system clock.
Theoretically, if you have complete knowledge about the system - i.e. you both know the specific PRNG implementation used and the exact value used to seed it (the result of the time() call) - you should be able to guess the number simply by seeding another instance of the same PRNG and getting the next random int from it.
In this specific case, as the attacker you have complete control of the execution of the code and you should be able to predict the value returned from time() with very good accuracy (it is only a second resolution), so if you can build a program on the same system, that takes the predicted time value, seeds it to the same srand() implementation and returns the first random int - you can guess the "secret number".
So maybe the point of the exercise is to show that PRNG security depends entirely on the seed staying secret - if an attacker knows when you started, you aren't secure - and possibly to teach you not to seed srand() with time() but to rely instead on something actually robust, like /dev/urandom 🤷.
Pseudo random number generators rely on a seed to generate their random numbers: if you use the same seed, you'll get the same sequence of "random" numbers.
See here:
Consider this code:
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    srand(0);
    printf("random number 1: %d\n", rand());
    printf("random number 2: %d\n", rand());
    printf("random number 3: %d\n", rand());
    printf("random number 4: %d\n", rand());
    return 0;
}
Running the program (a.out) multiple times always generates the same numbers:
marco@Marcos-MacBook-Pro-16 Desktop % ./a.out
random number 1: 520932930
random number 2: 28925691
random number 3: 822784415
random number 4: 890459872
marco@Marcos-MacBook-Pro-16 Desktop % ./a.out
random number 1: 520932930
random number 2: 28925691
random number 3: 822784415
random number 4: 890459872
marco@Marcos-MacBook-Pro-16 Desktop % ./a.out
random number 1: 520932930
random number 2: 28925691
random number 3: 822784415
random number 4: 890459872
marco@Marcos-MacBook-Pro-16 Desktop % ./a.out
random number 1: 520932930
random number 2: 28925691
random number 3: 822784415
random number 4: 890459872
Of course the exact numbers depend on the implementation used to generate them, so the sequence will probably differ on your system. Nevertheless, using the same seed will always result in the same sequence.
Real systems use a combination of "real" random numbers (based on physical randomness) and a pseudorandom number generator, simply because it's far more efficient to generate random numbers that way.
These are usually cryptographically secure pseudorandom number generators because the numbers are used for cryptographic operations (cryptography heavily relies on randomness to work).
The basic idea is as long as the initial seed is "secret" (unknown) you can't work it back and determine the pre-defined sequence of numbers generated.
It is possible (and has been done) to work back the initial seed simply by looking at the numbers generated by a pseudorandom number generator.
Now on how to solve the exercise given by your professor:
The easiest way would be to "freeze" time to have a fixed seed value for the random numbers (as shown in my code example above).
Since there isn't an easy way to do that, you can instead obtain the secret by running another program that seeds with the current time and outputs the first random number generated (since that's the "secret"):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void) {
srand(time(0));
printf("the secret number is %d\n", rand());
return 0;
}
You can then use that number to "unlock" the program given by your professor.
However you have to do that within a second or less, since time() returns a new value every second.
The more reliable way would be to have your program input the "random" number as soon as you generated it.
Here's an example code of how you could do that:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
// where to put the "random" number on disk
const char *tmp_file = "/tmp/input";
// where the executable of your professor is
const char *executable = "/path/to/your/professors/executable";
void writeRandomNumberToDisk(const char *path, int number) {
    char buf[128];

    // convert int to string
    memset(buf, 0, sizeof(buf));
    snprintf(buf, sizeof(buf), "%d\n", number);

    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return;
    fwrite(buf, strlen(buf), 1, fp);
    fclose(fp);
}

int main(void) {
    srand(time(0));
    int secret = rand();
    printf("the secret number is %d\n", secret);
    writeRandomNumberToDisk(tmp_file, secret);

    char buf[512];
    memset(buf, 0, sizeof(buf));
    snprintf(buf, sizeof(buf), "/bin/sh -c 'cat %s | %s'", tmp_file, executable);
    printf("Now executing %s\n", buf);
    system(buf);
    return 0;
}
Essentially, it writes the first "random" number to disk, then invokes the shell that will feed the "random" number into the program.
You can also bypass the file system entirely by using something like:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
// where the executable of your professor is
const char *executable = "/path/to/your/professors/executable";
int main(void) {
    srand(time(0));
    int secret = rand();
    printf("the secret number is %d\n", secret);

    char buf[512];
    memset(buf, 0, sizeof(buf));
    snprintf(buf, sizeof(buf), "/bin/sh -c 'printf \"%d\\n\" | %s'", secret, executable);
    printf("Now executing %s\n", buf);
    system(buf);
    return 0;
}
I am new to C programming.
I am trying to work through an example in my textbook.
Problem:
1: Can't make the random number generator pause for one second without having to insert printf(); in a place where I shouldn't.
2: Can't make the program pause for 1 second and then delete the random sequence. I have tried using printf("\r"), but it just deletes the entire sequence without pausing for 1 second.
Help appreciated.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void)
{
    time_t Start_Of_Seq = (time(NULL));
    time_t Now = 0;
    Now = clock();
    srand((unsigned int)Start_Of_Seq);
    for (int i = 1; i <= 5; i++)
    {
        printf("%d", rand() % 10);
    }
    printf("\n"); // This shouldn't be here.
    for (; clock() - Now < CLOCKS_PER_SEC;)
        ;
    printf("Testing to see if there is a pause\n");
}
The printf function writes everything to a buffer; when stdout is connected to a terminal, the buffer is typically flushed only when a newline is printed. Try fflush(stdout); to print the buffer contents immediately.
Besides, if you use Linux or another Unix-like system, there is a sleep function for pauses. Try the man 3 sleep command to see more info.
Is there a simple but reliable way to measure the relative difference in performance between two algorithm implementations in C programs? More specifically, I want to compare the performance of implementation A vs. implementation B. I'm thinking of a scheme like this:
In a unit test program:
start timer
call function
stop timer
get difference between start stop time
Run the scheme above for a pair of functions A and B, then get a percentage difference in execution time to determine which is faster.
Upon doing some research I came across this question about using a monotonic clock on OS X in C, which apparently can give me at least nanosecond precision. To be clear, I understand that precise, controlled measurements are hard to perform, as discussed in "With O(N) known and system clock known, can we calculate the execution time of the code?", but I assume that should be irrelevant in this case because I only want a relative measurement.
Everything considered, is this a sufficient and valid approach towards the kind of analysis I want to perform? Are there any details or considerations I might be missing?
The main modification I would make to the timing scheme you outline is to ensure that the same timing code is used for both functions. Assuming they have an identical interface, you can do that by passing a function pointer to skeletal code.
As an example, I have some code that times some functions that validate whether a given number is prime. The control function is:
static void test_primality_tester(const char *tag, int seed, int (*prime)(unsigned), int count)
{
    srand(seed);
    Clock clk;
    int nprimes = 0;
    clk_init(&clk);
    clk_start(&clk);
    for (int i = 0; i < count; i++)
    {
        if (prime(rand()))
            nprimes++;
    }
    clk_stop(&clk);
    char buffer[32];
    printf("%9s: %d primes found (out of %d) in %s s\n", tag, nprimes,
           count, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
}
I'm well aware of "srand() — why call it once?", but the point of calling srand() once each time this function is called is to ensure that the tests process the same sequence of random numbers. On macOS, RAND_MAX is 0x7FFFFFFF.
The type Clock contains analogues of two struct timespec structures, for the start and stop times. The clk_init() function initializes the structure; clk_start() records the start time in the structure; clk_stop() records the stop time in the structure; and clk_elapsed_us() calculates the elapsed time between the start and stop times in microseconds. The package is written to provide me with cross-platform portability (at the cost of some headaches in determining which is the best sub-second timing routine available at compile time).
You can find my code for timers on Github in the repository https://github.com/jleffler/soq, in the src/libsoq directory — files timer.h and timer.c. The code has not yet caught up with macOS Sierra having clock_gettime(), though it could be compiled to use it with -DHAVE_CLOCK_GETTIME as a command-line compiler option.
This code was called from a function one_test():
static void one_test(int seed)
{
    printf("Seed: %d\n", seed);
    enum { COUNT = 10000000 };
    test_primality_tester("IsPrime1", seed, IsPrime1, COUNT);
    test_primality_tester("IsPrime2", seed, IsPrime2, COUNT);
    test_primality_tester("IsPrime3", seed, IsPrime3, COUNT);
    test_primality_tester("isprime1", seed, isprime1, COUNT);
    test_primality_tester("isprime2", seed, isprime2, COUNT);
    test_primality_tester("isprime3", seed, isprime3, COUNT);
}
And the main program can take one or a series of seeds, or uses the current time as a seed:
int main(int argc, char **argv)
{
    if (argc > 1)
    {
        for (int i = 1; i < argc; i++)
            one_test(atoi(argv[i]));
    }
    else
        one_test(time(0));
    return(0);
}
I'm trying to generate large files (4-8 GB) with C code.
Now I use fopen() with the "wb" mode to open the file in binary mode, and call fwrite() in a for loop, writing one byte per iteration. There is no problem until the file size reaches 4294967296 bytes (4096 MB). It looks like some memory limit in a 32-bit OS, as if the data written to the opened file were still held in RAM. Am I right? The symptom is that the created file is smaller than requested by exactly 4096 MB: e.g. when I want a 6000 MB file, it creates a 6000 MB - 4096 MB = 1904 MB file.
Could you suggest other way to do that task?
Regards :)
Part of code:
unsigned long long int number_of_data = (unsigned int)atoi(argv[1])*1024*1024; //MB
char x[1] = {atoi(argv[2])};
fp = fopen(strcat(argv[3], ".bin"), "wb");
for (i = 0; i < number_of_data; i++) {
    fwrite(x, sizeof(x[0]), sizeof(x[0]), fp);
}
fclose(fp);
fwrite is not the problem here. The problem is the value you are calculating for number_of_data.
You need to be careful of any unintentional 32-bit casting when dealing with 64-bit integers. When I define them, I normally do it in a number of discrete steps, being careful at each step:
unsigned long long int number_of_data = atoi(argv[1]); // Should be good for up to 2,147,483,647 MB (2TB)
number_of_data *= 1024*1024; // Convert to MB
The compound assignment operator (*=) acts on the l-value (the unsigned long long int), so you can trust it to perform the arithmetic on a 64-bit value.
This may look unoptimised, but a decent compiler will remove any unnecessary steps.
You should not have any problem creating large files on Windows but I have noticed that if you use a 32 bit version of seek on the file it then seems to decide it is a 32 bit file and thus cannot be larger that 4GB. I have had success using _open, _lseeki64 and _write when working with >4GB files on Windows. For instance:
static void
create_file_simple(const TCHAR *filename, __int64 size)
{
    int omode = _O_WRONLY | _O_CREAT | _O_TRUNC;
    int fd = _topen(filename, omode, _S_IREAD | _S_IWRITE);

    _lseeki64(fd, size, SEEK_SET);
    _write(fd, "ABCD", 4);
    _close(fd);
}
The above will create a file over 4GB without issue. However, it can be slow as when you call _write() there the file system has to actually allocate the disk blocks for you. You may find it faster to create a sparse file if you have to fill it up randomly. If you will fill the file sequentially from the beginning then the above code will be fine. Note that if you really want to use the buffered IO provided by fwrite you can obtain a FILE* from a C library file descriptor using fdopen().
(In case anyone is wondering, the TCHAR, _topen and underscore prefixes are all MSVC++ quirks).
UPDATE
The original question is using sequential output for N bytes of value V. So a simple program that should actually produce the file desired is:
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <io.h>
#include <tchar.h>
int
_tmain(int argc, TCHAR *argv[])
{
    __int64 n = 0, r = 0, size = 0x100000000LL; /* 4GB */
    char v = 'A';
    int fd = _topen(argv[1], _O_WRONLY | _O_CREAT | _O_TRUNC, _S_IREAD | _S_IWRITE);

    while (r != -1 && n < size) {
        r = _write(fd, &v, sizeof(v));
        if (r >= 0) n += r;
    }
    _close(fd);
    return 0;
}
However, this will be really slow as we are only writing one byte at a time. That is something that can be improved by using a larger buffer or using buffered I/O by calling fdopen on the descriptor (fd) and switching to fwrite.
You have no problem with fwrite(). The problem seems to be your
unsigned long long int number_of_data = (unsigned int)atoi(argv[1])*1024*1024; //MB
which indeed should rather be something like
unsigned long long int number_of_data = atoll(argv[1])*1024ULL*1024ULL;
unsigned long long would still be OK on the left-hand side, but unsigned int * int * int will give you an unsigned int, no matter how large your target variable is.
I am searching for a way to create a somewhat random string of 64k size.
But I want this to be fast as well. I have tried the following approaches:
a) Read from /dev/random -- this is too slow.
b) Call rand() or a similar function of my own -- a solution with a few (<10) calls should be OK.
c) malloc() -- on my Linux system, the returned memory region is always all zeroes, instead of some random data.
d) Get some randomness from stack variable addresses/timestamps etc. to initialize the first few bytes, then copy those values over the remaining array in different variations.
Would like to know if there is a better way to approach this.
/dev/random blocks once its pool of random data is exhausted, until new entropy has been gathered. You should try /dev/urandom instead.
rand() should be fairly fast in your C runtime implementation. If you can relax your "random" requirement a bit (accepting lower-quality random numbers), you can generate a sequence of numbers using a tailored implementation of a linear congruential generator. Be sure to choose your parameters wisely (see the Wikipedia entry) to allow additional optimizations.
To generate such a long sequence of random numbers faster, you could use SSE/AVX and generate four/eight 32-bit random numbers in parallel.
You say "somewhat random" so I assume you do not need high quality random numbers.
You should probably use a "linear congruential generator" (LCG). See Wikipedia for details:
http://en.wikipedia.org/wiki/Linear_congruential_generator
That will require one addition, one multiplication and one mod function per element.
Your options:
a) /dev/random is not intended to be called frequently. See "man 4 random" for details.
b) rand etc. are like the LCG above, but some use a more sophisticated algorithm that gives better random numbers at a higher computational cost. See "man 3 random" and "man 3 rand" for details.
c) The OS deliberately zeros the memory for security reasons. It stops leakage of data from other processes. Google "demand zero paging" for details.
d) Not a good idea. Use /dev/random or /dev/urandom once, that's what they're for.
Perhaps calling OpenSSL routines, something like the programmatic equivalent of:
openssl rand NUM_BYTES | head -c NUM_BYTES > /dev/null
which should run faster than /dev/random and /dev/urandom.
Here's some test code:
/* randombytes.c */
#include <stdlib.h>
#include <stdio.h>
#include <openssl/rand.h>
/*
  compile with:
  gcc -Wall randombytes.c -o randombytes -lcrypto
*/
int main (int argc, char **argv)
{
    unsigned char *random_bytes = NULL;
    int length = 0;

    if (argc == 2)
        length = atoi(argv[1]);
    else {
        fprintf(stderr, "usage: randombytes number_of_bytes\n");
        return EXIT_FAILURE;
    }

    random_bytes = malloc((size_t)length + 1);
    if (! random_bytes) {
        fprintf(stderr, "could not allocate space for random_bytes...\n");
        return EXIT_FAILURE;
    }

    if (! RAND_bytes(random_bytes, length)) {
        fprintf(stderr, "could not get random bytes...\n");
        return EXIT_FAILURE;
    }

    *(random_bytes + length) = '\0';
    fprintf(stdout, "bytes: %s\n", random_bytes);
    free(random_bytes);

    return EXIT_SUCCESS;
}
Here's how it performs on a Mac OS X 10.7.3 system (1.7 GHz i5, 4 GB), relative to /dev/urandom and OpenSSL's openssl binary:
$ time ./randombytes 100000000 > /dev/null
real 0m6.902s
user 0m6.842s
sys 0m0.059s
$ time cat /dev/urandom | head -c 100000000 > /dev/null
real 0m9.391s
user 0m0.050s
sys 0m9.326s
$ time openssl rand 100000000 | head -c 100000000 > /dev/null
real 0m7.060s
user 0m7.050s
sys 0m0.118s
The randombytes binary is 27% faster than reading bytes from /dev/urandom and about 2% faster than openssl rand.
You could profile other approaches in a similar fashion.
Don't overthink it. dd if=/dev/urandom bs=64k count=1 > random-bytes.bin.