How to predict the rand() function in C? [closed]

I am trying to make an oracle that predicts the next random number in a sequence. I have an array of randomly generated numbers.

The rand() function is a pseudo-random number generator (PRNG). It is not a cryptographically secure source of entropy. If you know the seed, you can completely predict the sequence, since it is deterministic, typically based on a Linear Congruential Generator (LCG). Such generators have a finite period, after which the sequence repeats.
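For reference, here is a minimal sketch of the kind of LCG many C libraries use; the constants come from the sample rand() implementation in the C standard, and real libraries vary:

/* Sample LCG from the C standard (renamed to avoid clashing with libc).
   Same seed -> same sequence, which is why rand() is predictable. */
static unsigned long lcg_state = 1;

int my_rand(void) {
    lcg_state = lcg_state * 1103515245 + 12345;
    return (int)((lcg_state / 65536) % 32768);   /* RAND_MAX = 32767 */
}

void my_srand(unsigned int seed) {
    lcg_state = seed;
}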
If you know the given sequence starts from the beginning of the generator's output, it is trivial to brute-force the seed by searching for one that reproduces the initial sequence. Otherwise, there are statistical methods you can use to narrow down the candidate seeds.
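A minimal brute-force sketch, assuming the target uses your C library's rand() and the captured values (placeholders here) come from the very start of its output:

#include <stdio.h>
#include <stdlib.h>

/* Placeholder observations: replace with the values you captured. */
static const int observed[] = { 1804289383, 846930886, 1681692777 };
#define NOBS (sizeof observed / sizeof observed[0])

int main(void) {
    for (unsigned int seed = 0; seed <= 1000000; seed++) {  /* widen as needed */
        srand(seed);
        size_t i;
        for (i = 0; i < NOBS; i++)
            if (rand() != observed[i])
                break;
        if (i == NOBS) {
            printf("candidate seed: %u\n", seed);
            /* the generator is now positioned to predict the next value */
            printf("predicted next: %d\n", rand());
        }
    }
    return 0;
}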

If you have actual random numbers, there's no way to predict them.
Software can be programmed to use actual random numbers acquired by monitoring random events like background radiation, atomic decay, and electrical noise from various components. This is usually reserved for critical applications like creating cryptographic keys, since the operation blocks until enough random bits have been collected.
Most software uses an algorithm that creates random-looking numbers based on a seed and past events like calls to the PRNG, elapsed time, etc. These are possible to predict with 100% accuracy if you know the algorithm used and all the events it uses for inputs, or have the ability to reset the seed to a known value.

Related

Estimating running time of a "C language" code block: Is it possible? [closed]

I want to write a function in C that checks millions of parameters and returns true if all of them are true, and false otherwise.
However, estimating the running time of this operation is important: we need to know roughly how many milliseconds it takes (an approximate figure is enough) in order to know the throughput of this function.
Note: the parameters are read locally from a file, and we use ordinary computers.
Rather than estimating the time, measure it. Modern CPU architectures perform optimizations so complex that a simple change in data ordering can increase the running time of your function by a factor of six or more.
In your case it is very important to run a realistic benchmark: all parameters that you check need to be placed in memory at the same positions as in the actual program, and your code should be checking them in the same order. This way you would see the effect of caching. Since your function is all-or-nothing, branch prediction would have almost no effect on your code, because prediction would fail at most once before the loop exits.
Since you are reading your parameters from a file, it is important to use the same API in the test program as you plan to use in the actual program. Depending on the platform, I/O APIs may exhibit significant difference in performance, so your test code should test what you plan to use in production.
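As a minimal sketch of such a measurement on a POSIX system (check_all and its parameter array are hypothetical stand-ins for your real check and data):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical stand-in: returns 1 only if every parameter is true. */
static int check_all(const char *params, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (!params[i])
            return 0;
    return 1;
}

int main(void) {
    size_t n = 1000000;              /* "millions of parameters" */
    char *params = malloc(n);
    if (!params) return 1;
    for (size_t i = 0; i < n; i++)
        params[i] = 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int ok = check_all(params, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("result=%d, elapsed=%.3f ms\n", ok, ms);

    free(params);
    return 0;
}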

How to give an estimation of the energy consumed by a program on an ARM platform? [closed]

Is there a way to estimate the energy consumed by a program on an ARM CPU? In embedded systems, energy consumption is one of the most important parameters and I was wondering whether it is possible for a programmer to know approximately how much energy is needed to run the program?
For example, since division on an ARM CPU takes multiple cycles, I imagine that code using division would consume more energy than code that doesn't. But this reasoning is quite intuitive; is there a better way to quantify the energy a CPU consumes when executing code?
I don't think there are any ARM-specific tricks here (and 'ARM' covers umpteen different things anyway). You usually look at the current consumption in the various different power states you use (run, sleep, etc) and then estimate what proportion of time is spent in each state. This lets you calculate average current/power.
It doesn't usually make much sense to say 'this instruction uses a lot of power'; what you might instead care about is 'this sequence of instructions takes a lot of time to run, hence I can't get back to sleep quickly'.
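As a toy illustration of the duty-cycle estimate (all numbers here are made up for the example):

#include <stdio.h>

int main(void) {
    /* Hypothetical current draw and time fraction per power state. */
    double run_mA   = 8.0,   run_frac   = 0.02;  /* active 2% of the time */
    double sleep_mA = 0.015, sleep_frac = 0.98;  /* 15 uA in sleep */

    double avg_mA = run_mA * run_frac + sleep_mA * sleep_frac;
    printf("average current: %.4f mA\n", avg_mA);   /* ~0.175 mA */
    return 0;
}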
The closest you'll get with off-the-shelf tools is something similar to http://ds.arm.com/ds-5/optimize/arm-energy-probe/
Generally, battery-powered systems have fuel gauges that are exposed through sysfs entries and can report how much current is being drawn. Think of it like a smartphone's battery/charge indicator. These are generally not that reliable and are hard to correlate with the exact time of an application run, but they may give you a rough estimate.
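A minimal sketch of reading such a fuel gauge on Linux; the exact sysfs path, battery name ("BAT0" is a placeholder), and units (typically microamps) vary by platform:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/class/power_supply/BAT0/current_now", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    long microamps;
    if (fscanf(f, "%ld", &microamps) == 1)
        printf("current draw: %ld uA\n", microamps);
    fclose(f);
    return 0;
}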

large integer multiplication and addition on gpu [closed]

I'm developing an encryption algorithm on the GPU. This algorithm requires the addition and multiplication of very large integers, with bit lengths of roughly 150,000 bits or more, and the bit lengths vary. What algorithms can be used to perform addition and multiplication of these numbers?
Large integer addition is relatively simple: JackOLantern already provided the link to the post. Basically it's just doing carry propagation via parallel prefix sum.
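For reference, here is the serial schoolbook version of large-integer addition in C; the carry-propagation loop is exactly the dependency that the parallel prefix-sum formulation removes on the GPU:

#include <stdint.h>
#include <stddef.h>

/* Adds two little-endian big integers of n 32-bit limbs each;
   returns the final carry out. */
uint32_t bigint_add(uint32_t *r, const uint32_t *a,
                    const uint32_t *b, size_t n) {
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s = (uint64_t)a[i] + b[i] + carry;
        r[i] = (uint32_t)s;
        carry = s >> 32;     /* serial dependency: limb i+1 needs this */
    }
    return (uint32_t)carry;
}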
For large-integer multiplication on CUDA, two approaches come to mind:
convert the integers to RNS (Residue Number System): multiplication and addition can then be done in parallel and carry-free, as long as the RNS base is large enough (a toy example follows this answer). Whenever you need to compare numbers, you can convert them to a mixed-radix system (see, e.g., How to Convert from a Residual Number System to a Mixed Radix System?). Finally, you can use the CRT (Chinese Remainder Theorem) to convert the numbers back to a positional number system
implement large-integer multiplication directly using FFT, since multiplication can be viewed as an acyclic convolution of sequences (150 Kbits is not that much for FFT but can already give you some speedup). Note that GNU MP only switches to FFT multiplication routines at around 1 Mbit or more. For multiplication via FFT there are again two options:
use double-precision floating-point FFT and encode the large-integer bits into the mantissa (easier to implement)
use the so-called Number-Theoretic Transform (an FFT over a finite field)
Anyway, there is a lot of theory behind these things. You can also check my paper on FFT multiplication in CUDA, and there are many research papers on this subject, especially in the cryptography field.
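Here is the toy RNS example promised above: plain C with tiny moduli, just to show that multiplication becomes independent (and thus parallelizable) per modulus:

#include <stdio.h>

int main(void) {
    /* Pairwise coprime moduli; their product (1001) must exceed any result. */
    const unsigned m[3] = { 7, 11, 13 };
    unsigned x = 25, y = 17;                 /* x * y = 425 < 1001 */
    unsigned rp[3];

    for (int i = 0; i < 3; i++)
        rp[i] = ((x % m[i]) * (y % m[i])) % m[i];  /* independent per modulus */

    /* Prints 5 7 9, i.e. 425 mod 7, mod 11, mod 13. */
    printf("residues of x*y: %u %u %u\n", rp[0], rp[1], rp[2]);
    return 0;
}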

C: How to get a 4096 bit prime number? [closed]

How do I get a random, really big (e.g. 4096-bit) prime number in C?
Does anyone know a good library for this?
Your best bet is libgmp.
It has a function, mpz_nextprime, that scans for the next prime number (using Miller-Rabin) starting from a given number:
void mpz_nextprime (mpz_t rop, const mpz_t op);
It sets rop to the next prime greater than op. The function uses a probabilistic algorithm to identify primes; for practical purposes this is adequate, since the chance of a composite passing is extremely small.
You just roll a random number with as many bits as you need and then call mpz_nextprime. Since the average gap between primes near n is about ln(n), the expected number of candidates tested is on the order of log(op).
You will also need one of GMP's random number generators (e.g. gmp_randinit_default).
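Putting it together, a minimal sketch with GMP (the fixed seed is a placeholder; for real keys, seed from a cryptographically strong source):

#include <gmp.h>
#include <stdio.h>

int main(void) {
    gmp_randstate_t state;
    gmp_randinit_default(state);
    gmp_randseed_ui(state, 12345);   /* placeholder seed, not secure */

    mpz_t p;
    mpz_init(p);
    mpz_urandomb(p, state, 4096);    /* uniform random below 2^4096 */
    mpz_setbit(p, 4095);             /* force the top bit: full 4096-bit width */
    mpz_nextprime(p, p);             /* next probable prime above it */

    gmp_printf("%Zd\n", p);          /* compile with: gcc prime.c -lgmp */

    mpz_clear(p);
    gmp_randclear(state);
    return 0;
}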
Generally you generate a large random number, using a strong random number generator (e.g. on Windows use CryptGenRandom), then apply some checks to determine whether it is likely to be prime.
The straightforward way to check that it really is prime is trial division: try dividing by every integer from 2 up to the square root of the candidate; if any of them divides it with no remainder, it's not prime. Since that would take an infeasibly long time for numbers this large (that's the whole point of using really big primes), the tests actually used are far cheaper probabilistic ones, which make it overwhelmingly unlikely that the number has undetected factors.
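For illustration, trial division looks like this; it is fine for 64-bit numbers but hopeless for 4096-bit candidates, which is why probabilistic tests are used instead:

#include <stdbool.h>
#include <stdint.h>

/* Trial division up to sqrt(n); d <= n / d avoids overflow in d * d. */
bool is_prime(uint64_t n) {
    if (n < 2)
        return false;
    for (uint64_t d = 2; d <= n / d; d++)
        if (n % d == 0)
            return false;
    return true;
}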
If you're implementing software that uses encryption, I strongly recommend that you use a NIST-certified cryptographic library or module to generate your keys and do the encryption.

Are there any algorithms for scrambling a word? [closed]

I am trying to make a word scrambler and am wondering if there are any algorithms I should use or if I should just build it from scratch. Any pointers would be helpful!
The standard algorithm for producing a random permutation of a sequence of elements (in your case, the letters of a word) is the Fisher-Yates shuffle, which produces a truly random permutation in linear time. The algorithm is well established, and many standard libraries provide implementations (for example, C++'s std::random_shuffle is typically implemented this way), so you may be able to find a prewritten implementation. If not, the algorithm is extremely easy to implement; here's some pseudocode:
for each index i = 0 to n - 1, inclusive:
    choose a random index j in the range i to n - 1, inclusive
    swap A[i] and A[j]
Be careful when implementing this: the random index j must be drawn from the range i to n - 1, not 0 to n - 1; drawing from the full range at every step produces a nonuniform distribution of permutations (you can read more about that in this earlier question).
Hope this helps!
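A minimal C version of the pseudocode above (rand() % range has slight modulo bias, which is acceptable for a word scrambler but not for serious use):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Fisher-Yates shuffle of the letters of a word, in place. */
void scramble(char *word) {
    size_t n = strlen(word);
    for (size_t i = 0; i + 1 < n; i++) {
        size_t j = i + (size_t)rand() % (n - i);  /* j in [i, n-1] */
        char tmp = word[i];
        word[i] = word[j];
        word[j] = tmp;
    }
}

int main(void) {
    char word[] = "scrambler";
    srand((unsigned int)time(NULL));
    scramble(word);
    printf("%s\n", word);
    return 0;
}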
Go with the Knuth Shuffle (AKA the Fisher–Yates Shuffle). It has the desirable feature of ensuring that every permutation of the set is equally likely. Here's a link to an implementation in C (along with implementations in other languages) that works on arbitrarily sized objects.
