Extracting base K random “bits” from a pre-filled random buffer - c

Say I need N cryptographically-secure pseudorandom integers within the range [0, K). The most obvious way of achieving this would be N calls to arc4random_uniform(3), and this is exactly what I’m doing.
However, the profiler tells me that the numerous calls to arc4random_uniform(3) take two thirds of the whole execution time, and I really need to make my code faster. This is why I'm planning to generate some random bytes in advance (probably with arc4random_buf(3)) and subsequently extract values from the buffer bit by bit.
For K = 2, I can simply mask the desired bit out, but when K is not a power of 2, things get hairy. Surely I can use a bunch of %= and /= operations, but then I would have modulo bias. Another problem is that when N grows too large, I can no longer interpret the whole buffer as one integer and perform arithmetic operations on it.
In case it’s relevant, K would be less than 20, whereas N can be really large, like millions.

You can use the modulus operator and division; you just need a bit of extra preprocessing. Generate your array of values as usual. Take P to be the largest power of K less than or equal to 2^32 (where ^ denotes exponentiation), and iterate over your array making sure all random values are strictly less than P. Replace any that aren't with a new random number that is less than P. This removes the bias.
Now, to handle large N, you'll need two loops. The first loop iterates over the elements in the array; the second extracts multiple random numbers from each element. If P = K^e, then you can extract e random numbers in [0, K) from each element in the array. Each time you extract a random number from an element, do a floored division by K on that element.
Of course, this doesn't necessarily need to be written as actual loops. You can store two variables (array index, sub-element index) and extract from the array_index-th element each time the function gets called. If sub_element_index == e, reset it to zero and increase array_index, then extract a random number from that array element and return it.
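A minimal C sketch of this scheme (the buffer size, the helper name fill_digits, and K = 7 are illustrative choices of mine; arc4random_buf(3) is assumed available, as on BSD/macOS):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdlib.h>    /* arc4random_buf(3) on BSD/macOS */

    enum { K = 7 };        /* any K in [2, 20) works the same way */

    /* Fill out[0..n-1] with unbiased base-K digits. */
    void fill_digits(uint8_t *out, size_t n) {
        /* P = K^e = largest power of K representable in 32 bits */
        uint64_t p = 1;
        int e = 0;
        while (p * K <= (1ULL << 32)) { p *= K; e++; }

        uint32_t buf[1024];
        size_t avail = 0, i = 0, produced = 0;

        while (produced < n) {
            if (avail == 0) {             /* refill in bulk */
                arc4random_buf(buf, sizeof buf);
                avail = sizeof buf / sizeof buf[0];
                i = 0;
            }
            uint64_t w = buf[i++];
            avail--;
            if (w >= p)                   /* reject: removes modulo bias */
                continue;
            for (int d = 0; d < e && produced < n; d++) {
                out[produced++] = (uint8_t)(w % K);   /* one base-K digit */
                w /= K;
            }
        }
    }

Note that for K = 2 the computed P is exactly 2^32, so the rejection branch never fires and every word yields 32 bits, matching the masking special case.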

Is there any good algorithm implementation that can find nearest negative elements from specific array elements?

I am currently working in C. Given the current index as an int variable idx, I need to find the nearest negative element in the range [0, idx-1] of the array.
For example, if the array is 1 -2 3 -4 5 6 and idx is 5 (so array[idx] is 6), the function has to return 3, since -4 at index 3 is the nearest negative element before array[idx].
I know how to solve this problem linearly, like
    for (int i = idx - 1; i >= 0; i--) {
        if (array[i] < 0) return i;
    }
but I want to know a faster algorithm (one with lower time complexity), because I am currently working on big arrays with more than a million elements. Can somebody help?
You can create a bit array once, setting one bit for each index that contains a negative number. If you store this bit array as 64-bit unsigned integers, you can check 64 indexes at the same time.
If you have 100 million entries, and only one in 10,000 is negative, you can create a second bit array, setting one bit for each nonzero 64-bit word of the first array. Checking one element of that second array then lets you check 4096 entries simultaneously.
It's a bit more code of course. It's faster when negative numbers are rare.
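A rough single-level sketch of the idea (the two-level variant repeats the same trick on the neg[] words; __builtin_clzll assumes GCC or Clang):

    #include <stdint.h>
    #include <stddef.h>

    /* Set bit i of neg[] for every negative a[i]; neg[] must hold
     * (n + 63) / 64 zero-initialized words. */
    void build_neg_bits(const int *a, size_t n, uint64_t *neg) {
        for (size_t i = 0; i < n; i++)
            if (a[i] < 0) neg[i >> 6] |= 1ULL << (i & 63);
    }

    /* Nearest negative index strictly before idx, or -1 if none. */
    int find_prev_neg(const uint64_t *neg, int idx) {
        if (idx <= 0) return -1;
        int w = (idx - 1) >> 6;
        /* mask off bit positions >= idx within the topmost word */
        uint64_t bits = neg[w] & ((idx & 63) ? (1ULL << (idx & 63)) - 1
                                             : ~0ULL);
        while (bits == 0) {
            if (w == 0) return -1;
            bits = neg[--w];
        }
        return w * 64 + 63 - __builtin_clzll(bits);  /* highest set bit */
    }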
If you have to do this while iterating through the array, that makes it much easier: just remember the last result. Say you found that the negative number closest to index 500,000 was at index 493,005. Now you want the negative number closest to index 500,001. Where could it be? It could be at index 500,000, and if that number is not negative, then it is again at index 493,005. This is trivial to calculate for all i in O(n) total, not O(n^2).
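A sketch of that running scan (the helper name is mine):

    /* out[i] = index of the nearest negative element before i, or -1. */
    void nearest_prev_negative(const int *a, int n, int *out) {
        int last = -1;                /* no negative seen yet */
        for (int i = 0; i < n; i++) {
            out[i] = last;
            if (a[i] < 0) last = i;
        }
    }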
O(N) is the best that can be done without additional information. Consider an array that contains no negative values. The only way to determine that is to visit the entire array.
If the array is very large and negative numbers are sparse, you might get faster execution by OR-ing chunks of 8 or 16 numbers and comparing the result to 0. A positive or zero result means none of these numbers is negative; a negative result means there is at least one, which you can then find with a simpler loop (no boundary condition).
This method produces fewer tests and OR-ing blocks of array elements can compile to vectorized code, so the performance should be better, but the complexity stays the same: linear time. Careful benchmarking will tell if this is worthwhile given your data sets.
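A sketch of the chunked scan with blocks of 8: in two's complement, OR preserves the sign bit, so a negative OR means the chunk contains a negative value.

    /* Nearest negative index strictly before idx, or -1 if none. */
    int find_prev_negative(const int *a, int idx) {
        int i = idx - 1;
        while (i >= 7) {              /* whole chunks of 8 */
            int m = a[i] | a[i-1] | a[i-2] | a[i-3]
                  | a[i-4] | a[i-5] | a[i-6] | a[i-7];
            if (m < 0) break;         /* a negative hides in a[i-7..i] */
            i -= 8;
        }
        for (; i >= 0; i--)           /* simple fallback scan */
            if (a[i] < 0) return i;
        return -1;
    }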

Finding first duplicated element in linear time [duplicate]

There is an array of size n and the elements contained in the array are between 1 and n-1 such that each element occurs once and just one element occurs more than once. We need to find this element.
Though this is a very frequently asked question, I still haven't found a proper answer. Most suggestions are that I should add up all the elements in the array and then subtract the sum of all the indices from it, but this won't work if the number of elements is very large: it will overflow. There have also been suggestions regarding the use of XOR, as in dup = dup ^ arr[i] ^ i, which are not clear to me.
I have come up with this algorithm which is an enhancement of the addition algorithm and will reduce the chances of overflow to a great extent!
    int sum = 0;
    for (int i = 0; i < n; i++) {
        int diff = A[i] - i;    /* the sums of A[i] and of i differ by the duplicate */
        sum += diff;
    }
After the loop, sum contains the duplicate element, but using this method I am unable to find the index of the duplicate element. For that I need to traverse the array once more, which is not desirable. Can anyone come up with a better solution that does not involve the addition method, or explain how the XOR method works in O(n)?
There are many ways that you can think about this problem, depending on the constraints of your problem description.
If you know for a fact that exactly one element is duplicated, then there are many ways to solve this problem. One particularly clever solution is to use the bitwise XOR operator. XOR has the following interesting properties:
1. XOR is associative, so (x ^ y) ^ z = x ^ (y ^ z).
2. XOR is commutative: x ^ y = y ^ x.
3. XOR is its own inverse: x ^ y = 0 iff x = y.
4. XOR has zero as an identity: x ^ 0 = x.
Properties (1) and (2) here mean that when taking the XOR of a group of values, it doesn't matter what order you apply the XORs to the elements. You can reorder the elements or group them as you see fit. Property (3) means that if you XOR the same value together multiple times, you get back zero, and property (4) means that if you XOR anything with 0 you get back your original number. Taking all these properties together, you get an interesting result: if you take the XOR of a group of numbers, the result is the XOR of all numbers in the group that appear an odd number of times. The reason for this is that when you XOR together numbers that appear an even number of times, you can break the XOR of those numbers up into a set of pairs. Each pair XORs to 0 by (3), and the combined XOR of all these zeros gives back zero by (4). Consequently, all the numbers of even multiplicity cancel out.
To use this to solve the original problem, do the following. First, XOR together all the numbers in the list. This gives the XOR of all numbers that appear an odd number of times, which ends up being all the numbers from 1 to (n-1) except the duplicate. Now, XOR this value with the XOR of all the numbers from 1 to (n-1). This then makes all numbers in the range 1 to (n-1) that were not previously canceled out cancel out, leaving behind just the duplicated value. Moreover, this runs in O(n) time and only uses O(1) space, since the XOR of all the values fits into a single integer.
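In C, the whole approach fits in a few lines (function name mine; a[] holds the n values drawn from 1..n-1):

    int find_duplicate_xor(const int *a, int n) {
        int x = 0;
        for (int i = 0; i < n; i++)
            x ^= a[i];                /* XOR of all array entries */
        for (int v = 1; v <= n - 1; v++)
            x ^= v;                   /* cancel each of 1..n-1 once */
        return x;                     /* only the duplicate survives */
    }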
In your original post you considered an alternative approach that works by using the fact that the sum of the integers from 1 to n-1 is n(n-1)/2. You were concerned, however, that this would lead to integer overflow and cause a problem. You are right that on most machines this would cause an overflow, but it is not a problem, because arithmetic is done using fixed-precision integers, commonly 32-bit integers. When an integer overflow occurs, the resulting number is not meaningless. Rather, it's just the value that you would get if you computed the actual result, then dropped off everything but the lowest 32 bits. Mathematically speaking, this is known as modular arithmetic, and the operations in the computer are done modulo 2^32. More generally, though, let's say that integers are stored modulo k for some fixed k.
Fortunately, many of the arithmetical laws you know and love from normal arithmetic still hold in modular arithmetic. We just need to be more precise with our terminology. We say that x is congruent to y modulo k (denoted x ≡ y (mod k)) if x and y leave the same remainder when divided by k. This is important when working on a physical machine, because when an integer overflow occurs on most hardware, the resulting value is congruent to the true value modulo k, where k depends on the word size. Fortunately, the following laws hold true in modular arithmetic:
1. If x ≡ y (mod k) and w ≡ z (mod k), then x + w ≡ y + z (mod k).
2. If x ≡ y (mod k) and w ≡ z (mod k), then xw ≡ yz (mod k).
This means that if you want to compute the duplicate value by finding the total sum of the elements of the array and subtracting out the expected total, everything will work out fine even if there is an integer overflow because standard arithmetic will still produce the same values (modulo k) in the hardware. That said, you could also use the XOR-based approach, which doesn't need to consider overflow at all. :-)
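A sketch of that summation variant; unsigned types matter here, since unsigned overflow is defined in C to wrap (mod 2^32 for uint32_t) while signed overflow is undefined behavior:

    #include <stdint.h>

    uint32_t find_duplicate_sum(const uint32_t *a, uint32_t n) {
        uint32_t total = 0;
        for (uint32_t i = 0; i < n; i++)
            total += a[i];                      /* wraps mod 2^32 */
        /* expected sum 1 + 2 + ... + (n-1), reduced mod 2^32 */
        uint32_t expected = (uint32_t)((uint64_t)n * (n - 1) / 2);
        return total - expected;                /* also mod 2^32 */
    }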
If you are not guaranteed that exactly one element is duplicated, but you can modify the array of elements, then there is a beautiful algorithm for finding the duplicated value. This earlier SO question describes how to accomplish this. Intuitively, the idea is that you can try to sort the sequence using a bucket sort, where the array of elements itself is recycled to hold the space for the buckets as well.
If you are not guaranteed that exactly one element is duplicated, and you cannot modify the array of elements, then the problem is much harder. This is a classic (and hard!) interview problem that reportedly took Don Knuth 24 hours to solve. The trick is to reduce the problem to an instance of cycle-finding by treating the array as a function from the numbers 1-n onto 1-(n-1) and then looking for two inputs to that function that produce the same output. The resulting algorithm, called Floyd's cycle-finding algorithm, is extremely beautiful and simple. Interestingly, it's the same algorithm you would use to detect a cycle in a linked list in linear time and constant space. I'd recommend looking it up, since it periodically comes up in software interviews.
For a complete description of the algorithm along with an analysis, correctness proof, and Python implementation, check out this implementation that solves the problem.
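A sketch of that reduction for this exact setup: the values lie in 1..n-1, so every a[i] is a valid index, and index 0, which nothing maps to, is a safe starting point.

    int find_duplicate_floyd(const int *a) {
        int slow = 0, fast = 0;
        do {                          /* phase 1: find a meeting point */
            slow = a[slow];
            fast = a[a[fast]];
        } while (slow != fast);
        fast = 0;                     /* phase 2: walk to the cycle entrance */
        while (slow != fast) {
            slow = a[slow];
            fast = a[fast];
        }
        return slow;                  /* the entrance is the duplicate */
    }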
Hope this helps!
Adding the elements is perfectly fine; you just have to reduce the intermediate aggregate mod (%) some value when calculating both the sum of the elements and the expected sum. For the modulus you can use something like 2n. You also have to fix up the value after the subtraction: if it comes out negative, add the modulus back.

Is it correct to use a table of interpolated prime-counting function `pi(x)` values as an upper bound for an array of primes?

Suppose I want to allocate an array of integers to store all the prime numbers less than some N. I would then need an estimate for the array size, E(N). There is a mathematical function that gives the exact number of primes below N: the prime-counting function pi(n). However, it looks impossible to express this function in terms of elementary functions.
There exist some approximations to the function, but all of them are asymptotic approximations, so they can be either above or below the true number of primes and cannot in general be used as the estimate E(N).
I've tried to use tabulated values of pi(n) for certain n, like powers of two, and to interpolate between them. However, I noticed that the function pi(n) is convex, so interpolation between sparse table points may accidentally yield values of E(n) below the true pi(n), which could result in a buffer overflow.
I then decided to exploit the monotonic nature of pi(n) and use the table value pi(2^(n+1)) as a far upper estimate for E(2^n), interpolating between these shifted values instead.
I still don't feel completely sure that for some 2^n < X < 2^(n+1), an interpolation between pi(2^(n+1)) and pi(2^(n+2)) is a safe upper estimate. Is it correct? How do I prove it?
You are overthinking this. In C, you just use malloc and realloc. I'd 100 times prefer an algorithm that just obviously works instead of one that requires a deep mathematical proof.
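A minimal sketch of that grow-as-you-go approach, doubling the array as primes are found (trial division just keeps the example short; it is not the fastest way to generate primes):

    #include <stdlib.h>

    /* Store all primes < n in a growable array; returns the count. */
    size_t collect_primes(unsigned n, unsigned **out) {
        size_t cap = 16, cnt = 0;
        unsigned *p = malloc(cap * sizeof *p);
        if (!p) return 0;
        for (unsigned x = 2; x < n; x++) {
            int is_prime = 1;
            for (size_t i = 0; i < cnt && p[i] <= x / p[i]; i++)
                if (x % p[i] == 0) { is_prime = 0; break; }
            if (!is_prime) continue;
            if (cnt == cap) {             /* grow: amortized O(1) */
                unsigned *q = realloc(p, 2 * cap * sizeof *p);
                if (!q) { free(p); return 0; }
                p = q;
                cap *= 2;
            }
            p[cnt++] = x;
        }
        *out = p;
        return cnt;
    }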
Use an upper bound. There are a number of bounds to choose from, each more complicated but tighter. I call this prime_count_upper(n), since you want a value guaranteed to be greater than or equal to the number of primes below n. See Chebyshev, Rosser and Schoenfeld, Dusart 1999, Dusart 2010, Axler 2014, and Büthe 2015. R&S is simple and not terrible: π(x) <= x/(log(x) - 3/2) for x >= 67, but Dusart gives better ones for larger values. Either way, no tables or original research are needed.
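For instance, the Rosser and Schoenfeld bound quoted above translates directly to code (log is the natural logarithm; the fallback 19 = pi(67) safely covers x < 67):

    #include <math.h>
    #include <stddef.h>

    /* Guaranteed >= pi(x): x/(ln x - 3/2) for x >= 67 (Rosser & Schoenfeld). */
    size_t prime_count_upper(double x) {
        if (x < 67) return 19;                    /* pi(x) <= pi(66) = 18 */
        return (size_t)(x / (log(x) - 1.5)) + 1;  /* +1 guards rounding */
    }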
The prime number theorem guarantees the nth prime P(n) is in the range n log n < P(n) < n log n + n log log n for n > 5. As DanaJ suggests, tighter bounds can be computed.
If you want to store the primes in an array, you can't be talking about anything too big. As you suggest, there is no direct computation of pi(n) in terms of elementary arithmetic functions, but there are several methods for computing pi(n) exactly that aren't too hard, as long as n isn't too big. See this, for instance.

Algorithm to find count of numbers between 2 integers where digits do not repeat

I'm looking for an algorithm to find all numbers between two integers such that the digits in each number don't repeat.
For example, given an input of 2 and 12, the answer would be all numbers in that range except 11.
The naive solution would be to iterate over the numbers and check whether any digit repeats. However, for big numbers, this approach would take a huge amount of time.
I need to find the count of such numbers between the two given large numbers.
Another method I thought of was to take an array a[10] of size 10, where each index stores the frequency of the corresponding digit in a given number between the limits; if any index's frequency exceeds 1, that number is discarded.
I would repeat this for all the numbers between the limits, each time resetting the indexes of array 'a' to 0.
But this method too will take huge computation time for large inputs (such as when the limits are 1 to 10^9).
I need a still better method.
I will not give you the exact solution but will try to give you some tips on how to approach similar problems.
First of all: whenever you have a problem of the type "find the count of numbers between a and b", it is (almost) always easier to implement a solution that gives you the answer for the interval (0, x] for a given x. For instance, if you want to count the numbers in the interval [a, b] and you implement a function f that returns the answer for the interval (0, x] for all non-negative x, then you can compute the answer for [a, b] as f(b) - f(a - 1). Trust me, this saves a lot of corner-case checking and is usually also faster to implement.
Having said this, try to think how you can count the numbers that don't have a repeating digit in the interval (0, a]. What I would suggest is to compute the answer separately for each fixed count of digits. For the numbers having fewer digits than a, this is pretty straightforward: simply compute a variation (a count of k-permutations of the ten digits). For the numbers with as many digits as a, it is a bit trickier; I believe it is easiest to count them using dynamic programming.
Hope this helps and hope it is not too detailed so that you still have to solve something on your own.
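For reference, a compact sketch of the f(b) - f(a-1) counting scheme described above, for numbers with pairwise-distinct digits (function names are mine):

    #include <stdint.h>
    #include <stdio.h>

    /* p(a, k) = a * (a-1) * ... * (a-k+1): permutations of k out of a */
    static uint64_t perm(int a, int k) {
        uint64_t r = 1;
        while (k-- > 0) r *= (uint64_t)a--;
        return r;
    }

    /* Count integers in (0, x] whose decimal digits are all distinct. */
    static uint64_t count_upto(uint64_t x) {
        if (x == 0) return 0;
        char s[21];
        int len = snprintf(s, sizeof s, "%llu", (unsigned long long)x);
        uint64_t total = 0;

        /* numbers with fewer digits: leading digit 1-9, rest distinct */
        for (int d = 1; d < len; d++)
            total += 9 * perm(9, d - 1);

        /* len-digit numbers <= x: fix an ever longer prefix of x */
        int used = 0;                        /* bitmask of prefix digits */
        for (int i = 0; i < len; i++) {
            int cur = s[i] - '0';
            for (int c = (i == 0 ? 1 : 0); c < cur; c++)
                if (!(used & (1 << c)))      /* place c, fill rest freely */
                    total += perm(9 - i, len - 1 - i);
            if (used & (1 << cur)) return total;  /* prefix repeats: stop */
            used |= 1 << cur;
        }
        return total + 1;                    /* x itself is distinct-digit */
    }

    /* The count for [a, b] is then count_upto(b) - count_upto(a - 1). */

For the example above, count_upto(12) - count_upto(1) = 11 - 1 = 10, which is every number in [2, 12] except 11.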
You're looking for an algorithm to count permutations of digits. Given integer A with m digits and integer B with n digits (m <= n), you need to:
1. For each length i = m, ..., n-1:
if i <= 10, choose i digits out of 10;
permute through the digits of length i (if i = m, discard permutations that are less than A).
2. Permute digits of length n, as long as the result isn't larger than B.
Choosing and permuting are combinatorial operations; you can easily find mathematical formulas for them (recursive versions are available as well). For example, here is a link describing permutations: http://en.wikipedia.org/wiki/Permutation
The complexity will be O((n+1)!)
