I am trying to make a word scrambler and am wondering if there are any algorithms I should use or if I should just build it from scratch. Any pointers would be helpful!
The standard algorithm for finding a random permutation of a sequence of elements (or, in your case, the letters of a word) is the Fisher-Yates shuffle, which produces a truly uniform random permutation in linear time. The algorithm is well established and many standard libraries provide implementations (for example, the C++ std::shuffle algorithm, and the older std::random_shuffle, are typically implemented with it), so you may be able to find a prewritten implementation. If not, it is extremely easy to implement yourself; here's some pseudocode for it:
for each index i = 0 to n - 1, inclusive:
    choose a random index j in the range i to n - 1, inclusive
    swap A[i] and A[j]
Be careful when implementing this that, when picking the random index j, you do not pick it from the range 0 to n - 1 inclusive; doing so produces a nonuniform distribution of permutations (you can read more about that in this earlier question).
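For concreteness, here is a minimal C sketch of the shuffle applied to a word. It uses rand(), which is fine for a word scrambler but not for anything security-sensitive, and it ignores the slight modulo bias of rand() % range for simplicity:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Fisher-Yates shuffle: each position i is swapped with a random
 * position j drawn from [i, n-1], never from [0, n-1]. */
static void scramble(char *word)
{
    size_t n = strlen(word);
    for (size_t i = 0; i + 1 < n; i++) {
        size_t j = i + (size_t)(rand() % (n - i));  /* j in [i, n-1] */
        char tmp = word[i];
        word[i] = word[j];
        word[j] = tmp;
    }
}

int main(void)
{
    char word[] = "scrambler";
    srand((unsigned)time(NULL));
    scramble(word);
    printf("%s\n", word);
    return 0;
}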
Hope this helps!
Go with the Knuth Shuffle (AKA the Fisher–Yates Shuffle). It has the desirable feature of ensuring that every permutation of the set is equally likely. Here's a link to an implementation in C (along with implementations in other languages) that works on arbitrarily sized objects.
I am trying to make an oracle which predicts the next random number in a sequence. I have an array of randomly generated numbers.
The rand() function is a Pseudo-Random Number Generator (PRNG). It is not a cryptographically secure source of entropy. If you know the seed, you can completely predict the sequence, because it is deterministic and typically based on a Linear Congruential Generator (LCG). Such generators have a finite period length, after which they repeat.
If you know that the given sequence starts from the beginning, it would be trivial to brute-force the seed by searching for a matching initial sequence. Otherwise, there are statistical methods you could use to narrow down the potential seeds.
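To illustrate the brute-force idea, here is a minimal C sketch. It assumes the observed numbers came from C's rand() seeded via srand(); for demo purposes it generates the "unknown" prefix itself and only searches a small seed range, whereas a real search would cover the full seed space (often 32 bits).

#include <stdio.h>
#include <stdlib.h>

#define PREFIX_LEN 8

int main(void)
{
    /* Pretend these values came from a seed we do not know. */
    unsigned secret = 4242;            /* hypothetical "unknown" seed */
    int observed[PREFIX_LEN];
    srand(secret);
    for (int i = 0; i < PREFIX_LEN; i++)
        observed[i] = rand();

    /* Try candidate seeds until the generated prefix matches. */
    for (unsigned seed = 0; seed <= 1000000; seed++) {
        srand(seed);
        int match = 1;
        for (int i = 0; i < PREFIX_LEN && match; i++)
            match = (rand() == observed[i]);
        if (match) {
            printf("recovered seed: %u\n", seed);
            break;
        }
    }
    return 0;
}

Once a matching seed is found, reseeding with it and skipping the known prefix predicts every subsequent output.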
If you have actual random numbers, there's no way to predict them.
Software can be programmed to use actual random numbers acquired by monitoring random physical events such as background radiation, atomic decay, or electrical noise from various components. This is usually reserved for critical applications like creating cryptographic keys, since the operation blocks until enough random bits have been collected.
Most software instead uses an algorithm that creates random-looking numbers based on a seed and past events such as previous calls to the PRNG, elapsed time, etc. These are possible to predict with 100% accuracy if you know the algorithm used and all the events it uses as inputs, or if you can reset the seed to a known value.
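A tiny demonstration of that determinism: reseeding C's rand() with the same value reproduces exactly the same "random" stream.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    srand(42);
    int first = rand(), second = rand();

    srand(42);                                   /* reset to the same state */
    printf("%d %d\n", first == rand(), second == rand());   /* prints: 1 1 */
    return 0;
}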
The Warshall-Floyd algorithm is based essentially on this idea: exploit a relationship between a problem and a simpler (rather than smaller) version of it. Warshall and Floyd published their algorithms without mentioning dynamic programming. Nevertheless, the algorithms certainly have a dynamic programming flavor and have come to be considered applications of this technique.
ALGORITHM Warshall(A[1..n, 1..n])
// Implements Warshall's algorithm for computing the transitive closure
// Input: The adjacency matrix A of a digraph with n vertices
// Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
return R(n)
We can speed up the above implementation of Warshall's algorithm for some inputs by restructuring its innermost loop.
My questions on the above text are the following:
1. What does the author mean by the idea "exploit a relationship between a problem and its simpler rather than smaller version"? Please elaborate.
2. How can we improve the speed of the above implementation, as the author mentions?
The formulation from 1. means that the shortest-path problem (which can be seen as a generalization of the transitive-closure problem) has the optimal substructure property; however, there is no formal description of this property (in the sense of a mathematical definition). The optimal substructure property is necessary for a problem to be amenable to dynamic programming.
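As for question 2, one common restructuring (a sketch of what the author may have in mind, not necessarily the intended textbook answer) is to test R[i, k] once before entering the innermost loop: if it is 0, the disjunction cannot change anything in row i during pass k, so the j loop can be skipped. The C sketch below updates the matrix in place, which is safe here because row k and column k do not change during pass k.

#include <stdbool.h>
#include <string.h>

#define N 4   /* number of vertices; adjust as needed */

/* Transitive closure by Warshall's algorithm, skipping the innermost
 * loop whenever r[i][k] is false. */
void warshall(bool r[N][N], const bool a[N][N])
{
    memcpy(r, a, sizeof(bool[N][N]));
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++) {
            if (!r[i][k])          /* no path from i through k: skip row */
                continue;
            for (int j = 0; j < N; j++)
                if (r[k][j])
                    r[i][j] = true;
        }
}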
I'm developing an encryption algorithm on the GPU. The algorithm requires the addition and multiplication of very large integers, with bit lengths estimated at 150,000 bits or more, and the bit lengths vary from number to number. What algorithms can be used to perform addition and multiplication of these numbers? Any pointers would be appreciated. Thank you.
Large-integer addition is relatively simple: JackOLantern already provided a link to the relevant post. Basically it is just carry propagation via a parallel prefix sum.
For large-integer multiplication on CUDA, two approaches come to mind:
convert the integers to RNS (Residue Number System): multiplication and addition can then be done in parallel, residue by residue (as long as the RNS base is large enough). Whenever you need to compare numbers you can convert them to a mixed-radix system (see, e.g., How to Convert from a Residual Number System to a Mixed Radix System?). Finally, you can use the CRT (Chinese Remainder Theorem) to convert the numbers back to a positional number system. A toy sketch of this representation appears below, after the list.
implement large-integer multiplication directly using the FFT, since multiplication can be viewed as an acyclic convolution of digit sequences (150 Kbit is not that much for an FFT but can already give you some speedup; note that GNU MP only switches to FFT multiplication routines at around 1 Mbit or more). For multiplication via FFT there are two options:
use a double-precision floating-point FFT and encode the large-integer digits into the mantissas (easier to implement)
use the so-called Number-Theoretic Transform (an FFT over a finite field)
Anyway, there is a lot of theory behind these things. You can also check my paper on FFT multiplication in CUDA, and there are many research papers on this subject, especially in the cryptography field.
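As a toy illustration of the RNS approach from the first bullet, here is a small C sketch with three tiny moduli. A real GPU implementation would use many machine-word-sized, pairwise coprime moduli and a proper CRT reconstruction instead of the naive search used here.

#include <stdio.h>
#include <stdint.h>

/* Three small, pairwise coprime moduli; their product (1001) bounds the
 * values we can represent. */
static const uint64_t M[3] = { 7, 11, 13 };

/* Convert x to its residue (RNS) representation. */
static void to_rns(uint64_t x, uint64_t r[3])
{
    for (int i = 0; i < 3; i++)
        r[i] = x % M[i];
}

/* Multiply two RNS numbers: each residue channel is independent,
 * which is what makes the representation attractive for parallel hardware. */
static void rns_mul(const uint64_t a[3], const uint64_t b[3], uint64_t c[3])
{
    for (int i = 0; i < 3; i++)
        c[i] = (a[i] * b[i]) % M[i];
}

/* Convert back via the Chinese Remainder Theorem (naive search, demo only). */
static uint64_t from_rns(const uint64_t r[3])
{
    for (uint64_t x = 0; x < 7 * 11 * 13; x++)
        if (x % M[0] == r[0] && x % M[1] == r[1] && x % M[2] == r[2])
            return x;
    return 0;
}

int main(void)
{
    uint64_t a[3], b[3], c[3];
    to_rns(17, a);
    to_rns(23, b);
    rns_mul(a, b, c);
    printf("%llu\n", (unsigned long long)from_rns(c));  /* prints 391 = 17 * 23 */
    return 0;
}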
Given a number, how can we efficiently recognize whether it is bleak or supported by some number?
Given an array of numbers, how can we efficiently check whether each number is supported within the array, and mark it bleak if it is not?
Brute force: find the binary equivalent, count the number of 1's, and search for the result in the array.
About Bleak and supported numbers:
For each number, count the number of ones in its own binary representation, and add this count to itself to obtain the value of the number it supports. That is, if j is the number of ones in the binary representation of m, then m supports m+j.
Example: the number eight (1000 in binary) supports nine, whereas nine supports eleven.
However, not all numbers get supported this way; some are left without support, and these are called bleak. For example, since one supports two, two supports three, and three supports five, there is no number less than four that supports four, so four is bleak.
If n is not bleak, it must be supported by a number in the range n-ceil(log2(n)) to n-1. This gives a very small range you have to check. For the array, first sorting the array then using the same principle should give you an efficient solution.
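A minimal C sketch of that range check; the helper names and the test values are only for illustration.

#include <stdio.h>

/* Number of 1 bits in x. */
static int popcount(unsigned x)
{
    int c = 0;
    for (; x; x >>= 1)
        c += x & 1;
    return c;
}

/* ceil(log2(n)) for n >= 1. */
static int clog2(unsigned n)
{
    int b = 0;
    while ((1u << b) < n)
        b++;
    return b;
}

/* n is bleak if no m with m + popcount(m) == n exists; any supporter
 * must lie in [n - ceil(log2(n)), n - 1]. */
static int is_bleak(unsigned n)
{
    unsigned lo = n > (unsigned)clog2(n) ? n - (unsigned)clog2(n) : 1;
    for (unsigned m = lo; m < n; m++)
        if (m + popcount(m) == n)
            return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", is_bleak(4));  /* 1: four is bleak */
    printf("%d\n", is_bleak(9));  /* 0: eight supports nine */
    return 0;
}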
Denote a as the given array and count(x) as the number of 1 bits in x.
Question 2:
Iterate through the array and save the number a[i] + count(a[i]) into a binary search tree. Time: O(n log n)
Iterate through the array and output "Supported" if a[i] is in the binary search tree and "Bleak" otherwise. Time: O(n log n)
Note: a[i] is the element at the current iteration. A sketch of this approach follows below.
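Here is a sketch of that idea in C, using sorting plus binary search in place of an explicit binary search tree (still O(n log n) overall); the array contents are example values only.

#include <stdio.h>
#include <stdlib.h>

static int popcount(unsigned x)
{
    int c = 0;
    for (; x; x >>= 1)
        c += x & 1;
    return c;
}

static int cmp_uint(const void *p, const void *q)
{
    unsigned a = *(const unsigned *)p, b = *(const unsigned *)q;
    return (a > b) - (a < b);
}

int main(void)
{
    unsigned a[] = { 1, 2, 3, 4, 5, 8, 9 };
    size_t n = sizeof a / sizeof a[0];

    /* Values supported by some element of the array: a[i] + count(a[i]). */
    unsigned *sup = malloc(n * sizeof *sup);
    for (size_t i = 0; i < n; i++)
        sup[i] = a[i] + popcount(a[i]);
    qsort(sup, n, sizeof *sup, cmp_uint);                    /* O(n log n) */

    for (size_t i = 0; i < n; i++) {
        int found = bsearch(&a[i], sup, n, sizeof *sup, cmp_uint) != NULL;
        printf("%u: %s\n", a[i], found ? "Supported" : "Bleak");
    }
    free(sup);
    return 0;
}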
I want to know the different techniques that are used to perform arithmetic operations on very large integers in C. One that I know of is using a string to hold a number and defining operations such as add and subtract for it. I am not interested in using libraries; this question is purely for knowledge. Please suggest any other such methods/techniques.
You can go as low level as representing your integers as an array of bytes, and do all the operations (like addition, subtraction, multiplication, division or comparison) just like a CPU does them, at word level.
The simplest algorithms are for addition and subtraction, where you simply add or subtract the digits in sequence, carrying as necessary.
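A minimal sketch of that digit-by-digit addition, using 32-bit words ("limbs") stored least-significant first; subtraction is analogous, with a borrow instead of a carry.

#include <stdint.h>
#include <stddef.h>

/* Add two little-endian arrays of 32-bit limbs, both of length n,
 * writing n limbs to res and returning the final carry (0 or 1). */
uint32_t bignum_add(uint32_t *res, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;  /* fits in 64 bits */
        res[i] = (uint32_t)sum;                        /* low 32 bits */
        carry = sum >> 32;                             /* next carry */
    }
    return (uint32_t)carry;
}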
Negative numbers can be represented in 2's complement.
For comparison, you just compare the high order digits until a difference is found.
For multiplication, the most straightforward (and slowest) algorithm you can implement is repeated addition; the usual practical choice is schoolbook long multiplication, digit by digit with carries, just as done by hand.
For division, things are a little more complicated than for multiplication; see: http://en.wikipedia.org/wiki/Division_algorithm
A common application for this is public-key cryptography, whose algorithms commonly employ arithmetic with integers having hundreds of digits.
Check the OpenSSL BIGNUM documentation for this: https://www.openssl.org/docs/crypto/bn.html
You could use 3 linked lists, one for number A, one for number B and one for the result.
You would then read each digit as character input from the user, convert it to an integer, and save it to a new node in the list corresponding to the number you are currently reading.
Finally, you would write the operations for adding, subtracting, etc. as functions.
In each, you would follow the corresponding algorithm you learned at school, starting from the LSB node and going up to the MSB node, always keeping in mind the base power of each node (node 1 * 10^0, node 2 * 10^1, node 3 * 10^2, ..., node n * 10^(n-1)).
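For example, a minimal sketch of the addition step with one decimal digit per node, least significant digit first (input parsing and error handling omitted):

#include <stdlib.h>

/* One decimal digit per node, least significant digit first. */
struct digit {
    int value;                 /* 0..9 */
    struct digit *next;
};

static struct digit *new_digit(int value)
{
    struct digit *d = malloc(sizeof *d);
    d->value = value;
    d->next = NULL;
    return d;
}

/* Schoolbook addition: walk both lists from the LSB, carrying as needed. */
static struct digit *add_lists(const struct digit *a, const struct digit *b)
{
    struct digit *result = NULL, **tail = &result;
    int carry = 0;
    while (a || b || carry) {
        int sum = carry;
        if (a) { sum += a->value; a = a->next; }
        if (b) { sum += b->value; b = b->next; }
        carry = sum / 10;
        *tail = new_digit(sum % 10);
        tail = &(*tail)->next;
    }
    return result;
}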