random number generator objective c [duplicate]

This question already has answers here:
How to generate a random float between 0 and 1?
I want to generate a random number between 0 and 1 (uniform distribution) and I use:
float x = arc4random_uniform(1);
However, this produces only 0.00000
I used
float y = arc4random() %11 * 0.1;
which returns a random number in the interval, but I am not sure whether it follows a uniform distribution.
Why isn't the first function working as expected?

I use:
float x = arc4random_uniform(1);
However, this produces only 0.00000
Of course it does. arc4random_uniform() returns a 32-bit unsigned integer. It does not return a float. What you're looking for is something like
#define RAND_PRECISION 1024
float x = arc4random_uniform(RAND_PRECISION) / (float)RAND_PRECISION;
Also,
I am not sure if it is uniform distribution.
It isn't, strictly speaking: taking arc4random() modulo a value that does not evenly divide 2^32 introduces a slight modulo bias, so the resulting distribution is not perfectly uniform.
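If you want the question's tenths-of-one approach without that bias, a minimal sketch (assuming the BSD/macOS arc4random family) could look like this; random_tenth is just an illustrative name:
#include <stdlib.h>   /* arc4random_uniform() on BSD / macOS */

/* arc4random_uniform(11) is uniform over 0..10 with no modulo bias,
   so this yields 0.0, 0.1, ..., 1.0 with equal probability. */
static float random_tenth(void)
{
    return arc4random_uniform(11) * 0.1f;
}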

According to the documentation, arc4random_uniform returns an integer, so using it with an upper bound of 1 won't produce what you want.
Here's the page on the arc4random functions:
http://www.unix.com/man-page/FreeBSD/3/arc4random_uniform/

#define ARC4RANDOM_MAX 0x100000000
...
double val = ((double)arc4random() / ARC4RANDOM_MAX);

arc4random_uniform returns a number less than the upper bound you provide (e.g. arc4random_uniform(10) will return a number between 0 and 9). Because of that, arc4random_uniform(1) will always return 0 (which is what you're seeing). If you want a random float you should see this answer:
Generate a random float between 0 and 1
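For reference, a minimal sketch of such a float (assuming the BSD/macOS arc4random()): taking the top 24 bits fills the float mantissa exactly and keeps the result strictly below 1.0; random_unit is just an illustrative name.
#include <stdlib.h>   /* arc4random() on BSD / macOS */

static float random_unit(void)
{
    /* 16777216 = 2^24; the quotient is in [0, 1) in steps of 2^-24 */
    return (arc4random() >> 8) * (1.0f / 16777216.0f);
}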

Related

C: is there any way I can get the modulo operator to work on non-integer values? [duplicate]

This question already has answers here:
How to use % operator for float values in c
Floating Point Modulo Operation
I need to reset the value of a variable called theta back to 0 every time its value reaches or exceeds 2 PI. I was thinking something along the lines of:
int n = 10;
float inc = 2*PI/n;
for(int i=0;i<10;i++)
theta = (theta + inc) % 2*PI;
Of course it won't work, because % doesn't work on floating-point values in C. Is there another equivalent or better way to achieve what I'm trying to do here? All replies are welcome. Thanks
Use the standard fmod function. See https://en.cppreference.com/w/c/numeric/math/fmod or 7.2.10 in the C17 standard.
The fmod functions return the value x − ny, for some integer n such that, if y is nonzero, the result has the same sign as x and magnitude less than the magnitude of y.
So theta = fmod(theta, 2*PI) should be what you want, if I understand your question correctly.
If it really must be done on float instead of double, you can use fmodf instead.
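A minimal sketch of the loop from the question rewritten with fmodf; the PI constant below is an assumed stand-in for whatever the original code defines.
#include <math.h>    /* fmodf */
#include <stdio.h>

#define PI 3.14159265358979f   /* assumed; not shown in the question */

int main(void)
{
    float theta = 0.0f;
    int n = 10;
    float inc = 2 * PI / n;
    for (int i = 0; i < 10; i++) {
        theta = fmodf(theta + inc, 2 * PI);  /* wrap back into [0, 2*PI) */
        printf("%f\n", theta);
    }
    return 0;
}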
Since division is really just repeated subtraction, you can get the remainder by checking if the value is at least 2*PI, and if so subtract that value.
int n = 10;
float inc = 2*PI/n;
for(int i=0;i<10;i++) {
theta += inc;
if (theta >= 2*PI) theta -= 2*PI;
}
Note that because the increment is less than the 2*PI limit, we can do the "over" check just once. This is likely cheaper than the operations involved in calling fmod. If the increment were larger you would at least need a while loop instead, or you could just use fmod.

Log calls return NaN in C [duplicate]

This question already has answers here:
Why does dividing two int not yield the right value when assigned to double?
I have an array of double:
double theoretical_distribution[] = {1/21, 2/21, 3/21, 4/21, 5/21, 6/21};
And I am trying to compute its entropy as:
double entropy = 0;
for (int i = 0; i < sizeof(theoretical_distribution)/sizeof(*theoretical_distribution); i++) {
entropy -= (theoretical_distribution[i] * (log10(theoretical_distribution[i])/log10(arity)));
}
However I am getting NaN, I have checked the part
(theoretical_distribution[i] * (log10(theoretical_distribution[i])/log10(arity)))
And found it to return NaN itself, so I assume it's the culprit; however, all it's supposed to be is a simple base conversion of the log. Am I missing some detail about the maths of it?
Why is it evaluating to NaN?
You are passing 0 to the log10 function.
This is because your array theoretical_distribution is being populated with constant values that result from integer computations, all of which have a denominator larger than the numerator: 1/21, 2/21, and so on are integer divisions that truncate to 0. log10(0) is negative infinity, and multiplying it by 0 yields NaN.
You probably intended floating-point computations, so make at least one of the numerator or denominator a floating constant.
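A minimal sketch of the fixed computation; arity is not shown in the question, so the 2.0 below is an assumed placeholder (base-2 logarithm, i.e. entropy in bits).
#include <math.h>    /* log10 */
#include <stdio.h>

int main(void)
{
    /* 1.0/21 etc. force floating-point division; 1/21 would truncate to 0 */
    double theoretical_distribution[] = {1.0/21, 2.0/21, 3.0/21, 4.0/21, 5.0/21, 6.0/21};
    double arity = 2.0;   /* assumed log base; not given in the question */
    double entropy = 0;
    size_t count = sizeof(theoretical_distribution) / sizeof(*theoretical_distribution);
    for (size_t i = 0; i < count; i++)
        entropy -= theoretical_distribution[i] *
                   (log10(theoretical_distribution[i]) / log10(arity));
    printf("entropy = %f\n", entropy);
    return 0;
}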

C - generate random numbers within an interval with respect to a mean

I need to generate a set of random numbers within an interval which also happens to have a mean value. For instance min = 1000, max = 10000 and a mean of 7000. I know how to create numbers within a range but I am struggling with the mean value thing. Is there a function that I can use?
What you're looking for is done most easily with the so-called acceptance-rejection method.
Split your interval into smaller intervals.
Specify a probability density function (PDF); it can be a very simple one, like a step function. For a Gaussian distribution you would have the left and right steps lower than your middle step (see the image below, which shows a more general distribution).
Generate a random number in the whole interval, along with a random height between 0 and the PDF's maximum. If the height is greater than the value of your PDF at that point, reject the generated number.
Repeat the steps until you get desired number of points
EDIT 1
Proof of concept on a Gaussian PDF.
Ok, so the basic idea is shown in graph (a).
Define/Pick your probability density function (PDF). PDF is a function of, statistically speaking, a random variable and describes the probability of finding the value x in a measurement/experiment. A function can be a PDF of a random variable x if it satisfies: 1) f(x) >= 0 and 2) it's normalized (meaning it sums, or integrates, up to the value 1).
Get the maximum (max) and the "zero points" (z1 < z2) of the PDF. Some PDFs only reach zero at infinity. In that case, determine cutoff points (z1, z2) such that PDF(x) < eta for every x < z1 or x > z2, where you pick eta yourself. Basically, set some small-ish value eta and say your zero points are those values beyond which PDF(x) is smaller than eta.
Define the interval Ch(z1, z2, max) of your random generator. This is the interval in which you generate your random variables.
Generate a random variable x such that z1<x<z2.
Generate a second, unrelated random variable y in the range (0, max). If y is larger than PDF(x), reject both randomly generated values (x, y) and go back to step 4. If y is smaller than (or equal to) PDF(x), accept x as the randomly generated point on the distribution and return it.
Here's the code that reproduces similar behavior for a Gaussian PDF.
#include "Random.h"
#include <cmath>    //for exp()
#include <fstream>
using namespace std;
double gaus(double a, double b, double c, double x)
{
return a*exp( -((x-b)*(x-b)/(2*c*c) ));
}
double* random_on_a_gaus_distribution(double inter_a, double inter_b)
{
static double res[2]; //static so the pointer stays valid after the function returns
double a = 1.0; //currently parameters for the Gaussian
double b = 2.0; //are defined here to avoid having
double c = 3.0; //a long function declaration line.
double x = kiss::Ran(inter_a, inter_b);
double y = kiss::Ran(0.0, 1.0);
while (y>gaus(a,b,c,x)) //keep creating values until step 5. is satisfied.
{
x = kiss::Ran(inter_a, inter_b); //this is interval (z1, z2)
y = kiss::Ran(0.0, 1.0); //this is the interval (0, max)
}
res[0] = x;
res[1] = y;
return res; //I return (x,y) for plot reasons, only x is the randomly
} //generated value you're looking for.
int main()
{
double* x;
ofstream f;
f.open("test.txt");
for(int i=0; i<100000; i++)
{
//see below how I got -5 and 10 to be my interval (z1, z2)
x = random_on_a_gaus_distribution(-5.0, 10.0);
f << x[0]<<","<<x[1]<<endl;
}
f.close();
}
Step 1
So first we define a general look of a Gaussian PDF in a function called gaus. Simple.
Then we define a function random_on_a_gaus_distribution which uses a well-defined Gaussian function. In an experiment/measurement we would get the coefficients a, b, c by fitting our function. I picked some arbitrary ones (1, 2, 3) for this example; you can pick the ones that satisfy your HW assignment (that is, coefficients that make a Gaussian with a mean of 7000).
Step 2 and 3
I used Wolfram Mathematica to plot gaus with parameters 1, 2, 3 to see what the most appropriate values for max and (z1, z2) would be. You can see the graph yourself. The maximum of the function is 1.0, and via the ancient scientific method of eyeballin' I estimated that the cutoff points are -5.0 and 10.0.
To make random_on_a_gaus_distribution more general you could follow step 2) more rigorously: define eta and then evaluate your function at successive points until the PDF gets smaller than eta. The danger with this is that your cutoff points can end up very far apart, which could take a long time for functions that decay very slowly. Additionally you have to find the maximum yourself. This is generally tricky; however, a simpler equivalent problem is minimizing the negative of the function. That can also be tricky in the general case, but it is not "undoable". The easiest way is to cheat a bit like I did and just hard-code this for a couple of functions only.
Step 4 and 5
And then you bash away: just keep creating new points until one gets accepted. DO NOTICE that the returned number x is a random number; you wouldn't be able to find a logical link between two successively created x values, or between the first created x and the millionth.
However, the number of accepted x values in the interval around the x_max of our distribution is greater than the number of x values accepted in intervals for which PDF(x) < PDF(x_max).
This just means that your random numbers will be weighted within the chosen interval in such a manner that a larger PDF value at a random variable x corresponds to more random points being accepted in a small interval around that value than around any other value xi for which PDF(xi) < PDF(x).
I returned both x and y to be able to plot the graph below; however, what you're looking to return is actually just the x. I did the plots with matplotlib.
It's probably better to show just a histogram of randomly created variable on a distribution. This shows that the x values that are around the mean value of your PDF function are the most likely ones to get accepted, and therefore more randomly created variables with those approximate values will be created.
Additionally, I assume you would be interested in the implementation of the KISS random number generator. IT IS VERY IMPORTANT THAT YOU HAVE A VERY GOOD GENERATOR. I dare say that to an extent KISS probably doesn't cut it (the Mersenne Twister is often used instead).
Random.h
#pragma once
#include <stdlib.h>
const unsigned RNG_MAX=4294967295;
namespace kiss{
// unsigned int kiss_z, kiss_w, kiss_jsr, kiss_jcong;
unsigned int RanUns();
void RunGen();
double Ran0(int upper_border);
double Ran(double bottom_border, double upper_border);
}
namespace Crand{
double Ran0(int upper_border);
double Ran(double bottom_border, double upper_border);
}
Kiss.cpp
#include "Random.h"
unsigned int kiss_z = 123456789; //from 1 to a billion
unsigned int kiss_w = 378295763; //from 1 to a billion
unsigned int kiss_jsr = 294827495; //from 1 to RNG_MAX
unsigned int kiss_jcong = 495749385; //from 0 to RNG_MAX
//KISS99*
//Author: George Marsaglia
unsigned int kiss::RanUns()
{
kiss_z=36969*(kiss_z&65535)+(kiss_z>>16);
kiss_w=18000*(kiss_w&65535)+(kiss_w>>16);
kiss_jsr^=(kiss_jsr<<13);
kiss_jsr^=(kiss_jsr>>17);
kiss_jsr^=(kiss_jsr<<5);
kiss_jcong=69069*kiss_jcong+1234567;
return (((kiss_z<<16)+kiss_w)^kiss_jcong)+kiss_jsr;
}
void kiss::RunGen()
{
for (int i=0; i<2000; i++)
kiss::RanUns();
}
double kiss::Ran0(int upper_border)
{
unsigned velicinaIntervala = RNG_MAX / upper_border;      //size of each bucket
unsigned granicaIzbora = velicinaIntervala*upper_border;  //rejection boundary
unsigned slucajniBroj = kiss::RanUns();                   //raw random number
while(slucajniBroj>=granicaIzbora)  //reject values past the boundary to avoid modulo bias
slucajniBroj = kiss::RanUns();
return slucajniBroj/velicinaIntervala;
}
double kiss::Ran (double bottom_border, double upper_border)
{
return bottom_border+(upper_border-bottom_border)*kiss::Ran0(100000)/(100001.0);
}
Additionally, here are the standard C random generators:
CRands.cpp
#include "Random.h"
//standard pseudo-random generators from C
double Crand::Ran0(int upper_border)
{
return rand()%upper_border;
}
double Crand::Ran (double bottom_border, double upper_border)
{
return (upper_border-bottom_border)*rand()/((double)RAND_MAX+1);
}
It's also worth commenting on graph (b) above. When you have a very badly behaved PDF, PDF(x) will vary significantly between large values and very small ones.
The issue with that is that the interval area Ch(x) will match the extreme values of the PDF well, but since we also create a random variable y for small values of PDF(x), the chances of accepting such a value are minute: it is far more likely that the generated y will be larger than PDF(x) at that point. This means that you'll spend a lot of cycles creating numbers that won't get chosen and that all your chosen random numbers will be very locally bound to the max of your PDF.
That's why it's often useful not to have the same Ch(x) intervals everywhere, but to define a parametrized set of intervals. However this adds a fair bit of complexity to the code.
Where do you set your limits? How to deal with borderline cases? When and how to determine that you indeed need to suddenly use this approach? Calculating max might not be as simple now, depending on the method you originally envisioned would be doing this.
Additionally, you now have to correct for the fact that numbers get accepted far more easily in the areas where your Ch(x) box height is lower, which skews the original PDF.
This can be corrected by weighting numbers created in the lowered-boundary region by the ratio of the heights of the higher and lower boundary; basically, you repeat the y step one more time. Create a random number z from 0 to 1 and compare it to the ratio lower_height/higher_height, which is guaranteed to be < 1. If z is smaller than the ratio, accept x; if it's larger, reject it, as sketched below.
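As a rough illustration of that extra z test (lower_height, higher_height and uniform01 are hypothetical stand-ins, not names from the code above):
#include <stdbool.h>

/* Returns true if a sample drawn in the lower box should be kept. */
static bool keep_lower_box_sample(double lower_height, double higher_height,
                                  double (*uniform01)(void))
{
    double z = uniform01();                    /* z uniform in [0, 1) */
    return z < lower_height / higher_height;   /* the ratio is guaranteed < 1 */
}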
Generalizations of the code presented are also possible by writing a function that takes in an object pointer instead. By defining your own class, i.e. one that describes a generic function, has an eval method at a point, stores its parameters, and calculates and stores its own max/min values and zero/cutoff points, you wouldn't have to pass them around or hard-code them in a function like I did.
Good Luck have fun!
tl;dr: Raise a uniform 0 to 1 distribution to the power (1 - m) / m where m is the desired mean (between 0 and 1). Shift/scale as desired.
I was curious about how to implement this. I figured a trapezoid would be the easiest method, but then you're limited in that the most extreme mean you can get is with a triangle, which isn't that extreme. The math started getting hard, so I reverted to a purely empirical method that seems to work pretty well.
Anyways, for a distribution, how about starting with the uniform [0, 1) distribution and raising the values to some arbitrary power. Square them and the distribution shifts to the right. Square root them and they shift to the left. You can go to whatever extreme you want and shove the distribution as hard as you want.
def randompow(p):
    return random.random() ** p
(Everything's written in Python, but should be easy enough to translate. If something's unclear, just ask. random.random() returns floats from 0 to 1)
So, how do we adjust that power? Well, how's the mean seem to shift with varying powers?
Looks like some sort of sigmoid curve. There are lots of sigmoid functions, but hyperbolic tangent seems to work pretty well.
Not 100% there; let's try to scale it in the X direction...
# x are the values from -3 to 3 (log transformed from the powers used)
# y are the empirically-determined means given all those powers
def fitter(tanscale):
    xsc = tanscale * x
    sigtan = np.tanh(xsc)
    sigtan = (1 - sigtan) / 2
    resid = sigtan - y
    return sum(resid**2)
fit = scipy.optimize.minimize(fitter, 1)
The fitter says the best scaling factor is 1.1514088816214016. The residuals are actually pretty low, so sounds good.
Implementing the inverse of all the math I didn't talk about looks like:
def distpow(mean):
    p = 1 - (mean * 2)
    p = np.arctanh(p) / 1.1514088816214016
    return 10**p
That gives us the power to use in the first function to get whatever mean we want for the distribution. A factory function can return a method to churn out a bunch of numbers from the distribution with the desired mean:
def randommean(mean):
    p = distpow(mean)
    def f():
        return random.random() ** p
    return f
How's it do? Reasonably well out to 3-4 decimals:
for x in [0.01, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 0.99]:
    f = randommean(x)
    # sample the distribution 10 million times
    mean = np.mean([f() for _ in range(10000000)])
    print('Target mean: {:0.6f}, actual: {:0.6f}'.format(x, mean))
Target mean: 0.010000, actual: 0.010030
Target mean: 0.100000, actual: 0.100122
Target mean: 0.200000, actual: 0.199990
Target mean: 0.400000, actual: 0.400051
Target mean: 0.500000, actual: 0.499905
Target mean: 0.600000, actual: 0.599997
Target mean: 0.800000, actual: 0.799999
Target mean: 0.900000, actual: 0.899972
Target mean: 0.990000, actual: 0.989996
A more succinct function that just gives you a value given a mean (not a factory function):
def randommean(m):
    p = np.arctanh(1 - (2 * m)) / 1.1514088816214016
    return random.random() ** (10 ** p)
Edit: fitting against the natural log of the mean instead of log10 gave a scaling factor suspiciously close to 0.5. Doing some math to simplify out the arctanh gives:
def randommean(m):
    '''Return a value from the distribution 0 to 1 with average *m*'''
    return random.random() ** ((1 - m) / m)
In fact this turns out to be exact: for a uniform variable U on [0, 1), the mean of U ** p is 1 / (p + 1), so p = (1 - m) / m gives mean m. From here it should be fairly easy to shift, rescale, and round off the distribution. The truncating-to-integer might end up shifting the mean by 1 (or half a unit?), so that's an unsolved problem (if it matters).
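As a rough sketch of how this maps onto the original question's numbers (min 1000, max 10000, mean 7000), here is a C translation using the standard rand(): the normalized mean is (7000 - 1000) / (10000 - 1000) = 2/3, so the exponent is (1 - 2/3) / (2/3) = 0.5.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double lo = 1000.0, hi = 10000.0, target = 7000.0;
    double m = (target - lo) / (hi - lo);   /* normalized mean, 2/3 */
    double p = (1.0 - m) / m;               /* exponent, 0.5 */
    double sum = 0.0;
    int n = 1000000;
    srand(12345);
    for (int i = 0; i < n; i++) {
        double u = (double)rand() / ((double)RAND_MAX + 1.0);  /* uniform [0, 1) */
        double x = lo + (hi - lo) * pow(u, p);
        sum += x;
    }
    printf("sample mean: %f (target %f)\n", sum / n, target);
    return 0;
}
The sample mean printed at the end should land close to 7000.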
You simply define 2 distributions dist1 operating in [1000, 7000] and dist2 operating in [7000, 10000].
Let's call m1 the mean of dist1 and m2 the mean of dist2.
You are looking for a mixture of dist1 and dist2 whose mean is 7000.
You must adjust the weights (w1, w2 = 1 - w1) such that:
7000 = w1 * m1 + w2 * m2
which leads to:
w1 = (m2 - 7000) / (m2 - m1)
Using the OpenTURNS library, the code will look as follows:
import openturns as ot
dist1 = ot.Uniform(1000, 7000)
dist2 = ot.Uniform(7000, 10000)
m1 = dist1.getMean()[0]
m2 = dist2.getMean()[0]
w = (m2 - 7000) / (m2 - m1)
dist = ot.Mixture([dist1, dist2], [w, 1 - w])
print ("Mean of dist = ", dist.getMean())
>>> Mean of dist = [7000]
Now you can draw a sample of size N by calling dist.getSample(N). For instance:
print(dist.getSample(10))
>>> [ X0 ]
0 : [ 3019.97 ]
1 : [ 7682.17 ]
2 : [ 9035.1 ]
3 : [ 8873.59 ]
4 : [ 5217.08 ]
5 : [ 6329.67 ]
6 : [ 9791.22 ]
7 : [ 7786.76 ]
8 : [ 7046.59 ]
9 : [ 7088.48 ]

Adding two doubles gives weird rounding result in C [duplicate]

This question already has answers here:
Why adding these two double does not give correct answer? [duplicate]
I'm a bit of a C newbie but this problem is really confusing me.
I have one double variable equal to 436553940.0000000000 (it was cast from an int) and another double equal to 0.095832496.
My result should be 436553940.095832496; however, I get 436553940.095832467.
Why does this happen and how can I prevent it from happening?
The number you expect is simply not representable by a double. The value you receive is instead a close approximation based on rounding results:
In [9]: 436553940.095832496
Out[9]: 436553940.09583247
In [18]: 436553940.095832496+2e-8
Out[18]: 436553940.09583247
In [19]: 436553940.095832496+3e-8
Out[19]: 436553940.0958325
In [20]: 436553940.095832496-2e-8
Out[20]: 436553940.09583247
In [21]: 436553940.095832496-3e-8
Out[21]: 436553940.0958324
You've just run out of significand bits.
Doubles are not able to represent every number. We can write some C++ code (that implements doubles in the same way) to show this.
#include <cstdio>
#include <cmath>
int main() {
double x = 436553940;
double y = 0.095832496;
double sum = x + y;
printf("prev: %50.50lf\n", std::nextafter(sum, 0));
printf("sum: %50.50lf\n", sum);
printf("next: %50.50lf\n", std::nextafter(sum, 500000000));
}
This code computes the sum of the two numbers you are talking about, and stores it as sum. We then compute the next representable double before that number, and after that number.
Here's the output:
[11:43am][wlynch#watermelon /tmp] ./foo
prev: 436553940.09583240747451782226562500000000000000000000000000
sum: 436553940.09583246707916259765625000000000000000000000000000
next: 436553940.09583252668380737304687500000000000000000000000000
So, we are not able to have the calculation equal 436553940.095832496, because that number is not a valid double. Instead, the IEEE-754 standard (and your compiler) defines rules that tell us how the result should be rounded to the nearest representable double.

Color code value converted to 2 digit numeral

I am working on a project that changes the color of an LED to the color of an RGB value input by the user.
The way the program works right now to change the brightness of each color is by providing it a number from 0 to 1, where 0 is fully on, 1 is fully off, 0.50 is 50% brightness, and so on.
It seems kind of backwards providing a 0 for on, but that's just the way the code is written. My question is: how would I convert the values 0-255 to the 0-99 range? I have done the math by dividing the 3-digit number by 2.58, which gives me the proper number if it were 1 for on and 0 for off, but since it's backwards, how would I obtain the opposite of this?
Would this be the remainder? So for example 240/2.58 = 93 and the remainder is 7, so is that the number I would use? Math was never my strong point, unfortunately. I know this question does not pertain to a certain language, but I am going to tag it with C since that's what it is going to be written in. If someone could provide an example of getting the remainder of a number using C that would be awesome. I know of the modulo operator but I don't think that would work in my case, but I could be wrong.
double mapNumbers(double x, long xMin, long xMax, long yMin, long yMax)
{
return ((double)(x - xMin) * (yMax - yMin)) / ((double)(xMax - xMin)) + yMin;
}
Use the above function:
You can map x, which is between 0-255, and get the corresponding number from 0-99; in fact, it works between any range of numbers and any other range of numbers:
so:
mapNumbers(10/*assuming this is x*/, 0, 255, 0, 99);
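Since the scale in the question is inverted (0 means fully on), here is a small sketch reusing the same mapNumbers function with the output range reversed; 240 is just an example input:
#include <stdio.h>

double mapNumbers(double x, long xMin, long xMax, long yMin, long yMax)
{
    return ((double)(x - xMin) * (yMax - yMin)) / ((double)(xMax - xMin)) + yMin;
}

int main(void)
{
    /* Reversing the output range makes 255 (brightest input) map to 0 (fully on). */
    double level = mapNumbers(240, 0, 255, 99, 0);
    printf("%f\n", level);   /* roughly 5.8, i.e. nearly full brightness */
    return 0;
}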
Well, to convert from a 0 - 1 range to a 0 - 255 range, just multiply by 255. To do it inversely, just subtract the first value from 1.
result = (1 - x) * 255;
