for loop condition ignored - loops

I've just started learning Java a few days ago.
I was trying to create a method with two parameters that prints the product of the two numbers, repeating while the product is less than or equal to 200.
For example: the inputs are 5.0 and 0.5. I want this operation: 5.0 * 0.5, print the result (2.5), then I want the second number to be increased by 0.1, so from 0.5 to 0.6. Do the operation again, 5.0 * 0.6, print the result, increase it to 0.7 and so on, until the result of the multiplication exceeds 200.
Just like this:
inputs: 5.0, 0.5
print:
2.5
3
3.5
...
200
I don't understand why the loop is infinite; I guess the condition is being ignored. What is the problem?
Here are some variations of the code I wrote:
void ration(double lbs, double increment) {
    for (double product = (lbs * increment); product <= 200.0; increment += 0.1) {
        System.out.println(lbs * increment);
    }
}
This prints the results of lbs * increment forever (let's say 5 10 15 20 25 ...); the loop never stops.
void ration(double lbs, double increment) {
    for (double product = (lbs * increment); product <= 200.0; increment += 0.1) {
        System.out.println(product);
    }
}
This prints the first result of lbs * increment forever (let's say 5 5 5 5 5 5 ...).
What am I doing wrong? Any advice would be greatly appreciated.
Thank you
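For what it's worth, the condition is not ignored: it tests product, which the initializer computes exactly once, and nothing in the loop body or step expression ever updates product, so it never changes. A minimal sketch of a fix, recomputing the product in the step expression (the class wrapper and the returned list are just for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class Ration {
    // Returns the printed products so the behaviour is easy to check.
    static List<Double> ration(double lbs, double increment) {
        List<Double> products = new ArrayList<>();
        // The original code computed product only once, in the initializer.
        // Here it is recomputed after each increment, so the condition
        // actually sees a new value every pass.
        for (double product = lbs * increment;
             product <= 200.0;
             product = lbs * (increment += 0.1)) {
            System.out.println(product);
            products.add(product);
        }
        return products;
    }

    public static void main(String[] args) {
        ration(5.0, 0.5); // 2.5, 3.0, 3.5, ... up to about 200
    }
}
```

Note that repeatedly adding 0.1 accumulates floating-point error, so the last printed value may be near but not exactly 200.0.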

Related

Convert continuous binary fraction to decimal fraction in C

I implemented a digit-by-digit calculation of the square root of two. Each round it outputs one bit of the fractional part, e.g.
1 0 1 1 0 1 0 1 etc.
I want to convert this output to decimal numbers:
4 1 4 2 1 3 6 etc.
The issue I'm facing is that this would generally work like this:
1 * 2^-1 + 0 * 2^-2 + 1 * 2^-3 etc.
I would like to avoid fractions altogether, as I would like to work with integers to convert from binary to decimal. Also, I would like to print each decimal digit as soon as it has been computed.
Converting to hex is trivial, as I only have to wait for 4 bits. Is there a smart approach to convert to base 10 which allows observing only a part of the whole output and, ideally, removing digits from the equation once we are certain they won't change anymore? I.e.:
1  0
2  0.25
3  0.375
4  0.375
5  0.40625
6  0.40625
7  0.4140625
8  0.4140625
After processing the 8th bit, I'm pretty sure that 4 is the first decimal fraction digit. Therefore I would like to remove 0.4 completely from the equation to reduce the number of bits I need to take care of.
Is there a smart approach to convert to base 10 which allows us to observe only a part of the whole output and ideally remove digits from the equation once we are certain that they won't change anymore (?)
Yes, eventually in practice, but in theory, no in select cases.
This is akin to the Table-maker's dilemma.
Consider the below handling of a value near 0.05. As long as the binary sequence is .0001 1001 1001 1001 1001 ..., we cannot know whether the decimal equivalent is 0.04999999... or 0.05000000...(followed by something non-zero).
#include <math.h>
#include <stdio.h>

int main(void) {
    double a;
    a = nextafter(0.05, 0);
    printf("%20a %.20f\n", a, a);
    a = 0.05;
    printf("%20a %.20f\n", a, a);
    a = nextafter(0.05, 1);
    printf("%20a %.20f\n", a, a);
    return 0;
}
0x1.9999999999999p-5 0.04999999999999999584
0x1.999999999999ap-5 0.05000000000000000278
0x1.999999999999bp-5 0.05000000000000000971
Code can analyse the incoming sequence of binary fraction bits and then ask two questions after each bit: "if the remaining bits are all 0, what is it in decimal?" and "if the remaining bits are all 1, what is it in decimal?". In many cases the answers will share common leading significant digits. Yet, as shown above, as long as 1001 is being received, there are no common significant decimal digits.
A usual "out" is to have an upper bound on the number of decimal digits that will ever be shown. In that case the code is only presenting a rounded result, and that can be deduced in finite time even if the binary input sequence remains 1001 ad nauseam.
The issue I'm facing is that this would generally work like this:
1 * 2^-1 + 0 * 2^-2 + 1 * 2^-3 etc.
Well, 1/2 = 5/10 and 1/4 = 25/100 and so on, which means you will need powers of 5, and you shift the accumulated value by powers of 10.
so given 0 1 1 0 1
[1] 0 * 5 = 0
[2] 0 * 10 + 1 * 25 = 25
[3] 25 * 10 + 1 * 125 = 375
[4] 375 * 10 + 0 * 625 = 3750
[5] 3750 * 10 + 1 * 3125 = 40625
Edit:
Is there a smart approach to convert to base 10 which allows us to observe only a part of the whole output and ideally remove digits from the equation once we are certain that they won't change anymore
It might actually be possible to pop the most significant digits (MSD) in this case. This will be a bit long, but please bear with me.
Consider the values X and Y:
If X has the same number of digits as Y, then the MSD will change.
10000 + 10000 = 20000
If Y has 1 or more digits less than X, then the MSD can change.
19000 + 1000 = 20000
19900 + 100 = 20000
The first point is self-explanatory, but the second point is what will allow us to pop the MSD. The first thing we need to know is that the value being added is halved every iteration. This means that if we only consider the MSD, the largest value in base 10 is 9, which produces the sequence
9 > 4 > 2 > 1 > 0
If we sum up these values we get 16, but if we also consider the values of the next digits (e.g. 9.9 or 9.999), the sum approaches 20 without ever exceeding it. What this means is that if X has n digits and Y has n-1 digits, the MSD of X can still change. But if X has n digits and Y has n-2 digits, then as long as the (n-1)th digit of X is less than 8, the MSD will not change (otherwise it could be 8 + 2 = 10 or 9 + 2 = 11, which means the MSD would change). Here are some examples.
Assuming X is the running sum of sqrt(2) and Y is 5^n:
1. If X = 10000 and Y = 9000 then the MSD of X can change.
2. If X = 10000 and Y = 900 then the MSD of X will not change.
3. If X = 19000 and Y = 900 then the MSD of X can change.
4. If X = 18000 and Y = 999 then the MSD of X can change.
5. If X = 17999 and Y = 999 then the MSD of X will not change.
6. If X = 19990 and Y = 9 then the MSD of X can change.
In the examples above, in points #2 and #5, the 1 can already be popped. However, for point #6, it is possible to have 19990 + 9 + 4 = 20003, but this also means that both the 2 and the 0 can be popped after that happens.
Here's a simulation for sqrt(2)
i Out X Y flag
-------------------------------------------------------------------
1 0 5 0
2 25 25 1
3 375 125 1
4 3,750 625 0
5 40,625 3,125 1
6 406,250 15,625 0
7 4 140,625 78,125 1
8 4 1,406,250 390,625 0
9 4 14,062,500 1,953,125 0
10 41 40,625,000 9,765,625 0
11 41 406,250,000 48,828,125 0
12 41 4,062,500,000 244,140,625 0
13 41 41,845,703,125 1,220,703,125 1
14 414 18,457,031,250 6,103,515,625 0
15 414 184,570,312,500 30,517,578,125 0
16 414 1,998,291,015,625 152,587,890,625 1
17 4142 0,745,849,609,375 762,939,453,125 1
You can use a multiply-and-divide approach to reduce the floating point arithmetic.
1 0 1 1
This is equivalent to 1*2^(-1) + 0*2^(-2) + 1*2^(-3) + 1*2^(-4), which can be simplified to (1*2^3 + 0*2^2 + 1*2^1 + 1*2^0) / 2^4. Only the final division remains a floating point operation; all the rest is integer arithmetic. Multiplication by 2 can be implemented through a left shift.

Reduction or atomic operator on unknown global array indices

I have the following algorithm:
__global__ void Update(int N, double* x, double* y, int* z, double* out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
    {
        x[i] += y[i];
        if (y[i] >= 0.)
            out[z[i]] += x[i];
        else
            out[z[i]] -= x[i];
    }
}
Important to note: out is smaller than x. Say x, y and z are always the same size, say 1000, and out is always smaller, say 100. z holds the index in out that each element of x and y corresponds to.
This is all fine except for the updates to out. There may be clashes across threads, since z does not contain only unique values but has duplicates. Therefore I currently have this implemented with atomic add and subtract operations built on compare-and-swap. This is obviously expensive and means my kernel takes 5-10x longer to run.
I would like to reduce this; however, the only way I can think of doing so is for each thread to have its own version of out (which can be large, 10000+, times 10000+ threads). This would mean I set up 10000 double[10000] arrays (perhaps in shared memory?), call my kernel, and then sum across these arrays, perhaps in another kernel. Surely there must be a more elegant way to do this?
It might be worth noting that x, y, z and out reside in global memory. As my kernels (I have others like this) are very simple, I have not copied anything across to shared memory (nvvp on the kernel shows equal computation and memory, so I am thinking there is not much performance to be gained once you add the overhead of moving data from global to shared and back again; any thoughts?).
Method 1:
Build a set of "transactions". Since you only have one update per thread, you can easily build a fixed size "transaction" record, one entry per thread. Suppose I have 8 threads (for simplicity of presentation) and some arbitrary number of entries in my out table. Let's suppose my 8 threads wanted to do 8 transactions like this:
thread ID (i): 0 1 2 3 5 6 7
z[i]: 2 3 4 4 3 2 3
x[i]: 1.5 0.5 1.0 0.5 0.1 -0.2 -0.1
"transaction": 2,1.5 3,0.5 4,1.0 4,0.5 3,0.1 2,-0.2 3,-0.1
Now do a sort_by_key on the transactions, to arrange them in order of z[i]:
sorted: 2,1.5 2,-0.2 3,0.5 3,-0.1 3,0.1 4,1.0 4,0.5
Now do a reduce_by_key operation on the transactions:
keys: 2 3 4
values: 1.3 0.5 1.5
Now update out[i] according to the keys:
out[2] += 1.3
out[3] += 0.5
out[4] += 1.5
thrust and/or cub might be pre-built options for the sort and reduce operations.
Method 2:
As you say, you have arrays x, y, z, and out in global memory. If you are going to use z which is a "mapping" repeatedly, you might want to rearrange (group) or sort your arrays in order of z:
index (i): 0 1 2 3 4 5 6 7
z[i]: 2 8 4 8 3 1 4 4
x[i]: 0.2 0.4 0.3 0.1 -0.1 -0.4 0.0 1.0
group by z[i]:
index (i): 0 1 2 3 4 5 6 7
z[i]: 1 2 3 4 4 4 8 8
x[i]:-0.4 0.2 -0.1 0.3 0.0 1.0 0.4 0.1
This, or some variant of it, would allow you to eliminate having to repeatedly do the sorting operation in method 1 (again, if you were using the same "mapping" vector repeatedly).

Correct way to get weighted average of concrete array values along continuous interval

I've been searching the web for a while, but possibly (or probably) I am missing the right terminology.
I have arbitrary sized arrays of scalars ...
array = [n_0, n_1, n_2, ..., n_m]
I also have a function f: x -> y, with 0 <= x <= 1, where y is an interpolated value from array. Examples:
array = [1,2,9]
f(0) = 1
f(0.5) = 2
f(1) = 9
f(0.75) = 5.5
My problem is that I want to compute the average value over some interval r = [a..b], where a ∈ [0..1] and b ∈ [0..1]; i.e. I want to generalize my interpolation function f: x -> y to compute the average along r.
I'm struggling slightly to find the right weighting. Imagine I want to compute f([0.2, 0.8]):
array --> 1 | 2 | 9
[0..1] --> 0.00 0.25 0.50 0.75 1.00
[0.2,0.8] --> ^___________________^
The latter being the range of values I want to compute the average of.
Would it be mathematically correct to compute the average like this?

        1 * (1-0.8)   <- 0.2 'translated' to [0..0.25]
      + 2 * 1
      + 9 * 0.2       <- 0.8 'translated' to [0.75..1]
avg = ---------------
           1.4        <-- the sum of weights
This looks correct.
In your example, your interval's length is 0.6. In that interval, your number 2 takes up (0.75-0.25)/0.6 = 0.5/0.6 = 10/12 of the space. Your number 1 takes up (0.25-0.2)/0.6 = 0.05/0.6 = 1/12 of the space, and likewise your number 9.
This sums up to 10/12 + 1/12 + 1/12 = 1.
For better intuition, think about it like this: The problem is to determine how much space each array-element covers along an interval. The rest is just filling the machinery described in http://en.wikipedia.org/wiki/Weighted_average#Mathematical_definition .

Appending for loop / recursion / strange error

I have a MATLAB/Octave for loop which gives me an Inf error message along with incorrect data.
I'm trying to get 240, 120, 60, 30, 15, ... where every number is divided by two, and then that result is also divided by two.
But the code below gives me wrong values: when it reaches 30 (and 5, and a couple of others) it doesn't divide by two.
ang=240;
for aa=2:2:10
    ang=[ang;ang/aa];
end
240
120
60
30
40
20
10
5
30
15
7.5
3.75
5
2.5
1.25
0.625
24
12
6
3
4
2
1
0.5
3
1.5
0.75
0.375
0.5
0.25
0.125
0.0625
PS: I will be accessing these values from different arrays; that's why I used a for loop, so I can access the values using their indexes.
In addition to the divide-by-zero error you were starting with (fixed in the edit), the approach you're taking isn't actually doing what you think it is: ang/aa divides the entire vector by aa, aa is 2, 4, 6, 8, 10 on successive passes (not always 2), and [ang; ang/aa] doubles the vector's length every pass, so already-divided values get divided again. If you print out each step, you'll see why.
Instead of that approach, I suggest taking more of a "MATLAB way": avoid the loop by making use of vectorized operations.
orig = 240;
divisor = 2.^(0:5); #% vector of 2 to the power of [0 1 2 3 4 5]
ans = orig./divisor;
output:
ans = [240 120 60 30 15 7.5]
Try the following:
ang=240;
for aa=1:5
    % sz=size(ang,1);
    % ang=[ang;ang(sz)/2];
    ang=[ang;ang(end)/2];
end
You should be getting warning: division by zero if you're running it in Octave. That says pretty much everything.
When you divide by zero, you get Inf. Because of your recursion... you see the problem.
You can simultaneously generalise and vectorise by using logic:
ang=240; %Replace 240 with any positive integer you like
ang=ang*2.^-(0:log2(ang));
ang=ang(1:sum(ang==floor(ang)));
This will work for any positive integer (to make it work for negatives as well, replace log2(ang) with log2(abs(ang))), and will produce the vector down to the point at which it goes odd, at which point the vector ends. It's also faster than jitendra's solution:
octave:26> tic; for i=1:100000 ang=240; ang=ang*2.^-(0:log2(ang)); ang=ang(1:sum(ang==floor(ang))); end; toc;
Elapsed time is 3.308 seconds.
octave:27> tic; for i=1:100000 ang=240; for aa=1:5 ang=[ang;ang(end)/2]; end; end; toc;
Elapsed time is 5.818 seconds.

Generate random number between two numbers, with one rare number

I can generate a random number between two numbers in C using this:
arc4random()%(high-low+1)+low;
Now my requirement is that I want to make one number rare. That means if
high=5,
low=1,
and rare=3,
then 3 should appear much more rarely than 1, 2, 4 and 5.
Thanks
You can use tables to calculate your final roll, similar to how pen and paper RPGs do this same type of calculation:
Roll 1d21 (easily possible in code).
If you get 1-5, it counts as a 1
If you get 6-10, it counts as a 2
If you get 11-15, it counts as a 4
If you get 16-20, it counts as a 5
If you get a 21, it counts as a 3
The advantage to this option is you get a strong sense of the exact probabilities you are dealing with. You can get a feeling of exactly how rare or common each number is, and you get fine-grained control of how common each number is, in comparison to the other numbers.
You could also use fractions to generate the table. Use the least common multiple (LCM) to determine a common base; that base is the largest random number you will need. Then put all the fractions in like terms, and use the resulting numerators to determine the size of each number's range in the table.
With this automated solution, the input numbers are very easy to understand in relation to each other. E.g:
1/4 for 1
1/4 for 2
1/4 for 4
1/5 for 5
1/20 for 3
This would generate a table like so:
LCM = 20
1-5 = 1 (like terms - 5/20)
6-10 = 2 (5/20)
11-15 = 4 (5/20)
16-19 = 5 (4/20)
20 = 3 (1/20)
Some more on LCM: http://en.wikipedia.org/wiki/Least_common_multiple
One simple-to-understand option:
Generate one number to determine whether you're going to return the rare number (e.g. generate a number in the range [0-99], and if it's 0, return the rare number).
If you get to this step, you're returning a non-rare number: keep generating numbers in the normal range until you get any non-rare number, and return that
There are other alternative approaches which would only require you to generate a single number, but the above feels like it would be the simplest one to write and understand.
You could create an array containing the numbers according to their probability:
list = (1, 1, 2, 2, 3, 4, 4, 5, 5);
return list.itemAtIndex(random() % list.count());
This is not very elegant, but it works and easily scales should the probabilities get more complex.
The sum of all probabilities must be 1. Now we are working here with discrete probabilities over a finite range so we are looking at (here) 5 possibilities with some distribution you have, call them p1, p2, p3, p4 and p5 the sum of which is 1.
f0 = 0
f1 = p1
f2 = f1 + p2
f3 = f2 + p3
f4 = f3 + p4
f5 = f4 + p5 and must be 1
Generate a random number from 0 to 1 (we will assume it cannot be exactly 1). Find the first f value that is at least as large as it; that is the value of your random event. So perhaps
f1 = 0.222
f2 = 0.444
f3 = 0.555
f4 = 0.777
f5 = 1
If your random number is 0.645 then you have generated a 4 event.
With the above you have half as much chance of generating a 3 as any of the others. We can make it less likely still, e.g.:
f1 = 0.24
f2 = 0.48
f3 = 0.52
f4 = 0.76
f5 = 1
0.24 probability for each of the others and only 0.04 for a 3.
Let's go through this. First we use the srand() function to seed the randomizer. The computer generates random numbers based on the number fed to srand(); if you gave the same seed value, the same sequence of random numbers would be generated every time.
Therefore, we have to seed the randomizer with a value that is always changing. We do this by feeding it the value of the current time with the time() function.
Now, when we call rand(), a new random number will be produced every time.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int random_number(int min_num, int max_num);

int main(void) {
    srand(time(NULL)); // seed once, not on every call
    printf("Min : 0 Max : 5 %d\n", random_number(0, 5));
    printf("Min : 100 Max : 1000 %d\n", random_number(100, 1000));
    return 0;
}

int random_number(int min_num, int max_num)
{
    int result = 0, low_num = 0, hi_num = 0;
    if (min_num < max_num) {
        low_num = min_num;
        hi_num = max_num + 1; // +1 so max_num is included in the output
    } else {
        low_num = max_num;
        hi_num = min_num + 1; // +1 so min_num is included in the output
    }
    result = (rand() % (hi_num - low_num)) + low_num;
    return result;
}
while true
generate a random number
if it's not the rare number, return it
generate a second random number - say from 1 to 100
if that second number is <= the percentage chance of the rare number compared to the others, return the rare number
Note: this is fast for the common case of returning the non-rare number.
