Getting the index of a float number in a vector - arrays

I've executed this code, but it doesn't work as I expect:
A = 1:0.1:1.4
A =
1.0000 1.1000 1.2000 1.3000 1.4000
A == 1.3000
ans =
0 0 0 0 0
I thought I was going to get:
ans =
0 0 0 1 0
Why does it not work? And how can I make it work as I want?
Thank you.

That's the usual behaviour when you compare floats. Try A(4)-1.3; it'll give you something small but not zero. That's because floats have finite precision. In general, it's better not to test floats for equality.
The usual approach is to define a small tolerance (for example 1e-9) and compare taking that tolerance into account:
abs(A-1.3)<1e-9
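The same tolerance idea in Python/NumPy (a sketch, not MATLAB; math.isclose is the scalar equivalent):

import numpy as np
import math

A = np.arange(1.0, 1.45, 0.1)             # roughly 1.0, 1.1, 1.2, 1.3, 1.4
print(A == 1.3)                           # exact comparison: usually all False
print(np.abs(A - 1.3) < 1e-9)             # tolerance comparison: True at index 3
print([math.isclose(x, 1.3) for x in A])  # relative-tolerance equivalent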

0.1 has an infinite expansion when written in base 2:
0.000110011001100110011001100110011001100110011001100110011001100
shell code to obtain that:
bc -lq
obase=2
1/10
MATLAB stores this as an IEEE 754 double, which keeps only 53 significant bits, so the expansion is rounded off. Because of this, 0.1*3 and 0.3 are different.
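A quick way to see the rounded values behind this (a Python sketch; the same IEEE 754 doubles apply):

from decimal import Decimal

print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.1 * 3))   # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))       # 0.299999999999999988897769753748434595763683319091796875
print(0.1 * 3 == 0.3)     # False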

It's because of double precision. Try format long g and look at A again; you'll see that it's not exactly 1.3. Have a look at the MATLAB wiki to understand why that is. It's never a good idea to do an equality test on a floating point number.

Related

Eiffel: REAL_32.to_double gives a strange value

Trying to transform a real_32 to real_64, I'm getting
real_32: 61.55
real_64: 61.54999923706055
Am I wrong with the to_double function?
This is expected. In the particular example, the binary representation of the decimal 61.55 with single and double precision respectively is:
REAL_32: 0 10000100 11101100011001100110011
REAL_64: 0 10000000100 1110110001100110011001100110011001100110011001100110
As you can see, the trailing pattern 0011 is recurrent and should go ad infinitum to give a precise value.
When REAL_32 is assigned to REAL_64, the trailing 0011s are not added automatically, but filled with zeroes instead:
REAL_32: 0 10000100 11101100011001100110011
REAL_64: 0 10000000100 1110110001100110011001100000000000000000000000000000
In decimal notation, this corresponds to 61.54999923706055. What is essential here: 61.54999923706055 and 61.55 have exactly the same binary representation when using single-precision floating-point numbers. You can check it yourself with print ({REAL_32} 61.55 = {REAL_32} 61.54999923706055). In other words, the results you get are correct, and the two values are the same. The only difference is that when a REAL_32 is printed, it is rounded to a smaller number of significant decimal digits.
This is the reason why accounting and bookkeeping software never uses floating-point numbers, only integer and decimal.
As a workaround when deserializing from JSON into TypeScript, the following worked:
a_real_32.out.to_real_64
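The same widening effect can be reproduced outside Eiffel, for instance with a Python sketch that stores 61.55 as a 32-bit float and reads it back as a double:

import struct

# Pack 61.55 into 32 bits, then read it back as a Python (64-bit) float.
as_float32 = struct.unpack('<f', struct.pack('<f', 61.55))[0]
print(as_float32)            # 61.54999923706055
print(as_float32 == 61.55)   # False: the widened value is not the literal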

ValueError: matplotlib display text must have all code points < 128 or use Unicode strings

In my code I get an array like this:
array(['2.83100e+07', '2.74000e+07', '2.79400e+07'],dtype='|S11')
How can I "cut" my values like:
2.83100e+07 --> 2.831 ?
Best regards!
Use a for loop and round(n):
In [23]: round(66.66666666666,4)
Out[23]: 66.6667
In [24]: round(1.29578293,6)
Out[24]: 1.295783
help on round():
round(number[, ndigits]) -> floating point number
Round a number to a given precision in decimal digits (default 0 digits). This always returns a floating point number. Precision may be negative
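Putting the pieces together for the array in the question (a sketch; it assumes the goal is to scale each value down by 1e7 before rounding):

import numpy as np

a = np.array(['2.83100e+07', '2.74000e+07', '2.79400e+07'], dtype='|S11')

# Convert each byte string to a float, scale it down, and round.
cut = [round(float(s.decode()) / 1e7, 4) for s in a]
print(cut)   # [2.831, 2.74, 2.794]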

What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand the former produces a vector, just as the latter does.
Can anyone state me some situation where linspace is useful over t = 0:1:20?
It's not just the usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
the two are not quite the same: the main difference and advantage of linspace is that it generates a vector of integers with the desired length (100 by default) and scales it afterwards to the desired range. The colon operator creates the vector directly by increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to be exactly in its right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating point issue, which linspace can often avoid: a single division of an integer is not as delicate as the cumulative sum of floating point numbers. But as Mark Dickinson pointed out in the comments:
You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for. In my opinion it's a matter of how likely you are to get floating point issues, how much you can reduce their probability, and how small you can set the tolerances. Using linspace reduces the probability of these issues occurring; it's not a guarantee.
That's the code of linspace:
n1 = n-1;
c = (d2 - d1).*(n1-1);              % opposite signs may cause overflow
if isinf(c)
    y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1);
else
    y = d1 + (0:n1).*(d2 - d1)/n1;
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage to linspace (apart from usability), but when it comes to delicate floating point tasks, there may be.
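A rough analogy in Python/NumPy (just a sketch; these are not MATLAB's actual algorithms): building a grid by repeatedly adding an increment lets rounding errors accumulate, while scaling an integer ramp rounds each element only once.

import numpy as np

start, stop, n = 0.0, 10 * np.pi, 10001
step = (stop - start) / (n - 1)

# Colon-like construction: accumulate the increment (errors pile up).
by_increments = start + np.cumsum(np.concatenate(([0.0], np.full(n - 1, step))))

# linspace-like construction: scale an integer ramp (one rounding per element).
by_scaling = start + np.arange(n) * (stop - start) / (n - 1)

print(np.array_equal(by_increments, by_scaling))   # typically False
print(np.max(np.abs(by_increments - by_scaling)))  # tiny accumulated difference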
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful where you know the number of elements you want rather than the size of the "step" between them. So if, as a contrived example, I asked for a vector with 360 elements between 0 and 2*pi, it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient.
For a simple real-world application, see this answer, where linspace is helpful in creating a custom colour map.
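As a rough illustration of the colour-map idea (a Python/NumPy sketch, not the linked answer itself): linspace makes it easy to ask for exactly 256 interpolation weights between two colours.

import numpy as np

# Interpolate from blue to red in exactly 256 steps.
blue = np.array([0.0, 0.0, 1.0])
red = np.array([1.0, 0.0, 0.0])
t = np.linspace(0.0, 1.0, 256)[:, None]   # 256 weights, endpoints included
cmap = (1 - t) * blue + t * red           # 256x3 array of RGB rows
print(cmap[0], cmap[-1])                  # first row is blue, last row is red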

matlab division by vector?

Where can I find the documentation for this sort of division and output? Why are the results different from 1./a?
a = [4,5,6,8]
>> 1/a'
ans =
0 0 0 0.1250
For arrays, the operator / corresponds to the mrdivide function: the result of B/A is one solution of the linear system x*A = B.
It is completely different from the operator ./, which corresponds to rdivide (elementwise division).
Note that, as stated in the comments, a scalar in Matlab is treated as a 1x1 matrix.
You are asking MATLAB to solve the equation x*[4,5,6,8]' = 1; one possible solution is [0, 0, 0, 0.125].
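The distinction can be mimicked in Python/NumPy (a sketch; NumPy's lstsq returns the minimum-norm solution rather than the sparse solution MATLAB's mrdivide happens to pick, but both satisfy x*a' = 1):

import numpy as np

a = np.array([4.0, 5.0, 6.0, 8.0])

# Elementwise division, the analogue of MATLAB's 1./a:
print(1.0 / a)                    # reciprocals: 0.25, 0.2, 0.1667..., 0.125

# Solving x * a' = 1, the analogue of MATLAB's 1/a' (mrdivide):
x, *_ = np.linalg.lstsq(a.reshape(1, -1), np.array([1.0]), rcond=None)
print(x)                          # minimum-norm solution, not [0 0 0 0.125]
print(x @ a)                      # ~1.0; any solution satisfies the system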

Bitmask to flip bits ... without XOR?

Pretty simple, really. I want to negate an integer which is represented in 2's complement, and to do so, I need to first flip all the bits in the byte. I know this is simple with XOR--just use XOR with a bitmask 11111111. But what about without XOR? (i.e. just AND and OR). Oh, and in this crappy assembly language I'm using, NOT doesn't exist. So no dice there, either.
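For reference, the identity the question relies on, sketched in Python for a single byte (this is the XOR version the asker already knows; the point of the question is whether it can be done without XOR or NOT):

def negate_byte(x):
    # Two's-complement negation of an 8-bit value: flip every bit, then add 1.
    flipped = x ^ 0b11111111     # XOR with the all-ones mask flips each bit
    return (flipped + 1) & 0xFF  # keep the result within one byte

print(negate_byte(5))     # 251, i.e. -5 viewed as an unsigned byte
print(negate_byte(251))   # 5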
You can't build a NOT gate out of AND and OR gates.
As I was asked to explain, here it is nicely formatted. Let's say you have any number of AND and OR gates. Your inputs are A, 0 and 1. There are six cases to consider: A paired with each of the three signals, under each of the two gates (pairs not involving A only ever produce the constants 0 and 1). Now:
Operation    Result
A AND A      A
A AND 1      A
A AND 0      0
A OR A       A
A OR 1       1
A OR 0       A
So after you feed any of your signals into the first gate, your new set of signals is still just A, 0 and 1. Therefore any combination of these gates and signals will only ever get you A, 0 and 1. If your final output is A, then for both values of A it fails to equal !A; if your final output is the constant 0, then A = 0 is a value for which the output is not !A; the same goes for the constant 1.
Edit: that monotonicity comment is also correct! Let me repeat it here: if you change any input of AND or OR from 0 to 1, the output won't decrease. Therefore if you claim to have built a NOT gate, I will change your input from 0 to 1; your output can't decrease, but it should -- that's a contradiction.
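A small brute-force check (a Python sketch, not part of the original argument) confirms the table: closing the set {A, 0, 1} under AND and OR never produces NOT A.

from itertools import product

# Represent each one-input function by its truth table (f(0), f(1)).
funcs = {(0, 1), (0, 0), (1, 1)}   # A, constant 0, constant 1

changed = True
while changed:                      # close the set under AND and OR
    changed = False
    for f, g in product(list(funcs), repeat=2):
        for h in ((f[0] & g[0], f[1] & g[1]),    # f AND g
                  (f[0] | g[0], f[1] | g[1])):   # f OR g
            if h not in funcs:
                funcs.add(h)
                changed = True

print(funcs)               # still just {(0, 0), (0, 1), (1, 1)}
print((1, 0) in funcs)     # False: NOT A is unreachable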
Does (foo & ~bar) | (~foo & bar) do the trick?
Edit: Oh, NOT doesn't exist. Didn't see that part!
