I'm facing an issue with the exponential function in PostgreSQL. If I use this select statement, select exp(5999), I'm getting:
ERROR: value out of range: overflow
SQL state: 22003
If I use this select statement select exp(5999.1), I'm getting the exponential result.
In another case, if I use the statement select exp(9999.1), I'm getting the following error:
ERROR: argument for function "exp" too big
SQL state: 22003
Please let me know why this issue is happening and what the solution is for this kind of issue.
I think your first problem is caused by the fact that the behaviour of exp() depends on the input type. Because you're passing an integral value, the argument is treated as double precision, and it's complaining that the result won't fit in that type's range.
That's likely why exp(5999.1) (and probably exp(5999.0)) works: a decimal literal is treated as numeric, which has a far larger range.
Your second error is slightly different: it's complaining not of overflow during the calculation, but that the input argument itself is too large. It's possible that exp() has some sanity checking on its input somewhere.
Even floating point values run out of range eventually. e^9999 is somewhere around 10^4300, a rather large number and probably well beyond what you'd expect to see in a database application.
In fact, I'd be interested in the use case of such large numbers in a database application. This sounds like something better suited to a bignum package like MPIR.
If you pass an INTEGER argument, the exp() function will try to return a double precision value. Just above n = 709 it reaches the limit of a 64-bit floating point number (about 10^308) and fails to calculate e^n. The solution is to pass your argument as NUMERIC:
SELECT EXP(710); -- failure!
SELECT EXP(710::NUMERIC); -- OK
SELECT EXP(5999.1::NUMERIC); -- huge but OK
EDIT!
As for the ERROR: argument for function "exp" too big (SQL state: 22003), I've tried to write a workaround. Just run this:
SELECT n, POWER(EXP(1::NUMERIC), n) FROM (VALUES(9998), (9999), (10000)) AS foo (n)
and it will work. But then change 9999 to 9999.1 and you will get that stupid error again. This is ridiculous! 9999.1 is too big but 10000 is fine :D It looks like Postgres doesn't like a decimal point in POWER()'s exponent. Sorry, but I can't fix that.
One solution would be to use the arithmetic property of power and write POWER(POWER(EXP(1::NUMERIC), n*10), 0.1) but that value combination is still too big for Postgres' implementation of power. Good luck with your battle.
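One more thing you could try (just a sketch, I have not tested it on every version) is to split the exponent into its integer and fractional parts, so the fractional bit never reaches POWER(): e^9999.1 = e^9999 * e^0.1.
SELECT POWER(EXP(1::NUMERIC), 9999) * EXP(0.1::NUMERIC); -- integer part via POWER, fractional part via EXP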
Related
I receive an "invalid floating point operation" error when I take SQRT(X) of a value.
Under which conditions does this error occur, and what could I do to the X value to prevent it?
For some context, the X value is a calculated average sales for stock items.
I have tried SQRT(ABS(x)) with no luck.
Thanks
As far as I know, the only case when SQRT(X) gives the error "An invalid floating point operation occurred" is when X is negative. However, you already fixed this by using the ABS function like this: SQRT(ABS(X)).
So, my guess is that the error does not really come from the SQRT function but from something else nearby. Let's look at the expression you gave:
SQRT(2*50*ABS(X)) / NULLIF(B*0.2/12,0))
This expression obviously has an extra right parenthesis. This makes me think that it is only a part of a larger expression, and that the larger expression is the reason for the error.
For instance, if B is 0 then the NULLIF() becomes NULL. You divide by NULL, thus getting a NULL result. Now, what do you do with this result? Maybe some more calculations that do not handle the NULL well?
There is a lot of guessing here. If my guesses did not point you in the right direction, then it would be helpful to know which values of B and X give the error, and also the full statement that includes the expression.
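If it helps, here is a hedged sketch of how I would guard the whole thing; the table name is a placeholder, and ISNULL() is only one possible way to decide what a divide-by-NULL should become:
SELECT ISNULL(SQRT(2 * 50 * ABS(X)) / NULLIF(B * 0.2 / 12, 0), 0) AS Result -- NULL (from B = 0) is turned into 0 here
FROM dbo.StockItems; -- placeholder table name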
There is a T-SQL command that gives the type of a variable. You can check the type of the value before calculating its square root.
The command is:
SQL_VARIANT_PROPERTY
(See the site https://blog.sqlauthority.com/2013/12/15/sql-server-how-to-identify-datatypes-and-properties-of-variable/)
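For example (a quick sketch with a made-up variable), you could check the type before taking the square root:
DECLARE @x FLOAT = 123.45; -- made-up test value
SELECT SQL_VARIANT_PROPERTY(@x, 'BaseType') AS BaseType,
       SQL_VARIANT_PROPERTY(@x, 'Precision') AS Precision,
       SQL_VARIANT_PROPERTY(@x, 'Scale') AS Scale;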
On the other hand, I'm sorry to tell you that I do not know if it works with the 2018 version, because I do not have the means to check it. But you can find the equivalent command for your SQL Server version.
Hope that can help.
I know that similar questions have been asked again in the past, but I think my case is slightly different. I have a column which has Logarithmic values and I'm trying to invert them using the following formula:
SELECT POWER(10,CAST(9.695262723 AS NUMERIC(30,15)))
Let's say the value 9.695262723 is one of the values of that column.
When trying to run this query I get an Arithmetic overflow error for type int, value = 4957500001.400178.
On the other hand, the same query works fine for smaller values e.g. SELECT POWER(10,CAST(8.662644523 AS NUMERIC(30,15)))
How could I overcome that error and calculate the inverse values of the log10 entries I have? Just for information, the greatest value in the table (on a log10 scale) is 12.27256096.
The problem here is your first input parameter (10), which SQL Server will, by default, treat as the datatype int. int has a maximum value of 2^31 - 1 (2,147,483,647), and the number 4,957,500,001 is far larger than this, so you need to use a bigint:
SELECT POWER(CONVERT(bigint,10),CONVERT(numeric(30,15),9.695262723));
Edit: If you need to retain the decimal places, then use a numeric with a large enough scale and precision, instead of bigint.
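For instance (a sketch, not tested against every version), something like this should keep the fractional part, because POWER over a decimal base returns a decimal result:
SELECT POWER(CONVERT(numeric(20,6), 10), CONVERT(numeric(30,15), 9.695262723)) AS InvertedLog; -- base is numeric, so the result keeps its decimals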
I have a weird problem, if you can call it a problem.
Sorry in advance, the database is in French.
I have a table which holds the time a user spent on a specific task.
I want to sum the time spent for every task.
I'm able to get a sum from the database but the data is kind of weird.
The field is a real number to start with.
For example, if I sum 0,35 + 0,63 + 1 I should get 1,98.
But instead Access gives me 1,97999998927116.
If I were to sum only integers, the result would be correct.
I know I could simply use a round function to get rid of it.
But I would like to know why it does this.
This is because Sum uses floating-point arithmetic if you execute it on a column that is defined as a Single or a Double.
Floating-point arithmetic is inherently approximate: many decimal fractions, such as 0,35 and 0,63, cannot be represented exactly in binary.
You can avoid these kinds of errors by defining your column as a Decimal or as Currency.
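If changing the column type is not an option, a minimal sketch (the table and field names Taches, IdTache and TempsPasse are made up) would be to convert inside the query, since CCur() switches the arithmetic to the exact Currency type:
SELECT IdTache, SUM(CCur(TempsPasse)) AS TempsTotal
FROM Taches
GROUP BY IdTache;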
I was performing some simple financial calculations in SQL Server when I discovered some odd behavior. I was trying to convert a string of numbers to a decimal type. While the string did not contain a decimal point, I knew from my specifications that the last 3 positions in the string were supposed to be behind the decimal point.
My first approach was flawed, but went something like this:
select convert(decimal(11,3),89456123/1000) as TotalUnits
This resulted in 89456.000. Performing the division before the cast resulted in the decimal parts being truncated.
So I moved the division operation outside the cast, like this:
select convert(decimal(11,3),89456123)/1000 as TotalUnits
This resulted in an explosion of positions after the decimal point. It returned 89456.12300000
According to my decimal specification, I wanted 11 digits, with 3 of them behind the decimal point. Now I have 13 total digits, with 8 behind the decimal. What happened?
To get what I want, I guess I have to double cast, like this:
select convert(decimal(11,3), convert(decimal(11,3),89456123)/1000)
which gives 89456.123.
It turns out no matter what I divide by, the resulting decimal point explosion is the same. Is the division converting the datatype into a double or something?
My question is this:
Why is this happening, and is there a more elegant way to compensate for it, instead of double-casting to decimal?
EDIT
I found this similar question on SO, but it looks like they are again double-casting.
SQL Server does integer arithmetic here; to force it to use numeric, you can multiply by 1.0.
No need to use convert twice. This gives 89456.123 without the double convert.
select convert(decimal(11,3),89456123*1.0/1000) as TotalUnits
Why does convert(decimal(11,3),89456123)/1000 end up with 8 decimal places? The rules demand it: numeric division has rather complicated rules about the resulting type.
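You can see what type the division actually produces by pointing SQL_VARIANT_PROPERTY at the whole expression, the same trick as with the literal below:
SELECT SQL_VARIANT_PROPERTY(CONVERT(decimal(11,3), 89456123) / 1000, 'BaseType') AS BaseType,
       SQL_VARIANT_PROPERTY(CONVERT(decimal(11,3), 89456123) / 1000, 'Precision') AS Precision,
       SQL_VARIANT_PROPERTY(CONVERT(decimal(11,3), 89456123) / 1000, 'Scale') AS Scale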
When you write a literal like 1.0 (or 1.11 below), you end up with a numeric of the smallest precision and scale that can represent that value:
SELECT SQL_VARIANT_PROPERTY(1.11, 'BaseType')
SELECT SQL_VARIANT_PROPERTY(1.11, 'Precision')
SELECT SQL_VARIANT_PROPERTY(1.11, 'Scale')
SELECT SQL_VARIANT_PROPERTY(1.11, 'TotalBytes')
What should you do? I think there is no really elegant solution because of the complicated rules. Any solution I can think of involves rather crazy type inference of intermediate results. I recommend pretty much the same solution that RADAR already gave:
select convert(decimal(11,3), convert(decimal(11, 3), 89456123)/1000) as TotalUnits
The main difference is that I think the *1.0 "trick", used as a shorthand for a cast, obfuscates the meaning of the code. If you happen to like it, feel free to use it, though.
select convert(decimal(11,3),89456123/CONVERT(decimal(11,3),1000))
I inherited a project that uses SQL Server 200x, wherein a column whose value is always treated as a percentage in the problem domain is stored as a whole number rather than as its decimal fraction. For example, 70% (literally 0.7) is stored as 70, 100% as 100, etc. Aside from the need to remember to multiply by 0.01 on retrieved values and by 100 before persisting values, it doesn't seem to be a problem in and of itself. It does make my head explode though... so is there a good reason for it that I'm missing? Are there compelling reasons to fix it, given that there is a fair amount of code written to work with the pseudo-percentages?
There are a few cases where greater than 100% occurs, but I don't see why the value wouldn't just be stored as 1.05, for example, in those cases.
EDIT: Head feeling better, and slightly smarter. Thanks for all the insights.
There are actually four good reasons I can think of that you might want to store—and calculate with—whole-number percentage values rather than floating-point equivalents:
Depending on the data types chosen, the integer value may take up less space.
Depending on the data type, the floating-point value may lose precision (remember that not all languages have a data type equivalent to SQL Server's decimal type).
If the value will be input from or output to the user very frequently, it may be more convenient to keep it in a more user-friendly format (it's a choice between converting when you display and converting when you calculate ... but see the next point).
If the principal values are also integers, then
principal * integerPercentage / 100
which uses all-integer arithmetic, is usually faster than its floating-point equivalent (likely significantly faster in the case of a floating-point type equivalent to T-SQL's decimal type); see the sketch below.
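To make that last point concrete, here is a minimal sketch; the names (Orders, PrincipalCents, PctWhole) are made up, and remember that integer division truncates toward zero:
SELECT PrincipalCents * PctWhole / 100 AS DiscountedCents -- all-integer arithmetic; 999 * 70 / 100 gives 699, not 699.3
FROM dbo.Orders -- made-up table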
If it's a byte field, then it takes up less room in the db than floating point numbers do, but unless you have millions and millions of records, you'll hardly see a difference.
Since floating-point values can't reliably be compared for equality, an integer may have been used to make the SQL simpler.
For example
(0.3==3*.1)
is usually False.
However
abs( 0.3 - 3*.1 )
is a tiny number (about 5.55e-17). But it's a pain to have to do everything with (column - SomeValue) BETWEEN -0.0001 AND 0.0001 or ABS(column - SomeValue) < 0.0001. You'd rather do column = SomeValue in your WHERE clause.
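A quick T-SQL illustration of the straight comparison failing (a sketch you can paste into a query window):
SELECT CASE WHEN CAST(0.3 AS float) = 3 * CAST(0.1 AS float)
            THEN 'equal' ELSE 'not equal' END AS StraightCompare, -- 'not equal': 0.1 has no exact binary form
       ABS(CAST(0.3 AS float) - 3 * CAST(0.1 AS float)) AS Difference -- tiny, but not zero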
Floating point numbers are prone to rounding errors and, therefore, can act "funny" in comparisons. If you always want to deal with it as fixed decimal, you could either choose a decimal type, say decimal(5,2), or do the convert and store as int thing that your db does. I'd probably go the decimal route, even though the int would take up less space.
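If you do go the decimal route, a minimal sketch (made-up table and column names) might look like this, storing 70% as 70.00 and still handling the over-100% cases exactly:
CREATE TABLE dbo.ItemRates (
    ItemId int NOT NULL PRIMARY KEY,
    PctRate decimal(5,2) NOT NULL -- exact fixed-point, e.g. 70.00 or 103.50, so equality comparisons are safe
);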
A good guess is because anything you do with integers (storing, calculating, stuffing into an edit field for a user, etc.) is marginally easier and more efficient than doing the same with floating point numbers. And the rounding issues aren't so obvious when you look at the data.
If these are numbers that end users are likely to see and interact with, percentages are easier to understand than decimals.
This is one of those situations where a notation aid can help; in the program, be consistent in using a prefix (Hungarian) or postfix to specify values that are percentages vs. those that are decimal. If you can extend a naming convention to the database fields themselves, so much the better.
And to add to the data storage issue, if you can use integer arithmetic for whatever processing you are doing, the performance is much better than when doing floating point arithmetic... So storing the percentages as integer values may allow the processing logic to utilize integer arithmetic.
If you're actually using them as a coefficient (or expect users of the database to do this sort of thing in reports), there's a case for storing them as a coefficient - particularly if there's a reason to do calculations involving more than one.
However, if you do this you should be consistent - either all percentages or all coefficients.