I have code
procedure _UUEncode;
var
Sg: string;
Triple: string[3];
begin
...
Byte(Sg[1]) := Byte(Sg[1]) + Length(Triple); // <- this line raises the error
...
end;
I get a "Left side cannot be assigned to" error. Can someone help me?
I am trying to convert a Delphi 7 component to an XE2 component.
Thanks for any suggestions, I really appreciate it. Maybe someone has a checklist of things to pay attention to when converting a Delphi 7 VCL component to XE2.
I would write it like this, in all versions of Delphi:
inc(Sg[1], Length(Triple));
It's always worth avoiding casts if possible. In this case you want to increment an ordinal value, and inc is exactly what does that.
The reason your typecast failed is that casts on the target of an assignment are special. These typecasts are known as variable typecasts and the documentation says:
You can cast any variable to any type, provided their sizes are the same and you do not mix integers with reals.
In your case the failure is because the sizes do not match. That's because Char is two bytes wide in Unicode Delphi. So, the most literal conversion of your original code is:
Word(Sg[1]) := Word(Sg[1]) + Length(Triple);
However, it's just better and clearer to use inc.
It's also conceivable that your uuencode function should be working with AnsiString since uuencode maps binary data to a subset of ASCII. If you did switch to AnsiString then your original code would work unchanged. That said, I still think inc is clearer!
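The idea behind inc is the same in any language: adjust the character's ordinal value rather than reinterpreting its storage. A rough Python sketch of the equivalent of inc(Sg[1], Length(Triple)), assuming the uuencode convention that a line's first character encodes its byte count as chr(32 + n) (the "ABC" payload below is just a placeholder):

```python
def bump_length_char(line: str, triple: str) -> str:
    # Equivalent of Delphi's inc(Sg[1], Length(Triple)): add the triple's
    # length to the ordinal of the first character (the uuencode count char).
    return chr(ord(line[0]) + len(triple)) + line[1:]

line = bump_length_char(chr(32) + "ABC", "abc")
print(ord(line[0]) - 32)    # the line now claims 3 encoded bytes
```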
I receive an invalid floating point operation error when I SQRT(X) a value.
Under what conditions does this error occur, and what could I do to the X value to prevent it?
For some context, the X value is a calculated average sales for stock items.
I have tried SQRT(ABS(x)) with no luck.
Thanks
As far as I know, the only case when SQRT(X) gives the error "An invalid floating point operation occurred" is when X is negative. However, you already fixed this by using the ABS function like this: SQRT(ABS(X)).
So, my guess is that the error does not really come from the SQRT function but from something else nearby. Let's look at the expression you gave:
SQRT(2*50*ABS(X)) / NULLIF(B*0.2/12,0))
This expression obviously has an extra right parenthesis. This makes me think that it is only a part of a larger expression, and that the larger expression is the reason for the error.
For instance, if B is 0 then the NULLIF() becomes NULL. You divide by NULL, thus getting a NULL result. Now, what do you do with this result? Maybe some further calculations that do not handle the NULL well?
There is a lot of guessing here. If my guesses did not point you in the right direction, it would be helpful to know which values of B and X give the error, and also the full statement that includes the expression.
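To illustrate the guess, here is the same expression sketched in Python, with hypothetical B and X values and None playing the role of SQL NULL:

```python
import math

def expr(X, B):
    # Mirrors SQRT(2*50*ABS(X)) / NULLIF(B*0.2/12, 0)
    divisor = B * 0.2 / 12
    if divisor == 0:
        return None              # NULLIF(..., 0) -> NULL; dividing by NULL yields NULL
    return math.sqrt(2 * 50 * abs(X)) / divisor

print(expr(-4.0, 6))  # abs() keeps sqrt's argument non-negative -> roughly 200
print(expr(4.0, 0))   # B = 0 would divide by zero; NULLIF turns it into None
```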
There is a T-SQL function that returns the type of a variable. You can check the type of the variable before calculating its square root.
The command is :
SQL_VARIANT_PROPERTY
(See the site https://blog.sqlauthority.com/2013/12/15/sql-server-how-to-identify-datatypes-and-properties-of-variable/)
On the other hand, I'm sorry to tell you that I do not know if it works with the 2018 version, because I do not have the means to check it. But you can find the equivalent command for your SQL Server version.
Hope that can help.
It may just be me, but... even though most SQL developers may consider CAST and CONVERT to be very basic stuff, and that may be true, I find Microsoft's documentation page on CAST and CONVERT to be one of the most hideous, unintuitively laid-out, hard-to-understand things I have ever seen - and much of their documentation is great. For example, the page constantly blends CAST and CONVERT together, jumping back and forth between them in each sentence, rather than dealing with them separately. And who puts the target_type as the first argument? Putting the expression first would be more intuitive - and would follow the syntax of 99% of other programming languages. UH
MS says that I can only convert to 3 data types: (well actually I'm not really sure if this applies to both CAST and CONVERT, since they ARE, in fact, different... But according to the layout of that webpage, it apparently applies equally to both - even though I already know for a fact that it is not true for CAST, which I use much more frequently).
It says: "Is the target data type. This includes xml, bigint, and sql_variant"
Putting aside for the moment the fact that I CAST things to many other data types all the time (date, varchar), my immediate question is: if I can only CONVERT to those data types, then why does this work?
select CONVERT(varchar(200), cast(50 as smallint))
And finally, I'd like to run an INSERT that will be getting a smallint and putting it into a varchar(200) column.
All I'm trying to do is avoid any failures, so maybe I don't really "need" to convert or cast over to varchar, but any comments on
answer on what is my apparent misunderstanding about the CONVERT documentation
or
how to safely convert it to insert to varchar
are welcome. As long as you're not just overly unpleasant, since there are always those MS fans who get hot under the collar at all critiques of MS .. :|
Yes, you can convert from smallint to varchar.
1) answer on what is my apparent misunderstanding about the CONVERT
documentation
This may be the product of a general lack of understanding of what data types are, how they can be converted from one type to another and, equally important, what styles are when it comes to the aesthetic representation of a data type.
CAST is an explicit cast operation with no style options.
CONVERT is also an explicit cast that gives you the ability to specify a style for the output.
The documentation clearly states:
Implicit Conversions
Implicit conversions are those conversions that occur without
specifying either the CAST or CONVERT function. Explicit conversions
are those conversions that require the CAST or CONVERT function to be
specified. The following illustration shows all explicit and implicit
data type conversions that are allowed for SQL Server system-supplied
data types. These include xml, bigint, and sql_variant. There is no
implicit conversion on assignment from the sql_variant data type, but
there is implicit conversion to sql_variant.
For your second question
2) how to safely convert it to insert to varchar
It depends on what you mean by safe. Converting to varchar is the conversion most likely to succeed. But whenever you cast to any other data type you are intrinsically changing the very nature of the data, and you will lose precision when casting to smaller types (or applying styles).
The documentation clearly states:
Truncating and Rounding Results
When you convert character or binary expressions (char, nchar,
nvarchar, varchar, binary, or varbinary) to an expression of a
different data type, data can be truncated, only partially displayed,
or an error is returned because the result is too short to display.
Conversions to char, varchar, nchar, nvarchar, binary, and varbinary
are truncated, except for the conversions shown in the following
table.
In other words, casting is never safe.
Numbers always get silently truncated for me. I would propose:
Option 1
Compare the converted value with the original value.
DECLARE @ORIGINAL DECIMAL(13,2) = -99999999999.99
DECLARE @EXPECTED VARCHAR(15) = ''
SELECT @EXPECTED = CONVERT(VARCHAR(15), @ORIGINAL)
IF CONVERT(DECIMAL(13,2), @EXPECTED) != @ORIGINAL SELECT 'Ooops'
Option 2
Make sure that all possible values will fit in target varchar.
Decimal(13,2): the widest number possible, "-99999999999.99", needs varchar(15):
- 13 chars for digits
- 1 char for the decimal separator
- 1 char for the minus sign
Smallint stores 2 bytes, from "-32768" to "32767", and needs varchar(6):
- 5 chars for digits
- 1 char for the minus sign
Not sure if you need chars for thousands separators, or if you can change it via settings.
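Both options boil down to a round-trip check or a width check; a language-agnostic sketch in Python (illustration only, not T-SQL):

```python
def fits_in_varchar(value, width):
    # Option 2 as a predicate: does the textual form fit without truncation?
    return len(str(value)) <= width

def round_trips(value, width):
    # Option 1 as a predicate: convert, convert back, compare.
    s = str(value)[:width]          # simulate silent truncation to varchar(width)
    return int(s) == value

print(fits_in_varchar(-32768, 6))   # True: "-32768" is exactly 6 chars
print(fits_in_varchar(-32768, 5))   # False: would be truncated
print(round_trips(32767, 6))        # True
```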
I am using an EV3 Cube to scan a sheet that represents a binary number; i.e a black line represents a 1 and a white line represents a 0.
Using this, I generate a numeric array consisting of 1s and 0s: I use an Index Array block to split it into single digits, use a quick comparison (!= 0) to generate their Boolean values, and then, using the Build Array block, turn it into a Boolean array.
However, despite this, when using the Convert Boolean Array to Integer block, I receive an error which I do not know the reason for.
If anyone could help me, I would be grateful! Thank you.
(By the way, I am a freshman engineering student with no prior knowledge of LabVIEW, just a year of C++ and two years of Java to help me, so thorough explanations would be much easier for me to comprehend.)
Attached are pictures of my code along with the error I receive.
Unfortunately the error isn't fully visible, as it is truncated in your screenshot; it would help to either have the code or be able to read the entire message.
But from what I can see, I'm guessing it says this is a target-specific error: the Boolean Array To Number function is not supported.
This would mean that a function which is normally available in the PC version of LabVIEW does not work on the target platform (the embedded CPU and OS of your EV3).
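If the block really is unsupported on the EV3 target, you can replace it with plain arithmetic in a loop, which every target supports. The idea, sketched in Python (bit order assumed most-significant first, matching a left-to-right scan of the sheet):

```python
def bool_array_to_int(bits):
    # Fold the bits into an integer: double the accumulator, then add the new bit.
    value = 0
    for b in bits:
        value = value * 2 + (1 if b else 0)
    return value

print(bool_array_to_int([True, False, True, True]))  # 0b1011 -> 11
```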
I'm writing a bison/flex parser, with multiple data types, all compatible with ANSI C. It won't be a C language, but will retain its data types.
Thing is... I am not sure how to do this correctly.
For example, in an expression, say 'n1' + 'n2', if 'n1' is double and 'n2' is a 32 bit integer, I will need to do type conversion right? How to do it correctly?
i.e. I will logically need to evaluate which type is bigger (here it's double), then convert the int32 to double and then perform the add operation, which would result in a double of value n1 + n2.
I also want to provide support for type casting.
What's the best way to do it correctly? Is there a way to do it nicely, or will I have to write a billion conversion functions like uint32todouble, int32todouble, int32tolongdouble, int64tolongdouble, etc.?
Thanks!
EDIT: I have been asked to clarify my question, so I will.
I agree this is not directly related to bison/flex, but I would like people experienced in this area to give me some hints.
Say I have such an operation in my own 'programming' language (I would say it's more of a scripting language, but anyway), i.e. the one I would parse:
int64 b = 237847823435ll
int64 a = int64(82 + 3746.3746434 * 265.345 + b)
Here, the int64() pseudo-function is a type cast. First, we can see that 82 is an int constant, followed by the doubles 3746.3746434 and 265.345, and b is an int64. So when I perform the assignment to 'a', I will have to:
Change the type of 82 to double
Change the type of b to double
Do the calculations
Since we have a double and we want to cast it to an int64, convert the double to an int64 and store the result in variable 'a'
As you can see, that's quite a lot of type changes... And I wonder how I can do them in the most elegant way, with the least work possible. I'm talking about the internal implementation.
I could for example write things like :
int64_t double_to_int64(double k) {
return (int64_t) k; // make specific double to int64 conversion
}
one for each of the types, so I'd have a function specific to each conversion; but it would take quite a lot of time to achieve, and besides, it's an ugly way of doing things. Since some of the variables and number tokens in my parser/lexer are stored in buffers (for different reasons), I don't really see how I could convert from one type to another without writing such functions. Not to mention that with all the unsigned/signed types, it will double the number of required functions.
Thanks
This has nothing to do with flex or bison. It is a language design question.
I suggest you have a look at the type promotion features of other languages. For example, C and Java promote byte, char, and short to int whenever used in an expression. So that cuts a lot of cackle straight away.
These operations are single instructions on the hardware. You don't need to write any functions at all; just generate the appropriate code. If you're designing an interpretive system, design the p-code accordingly.
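One common design is the "usual arithmetic conversions" approach: give each type a rank and promote both operands to the higher rank, so you need one table lookup instead of N×N conversion functions. A minimal sketch in Python (the type names and ranks are placeholders for your language's own):

```python
# Higher rank wins; mirrors the spirit of C's usual arithmetic conversions.
RANK = {"int32": 0, "uint32": 1, "int64": 2, "uint64": 3,
        "double": 4, "longdouble": 5}

def result_type(t1, t2):
    # Both operands are promoted to whichever type ranks higher.
    return t1 if RANK[t1] >= RANK[t2] else t2

print(result_type("int32", "double"))   # double: 82 + 3746.3746434
print(result_type("int64", "double"))   # double: b participates as a double
```

With this table, the code generator only ever emits one promotion per operand, each of which is a single hardware instruction as noted above.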
I'm facing an issue with the exponential function in postgresql. If I use this select statement select exp(5999), I'm getting:
ERROR: value out of range: overflow
SQL state: 22003
If I use this select statement select exp(5999.1), I'm getting the exponential result.
On the other hand, if I use the statement select exp(9999.1), I get the following error:
ERROR: argument for function "exp" too big
SQL state: 22003
Please let me know why this is happening and what the solution is for this kind of issue.
I think your first problem is caused by the fact that with an integral argument, exp() computes its result in double precision, and e^5999 is far too large to fit in a double.
That's likely why exp(5999.1), and probably exp(5999.0), works: a numeric argument makes exp() use the arbitrary-precision numeric type, which has a much larger range.
Your second error is slightly different: it's complaining not of overflow during the calculation, but that the input argument is too large. It's possible that there is some sanity checking on the input somewhere.
Even floating point values run out of range eventually. e^9999 is somewhere around 10^4300, a rather large number and probably well beyond what you'd expect to see in a database application.
In fact, I'd be interested in the use case of such large numbers in a database application. This sounds like something better suited to a bignum package like MPIR.
If you pass an INTEGER argument, the exp() function will try to return a double precision value. Just above n=709 it reaches the limit of a 64-bit floating point number (about 10^308) and fails to calculate e^n. The solution is to pass your argument with the NUMERIC type:
SELECT EXP(710); -- failure!
SELECT EXP(710::NUMERIC); -- OK
SELECT EXP(5999.1::NUMERIC); -- huge but OK
EDIT!
As for the ERROR: argument for function "exp" too big SQL state: 22003. I've tried to write a work-around. Just run this:
SELECT n, POWER(EXP(1::NUMERIC), n) FROM (VALUES(9998), (9999), (10000)) AS foo (n)
and it will work. But then change 9999 to 9999.1 and you will get that stupid error again. This is ridiculous! 9999.1 is too big but 10000 is fine :D It looks like Postgres doesn't like a decimal point in POWER()'s argument. Sorry, but I can't fix that.
One solution would be to use the arithmetic property of power and write POWER(POWER(EXP(1::NUMERIC), n*10), 0.1) but that value combination is still too big for Postgres' implementation of power. Good luck with your battle.
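If you only need the magnitude and can step outside SQL, arbitrary-precision libraries handle these exponents without trouble. A sketch with Python's standard decimal module, shown only to illustrate the scale involved:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits is plenty here

x = Decimal(9999).exp()         # e^9999: overflows a double, fine as a Decimal
print(x.adjusted())             # decimal exponent of the result: 4342
```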