I have a probably simple question that I just can't seem to figure out.
I am creating a serial parser for a datalogger which sends a serial stream. The documentation for the product states a calculation which I don't understand:
Lateral = Data1 And 0x7F + Data2 / 0x100
If (Data1 And 0x80)=0 Then Lateral = -Lateral
What does Data1 And 0x7F mean? I know that 0x7F is 127, but beyond that I don't understand the combination with the And operator.
What would the real formula look like?
Bitwise AND -- a bit in the output is set if and only if the corresponding bit is set in both inputs.
Since your tags indicate that you're working in C, you can perform bitwise AND with the & operator.
(Note that 0x7F is 01111111 and 0x80 is 10000000 in binary, so ANDing with these corresponds respectively to extracting the lower seven bits and extracting the upper bit of a byte.)
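For instance, a quick C illustration of those two masks (the byte value here is made up):

unsigned char b = 0xD3;         /* 11010011 */
unsigned char low7 = b & 0x7F;  /* 01010011 = 0x53: the lower seven bits */
unsigned char sign = b & 0x80;  /* 10000000 = 0x80: the upper bit */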
1st line:
Lateral = Data1 And(&) 0x7F + Data2 / 0x100
means: take the magnitude of Data1 (Data1 & 0x7F) and add to it the value of Data2 / 256 (0x100 is 256).
2nd line:
check the sign bit of Data1 and apply the corresponding sign to Lateral.
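Putting it together, here is a minimal C sketch of the "real formula", assuming Data1 and Data2 arrive as raw unsigned bytes from the stream (decode_lateral is just an illustrative name). Note that in C, & binds more loosely than + and /, so the parentheses around Data1 & 0x7F are required:

#include <stdio.h>

/* Bits 0-6 of data1: integer part; data2/256: fractional part;
   bit 7 of data1: sign flag (clear = negative, per the quoted docs). */
double decode_lateral(unsigned char data1, unsigned char data2)
{
    double lateral = (data1 & 0x7F) + data2 / 256.0;
    if ((data1 & 0x80) == 0)
        lateral = -lateral;
    return lateral;
}

int main(void)
{
    printf("%f\n", decode_lateral(0x85, 0x40)); /* prints 5.250000 */
    return 0;
}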
Related
I have seen something like this in some of my coworker's code today:
I2C1ADB1= (slave_read_address | 0x01);
What does the | 0x01 part do? Does it append a 1 at the end of the bits?
Let's say I2C1ADB1 = 0b00000000. If I use the above line, will the new I2C1ADB1 be 0b000000001? Will it also increase the bit count from 8 to 9?
'|' is the bitwise OR operator in C. It performs a bitwise OR between two values and returns the result.
I2C1ADB1= (slave_read_address | 0x01);
Assume slave_read_address in binary is 0bxxxxxxxx, where each x is a bit value 1 or 0. Similarly, 0x01 in binary is 0b00000001.
As you know, OR returns 1 for a bit position if at least one of the two input bits is 1; otherwise it returns 0.
So after the above C line, I2C1ADB1 will hold 0bxxxxxxx1.
The operator will not ADD bits. Usually the '|' (OR) operator is used to set a particular set of bits without altering the others.
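A small C sketch for illustration (the address value is made up; in I2C the lowest bit of the address byte conventionally selects read (1) versus write (0), which is presumably why the code forces it to 1):

#include <stdio.h>

int main(void)
{
    unsigned char slave_read_address = 0xA0;            /* example value only */
    unsigned char I2C1ADB1 = slave_read_address | 0x01; /* force bit 0 to 1 */
    printf("0x%02X\n", I2C1ADB1);                       /* prints 0xA1 */
    return 0;
}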
The statement I2C1ADB1 = (slave_read_address | 0x01); stores the value of slave_read_address into I2C1ADB1, forcing the low order bit to 1.
Your interpretation is incorrect: the value is not shifted and no extra bit is appended. The lowest bit is set to 1:
0 becomes 1,
1 is unchanged,
2 becomes 3,
3 does not change,
4 becomes 5,
etc.
Because the left operand is a variable and the right operand is a constant, the effect is to set, in the variable, every bit that is 1 in the constant. In this case you're right: it sets the last bit. No bit-count increase occurs!
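A two-line C illustration of that, with made-up values:

unsigned char flags = 0x40;  /* 01000000 */
flags |= 0x0F;               /* sets bits 0-3: 01001111 = 0x4F, still 8 bits */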
Based on a recent question.
Can someone point me to an explanation of the following?
If I cast binary(4) constant 0x80000000 to int, take the resulting value and cast it to bit type, the result is 1.
select cast(0x80000000 as int) --> -2147483648
select cast(-2147483648 as bit) --> 1
But if I cast 0x80000000 to bit type directly the result is 0.
select cast(0x80000000 as bit) --> 0
I hoped to get 1 in this case as well, thinking that this expression is probably equivalent to
select cast(cast(0x80000000 as binary(1)) as bit)
but this is not the case. Instead, it seems that the last (rightmost) byte of the binary constant is taken and converted to bit. So, effectively it is something like
select cast(cast(right(0x80000000, 1) as binary(1)) as bit)
I'm clear on the first binary -> int -> bit part. What I'm not clear on is the second, direct binary -> bit part. I was not able to find this behavior explained in the documentation, where only
Converting to bit promotes any nonzero value to 1.
is stated.
binary is not a number, it's a string of bytes. When you cast binary to another type, a conversion is performed. When binary is longer than the target data-type, it is truncated from the left. When it's shorter than the target, it is padded with zeroes from the left. The exception is when casting to another string type (e.g. varchar or another binary) - there it's padding and truncation from the right, which may be a bit confusing at first :)
So what happens here?
select cast(cast(0x0F as binary(1)) as bit) -- 1 - 0x0F is nonzero
select cast(cast(0x01 as binary(1)) as bit) -- 1 - 0x01 is nonzero
select cast(cast(0x01 as binary(2)) as bit) -- 0 - truncated to 0x00, which is zero
select cast(cast(0x0100 as binary(2)) as bit) -- 0 - truncated to 0x00
select cast(cast(0x0001 as binary(2)) as bit) -- 1 - truncated to 0x01, nonzero
As the documentation says:
When data is converted from a string data type (char, varchar, nchar, nvarchar, binary, varbinary, text, ntext, or image) to a binary or varbinary data type of unequal length, SQL Server pads or truncates the data on the right. When other data types are converted to binary or varbinary, the data is padded or truncated on the left. Padding is achieved by using hexadecimal zeros.
Which is something you can use, because:
select cast(0x0100 as binary(1)) -- 0x01
So if you need non-zero on the whole value, you basically need to convert to an integer data type, if possible. If you want the rightmost byte, use cast as bit, and if you want the leftmost, use cast as binary(1). Any other byte can be reached using the string manipulation functions (binary is a string, just not a string of characters). Note that a comparison like 0x01000 = 0 is not evaluated on the binary value itself - it involves an implicit conversion to int (in this case), so the usual rules apply, and 0x0100000000 = 0 is true.
Also note that there are no guarantees that conversions from binary are consistent between SQL server versions - they're not really managed.
Yes, in general when converting from an arbitrary length binary or varbinary value to a fixed size type, it's the rightmost bits or bytes that are converted:
select
    CAST(CAST(0x0102030405060708 as bigint) as varbinary(8)),
    CAST(CAST(0x0102030405060708 as int) as varbinary(8)),
    CAST(CAST(0x0102030405060708 as smallint) as varbinary(8)),
    CAST(CAST(0x0102030405060708 as tinyint) as varbinary(8))
Produces:
------------------ ------------------ ------------------ ------------------
0x0102030405060708 0x05060708 0x0708 0x08
I can't actually find anywhere in the documentation that specifically states this, but there again, the documentation does basically state that conversions between binary and other types are not guaranteed to follow any specific conventions:
Converting any value of any type to a binary value of large enough size and then back to the type, will always result in the same value if both conversions are taking place on the same version of SQL Server. The binary representation of a value might change from version to version of SQL Server.
So the conversions shown above were the "expected" results running on SQL Server 2012 on my machine, but others may get different results.
How can I switch certain bits of a number? For example, given the bit representations (just an example, the syntax is surely wrong!):
someNumber = 00110111
changeNumber = 11100110
Then how can I replace the far-right bit of someNumber with the far-right bit of changeNumber without changing the rest of the bits of someNumber? So the result would be:
00110111 //someNumber
11100110 //changeNumber
________
00110110
Extract the far right bit of changeNumber:
changeNumber & 1
Remove the far right bit of someNumber:
someNumber & ~1
And OR them together:
(changeNumber & 1) | (someNumber & ~1)
To do this for bit n, change 1 to 2^n (in C, 1 << n).
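Putting those steps together in C, using the question's example values (n and mask below are illustrative):

unsigned char someNumber   = 0x37;  /* 00110111 */
unsigned char changeNumber = 0xE6;  /* 11100110 */

/* Copy bit 0 of changeNumber into someNumber. */
unsigned char result = (changeNumber & 1) | (someNumber & ~1);
/* result == 0x36, i.e. 00110110 */

/* For an arbitrary bit n, build the mask with a shift: */
unsigned n = 3;
unsigned char mask = 1u << n;
unsigned char resultN = (changeNumber & mask) | (someNumber & ~mask);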
On a similar line as Martin:
Test the last bit of changeNumber, and use the result to select the operation applied to someNumber (bitwise AND or bitwise OR):
#define SWITCH_MASK_OR  0x01u               /* 0b00000001 */
#define SWITCH_MASK_AND (~SWITCH_MASK_OR)
...
result = (changeNumber & SWITCH_MASK_OR) ? someNumber | SWITCH_MASK_OR
                                         : someNumber & SWITCH_MASK_AND;
I suggest the following steps:
Clear the last bit of someNumber:
someNumber &= ~1;
Extract the last bit of changeNumber:
int lastBit = changeNumber & 1;
Set the last bit of someNumber:
someNumber |= lastBit;
AND changeNumber with a mask of 00000001, extracting the state of the lowest bit and setting all others to 0: 00000000
AND someNumber with a mask of 11111110, setting the lowest bit to 0 while leaving the rest unchanged: 00110110
OR the two results together: 00110110
I have a SQL column that is of type hex (varbinary), and I want to do a SELECT COUNT for all the entries whose hex value ends in 1.
I was thinking about using CONVERT to turn my hex into a char and then using WHERE my_string LIKE '%1'. The thing is that varchar is capped at 8000 chars, and my hex is longer than that.
What options do I have?
Varbinary actually works with some string manipulation functions, most notably substring. So you can use, e.g.:
select substring(yourBinary, 1, 1);
to get the first byte of your binary column. Then, to get the last bit, you can use this:
select substring(yourBinary, len(yourBinary), 1) & 1;
This will give you zero if the bit is off, or one if it is on.
However, if you really only have to check at most the last 4-8 bytes, you can easily use the bitwise operators on the column directly:
select yourBinary & 1;
As a final note, this is going to be rather slow. So if you plan on doing this often, on large amounts of data, it might be better to simply create another bit column just for that, which you can index. If you're talking about at most a thousand rows or so, or if you don't care about speed, fire away :)
Check last four bits = 0001
SELECT SUM(CASE WHEN MyColumn % 16 IN (-15,1) THEN 1 END) FROM MyTable
Check last bit = 1
SELECT SUM(CASE WHEN MyColumn % 2 IN (-1,1) THEN 1 END) FROM MyTable
If you are wondering why you have to check for negative moduli, try SELECT 0x80000001 % 16
Try using this WHERE clause:
WHERE LEFT(my_string, 1) = 1
If it's text values ending in 1, then you want RIGHT as opposed to LEFT:
WHERE RIGHT(my_string, 1) = 1
Sorry for the very basic question. What does the & operator do in this SQL?
WHERE (sc.Attributes & 1) = 0
sc is an alias for a table which contains a column named Attributes.
I'm trying to understand some SQL in a report, and that line is making it return 0 entries. If I comment it out, it works. I have limited SQL knowledge, and I'm not sure what the & 1 is doing.
& is the bitwise AND operator. It performs the operation on two integer values.
WHERE (sc.Attributes & 1) = 0
The above code checks whether sc.Attributes is an even number, which is the same as saying that the first (lowest) bit is not set.
Because of the name of the column, though ("Attributes"), the 1 value is probably just some flag that has an external meaning.
It is common to use one binary digit for each flag stored in an attributes number. So to test for the first bit you use sc.Attributes & 1, for the second sc.Attributes & 2, for the third sc.Attributes & 4, for the fourth sc.Attributes & 8, and so on.
The = 0 part is testing to see if the first bit is NOT set.
Some binary examples: (== shows the result of the AND; a nonzero result means the bit is set)
//Check if the first bit is set, same as sc.Attributes&1
11111111 & 00000001 == 00000001 (set)
11111110 & 00000001 == 00000000 (not set)
00000001 & 00000001 == 00000001 (set)
//Check if the third bit is set, same as sc.Attributes&4
11111111 & 00000100 == 00000100 (set)
11111011 & 00000100 == 00000000 (not set)
00000100 & 00000100 == 00000100 (set)
It is a bitwise logical AND operator.
It's a bitwise and.
Seeing as you tagged this as sql server, I thought I'd add something from a different angle, as I also ran into one of these this week.
Bitwise operators can hurt the performance of your queries when used in the predicate, and it is very easy to manufacture an example of your own. Here is the snippet from my query:
WHERE
    advertiserid = @advertiserid
    AND (is_deleted & @dirty > 0)

I rewrote it as:

WHERE
    advertiserid = @advertiserid
    AND (is_deleted > 0 AND @dirty > 0)

Simply testing each value directly allowed the optimizer to remove a bookmark lookup, and the performance stats showed a 10x improvement.