How to determine different bit values given an integer value? - sql-server

I have a SQL field (Integer type), and in this example, it holds the value "1024".
In looking at my documentation for the DB schema, this field can be parsed such that:
"Bit 0 means Default"
"Bit 1 means Opposite of bit 0"
"Bit 2 means Verified"
"Bit 3 means Duplicate"
"Bit 4 means Manual"
...
And it goes on all the way up to Bit 21.
I don't know what the Bit thing means or how it relates to the Integer value. I'm not even really sure how to google this (if that makes sense), so I'm hoping SO can help (or help me with the wording to use for Googling).
Thanks.
(added SQL-Server as a tag, but not sure if it's relevant here or not)

It sounds like a bit field. Have a read of these...
https://en.wikipedia.org/wiki/Bit_field
https://en.wikipedia.org/wiki/Flag_field
The number is represented in binary as 0s and 1s (bits), e.g. 00000100. Each bit position has its own meaning: counting from the right, bit 0 is the "Default" flag, bit 1 is its opposite, and so on. A bit set to 1 means that flag is on, so in 00000100 only bit 2 ("Verified") is set.
1024 as binary is 10000000000, i.e. only bit 10 is set.
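For example, here is a minimal C sketch (any language with bitwise operators works the same way): AND the value with a mask that has a single bit set, and a non-zero result means that flag is on.
#include <stdio.h>

int main(void)
{
    int flags = 1024;   /* the value from the database field */

    /* Test bit n by ANDing with 1 shifted left n places. */
    for (int bit = 0; bit <= 21; bit++) {
        if (flags & (1 << bit))
            printf("bit %d is set\n", bit);   /* prints: bit 10 is set */
    }
    return 0;
}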

Thanks to @VR46 and @Quantumplate, I understand how this works now. The value in the field is an integer; when converted to binary (using your language of choice) it turns into something like 010001000010. Counting from the right, each digit represents a "Bit" and corresponds to a "lookup value" in the documentation, so multiple bits can be set to 1 at once.
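The conversion itself is only a few lines; a minimal C sketch (assuming a 32-bit value), writing the most significant bit first so "counting from the right" matches the documented bit numbers:
#include <stdio.h>

int main(void)
{
    unsigned value = 1024;
    char bin[33];

    /* Emit bit 31 first, so bit 0 ends up as the rightmost digit. */
    for (int bit = 31; bit >= 0; bit--)
        bin[31 - bit] = (value & (1u << bit)) ? '1' : '0';
    bin[32] = '\0';

    printf("%s\n", bin);   /* 00000000000000000000010000000000 */
    return 0;
}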
Thank you all.

Related

Find variable Hex values from single int

I imagine this might be trivial, but I don't know the correct terminology to search for a solution. When I select an int value from a table column, it represents a combination of possible hex values, which I read as signed two's-complement hex. I am not sure what to store the hex values as, but varbinary seems the likely option.
Table A holds the enums:
EnumName nvarchar(10)
EnumValue varbinary(10)
Sample Values
(Val1,0x00000001),(Val2,0x00000002),(Val5,0x00000010)
It follows the pattern 1, 2, 4, 8 before moving on to the next hex place.
So as an example, a value in Table B is -2147483648; using an online converter I can see its two's-complement hex is 80000000, so in the enum list it is the last value, 0x80000000 (Value 28 in the table). However, some values combine multiple enums: -2147090430 is 80060002, which includes 0x00000002 (Value 2) and 0x80000000 (Value 28).
In SQL Server there may be a way to do this, but as I say, I don't know exactly what I am asking to search for.
If anyone wants to correct my terminology I would be grateful.
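Not the terminology, but to illustrate the mechanics being described: treat the stored int as an unsigned 32-bit value and test one bit at a time to pull the combination apart. A hedged C sketch (mapping each mask back to an EnumName would then be a simple table lookup):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t  stored = -2147090430;        /* example value from Table B */
    uint32_t flags  = (uint32_t)stored;   /* reinterpreted: 0x80060002  */

    /* Each enum value is a single bit (1, 2, 4, 8, then the next
       hex place), so single-bit masks recover the parts. */
    for (int bit = 0; bit < 32; bit++) {
        uint32_t mask = (uint32_t)1 << bit;
        if (flags & mask)
            printf("0x%08X is set\n", (unsigned)mask);
    }
    return 0;
}
For this example the loop prints 0x00000002, 0x00020000, 0x00040000 and 0x80000000, i.e. every bit that is set in 0x80060002.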

Bitwise and of subsets of an array

Can anyone give a hint how to approach this problem?
Given an array A, is there any subset of A such that the AND of all elements of that subset is a power of two?
I've thought of generating the power set and checking every subset, but that has very bad complexity (2^n).
Thanks in Advance.
You can look at it from a different perspective: pick a power of two. Can we generate it?
This question is easy to answer. Take all items from the set in which the bit corresponding to that power of two is set, and calculate the AND of all of them. By construction the result has the bit we looked for set, but it may or may not have other bits set. If it has other bits as well, then no other subset works either: you can't add any items (they don't have the target bit set), and a smaller subset could only leave more wrong bits set, because it has fewer chances to clear a bit.
Just do that for every possible power of two; there are only as many of those as there are bits in the largest integer in the set.
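A sketch of that in C, assuming 32-bit unsigned values; the whole check is O(32*n) instead of O(2^n):
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Is there a subset of a[0..n-1] whose AND is a power of two? */
static int has_power_of_two_subset(const uint32_t *a, size_t n)
{
    for (int bit = 0; bit < 32; bit++) {
        uint32_t target = (uint32_t)1 << bit;
        uint32_t acc = ~(uint32_t)0;   /* AND identity: all ones */
        int any = 0;

        /* AND together every element that has the target bit set. */
        for (size_t i = 0; i < n; i++) {
            if (a[i] & target) {
                acc &= a[i];
                any = 1;
            }
        }
        /* If only the target bit survives, that subset works. */
        if (any && acc == target)
            return 1;
    }
    return 0;
}

int main(void)
{
    uint32_t a[] = { 12, 10, 6 };   /* 1100, 1010, 0110 */
    printf("%s\n", has_power_of_two_subset(a, 3) ? "yes" : "no");
    return 0;   /* prints "yes": 10 AND 6 == 2 */
}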

Arinc429 32 bit words

So I have started a new project at work which is introducing a lot of new concepts.
I'm using an ARINC429 USB box and C, and need to read information coming in from the buffer, which is fine; it's what to do with the data I get afterwards that I'm having a little trouble with.
With each data item in the buffer I get arinc_low and arinc_high unsigned longs, which hold the two halves of a single value. How do I put them together and construct the 32-bit word so I can retrieve what I need? For example, bits 1-8 are used for the Label, etc.
So far I have the following:
snprintf(low, 16, "%lx", buffer[i].arinc_low);
snprintf(high, 16, "%lx", buffer[i].arinc_high);
Through which I can iterate the resulting chars:
for (i = 0; i < sizeof(low); i++)
{
    printf("%d", low[i]);
}
etc.
The results I'm getting, for example, are:
102 55 102 -24 ......
Are these the bits that make up the 32-bit word that I am after? From looking at the documentation these seem like sensible numbers to be getting back. Like I said, this project brings in a lot of new concepts for me that I have glanced at fleetingly in the past, but never really put into practical use.
Thanks.
You will need to have a look at the bitwise operators, especially the right and left shift operators (watch your endianness): for example, set a variable to arinc_high, shift it into position, and then set the remaining part with bitwise OR.
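(As an aside, the numbers you printed are the character codes of the hex digits: 'f' is 102, '7' is 55. The negative values are bytes past the end of the string, because the loop runs to sizeof rather than strlen. So work with the integers directly rather than their string form.) A sketch, assuming arinc_low and arinc_high each carry 16 bits of the word; check the device documentation for the actual split:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned long arinc_low  = 0x5566;   /* stand-in sample values */
    unsigned long arinc_high = 0x1122;

    /* Shift the high half into place and OR in the low half. */
    uint32_t word = ((uint32_t)(arinc_high & 0xFFFFu) << 16)
                  | (uint32_t)(arinc_low  & 0xFFFFu);

    /* ARINC 429 labels live in bits 1-8, i.e. the low byte. */
    uint32_t label = word & 0xFFu;

    printf("word  = 0x%08X\n", (unsigned)word);    /* 0x11225566 */
    printf("label = 0x%02X\n", (unsigned)label);   /* 0x66       */
    return 0;
}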

Binary data different when viewed with CFDUMP

I have a SQL Server database that has a table that contains a field of type varbinary(256).
When I view this binary field via a query in SSMS, the value looks like this:
0x004BC878B0CB9A4F86D0F52C9DEB689401000000D4D68D98C8975425264979CFB92D146582C38D74597B495F87FEA09B68A8440A
When I view this same field (and same record) using CFDUMP, the value looks like this:
075-56120-80-53-10279-122-48-1144-99-21104-1081000-44-42-115-104-56-10584373873121-49-714520101-126-61-115116891237395-121-2-96-101104-886810
(For the example below, the original binary value will be #A, and the CFDUMP value above will be #B)
I have tried using CAST(#B as varbinary(256)) but didn't get the same value as #A.
What must I do to convert the value retrieved from CFDUMP into the correct binary representation?
Note: I no longer have the applicable records in the database. I need to convert #B into the correct value that can re-INSERT into a varbinary(256) field.
(Expanded from comments)
I do not mean this sarcastically, but what difference does it make how they display binary? It is simply a difference in how the data is presented. It does not mean the actual binary values differ.
It is similar to how dates are handled. Internally, they are just big numbers. But since most people do not know which date 1234567890 represents, applications choose to display the number in a more human-friendly format. So SSMS might present the date as 2009-02-13 23:31:30.000, while CF might present it as {ts '2009-02-13 23:31:30'}. Even though the presentations differ, it is still the same value internally.
As far as binary goes, SSMS displays it as hexadecimal. If you use binaryEncode() on your query column, and convert the binary to hex, you can see it is the same value. Just without the leading 0x:
writeDump( binaryEncode(yourQuery.binaryColumn, "hex") )
If you are having some other issue with binary, could you please elaborate?
Update:
Unfortunately, I do not think you can easily convert the cfdump representation back into binary. Unlike Railo's implementation, Adobe's cfdump just concatenates the numeric representation of the individual bytes into one big string, with no delimiter. (The dashes are simply negative numbers). You can reproduce this by looping through the bytes of your sample string. The code below produces the same string of numbers you posted.
bytes = binaryDecode("004BC878B0CB9A4F...", "hex");
for (i = 1; i <= arrayLen(bytes); i++) {
    writeOutput( bytes[i] );
}
I suppose it is theoretically possible to convert that string back into binary, but it would be very difficult. AFAIK, there is no way to accurately determine where one number (or byte) ends and the next begins. There are some clues, but ultimately it would come down to guesswork.
Railo's implementation displays the byte values separated by a dash "-", with two consecutive dashes indicating a negative number, i.e. "0", "75", "-56", ...
0-75--56-120--80--53--102-79--122--48--11-44--99--21-104--108-1-0-0-0--44--42--115--104--56--105-84-37-38-73-121--49--71-45-20-101--126--61--115-116-89-123-73-95--121--2--96--101-104--88-68-10
So you could probably parse that string back into an array of bytes. Then insert the binary into your database using <cfqueryparam cfsqltype="CF_SQL_BINARY" ..>. Unfortunately that does not help you, but the explanation might help the next guy.
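For completeness, a rough sketch of that parse in C (the same logic ports to CFML); it leans on strtol's sign handling, so a '-' immediately after a separator dash reads as a minus sign:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* First bytes of the Railo-style dump shown above: a single '-'
       separates values, and '--' means the next value is negative. */
    const char *dump = "0-75--56-120--80--53--102-79";
    signed char bytes[64];
    size_t n = 0;

    const char *p = dump;
    while (*p != '\0' && n < sizeof bytes) {
        char *end;
        long v = strtol(p, &end, 10);        /* consumes an optional sign */
        bytes[n++] = (signed char)v;
        p = (*end == '-') ? end + 1 : end;   /* skip the separator dash   */
    }

    for (size_t i = 0; i < n; i++)
        printf("%d ", bytes[i]);   /* 0 75 -56 120 -80 -53 -102 79 */
    printf("\n");
    return 0;
}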
At this point, I think your best bet is to just restore the data from a database backup.

Can Microsoft store three-valued fields in a single bit?

I'm completely ignorant of SQL/databases, but I was chatting with a friend who does a lot of database work about how some databases use a "boolean" field that can take a value of NULL in addition to true and false.
Regarding this, he made a comment along these lines: "To Microsoft's credit, they have never referred to that kind of field as a boolean, they just call it a bit. And it's a true bit - if you have eight or fewer bit fields in a record, it only requires one byte to store them all."
Naturally that seems impossible to me - if the field can hold three values you're not going to fit eight of them into a byte. My friend agreed that it seemed odd, but begged ignorance of the low-level internals and said that so far as he knew, such fields can hold three values when viewed from the SQL side, and it does work out to require a byte of storage. I imagine one of us has a wire crossed. Can anyone explain what's really going on here?
I recommend reading this for a good explanation of null storage: How does SQL Server really store NULL-s. In short, the null/not null bit is stored in a different place, the null bitmap for the row.
From the article:
Each row has a null bitmap for columns that allow nulls. If the row in that column is null then a bit in the bitmap is 1 else it's 0.
So while the actual values for 8 bit columns are stored in 1 byte, there are extra bits in the row's null bitmap that indicate whether each column is NULL or not... so it depends on how you're counting. To be completely accurate, 8 nullable bit columns use 2 bytes, just split up in 2 different locations.
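As a toy illustration of that layout (purely illustrative, not SQL Server's actual on-disk format), here is a C sketch where eight bit-column values share one byte and the NULL flags live in a separate byte:
#include <stdio.h>
#include <stdint.h>

struct row {
    uint8_t values;       /* bit n = value of bit column n     */
    uint8_t null_bitmap;  /* bit n = 1 if bit column n is NULL */
};

static void set_col(struct row *r, int col, int value, int is_null)
{
    if (is_null)
        r->null_bitmap |= (uint8_t)(1u << col);
    else if (value)
        r->values |= (uint8_t)(1u << col);
}

int main(void)
{
    struct row r = { 0, 0 };
    set_col(&r, 0, 1, 0);   /* column 0 = 1    */
    set_col(&r, 1, 0, 0);   /* column 1 = 0    */
    set_col(&r, 2, 0, 1);   /* column 2 = NULL */

    printf("values: 0x%02X  null bitmap: 0x%02X\n",
           (unsigned)r.values, (unsigned)r.null_bitmap);   /* 0x01, 0x04 */
    return 0;
}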
The null indicator is stored separately, so a nullable bit actually requires two bits. And strictly speaking, "null" isn't a third value; it's sort of a placeholder that says, "There could be a value here, but we don't know what it is." So if a bit is null, you can compare it to true and the comparison will fail, but you can also compare it to false and the comparison will fail.
You are correct. You can pack the eight true/false values into a single byte, but you still need additional storage to indicate whether it is NULL or not. Representing 3^8 different states with only 2^8 is impossible.
Your friend is right, but wrong at the same time. A BIT field can be regarded as holding three different values, but by definition NULL is the absence of a value.
Additionally, allowing NULL on a bit field means that 2 bits are used for that field (one for the value, and one for whether it is NULL). But the NULL state of the field (the NULL bit) is stored in the row's null bitmap, not in the memory space for the column itself.
Others have already said that BIT requires 2 bits, not one.
Another important point that is often forgotten: Bit in SQL Server is not a Boolean or logic data type; it's a numeric (integer) data type. "An integer data type that can take a value of 1, 0, or NULL". Bit supports only numeric operators (<, >, +, -). It does not support any of the logic operators (AND, OR, NOT, etc).
