I imagine this might be trivial, but I don't know the correct terminology to search for a solution. When I select an int value from a table column, it represents a combination of possible hex values, which I read as hex in signed two's complement. I am not sure what type to store the hex values as, but varbinary seems the likely option.
Table A holds the enums:
EnumName nvarchar(10)
EnumValue varbinary(10)
Sample Values
(Val1,0x00000001),(Val2,0x00000002),(Val5,0x00000010)
It follows the pattern 1, 2, 4, 8 before moving on to the next place.
So as an example, a value in Table B is -2147483648; using an online converter I can see the two's complement hex is 80000000, so I can see in the enum list that it is the last value, 0x80000000 (Value 28 in the table). However, some can have multiple values: -2147090430 is 80060002, so this includes the enums 0x00000002 (Value 2) and 0x80000000 (Value 28).
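The decoding described here can be sketched in Python (the function name is made up for illustration); the trick is masking the signed value with 0xFFFFFFFF to recover its two's-complement bit pattern, then testing each power-of-two flag:

```python
def flags_from_signed_int(value):
    # Reinterpret the signed 32-bit int as its unsigned two's-complement
    # bit pattern, e.g. -2147483648 -> 0x80000000.
    unsigned = value & 0xFFFFFFFF
    # Each enum value is a power of two; collect every flag that is set.
    return [1 << bit for bit in range(32) if unsigned & (1 << bit)]

print(hex(-2147483648 & 0xFFFFFFFF))                         # 0x80000000
print([hex(f) for f in flags_from_signed_int(-2147483648)])  # ['0x80000000']
```

In SQL Server the equivalent test is a bitwise AND on the int column, something like `WHERE col & 0x80000000 <> 0`; searching for "bit flags" or "bitmask" should turn up the standard techniques.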
In SQL server there may be a way to do this but as I say, I don't know exactly what I am asking to search.
If anyone wants to correct my terminology I would be grateful.
Related
I have a SQL field (Integer type), and in this example, it holds the value "1024".
In looking at my documentation for the DB schema, this field can be parsed such that:
"Bit 0 means Default"
"Bit 1 means Opposite of bit 0"
"Bit 2 means Verified"
"Bit 3 means Duplicate"
"Bit 4 means Manual"
...
And it goes on all the way up to Bit 21.
I don't know what the Bit thing means or how it relates to the Integer value. I'm not even really sure how to google this (if that makes sense), so I'm hoping SO can help (or help me with the wording to use for Googling).
Thanks.
(added SQL-Server as a tag, but not sure if it's relevant here or not)
It sounds like a bit field. Have a read of these...
https://en.wikipedia.org/wiki/Bit_field
https://en.wikipedia.org/wiki/Flag_field
The number is represented in binary as 0s and 1s (bits), e.g. 00000100. Each position has a different meaning: counting from the right, bit 0 corresponds to Default, and here it is 0, so that flag is not set, while bit 2 is 1, so that flag is set.
1024 as binary is 10000000000.
Thanks to @VR46 and @Quantumplate, I understand how this works now. The value in the field is an integer, and when converted to binary (using the language of choice) it turns into something like 010001000010. Counting from the right, each digit represents a "Bit" and corresponds to a "lookup value" in the documentation. So it's possible for multiple Bits to be set to 1.
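A minimal Python sketch of that conversion (the bit-meanings mapping is abbreviated from the documentation quoted above):

```python
BIT_MEANINGS = {0: "Default", 1: "Opposite of bit 0", 2: "Verified",
                3: "Duplicate", 4: "Manual"}  # ...continues up to bit 21

def set_bits(value):
    # Shift each candidate bit down to position 0 and test it.
    return [bit for bit in range(22) if (value >> bit) & 1]

print(set_bits(1024))             # [10] -- binary 10000000000, only bit 10 set
print(set_bits(0b010001000010))   # [1, 6, 10] -- several bits set at once
```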
Thank you all.
I have a SQL Server database that has a table that contains a field of type varbinary(256).
When I view this binary field via a query in SSMS, the value looks like this:
0x004BC878B0CB9A4F86D0F52C9DEB689401000000D4D68D98C8975425264979CFB92D146582C38D74597B495F87FEA09B68A8440A
When I view this same field (and same record) using CFDUMP, the value looks like this:
075-56120-80-53-10279-122-48-1144-99-21104-1081000-44-42-115-104-56-10584373873121-49-714520101-126-61-115116891237395-121-2-96-101104-886810
(For the example below, the original binary value will be #A, and the CFDUMP value above will be #B)
I have tried using CAST(#B as varbinary(256)) but didn't get the same value as #A.
What must I do to convert the value retrieved from CFDUMP into the correct binary representation?
Note: I no longer have the applicable records in the database. I need to convert #B into the correct value that can re-INSERT into a varbinary(256) field.
(Expanded from comments)
I do not mean this sarcastically, but what difference does it make how they display binary? It is simply a difference in how the data is presented. It does not mean the actual binary values differ.
It is similar to how dates are handled. Internally, they are big numbers. But since most people do not know which date 1234567890 represents, applications choose to display the number in a more human-friendly format. So SSMS might present the date as 2009-02-13 23:31:30.000, while CF might present it as {ts '2009-02-13 23:31:30'}. Even though the presentations differ, it is still the same value internally.
As far as binary goes, SSMS displays it as hexadecimal. If you use binaryEncode() on your query column, and convert the binary to hex, you can see it is the same value. Just without the leading 0x:
writeDump( binaryEncode(yourQuery.binaryColumn, "hex") )
If you are having some other issue with binary, could you please elaborate?
Update:
Unfortunately, I do not think you can easily convert the cfdump representation back into binary. Unlike Railo's implementation, Adobe's cfdump just concatenates the numeric representation of the individual bytes into one big string, with no delimiter. (The dashes are simply negative numbers). You can reproduce this by looping through the bytes of your sample string. The code below produces the same string of numbers you posted.
bytes = binaryDecode("004BC878B0CB9A4F...", "hex");
for (i=1; i<=arrayLen(bytes); i++) {
WriteOutput( bytes[i] );
}
I suppose it is theoretically possible to convert that string into binary, but it would be very difficult. AFAIK, there is no way to accurately determine where one number (or byte) begins and the other ends. There are some clues, but ultimately it would come down to guesswork.
Railo's implementation displays the byte values separated by a dash "-". Two consecutive dashes indicate a negative number, i.e. "0", "75", "-56", ...
0-75--56-120--80--53--102-79--122--48--11-44--99--21-104--108-1-0-0-0--44--42--115--104--56--105-84-37-38-73-121--49--71-45-20-101--126--61--115-116-89-123-73-95--121--2--96--101-104--88-68-10
So you could probably parse that string back into an array of bytes. Then insert the binary into your database using <cfqueryparam cfsqltype="CF_SQL_BINARY" ..>. Unfortunately that does not help you, but the explanation might help the next guy.
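For the next guy, then: a rough sketch of that parse in Python rather than CFML, assuming the Railo format exactly as described, with "-" as separator and "--" introducing a negative byte:

```python
def parse_railo_dump(s):
    # Splitting on "-" leaves an empty token wherever two dashes were
    # adjacent, i.e. wherever the following number is negative.
    out, negative = [], False
    for token in s.split("-"):
        if token == "":
            negative = True
            continue
        value = -int(token) if negative else int(token)
        negative = False
        # Map the signed byte (-128..127) back to a raw 0..255 value.
        out.append(value & 0xFF)
    return bytes(out)

raw = parse_railo_dump("0-75--56-120--80--53")
print(raw.hex().upper())   # 004BC878B0CB -- matches the leading bytes above
```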
At this point, I think your best bet is to just restore the data from a database backup.
I have a table with IDs and locales. The same ID can be listed more than once with a different locale:
ID Locale
123456 EN_US
234567 EN_US
234567 EN_CA
345678 EN_US
I need to create a unique identifier in the form of a numeric ID (integer) for each record, while maintaining the ability to reverse engineer the original components.
I was thinking bit shifting might work: assign a numerical value to each locale, but I'm not quite sure how to implement. Has anyone faced this challenge before? Also, I have 75 locales so I'm not sure if that would be an issue with bit shifting.
Lastly, I'm using SQL Server with a Linked Server connection to Teradata (that's my data source). I don't think Teradata supports bitwise out-of-the-box so I'm assuming I'll have to do it in MSSQL.
Thank you.
You can create a composite numeric key, mapping your 75 unique values into the last 2 digits of the numeric key. You can parse it into components with simple modulus-100 arithmetic or just a substring. If you will ever exceed 100 values, use 3 digits instead. 9 digits total will fit in an int; 10-18 will fit in a bigint.
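A sketch of that scheme in Python (the locale numbers here are invented for illustration):

```python
LOCALE_CODES = {"EN_US": 1, "EN_CA": 2}   # ...one entry per locale, up to 75
CODE_TO_LOCALE = {code: name for name, code in LOCALE_CODES.items()}

def make_key(record_id, locale):
    # Reserve the last two decimal digits of the key for the locale code.
    return record_id * 100 + LOCALE_CODES[locale]

def split_key(key):
    # Modulus-100 arithmetic recovers both components.
    return key // 100, CODE_TO_LOCALE[key % 100]

print(make_key(234567, "EN_CA"))   # 23456702
print(split_key(23456702))         # (234567, 'EN_CA')
```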
Converting 234567-EN_US into an integer is easy. Just use CHECKSUM on the concatenated string value. It would not be reversible, however.
You could store this CHECKSUM value on the original table, however, and then use it to backtrack from whatever table you're going to store the integer in.
Another solution would be to assign each locale an Integer value (as Marc B suggested). Call that X. Then call your existing integer ID (234567) as Y. Your final key would be (X * 1,000,000) + Y. You could then reverse the formula to get the values back. This would only work, of course, if your existing integer IDs are well below 1,000,000, and also if your final integer can be a BigInt.
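That formula, sketched in Python (the locale-to-integer mapping is assumed to exist already):

```python
MULTIPLIER = 1_000_000   # works only while existing IDs stay below this

def combine(locale_code, record_id):
    # Locale code X in the high digits, original ID Y in the low digits.
    if record_id >= MULTIPLIER:
        raise ValueError("record_id too large for this scheme")
    return locale_code * MULTIPLIER + record_id

def reverse(key):
    # Integer division and modulus undo the formula exactly.
    return key // MULTIPLIER, key % MULTIPLIER

print(combine(3, 234567))   # 3234567
print(reverse(3234567))     # (3, 234567)
```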
I've got a question about storing bitwise flags in SQL. I have a number of status flags which I'd like to store in a SQL smallint field. A smallint is 16 bits and can represent -32768 to 32767.
If I want to use all 16 bits to store boolean values, how do I reference the bits? For instance, if I want to store the bit pattern for the number 1, I would normally see 15 zeros and a 1 in the LSB. What would that sequence equate to as a value in my smallint field? What about a 1 in the MSB and zeros in all other bits? Maybe there is a better way to store and query bitwise data in SQL.
Thanks in advance.
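To make the question concrete, here is a quick Python sketch of how those two bit patterns map onto a signed 16-bit two's-complement value, which is how smallint stores them:

```python
def smallint_from_bits(pattern):
    # Interpret a 16-character bit string as a signed 16-bit
    # (two's complement) integer, the way a smallint would hold it.
    raw = int(pattern, 2).to_bytes(2, "big")
    return int.from_bytes(raw, "big", signed=True)

print(smallint_from_bits("0000000000000001"))   # 1       (1 in the LSB)
print(smallint_from_bits("1000000000000000"))   # -32768  (1 in the MSB)
```

So a 1 in the MSB alone reads back as -32768, because the MSB is the sign bit in two's complement.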
Use the bit datatype? One per flag you need
SQL Server packs bit columns into as many bytes as needed
up to 8 = 1 byte
9-16 = 2 bytes
...
The DB engine will also take care of all bit masks etc for you
All you see are discrete bit values
So... why roll your own?
The ideal solution would be to use 32 separate TinyInt fields. TinyInt supports values 0-255. If you try to use a single field and do bit processing you lose the ability to index any of these flags just as you would if you used Bit type fields.
I have a table which contains a field of type numeric(28,10). I would like to display the records with a monospaced font, and a matching number of decimal places so that they line up when right aligned.
Is there any way to figure out the maximum number of decimal places that can be found in a given result set so that I can format accordingly as I display each record?
Edit: sorry, I should have been clearer ... if the result set contains numbers with only 3 decimal places, then all of the numbers should have only 3 decimal places (padded with zeroes).
The monospaced font is entirely a presentation issue...
I don't see your need for right alignment when I test:
CREATE TABLE [dbo].[Table_1](
[num] [numeric](28, 10) NOT NULL
)
INSERT INTO [example].[dbo].[Table_1] VALUES (1.1234567890);
INSERT INTO [example].[dbo].[Table_1] VALUES (1.123456789);
INSERT INTO [example].[dbo].[Table_1] VALUES (1.1234567);
SELECT [num]
FROM [example].[dbo].[Table_1]
...returns:
num
---------------
1.1234567890
1.1234567890
1.1234567000
So the question is--what are you trying to do that isn't giving you the output you desire?
Where do you want to display the results? Query Analyzer? In an application?
You can either:
a) format the column to have a finite number (known in advance) of digits to the right of the decimal point, truncating at that position; this is the typical practice, or
b) read through all of the rows in the resultset to determine the value with the greatest number of digits to the right of the decimal point (casting to string, splitting the value using the decimal point as delimiter, and getting the length of the decimal-fraction string).
If for some reason option a) is unacceptable then you'd have to do b) procedurally, either server-side in a stored procedure or client-side in your client program.
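Option b) could be sketched client-side like this (Python, with Decimal standing in for the numeric(28,10) values as returned by the query):

```python
from decimal import Decimal

def max_decimal_places(values):
    places = 0
    for v in values:
        # normalize() strips trailing zeros; a negative exponent is then
        # the count of significant fraction digits.
        exponent = Decimal(v).normalize().as_tuple().exponent
        places = max(places, -exponent if exponent < 0 else 0)
    return places

rows = ["1.1230000000", "1.1200000000", "1.1000000000"]
width = max_decimal_places(rows)
print(width)                                      # 3
print([f"{Decimal(v):.{width}f}" for v in rows])  # ['1.123', '1.120', '1.100']
```

With the maximum width known, every value can then be right-aligned and padded with zeroes to the same number of decimal places, as the edit to the question asks for.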