SQL Server varbinary trailing 0 interpretation - sql-server

CREATE TABLE [sql_table1] ([c0] varbinary(25) NOT NULL primary key)
go
insert into sql_table1 values (0x3200),(0x32);
go
I get
Cannot insert duplicate key in object 'dbo.sql_table1'. The duplicate
key value is (0x32).
Why? 0x32 does not equal 0x3200

It gets right-padded. BINARY data gets tricky when you try to specify hex values that should normally be numerically equivalent. If you try this, it will work:
insert into sql_table1 values (0x32),(CAST(50 as VARBINARY(25)));
-- inserts 0x32
-- and 0x00000032
But these are numerically equivalent. Generally speaking, it's a bad idea to have a BINARY column of any sort be a primary key, or to put a unique index on it (more so than a CHAR/VARCHAR/NVARCHAR column). Any application that inserts into it is almost certainly CASTing from some native format/representation to binary, but there's no guarantee that the CAST actually works in a unique manner in either direction: did the application insert the value 50 (= 0x32), or did it try to insert the literal 0x32, or did it try to insert the ASCII value of 2 (= 0x32), or did it insert the first byte(s) of something else? If one app submits 0x32 and another 0x0032, are they the same or different (SQL Server says different - see below)?
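To see the comparison behaviour directly, here is a minimal sketch - SQL Server pads the shorter operand with trailing 0x00 bytes when comparing binary values of different lengths, which is exactly why the unique check fires:
-- Trailing-zero padding makes these two literals compare as equal.
SELECT CASE WHEN 0x32 = 0x3200 THEN 'equal' ELSE 'not equal' END; -- equal
-- Leading zeros, however, change the actual bytes stored, so these differ:
SELECT CASE WHEN 0x32 = 0x0032 THEN 'equal' ELSE 'not equal' END; -- not equal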
The bottom line is that SQL Server is going to behave unusually if you try to compare binary columns flat out without some context. What will work is comparing the binary data using BINARY_CHECKSUM:
SELECT BINARY_CHECKSUM(0x32) -- 50
SELECT BINARY_CHECKSUM(0x320) -- 16! -- it's treating this as having a different number or ordering of bytes
SELECT BINARY_CHECKSUM(0x3200) -- 50
SELECT BINARY_CHECKSUM(0x32000) -- 16
SELECT BINARY_CHECKSUM(0x0032) -- 50
SELECT BINARY_CHECKSUM(0x00032) -- 50
SELECT BINARY_CHECKSUM(0x000000000032) -- 50
but again, this only helps you see that the hexadecimal representation of the binary data isn't going to work exactly the way it would seem. The point is, your primary key is going to be based on the BINARY_CHECKSUMs of the data instead of any particular format/representation of the data. Normally that's a good thing, but with binary data (and padding) it becomes a lot trickier. Even then, in my example above the BINARY_CHECKSUM of both rows will be exactly the same (SELECT BINARY_CHECKSUM(c0) FROM sql_table1 will output 50 for both rows). Weird - a little further testing shows that any different number of leading 0s that fit into the column length will bypass the unique check even though the checksum is the same (e.g. VALUES (0x32), (0x032), (0x0032) etc.).
This only gets worse if you start throwing different versions of SQL Server into the mix (per MSDN documentation).
What you should do for PK/Unique design on a table is figure out what context will make sense of this data - an order number, a file reference, a timestamp, a device ID, some other business or logical identifier, etc.... If nothing else, pseudokey it with an IDENTITY column.
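If you do go the pseudokey route, a minimal sketch (with hypothetical names) might look like this - the binary data becomes ordinary payload instead of the key:
-- Hypothetical table: surrogate IDENTITY primary key, binary payload kept
-- as a plain column rather than as the PK.
CREATE TABLE dbo.sql_table2 (
    id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    c0 varbinary(25) NOT NULL
);
-- Both of the original values now insert without a key violation.
INSERT INTO dbo.sql_table2 (c0) VALUES (0x3200), (0x32);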

Related

Issue with datatype Money in SQL SERVER vs string

I have a spreadsheet that gets all values loaded into SQL Server. One of the fields in the spreadsheet happens to be money. Now in order for everything to be displayed correctly, I added a field in my table with Money as the DataType.
When I read the value from the spreadsheet I pretty much store it as a String, such as this... "94259.4". When it gets inserted in SQL Server it looks like this: "94259.4000". Is there a way for me to basically get rid of the 0's in the SQL Server value when I grab it from the DB? The issue I'm running across is that even though these two values are the same, because they are both compared as Strings, it thinks they're not the same values.
I'm foreseeing another issue when the value might look like this... 94,259.40. I think what might work is limiting the numbers to 2 after the period. So as long as I select the value from the server in this format, 94,259.40, I think I should be okay.
EDIT:
For Column = 1 To 34
    Select Case Column
        Case 1 'Field 1
            If Not ([String].IsNullOrEmpty(CStr(excel.Cells(Row, Column).Value)) Or CStr(excel.Cells(Row, Column).Value) = "") Then
                strField1 = CStr(excel.Cells(Row, Column).Value)
            End If
        Case 2 'Field 2
            ' and so on
    End Select
Next
I go through each field and store the value as a string. Then I compare it against the DB and see if there is a record that has the same values. The only field in my way is the Money field.
You can use Format() to compare the values as strings, or even cast to Float. For example:
Declare @YourTable table (value money)
Insert Into @YourTable values
(94259.4000),
(94259.4500),
(94259.0000)
Select Original = value
,AsFloat = cast(value as float)
,Formatted = format(value,'0.####')
From @YourTable
Returns
Original AsFloat Formatted
94259.40 94259.4 94259.4
94259.45 94259.45 94259.45
94259.00 94259 94259
I should note that Format() has some great functionality, but it is NOT known for its performance
The core issue is that string data is being used to represent numeric information, hence the problems comparing "123.400" to "123.4" and getting mismatches. They should mismatch. They're strings.
The solution is to store the data in the spreadsheet in its proper form - numeric, and then select a proper format for the database - which is NOT the "Money" datatype (insert shudders and visions of vultures circling overhead). Otherwise, you are going to have an expanding kluge of conversions between types as you go back and forth between two improperly designed solutions, and finding more and more edge cases that "don't quite work," and require more special cases...and so on.
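A minimal sketch of that direction, assuming a DECIMAL column and that the spreadsheet value still arrives as a string (table and variable names are hypothetical; TRY_CAST needs SQL Server 2012+):
-- Store the amount as DECIMAL and compare numerically instead of
-- comparing formatted strings.
DECLARE @Amounts table (amount decimal(19,2));
INSERT INTO @Amounts VALUES (94259.40);

DECLARE @FromSpreadsheet varchar(50) = '94259.4';

-- '94259.4' and 94259.40 match because the comparison is numeric.
-- Note: thousands separators ('94,259.40') would need to be stripped
-- (e.g. with REPLACE) before the cast.
SELECT *
FROM @Amounts
WHERE amount = TRY_CAST(@FromSpreadsheet AS decimal(19,2));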

How is Unicode (UTF-16) data that is out of collation stored in varchar column?

This is purely theoretical question to wrap my head around
Let's say I have the Unicode cyclone (🌀 U+1F300) symbol. If I try to store it in a varchar column that has the default Latin1_General_CI_AS collation, the cyclone symbol cannot fit into the one byte that is used per symbol in varchar...
The ways I can see this done:
Like JavaScript does for symbols outside the Basic Multilingual Plane (BMP), where it stores them as 2 symbols (surrogate pairs), and then additional processing is needed to put them back together...
Just truncate the symbol, store first byte and drop the second.... (data is toast - you should have read the manual....)
Data is destroyed and nothing of use is saved... (data is toast - you should have read the manual....)
Some other option that is outside of my mental capacity.....
I have done some research after inserting a couple of different Unicode symbols
INSERT INTO [Table] (Field1)
VALUES ('👽')
INSERT INTO [Table] (Field1)
VALUES ('🌀')
and then reading them back as bytes: SELECT cast(field1 as varbinary(10)). In both cases I got 0x3F3F.
0x3F in ASCII is ? (question mark), i.e. two question marks (??), which is also what I see when doing a normal select *. Does that mean the data is toast and not even the first byte is being stored?
How is Unicode data that is out of collation stored in varchar column?
The data is toast and is exactly what you see, 2 x 0x3F bytes. This happens during the type conversion prior to the insert and is effectively the same as cast('👽' as varbinary(2)), which is also 0x3F3F (as opposed to casting N'👽').
When Unicode data must be inserted into non-Unicode columns, the columns are internally converted from Unicode by using the WideCharToMultiByte API and the code page associated with the collation. If a character cannot be represented on the given code page, the character is replaced by a question mark (?). Ref.
Yes the data has gone.
Varchar requires less space, compared to NVarchar. But that reduction comes at a cost. There is no space for a Varchar to store Unicode characters (at 1 byte per character the internal lookup just isn't big enough).
From Microsoft's Developer Network:
...consider using the Unicode nchar or nvarchar data types to minimize character conversion issues.
As you've spotted, unsupported characters are replaced with question marks.
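For comparison, a minimal sketch of the same insert with an nvarchar column and an N'' literal, so the value is treated as Unicode and survives as a UTF-16 surrogate pair instead of being converted to question marks (the table variable is hypothetical):
DECLARE @t table (Field1 nvarchar(10));
INSERT INTO @t (Field1) VALUES (N'🌀');

SELECT Field1,
       CAST(Field1 AS varbinary(10)) AS bytes -- surrogate pair, e.g. 0x3CD800DF
FROM @t;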

Storage issue with Key,Value datatypes, particularly Hstores in Postgresql

Say I have a table with 3 columns: varchar(20), hstore, smallint
Now if I insert the following: "ABCDEF", "abc=>123, xyz=>888, lmn=>102", 5
How much space will the record take in PostgreSQL? Is the hstore stored as plain text?
So if I have a million records, the space taken by the keys (abc,xyz,lmn) will be duplicated across all the records?
I'm asking this because I have a use case in which I need to store an unknown number of key/value pairs, with each key taking up to 20 B and the value no more than smallint range.
The catch is that the number of records is massive, around 90 million a day, and the number of key/value pairs is ~400. This quickly leads to storage problems, since just a day's data would total up to around 800 GB, with a massive percentage being taken by the keys, which are duplicated across all records.
So considering there are 400 Key/Value pairs, a single Hstore in a record (if stored as plain text) would take 400*22 Bytes. Multiplied by 90 Million, that is 737GB.
If stored in normal columns as 2 Byte ints, it will take just 67GB.
Are HStores not suitable for this use case? Do I have any option which can help me with this storage issue? I know this is a big ask and I might just have to go with a regular columnar storage solution and forgo the flexibility offered by key value.
How much space will the record take in PostgreSQL?
To get the raw uncompressed size:
SELECT pg_column_size( ROW( 'ABCDEF', 'abc=>123, xyz=>888, lmn=>102'::hstore, 5) );
but due to TOAST compressed out-of-line storage that might not be the on-disk size... though it often is:
CREATE TABLE blah(col1 text, col2 hstore, col3 integer);
INSERT INTO blah (col1, col2, col3)
VALUES ('ABCDEF', 'abc=>123, xyz=>888, lmn=>102'::hstore, 5);
regress=> SELECT pg_column_size(blah) FROM blah;
pg_column_size
----------------
84
(1 row)
If you used a bigger hstore value here it might get compressed and stored out of line. In that case, the size would depend on how compressible it is.
Is the hstore stored as plain text?
No, it's a binary format, but it isn't compressed either; the keys/values are stored as plain text.
So if I have a million records, the space taken by the keys (abc,xyz,lmn) will be duplicated across all the records?
Correct. Each hstore value is a standalone value. It has no relationship with any other value anywhere in the system. It's just like a text or json or whatever else. There's no sort of central key index or anything like that.
Demo:
CREATE TABLE hsdemo(hs hstore);
INSERT INTO hsdemo(hs)
SELECT hstore(ARRAY['thisisthefirstkey', 'thisisanotherbigkey'], ARRAY[x::text, x::text])
FROM generate_series(1,10000) x;
SELECT pg_size_pretty(pg_relation_size('hsdemo'::regclass));
-- prints 992kb
INSERT INTO hsdemo(hs)
SELECT hstore(ARRAY['thisisthefirstkey', 'thisisanotherbigkey'], ARRAY[x::text, x::text])
FROM generate_series(10000,20000) x;
SELECT pg_size_pretty(pg_relation_size('hsdemo'::regclass));
-- prints 1968kb, i.e. near doubling for double the records.
Thus, if you have many highly duplicated large keys and small values, you should probably look at a normalized schema (yes, even EAV).
However, be aware that PostgreSQL has quite a large per-row overhead of over 20 bytes per row, so you may not gain as much as you'd expect by storing huge numbers of short rows instead of something like a hstore.
You can always compromise - keep a lookup table of full key names, and associate it with a short hstore key. So your app essentially compresses the keys in each hstore.
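A minimal sketch of that compromise (names are hypothetical; requires the hstore extension): a lookup table maps full key names to short codes, and each hstore stores only the short codes.
-- Lookup table of full key names.
CREATE TABLE key_lookup (
    short_key text PRIMARY KEY,        -- e.g. 'k1'
    full_key  text NOT NULL UNIQUE
);
INSERT INTO key_lookup VALUES
    ('k1', 'thisisthefirstkey'),
    ('k2', 'thisisanotherbigkey');

-- The wide table keeps only the short keys inside each hstore value.
CREATE TABLE measurements (
    id bigserial PRIMARY KEY,
    hs hstore
);
INSERT INTO measurements (hs)
VALUES (hstore(ARRAY['k1','k2'], ARRAY['123','888']));

-- Expand back to full key names when reading.
SELECT m.id, kl.full_key, m.hs -> kl.short_key AS value
FROM measurements m
JOIN key_lookup kl ON m.hs ? kl.short_key;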

What are the differences between CHECKSUM() and BINARY_CHECKSUM() and when/what are the appropriate usage scenarios?

Again, MSDN does not really explain in plain English the exact difference, or offer guidance on when to choose one over the other.
CHECKSUM
Returns the checksum value computed over a row of a table, or over a list of expressions. CHECKSUM is intended for use in building hash indexes.
BINARY_CHECKSUM
Returns the binary checksum value computed over a row of a table or over a list of expressions. BINARY_CHECKSUM can be used to detect changes to a row of a table.
It does hint that binary checksum should be used to detect row changes, but not why.
Check out the following blog post that highlights the differences.
http://decipherinfosys.wordpress.com/2007/05/18/checksum-functions-in-sql-server-2005/
Adding info from this link:
The key intent of the CHECKSUM functions is to build a hash index based on an expression or a column list. If say you use it to compute and store a column at the table level to denote the checksum over the columns that make a record unique in a table, then this can be helpful in determining whether a row has changed or not. This mechanism can then be used instead of joining with all the columns that make the record unique to see whether the record has been updated or not. SQL Server Books Online has a lot of examples on this piece of functionality.
A couple of things to watch out for when using these functions:
You need to make sure that the column(s) or expression order is the same between the two checksums that are being compared else the value would be different and will lead to issues.
We would not recommend using checksum(*) since the value that will get generated that way will be based on the column order of the table definition at run time which can easily change over a period of time. So, explicitly define the column listing.
Be careful when you include the datetime data-type columns since the granularity is 1/300th of a second and even a small variation will result into a different checksum value. So, if you have to use a datetime data-type column, then make sure that you get the exact date + hour/min. i.e. the level of granularity that you want.
There are three checksum functions available to you:
CHECKSUM: This was described above.
CHECKSUM_AGG: This returns the checksum of the values in a group and Null values are ignored in this case. This also works with the new analytic function’s OVER clause in SQL Server 2005.
BINARY_CHECKSUM: As the name states, this returns the binary checksum value computed over a row or a list of expressions. The difference between CHECKSUM and BINARY_CHECKSUM is in the value generated for string data types. An example of such a difference is that the values generated for “DECIPHER” and “decipher” will be different in the case of BINARY_CHECKSUM but will be the same for the CHECKSUM function (assuming that we have a case-insensitive installation of the instance); a quick demonstration of this follows the example below.
Another difference is in the comparison of expressions. BINARY_CHECKSUM() returns the same value if the elements of two expressions have the same type and byte representation. So, “2Volvo Director 20” and “3Volvo Director 30” will yield the same value; the CHECKSUM() function, however, evaluates the type as well as comparing the two strings, and only if they are equal is the same value returned.
Example:
STRING BINARY_CHECKSUM_USAGE CHECKSUM_USAGE
------------------- ---------------------- -----------
2Volvo Director 20 -1356512636 -341465450
3Volvo Director 30 -1356512636 -341453853
4Volvo Director 40 -1356512636 -341455363
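The case-sensitivity difference is easy to reproduce directly (a minimal sketch; the exact numbers depend on the instance, but the pattern holds on a case-insensitive collation):
SELECT CHECKSUM('DECIPHER')        AS checksum_upper,
       CHECKSUM('decipher')        AS checksum_lower,     -- same as checksum_upper
       BINARY_CHECKSUM('DECIPHER') AS bin_checksum_upper,
       BINARY_CHECKSUM('decipher') AS bin_checksum_lower;  -- different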
HASHBYTES with MD5 is 5 times slower than CHECKSUM; I've tested this on a table with over 1 million rows, and ran each test 5 times to get an average.
Interestingly CHECKSUM takes exactly the same time as BINARY_CHECKSUM.
Here is my post with the full results published:
http://networkprogramming.wordpress.com/2011/01/14/binary_checksum-vs-hashbytes-in-sql/
I've found that checksum collisions (i.e. two different values returning the same checksum) are more common than most people seem to think. We have a table of currencies, using the ISO currency code as the PK. And in a table of less than 200 rows, there are three pairs of currency codes that return the same Binary_Checksum():
"ETB" and "EUR" (Ethiopian Birr and Euro) both return 16386.
"LTL" and "MDL" (Lithuanian Litas and Moldovan leu) both return 18700.
"TJS" and "UZS" (Somoni and Uzbekistan Som) both return 20723.
The same happens with ISO culture codes: "de" and "eu" (German and Basque) both return 1573.
Changing Binary_Checksum() to Checksum() fixes the problem in these cases...but in other cases it may not help. So my advice is to test thoroughly before relying too heavily on the uniqueness of these functions.
Be careful when using CHECKSUM; you may get an unexpected outcome. The following statements produce the same checksum value:
SELECT CHECKSUM (N'这么便宜怎么办?廉价iPhone售价再曝光', 5, 4102)
SELECT CHECKSUM (N'PlayStation Now – Sony startet Spiele-Streaming im Sommer 2014', 238, 13096)
It's easy to get collisions from CHECKSUM(). HASHBYTES() was added in SQL Server 2005 to enhance SQL Server's system hash functionality, so I suggest you also look into it as an alternative.
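A minimal sketch of the HASHBYTES() alternative (the SHA2_256 algorithm shown here needs SQL Server 2012 or later; earlier versions only support algorithms such as MD5 and SHA1):
-- HASHBYTES returns a varbinary hash instead of an int, making collisions
-- far less likely than with CHECKSUM/BINARY_CHECKSUM.
SELECT HASHBYTES('SHA2_256', N'这么便宜怎么办?廉价iPhone售价再曝光') AS hash1,
       HASHBYTES('SHA2_256', N'PlayStation Now – Sony startet Spiele-Streaming im Sommer 2014') AS hash2;
-- Unlike the CHECKSUM values above, the two hashes differ.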
You can get checksum value through this query:
SELECT
checksum(definition) as Checksum_Value,
definition
FROM sys.sql_modules
WHERE object_id = object_id('RefCRMCustomer_GetCustomerAdditionModificationDetail');
Replace the procedure name in the brackets with your own.

Ordering numbers that are stored as strings in the database

I have a bunch of records in several tables in a database that have a "process number" field, that's basically a number, but I have to store it as a string both because of some legacy data that has stuff like "89a" as a number and some numbering system that requires that process numbers be represented as number/year.
The problem arises when I try to order the processes by number. I get stuff like:
1
10
11
12
And the other problem is when I need to add a new process. The new process' number should be the biggest existing number incremented by one, and for that I would need a way to order the existing records by number.
Any suggestions?
Maybe this will help.
Essentially:
SELECT process_order FROM your_table ORDER BY process_order + 0 ASC
Can you store the numbers as zero padded values? That is, 01, 10, 11, 12?
I would suggest to create a new numeric field used only for ordering and update it from a trigger.
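A rough sketch of that idea, assuming SQL Server and hypothetical table/column names (process_number is assumed to uniquely identify a row; the trigger keeps a numeric sort_key column in sync with the leading digits of the string):
CREATE TRIGGER trg_processes_sort_key
ON dbo.processes
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Extract the leading digits of process_number ('89a' -> 89, '123/2008' -> 123).
    UPDATE p
    SET sort_key = TRY_CAST(
            LEFT(i.process_number,
                 PATINDEX('%[^0-9]%', i.process_number + 'X') - 1) AS int)
    FROM dbo.processes p
    JOIN inserted i ON i.process_number = p.process_number;
END;
Ordering then becomes ORDER BY sort_key, process_number.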
Can you split the data into two fields?
Store the 'process number' as an int and the 'process subtype' as a string.
That way:
you can easily get the MAX processNumber - and increment it when you need to generate a new number
you can ORDER BY processNumber ASC, processSubtype ASC - to get the correct order, even if multiple records have the same base number with different years/letters appended
when you need the 'full' number you can just concatenate the two fields
Would that do what you need?
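A minimal sketch of that layout (hypothetical names, SQL Server syntax):
CREATE TABLE processes (
    processNumber  int         NOT NULL,
    processSubtype varchar(10) NOT NULL DEFAULT '',
    PRIMARY KEY (processNumber, processSubtype)
);

-- The next number is just MAX + 1:
SELECT COALESCE(MAX(processNumber), 0) + 1 AS nextNumber FROM processes;

-- Correct ordering, with the 'full' number rebuilt on the fly:
SELECT processNumber, processSubtype,
       CAST(processNumber AS varchar(20)) + processSubtype AS fullNumber
FROM processes
ORDER BY processNumber ASC, processSubtype ASC;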
Given that your process numbers don't seem to follow any fixed patterns (from your question and comments), can you construct/maintain a process number table that has two fields:
create table process_ordering ( processNumber varchar(N), processOrder int )
Then select all the process numbers from your tables and insert into the process number table. Set the ordering however you want based on the (varying) process number formats. Join on this table, order by processOrder and select all fields from the other table. Index this table on processNumber to make the join fast.
select my_processes.*
from my_processes
inner join process_ordering on my_process.processNumber = process_ordering.processNumber
order by process_ordering.processOrder
It seems to me that you have two tasks here.
• Convert the strings to numbers by legacy format / strip off the junk
• Order the numbers
If you have a practical way of introducing string-parsing regular expressions into your process (and your issue has enough volume to be worth the effort), then I'd
• Create a reference table such as
CREATE TABLE tblLegacyFormatRegularExpressionMaster(
LegacyFormatId int,
LegacyFormatName varchar(50),
RegularExpression varchar(max)
)
• Then, with a way of invoking the regular expressions, such as the CLR integration in SQL Server 2005 and above (the .NET Common Language Runtime integration that allows calls to compiled .NET methods from within SQL Server as ordinary (Microsoft extended) T-SQL), you should be able to solve your problem.
• See
http://www.codeproject.com/KB/string/SqlRegEx.aspx
I apologize if this is way too much overhead for your problem at hand.
Suggestion:
• Make your column a fixed width text (i.e. CHAR rather than VARCHAR).
• Pad the existing values with enough leading zeros to fill each column, and a trailing space (or spaces) where the values do not end in 'a' (or whatever).
• Add a CHECK constraint (or equivalent) to ensure new values conform to the pattern e.g. something like
CHECK (process_number LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][ab ]')
• In your insert/update stored procedures (or equivalent), pad any incoming values to fit the pattern.
• Remove the leading/trailing zeros/spaces as appropriate when displaying the values to humans.
Another advantage of this approach is that the incoming values '1', '01', '001', etc would all be considered to be the same value and could be covered by a simple unique constraint in the DBMS.
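A minimal sketch of the padding step (SQL Server syntax; the six-digit width and trailing-letter handling follow the example CHECK pattern above):
DECLARE @incoming varchar(10) = '89a';

-- Split the leading digits from any trailing letter(s).
DECLARE @digits varchar(10) =
        LEFT(@incoming, PATINDEX('%[^0-9]%', @incoming + 'X') - 1);
DECLARE @suffix varchar(10) =
        SUBSTRING(@incoming, LEN(@digits) + 1, 10);

-- '89a' -> '000089a'; a purely numeric '123' -> '000123 ' (trailing space).
SELECT RIGHT('000000' + @digits, 6) +
       CASE WHEN @suffix = '' THEN ' ' ELSE @suffix END AS padded_value;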
BTW I like the idea of splitting the trailing 'a' (or whatever) into a separate column, however I got the impression the data element in question is an identifier and therefore would not be appropriate to split it.
You need to cast your field as you're selecting. I'm basing this syntax on MySQL - but the idea's the same:
select * from table order by cast(field AS UNSIGNED);
Of course UNSIGNED could be SIGNED if required.
