I have a table in SQL Server with a large amount of data - around 40 million rows. The base structure is like this:
Title       | type           | length | Null distribution
Customer-Id | number         | 8      | 60%
Card-Serial | number         | 5      | 70%
...         | ...            | ...    | ...
...         | ...            | ...    | ...
Note        | string-unicode | 2000   | 40%
Both numeric columns are filled with numbers of a specific length.
I have no idea which data type to choose to keep the database as small as possible while still getting good performance from an index on the Customer-Id column. According to this Post, if I choose CHAR(8) the database consumes 8 bytes per row even for NULL data.
I decided to use INT to reduce the database size and get a good index, but NULL data will still use 4 bytes per row. If I want to reduce this size I could use VARCHAR(8), but I don't know whether the system performs well with an index on that type. The main question is: is reducing the database size more important, or having a good index on a numeric type?
Thanks.
If it is a number - then by all means choose a numeric datatype!! Don't store your numbers as char(n) or varchar(n) !! That'll just cause you immeasurable grief and headaches later on.
The choice is pretty clear:
if you have whole numbers - use TINYINT, SMALLINT, INT or BIGINT - depending on the number range you need
if you need fractional numbers - use DECIMAL(p,s) for the best and most robust behaviour (no rounding errors like FLOAT or REAL)
Picking the most appropriate datatype is much more important than any micro-optimization for storage. Even with 40 million rows - that's still not a big issue, whether you use 4 or 8 bytes. Whether you use a numeric type vs. a string type - that makes a huge difference in usability and handling of your database!
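As a rough sketch of what that could look like for the table in the question (the table name and exact column definitions are assumptions based on the structure above):
CREATE TABLE dbo.Customers
(
    CustomerId INT NULL,            -- up to 8 digits fits easily into INT (4 bytes, max 2,147,483,647)
    CardSerial INT NULL,            -- 5 digits; SMALLINT (max 32,767) would be too small, so INT again
    Note       NVARCHAR(2000) NULL  -- Unicode text up to 2,000 characters
);
CREATE INDEX IX_Customers_CustomerId ON dbo.Customers (CustomerId);   -- index on the numeric column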
All 3 options are case and accent sensitive, and support Unicode.
According to the documentation:
NVarchar sorts and compares data based on the "dictionaries for the associated language or alphabet" (?)
Bin sorts and compares data based on the "bit patterns" (?)
Bin2 sorts and compares data based on "Unicode code points for Unicode data" (?)
To make complex things simple, can I say that the Bin is an improvement of the NVarchar and the Bin2 is an improvement of the Bin; and unless I am restricted to backwards compatibility, it is always recommended to use Bin2 or at least Bin in order to enjoy a better performance?
=========================================================================
I will try to explain myself again.
Have a look:
If Object_ID('words2','U') Is Not Null Drop Table words2;
Create Table words2(word1 NVarchar(20),
word2 NVarchar(20) Collate Cyrillic_General_BIN,
word3 NVarchar(20) Collate Cyrillic_General_BIN2);
Insert
Into words2
Values (N'ھاوتایی',N'ھاوتایی',N'ھاوتایی'),
(N'Συμμετρία',N'Συμμετρία',N'Συμμετρία'),
(N'אבַּג',N'אבַּג',N'אבַּג'),
(N'対称性',N'対称性',N'対称性');
Select * From words2;
All 3 options support all kinds of alphabets, no matter what the collation is.
The question is: what is the practical difference between the 3 options? Suppose I want to store private names in different alphabets, which option should I use? I guess I will have to find specific names (Select .. From .. Where ..) and order names (Select .. From .. Order By ..).
All 3 options are case and accent sensitive, and support Unicode.
NVARCHAR is a datatype (like INT, DATETIME, etc.) and not an option. It stores Unicode characters in the UCS-2 / UTF-16 (Little Endian) encoding. UCS-2 and UTF-16 are identical for the U+0000 through U+FFFF (decimal values 0 - 65535) range of code points. UTF-16 handles code points U+10000 and above (known as Supplementary Characters), all of which are defined as pairs of code points (known as Surrogate Pairs) that exist within the UCS-2 range. Since the byte sequences are identical between the two, the only difference is in the handling of the data. Meaning, built-in functions do not know how to interpret Supplementary Characters when using Collations that do not end in _SC, whereas they do work correctly for the full UTF-16 range when using Collations that do end in _SC. The _SC Collations were added in SQL Server 2012, but you can still store and retrieve Supplementary Characters in prior versions; it is only the built-in functions that do not behave as expected when operating on Supplementary Characters.
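As a quick illustration of that last point, here is a sketch (assuming SQL Server 2012 or newer so the _SC collations are available) of how a built-in function treats a Supplementary Character (an emoji, i.e. a surrogate pair) differently depending on the collation of the expression:
SELECT LEN(N'😀' COLLATE Latin1_General_100_CI_AS)    AS [WithoutSC],  -- 2: counts the two surrogate code units
       LEN(N'😀' COLLATE Latin1_General_100_CI_AS_SC) AS [WithSC];     -- 1: counts the single Supplementary Character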
More directly:
NVARCHAR, being a datatype, is not inherently case or accent (or any other sensitivity) sensitive or insensitive. The exact behavior depends on the collation set for the column, or the database's default collation, or the COLLATE clause, depending on the context of the expression.
While it is an extremely common misconception, binary collations are neither case nor accent -sensitive. It only appears that they are when viewed simplistically. Being "sensitive" means being able to detect differences for a particular sensitivity (case, accent, width, Kana type, and starting in SQL Server 2017: variation selector) while still allowing for differences in other sensitivities and/or underlying byte representations. For more details and examples, please see: No, Binary Collations are not Case-Sensitive.
Collations, while literally being about how characters sort and compare to each other, in SQL Server also imply the Locale / LCID (which determines the cultural rules that override the default handling of those comparisons) and the Code Page used for VARCHAR data.
Non-binary collations are considered "dictionary" sorting / comparisons because they take into account the rules of the particular culture specified by the Collation (specifically the associated LCID). On the other hand, binary collations do not deal with any culture-specific rules and only sort and compare based on the numeric value of each 2-byte sequence. For this reason binary collations are much faster, because they don't need to apply a large list of rules, but they also have no way to know that the single two-byte Code Point for a u with an accent is equivalent to the 2 two-byte sequence of a u followed by a separate combining accent: the two forms render on screen identically and compare as equal when using a non-binary collation, yet a binary collation sees them as different.
The difference between _BIN and _BIN2 is sorting accuracy, not performance. The older _BIN collations do a simplistic byte-by-byte sorting and comparison (after the first character, which is seen as a code point and not two bytes, thus it sorts correctly) whereas the newer _BIN2 collations (starting in SQL Server 2005) compare each Code "Unit" (Supplementary Characters are made up of two Code Units, and _BIN2 collations see each Code Unit individually instead of seeing the combination of them as a Code Point). There is a difference in sort order between these two approaches mainly due to SQL Server being "Little Endian" which stores bytes (for a single entity: UTF-16 code unit, INT value, BIGINT value, etc) in reverse order. Hence, code point U+0206 will actually sort after U+0402 when using a _BIN collation:
SELECT *, CONVERT(VARBINARY(20), tmp.[Thing]) AS [ThingBytes]
FROM (VALUES (1, N'a' + NCHAR(0x0206)), (2, N'a' + NCHAR(0x0402))) tmp ([ID], [Thing])
ORDER BY tmp.[Thing] COLLATE Latin1_General_100_BIN;
/*
ID Thing ThingBytes
2 aЂ 0x61000204
1 aȆ 0x61000602 <-- U+0206, stored as 0x06 then 0x02, should sort first
*/
SELECT *, CONVERT(VARBINARY(20), tmp.[Thing]) AS [ThingBytes]
FROM (VALUES (1, N'a' + NCHAR(0x0206)), (2, N'a' + NCHAR(0x0402))) tmp ([ID], [Thing])
ORDER BY tmp.[Thing] COLLATE Latin1_General_100_BIN2;
/*
ID Thing ThingBytes
1 aȆ 0x61000602
2 aЂ 0x61000204
*/
For more details and examples of this distinction, please see: Differences Between the Various Binary Collations (Cultures, Versions, and BIN vs BIN2).
Also, all binary collations sort and compare in exactly the same manner when it comes to Unicode / NVARCHAR data. Code Points are numerical values and there are no linguistic / cultural variations to consider when comparing them. Hence the only purpose in having more than a single, global "BINARY" Collation is the need to still specify the Code Page to use for VARCHAR data.
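For example, here is a sketch of how the Code Page implied by the binary Collation affects VARCHAR (but not NVARCHAR) data; the Cyrillic letter survives only under the Cyrillic code page:
SELECT CONVERT(VARCHAR(10), N'Ж' COLLATE Cyrillic_General_BIN) AS [Cyrillic_CP1251],  -- 'Ж' (code page 1251 has this character)
       CONVERT(VARCHAR(10), N'Ж' COLLATE Latin1_General_BIN)   AS [Latin1_CP1252];    -- '?' (code page 1252 does not)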
Suppose I want to store private names in different alphabets, which option may I use?
If you were using VARCHAR fields, then the Code Page specific to the Collation (regardless of binary or non-binary) would determine which characters are available, since that is 8-bit Extended ASCII which typically has a range of 256 different characters (unless using a Double-Byte Character Set, in which case it can handle many more, but those are still mostly of a single culture / alphabet). If you use NVARCHAR to store the data, since that is Unicode, it has a single character set comprised of all characters from all languages, plus lots of other stuff.
So choosing NVARCHAR takes care of the problem of being able to hold the proper characters of names coming from various languages. HOWEVER, you still need to pick a particular culture's dictionary rules in order to sort in a manner that each particular culture expects. This is a problem because Collations cannot be set dynamically. So pick the one that is used the most. Binary collations will not help you here, and in fact would go against what you are trying to do. They are, however, quite handy when you need to distinguish between characters that would otherwise equate, such as in this case: SQL server filtering CJK punctuation characters (here on S.O.).
Another related scenario in which I have used a _BIN2 collation was detecting case changes in URLs. Some parts of a URL are case-insensitive, such as the hostname / domain name. But, in the QueryString, the values being passed in are potentially sensitive. If you compare URL values in a case-insensitive operation, then http://domain.tld/page.ext?var1=val would equate to http://domain.tld/page.ext?var1=VAL, and those values should not be assumed to be the same. Using a case-sensitive Collation would also typically work, but I use Latin1_General_100_BIN2 because it's faster (no linguistic rules) and would not ignore a change of ü to u + combining diaeresis (which renders as ü).
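A sketch of that last point (the precomposed ü is U+00FC and the combining diaeresis is U+0308); a linguistic collation treats the two forms as equal while the binary collation does not:
SELECT CASE WHEN NCHAR(0x00FC) = (N'u' + NCHAR(0x0308)) COLLATE Latin1_General_100_CI_AS
            THEN 'equal' ELSE 'different' END AS [Linguistic],   -- 'equal'
       CASE WHEN NCHAR(0x00FC) = (N'u' + NCHAR(0x0308)) COLLATE Latin1_General_100_BIN2
            THEN 'equal' ELSE 'different' END AS [Binary];       -- 'different'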
I have more explanations of Collations spread across the following answers (so won't duplicate here as most of them contain several examples):
UCS-2 and SQL Server
SQL Server default character encoding
What is the point of COLLATIONS for nvarchar (Unicode) columns?
Unicode to Non-unicode conversion
NVARCHAR storing characters not supported by UCS-2 encoding on SQL Server
And these are on DBA.StackExchange:
How To Strip Hebrew Accent Marks
Latin1_General_BIN performance impact when changing the database default collation
Storing Japanese characters in a table
For more info on working with Collations, Encodings, Unicode, etc, please visit: Collations Info
nvarchar is a data type, and the "BIN" or "BIN2" collations are just that - collation sequences. They are two different things.
You use an nvarchar column to store unicode character data:
nchar and nvarchar (Transact-SQL)
String data types that are either fixed-length, nchar, or variable-length, nvarchar, Unicode data and use the UNICODE UCS-2 character set.
https://msdn.microsoft.com/en-GB/library/ms186939(v=sql.105).aspx
An nvarchar column will have an associated collation sequence that defines how the characters sort and compare. This can also be set for the whole database.
COLLATE (Transact-SQL)
Is a clause that can be applied to a database definition or a column definition to define the collation, or to a character string expression to apply a collation cast.
https://msdn.microsoft.com/en-us/library/ms184391(v=sql.105).aspx
So, when working with character data in SQL Server, you always use both a character data-type (nvarchar, varchar, nchar or char) along with an appropriate collation according to your needs for case-sensitivity, accent-sensitivity etc.
For example, in my work I normally use the "Latin1_General_CI_AI" collation. This is suitable for latin character sets, and provides case-insensitive and accent-insensitive matching for queries.
That means that the following strings are all considered to be equal:
Höller, höller, Holler, holler
This is ideal for systems where there may be words containing accented characters (as above), but you can't be sure your users will enter the accents when searching for something.
If you only wanted case-insensitivity then you would use a "CI_AS" (accent sensitive) collation instead.
The "_BIN" collations are for binary comparisons that treat every distinct character as different, and wouldn't be used for general text comparisons.
Edit for updated question:
Provided that you always use nvarchar (as opposed to varchar) columns then you always have support for all unicode code points, no matter what collation is used.
There is no practical difference in your example query, as it is only a simple insert and select. Also bear in mind that your first "word1" column will be using the database or server's default collation - there's always a collation in use!
Where the differences will occur is if you use criteria against your nvarchar columns, or sort by them. This is what collations are for - they define which characters should be treated as equivalent for comparisons and sorting.
I can't say anything about Cyrillic, but in the case of Latin characters, using the "Latin1_General_CI_AI" collation, then characters such as A a á â etc are all equivalent - the case and the accent are ignored.
Imagine if you have the string Aaáâ stored in your "word1" column, then the query SELECT * FROM words2 WHERE word1 = 'aaaa' will return your row.
If you use a "_BIN" collation then all these characters are treated as distinct, and the query above would not return a row. I can't think of a situation where you'd want to use a "_BIN" collation when working with textual data. Edit 2: Actually I can - storing password hashes would be a good place to use a binary collation, so that comparisons are exact. That's about all.
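To make that concrete, here is a sketch using the words2 table from the question (assuming a row where word1 contains N'Aaáâ'; word1 uses the database default collation, so the collation is applied explicitly here):
SELECT * FROM words2
WHERE word1 = N'aaaa' COLLATE Latin1_General_CI_AI;   -- matches: case and accents are ignored
SELECT * FROM words2
WHERE word1 = N'aaaa' COLLATE Latin1_General_BIN;     -- no match: every byte must be identical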
I hope this makes it clearer.
I couldn't figure out the correct terminology for what I am asking so I apologize if this is in the wrong place or format.
I am rebuilding a database, call it aspsessionsv2. It consists of a single table with over 11 billion rows. Column 1 is a string and has no limits other than under 20 characters. The other columns all contain HEX data... so there isn't any reason for that field to store characters outside of A-F and 0-9. So...
Is there a way I can configure SQL Server to limit the field to those characters?
Will that reduce the overall size of the database?
Will that speed up queries to a database of this size?
What got me to thinking about this was WinRAR. I compressed a 50GB file containing only HEX characters down to 206MB. That blows my mind even though I understand how it works so I am curious if I can do the same "compression" in a way on a SQL Server database.
Thank you!
Been a little bit since I have asked a question. Here is some tech info: Windows Server 2008 R2, SQL Server 2008, 10 Columns, 11 Billion Rows
You could use a blob (binary large object); that would cut the size of the hexadecimal-data fields in half. Often hexadecimal encoding is used to circumvent character encoding issues.
You could also use a Base-64 encoding rather than a base-16 (hexadecimal) encoding; it would use 6 bits per character rather than 4, and only increase the storage relative to a blob 4:3 times, instead of increasing it 2-fold in the case of hexadecimal strings.
If you are using varchar or nvarchar to store strings of characters 0-9 and A-F, then you should really be using varbinary type instead. Each pair of hexadecimal characters represent one byte, so with varbinary each byte of data needs 1 byte on disk, with varchar each byte of data needs 2 bytes on disk, with nvarchar each byte of data needs 4 bytes on disk.
Having varbinary instead of varchar will reduce the overall size of the database and it will speed up queries, because fewer bytes need to be read from disk.
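A sketch of the conversion in both directions (style 2 means the hex string has no 0x prefix; these conversion styles are available in SQL Server 2008 and later):
SELECT CONVERT(VARBINARY(MAX), '4A6F686E', 2);   -- 0x4A6F686E: 4 bytes on disk instead of 8 characters
SELECT CONVERT(VARCHAR(MAX), 0x4A6F686E, 2);     -- '4A6F686E': back to a hex string for display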
Hex values are just numbers so you are likely better off storing them as such. For example 123abc would convert nicely to 1194684 and would only require 4 bytes instead of 8 bytes (6 characters + 2 byte varchar overhead). So provided the number isn't going to go above 2147483647 you can store them all as int.
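For example, a sketch of that conversion (0x123ABC is the binary literal for the example value above):
SELECT CONVERT(INT, 0x123ABC) AS [AsInt];                                     -- 1194684
SELECT CONVERT(INT, CONVERT(VARBINARY(4), '123ABC', 2)) AS [FromHexString];   -- 1194684 as well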
However, if you wanted to restrict the column to only containing the values 0-9 and a-f, then you could use a check constraint, something like this:
ALTER TABLE YourTable
ADD CONSTRAINT CK_YourTable_YourColumn CHECK (YourColumn NOT LIKE '%[^0-9a-fA-F]%')
What would be the best data type to use for storing an MSISDN (phone number)?
Need to be able to store any phone number in the world.
Does anyone know the maximum possible MSISDN length, including international dialling code?
For example South Africa phone numbers are +27xxxxxxxxx which results in 11 digits excluding the +
The + does not have to be stored.
Thanks in advance
I'd use BIGINT. Please avoid using varchar at all costs. It's a very bad idea to use varchar or char.
Reasons: varchar/char takes up more space, it's slower for lookups and cross-references, and the index is larger too.
When designing tables, try to keep the row lengths fixed; things will run loads faster. If you have to have some text field, it is often best to use char instead of varchar, as the overhead cost of varchar is high.
I have been working in telecoms for 12 years now, designing/optimizing VoIP/SMS platforms. The number one killer when I come in to fix systems is varchars everywhere.
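As a sketch of that advice (the table and column names are made up for illustration; an MSISDN in international format is at most 15 digits, which fits comfortably in BIGINT):
CREATE TABLE dbo.Subscribers
(
    Msisdn BIGINT NOT NULL PRIMARY KEY,   -- e.g. 27831234567 for a +27 number, stored without the +
    CONSTRAINT CK_Subscribers_Msisdn CHECK (Msisdn BETWEEN 1 AND 999999999999999)   -- at most 15 digits
);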
Just my 0.02 worth.
An MSISDN is limited to 15 digits, prefixes not included.
MSISDN in the GSM variant is built up as:
MSISDN = CC + NDC (or NPA ) + SN
CC = Country Code
NDC = National Destination Code
NPA = Number Planning Area
SN = Subscriber Number
You ideally do not have to save the +. It simply represents the international exit prefix.
The longest international dialling code would only be used when making calls with a Thuraya, which is 882 16. You can have that saved elsewhere.
If you are planning to combine the International dialling code and MSISDN, you can use a nvarchar(21) or varchar(21).
I'm currently developing an application that needs to store a 10 to 20 digit value in the database.
My question is: which datatype should I be using? This value is used as a primary key, and therefore the performance of the DB is important for my application. In Java I use this value as a BigDecimal.
Quote from the manual:
numeric: up to 131072 digits before the decimal point; up to 16383 digits after the decimal point
http://www.postgresql.org/docs/current/static/datatype-numeric.html
131072 digits should cover your needs as far as I can tell.
Edit:
To answer the question about efficiency:
The first and most important question is: what kind of data is stored in that column and how do you use it?
If it's a number then use numeric.
If it's not a number use a varchar.
Never, ever store (real) numbers in character columns!
If you need to sort by that column you won't be satisfied with what you get if you use a character datatype (e.g. 2 will be sorted after 10)
Coming back to the efficiency question: I assume it is mostly space efficiency that you are concerned about. You can calculate the space requirements for your values yourself.
The storage requirement for the numeric data type is documented as well:
The actual storage requirement is two bytes for each group of four decimal digits, plus five to eight bytes overhead
So for 20 digits this would be a maximum of 10 bytes plus the five to eight bytes overhead. So max. 18 bytes.
To store 20 digits in a varchar column you need 21 bytes.
So from a space "efficiency" point of view numeric is slightly better. But that should never influence your decision, because the choice of datatypes should be driven by the requirements of the column's content.
From a performance point of view I don't think there will be a big difference either.
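For illustration, a minimal sketch in PostgreSQL (the table and column names are made up):
CREATE TABLE accounts (
    account_number numeric(20,0) PRIMARY KEY,   -- up to 20 digits, no fractional part
    owner_name     varchar(100)
);
-- a 20-digit value maps cleanly to java.math.BigDecimal on the application side
INSERT INTO accounts (account_number, owner_name)
VALUES (12345678901234567890, 'example');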
Try BIGINT instead of NUMERIC. It should work.
http://www.postgresql.org/docs/current/static/datatype-numeric.html