What should I know about basic data types in SQL Server?
In my database I need:
1. Flags/bits (I assume I should use a byte)
2. 64-bit IDs/ints
3. A variable-length string. It could be 5 letters or it could be 10,000 (for descriptions, but I plan to allow unlimited-length usernames)
Is there a TEXT type in SQL Server? I don't want to use varchar(limit) unless I can use something ridiculously high like 128k. How do I specify 1-byte to 8-byte ints?
For 1), use BIT - it's a single bit, and up to eight of these columns are packed into a single byte of storage.
For 2), use BIGINT - 64-bit signed int
For 3), definitely do NOT use TEXT/NTEXT - those have been deprecated since SQL Server 2005.
Use VARCHAR(MAX) or NVARCHAR(MAX) for up to 2 GB of textual information instead.
Here's the list of the SQL Server 2008 data types:
http://msdn.microsoft.com/en-us/library/ms187594.aspx
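To tie those three recommendations together, here is a minimal sketch; the table and column names are made up purely for illustration:

-- Hypothetical table illustrating the types discussed above
CREATE TABLE dbo.UserProfile
(
    UserId      BIGINT        NOT NULL PRIMARY KEY, -- 64-bit signed integer
    IsActive    BIT           NOT NULL,             -- up to 8 BIT columns share one byte of storage
    IsVerified  BIT           NOT NULL,
    UserName    NVARCHAR(256) NOT NULL,             -- bounded length where a sensible limit exists
    Description VARCHAR(MAX)  NULL                  -- variable-length text, up to 2 GB
);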
Flags/bits, I assume I should use a byte
Use "bit", which is exactly that: one bit.
64-bit IDs/ints
bigint is a 64-bit signed integer.
A variable-length string. It could be 5 letters or it could be 10,000 (for descriptions, but I plan to allow unlimited-length usernames)
varchar(max) holds up to 2 GB. Otherwise varchar(8000) is the conventional limit.
Microsoft even put it all into a nice handy web page for you.
Here is the document.
http://msdn.microsoft.com/en-us/library/aa258271%28SQL.80%29.aspx
Others have already provided good answers to your question. If you are doing any .NET development, and need to map SQL data types to CLR data types, the following link will be quite useful.
http://msdn.microsoft.com/en-us/library/bb386947.aspx
Randy
We are using MS-SQL and Oracle as our database.
We have used Hibernate annotations to create the tables, and in the annotated entity class we have declared the column definition as
@Column(name="UCAALSNO", nullable=false, columnDefinition="nvarchar(20)")
and this works fine for MS-SQL.
But when it comes to Oracle, nvarchar throws an exception, as Oracle supports only NVARCHAR2.
How can I write the annotation so that the nvarchar data type works for both databases?
You could use NCHAR:
In MSSQL:
nchar [ ( n ) ]
Fixed-length Unicode string data. n defines the string length and must
be a value from 1 through 4,000. The storage size is two times n
bytes. When the collation code page uses double-byte characters, the
storage size is still n bytes. Depending on the string, the storage
size of n bytes can be less than the value specified for n. The ISO
synonyms for nchar are national char and national character.
while in Oracle:
NCHAR
The maximum length of an NCHAR column is 2000 bytes. It can hold up to
2000 characters. The actual data is subject to the maximum byte limit
of 2000. The two size constraints must be satisfied simultaneously at
run time.
NCHAR occupies a fixed amount of space, so for very large tables there will be a considerable storage difference between an NCHAR and an NVARCHAR column; you should take this into consideration.
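A quick way to see the fixed-versus-variable storage difference on the SQL Server side, as a sketch with made-up values:

-- NCHAR pads to its declared length; NVARCHAR stores only what was entered
DECLARE @fixed NCHAR(20) = N'abc';
DECLARE @variable NVARCHAR(20) = N'abc';
SELECT DATALENGTH(@fixed)    AS fixed_bytes,    -- 40 bytes: always 2 * 20
       DATALENGTH(@variable) AS variable_bytes; -- 6 bytes: 2 per character actually stored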
I usually have incremental DB schema migration scripts for my production DBs and I only rely on Hibernate DDL generation for my integration testing in-memory databases (e.g. HSQLDB or H2). This way I choose the production schema types first and the "columnDefinition" only applies to the testing schema, so there is no conflict.
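With that split, the vendor-specific DDL can live in the migration scripts instead of in columnDefinition. A rough sketch, where the table name UCAALS and the NOT NULL constraint are assumptions on my part:

-- SQL Server migration script
CREATE TABLE UCAALS (UCAALSNO NVARCHAR(20) NOT NULL);

-- Oracle migration script: NVARCHAR2 is Oracle's spelling of the same idea
CREATE TABLE UCAALS (UCAALSNO NVARCHAR2(20) NOT NULL);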
You might want to read this too; it sidesteps the additional complexity of the N(VAR)CHAR2 types entirely, so you might consider setting a Unicode default character set instead:
Given that, I'd much rather go with the approach that maximizes
flexibility going forward, and that's converting the entire database
to Unicode (AL32UTF8 presumably) and just using that.
Although you might be advised to use VARCHAR2, VARCHAR has been a synonym for VARCHAR2 for a long time now.
So quoting a DBA opinion:
The Oracle 9.2 and 8.1.7 documentation say essentially the same thing,
so even though Oracle continually discourages the use of VARCHAR, so
far they haven't done anything to change its parity with VARCHAR2.
I'd say give it a try for VARCHAR too, as it's supported on most DBs.
I am migrating a Visual FoxPro database to SQL Server. One of the tables has a Char column with length 2147483647. I am wondering if the correct data type to use in SQL Server 2008 R2 is Char(2147483647). Is there an alternative type I can use in SQL Server which will not result in any loss of information?
The following image gives a description of the column as shown within Visual Studio 2008.
Visual FoxPro's native CHAR type only allows up to about 255 characters. What you're seeing is a FoxPro MEMO field, translated to a generic OLE equivalent.
A SQL Server VARCHAR(MAX) is the usual proper equivalent, assuming the MEMO is simply user-entered text in a western dialect and not a multi-lingual or data-blob variation.
Be aware that FoxPro does NOT speak UTF natively, so you may have code-page translation issues.
Hope this helps someone else.
MSDN varchar description states:
Use varchar(max) when the sizes of the column data entries vary considerably, and the size might exceed 8,000 bytes.
The maximum storage for Char is 8000 bytes/characters. varchar(max) on the other hand will use a storage size equal to the actual length of the characters entered plus 2 bytes.
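As a minimal sketch of the target schema (the table and column names here are invented for illustration):

-- Hypothetical SQL Server target for the migrated FoxPro MEMO column
CREATE TABLE dbo.LegacyNotes
(
    NoteId   INT          NOT NULL PRIMARY KEY,
    NoteText VARCHAR(MAX) NULL -- replaces the FoxPro MEMO field; holds up to 2 GB
);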
We are migrating some data from SQL Server to Oracle. For columns defined as NVARCHAR in SQL Server we started creating NVARCHAR columns in Oracle, thinking them to be similar. But it looks like they are not.
I have read a couple of posts on Stack Overflow and want to confirm my findings.
Oracle VARCHAR2 already supports Unicode if the database character set is, say, AL32UTF8 (which is true in our case).
SQL Server VARCHAR does not support Unicode. SQL Server explicitly requires columns to be of NCHAR/NVARCHAR type to store data in Unicode (specifically in the 2-byte UCS-2 format).
Hence, would it be correct to say that SQL Server NVARCHAR columns can/should be migrated as Oracle VARCHAR2 columns?
Yes, if your Oracle database is created using a Unicode character set, an NVARCHAR in SQL Server should be migrated to a VARCHAR2 in Oracle. In Oracle, the NVARCHAR data type exists to allow applications to store data using a Unicode character set when the database character set does not support Unicode.
One thing to be aware of in migrating, however, is character length semantics. In SQL Server, an NVARCHAR(20) allocates space for 20 characters, which requires up to 40 bytes in UCS-2. In Oracle, by default, a VARCHAR2(20) allocates 20 bytes of storage. In the AL32UTF8 character set, that is potentially only enough space for 6 characters, though most likely it will handle much more (a single character in AL32UTF8 requires between 1 and 3 bytes). You probably want to declare your Oracle types as VARCHAR2(20 CHAR), which indicates that you want to allocate space for 20 characters regardless of how many bytes that requires. That tends to be much easier to communicate than trying to explain why some 20-character strings are allowed while other 10-character strings are rejected.
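A small Oracle sketch of the two declarations (table and column names are hypothetical):

-- Byte semantics (the default): 20 bytes, which may hold fewer than 20 multi-byte characters
CREATE TABLE names_bytes (name VARCHAR2(20));

-- Character semantics: room for 20 characters regardless of how many bytes each one needs
CREATE TABLE names_chars (name VARCHAR2(20 CHAR));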
You can change the default length semantics at the session level so that any tables you create without specifying any length semantics will use character rather than byte semantics
ALTER SESSION SET nls_length_semantics=CHAR;
That lets you avoid typing CHAR every time you define a new column. It is also possible to set that at a system level but doing so is discouraged by the NLS team-- apparently, not all the scripts Oracle provides have been thoroughly tested against databases where the NLS_LENGTH_SEMANTICS has been changed. And probably very few third-party scripts have been.
What is a suitable data type for saving a string of more than 8,000 characters in a SQL Server database?
For SQL Server 2005+ varchar(max) or nvarchar(max) depending on whether you need to save as unicode or not.
If you don't have this requirement (or don't anticipate ever needing it), the varchar option is more space efficient.
You don't give a specific version of SQL Server in your question. Pre 2005 it would have been the text datatype.
NB: In answer to your comment use 'varchar(max)' exactly as written. Don't substitute a number in for "max"!
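To make that NB concrete, a minimal sketch (the table and column names are made up):

-- Correct: the literal keyword MAX, not a number
CREATE TABLE dbo.Articles (Body VARCHAR(MAX));

-- Incorrect: rejected, because a sized varchar cannot exceed 8000
-- CREATE TABLE dbo.Articles (Body VARCHAR(100000));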
I am currently moving a product from SQL Server to Oracle. I am semi-familiar with SQL Server and know nothing about Oracle, so I apologize if the mere presence of this question offends anyone.
Inferring from this page, http://download.oracle.com/docs/cd/E12151_01/doc.150/e12156/ss_oracle_compared.htm, it would seem that the data type conversion from SQL Server to Oracle should be:
REAL = FLOAT(24) -> FLOAT(63)
FLOAT(p) -> FLOAT(p)
TIMESTAMP -> NUMBER
NVARCHAR(n) -> VARCHAR(n*2)
NCHAR(n) -> CHAR(n*2)
Here are my questions regarding them:
For FLOAT, considering that FLOAT(p) -> FLOAT(p), wouldn't it also mean that FLOAT -> FLOAT(24)?
For TIMESTAMP, since Oracle also has its own version of it, wouldn't it be better that TIMESTAMP -> TIMESTAMP?
Finally, for NVARCHAR(n) and NCHAR(n), I thought the issue would be regarding Unicode. Then, again, since Oracle provides its own version of both, wouldn't it make more sense that NVARCHAR(n) -> NVARCHAR(n) and NCHAR(n) -> NCHAR(n)?
It would be much appreciated if someone were to elaborate on the previous 3 matters.
Thanks in advance.
It appears that Oracle's CHAR and VARCHAR2 (always use VARCHAR2 instead of VARCHAR) already support Unicode - the document you've linked to advises converting to those from the SQL Server NCHAR and NVARCHAR datatypes.
The SQL Server TIMESTAMP isn't actually a timestamp at all - it's some kind of identifier based on the time that's just used to indicate that a row has changed - it can't be converted back into any kind of DATETIME (at least in a way that I know about).
For FLOAT, the default precision of 126 binary digits would be enormous - since the developer tools automatically map SQL Server's FLOAT to Oracle's FLOAT(53), why not use that precision?
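Putting those points together, a hedged sketch of what the Oracle side might look like; the table, column names, and sizes are invented, and it assumes an AL32UTF8 database:

-- Hypothetical Oracle target table reflecting the advice above
CREATE TABLE migrated_sample
(
    amount FLOAT(53),         -- matches the tools' mapping of SQL Server FLOAT
    name   VARCHAR2(50 CHAR), -- from NVARCHAR(50); VARCHAR2 suffices in a Unicode database
    code   CHAR(10 CHAR)      -- from NCHAR(10)
);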
This is more FYI than an answer to your question, but you're potentially going to run into a particularly painful difference between SQL Server and Oracle. In SQL Server, you can define a string column (of whatever flavor) to not allow NULL values, and then insert zero-length (aka "blank") strings into that column, because SQL Server does not consider a blank string to be the same as a NULL.
Oracle does consider a blank string to be the same as a NULL, so Oracle will not let you insert blank values into a NOT NULL column. This obviously causes problems when copying data from a table in SQL Server into its counterpart table in Oracle. Your choices for dealing with this are:
Set the offending string column in Oracle to allow NULL values (not a good idea)
When copying the data, replace the blank strings with something else (I have no idea what you should use here)
Skip the offending rows and pretend you never saw them
I'd love to think that Oracle's choice to consider blank strings to be NULL (where they're alone among the major DBs) was to lock customers into their platform, but this one actually works in the opposite direction. You can move a database from Oracle to something else without the blank=NULL difference causing any problems.
See this earlier question: Oracle considers empty strings to be NULL while SQL Server does not - how is this best handled?
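A tiny sketch of the difference, with a throwaway table name:

-- SQL Server: succeeds, because '' is an empty string, not NULL
CREATE TABLE t (s VARCHAR(10) NOT NULL);
INSERT INTO t (s) VALUES ('');

-- Oracle: the equivalent insert fails with a cannot-insert-NULL error, because '' is treated as NULL
CREATE TABLE t (s VARCHAR2(10) NOT NULL);
INSERT INTO t (s) VALUES ('');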
The following is not a direct answer to your question, but it is worth taking a look at the sqlteam blog:
sqlteam - Datatypes translation between Oracle and SQL Server part 2: number
It has detailed explanation about how to handle numbers, etc.