How many characters will hold if we define varchar in SQL Server? - sql-server

Here is my code:
Create table dbo.EXA
(
    Name VARCHAR,
    ID INT
)
How many characters will the VARCHAR data type hold, since I am not defining a size? Or is defining
NAME VARCHAR
by itself wrong?

You will get EXACTLY ONE CHARACTER - which is typically not what you want. This is the case wherever you define a SQL Server variable, a parameter on a stored procedure, or a table column.
If you don't specify a length for VARCHAR in the context of a conversion using CONVERT or CAST, the default is 30 characters.
My recommendation would be to ALWAYS explicitly define a length - then it's clear (to you, to T-SQL, and to the poor person who has to maintain your code in a year) what you wanted/needed.

I connected to SQL Server 2005, ran the above, and checked the table afterwards: the column had automatically been set to one character.

varchar is equivalent to varchar(1)
I tried this code; check this out:
Demo

Related

Can SQL Server give me a warning if I try to insert Unicode without the N prefix

OK, so the support team has once again updated a value in the database and forgotten the N prefix, so the value was replaced with ?s.
Is there something that can be done on either the database (SQL Server 2012) or SQL Server Management Studio 2012 to stop or warn people?
And why does the database automagically change the update to ?s? If it's an nvarchar column and I'm passing in Unicode without N, why not raise an error?
This is not an issue with the driver being used to connect to SQL Server. It is simply an implicit conversion happening because the string literal uses the wrong datatype.
Everything has a type. A number 2 by itself is, by default, an INT, not a DECIMAL or FLOAT or anything else. A number 2.0 is, by default, a NUMERIC (same as DECIMAL), not a FLOAT, etc. Strings are no different: a string expressed as 'something' is an 8-bit string, encoded using the code page of the database that the query runs in. If you had used '随机字符中国' in a database set to one of the collations that support those characters in an 8-bit encoding (it would be a Double-Byte Character Set (DBCS)), then it would not have been translated to ?, since the characters would exist in that code page. To see this, first create the database:
CREATE DATABASE [ChineseSimplifiedPinyin] COLLATE Chinese_Simplified_Pinyin_100_CI_AS;
Then, run this:
USE [ChineseSimplifiedPinyin];
SELECT '随机字符中国';
and it will return those characters and not ??????.
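The same code-page behavior can be sketched in Python, using cp1252 as a stand-in for a Western single-byte code page and GBK as the Chinese DBCS (the code-page choices here are illustrative, not what SQL Server itself calls):

```python
# Encoding Chinese text to a single-byte Western code page loses every
# character (each becomes '?'), while a DBCS code page like GBK keeps them.
text = "随机字符中国"

# Western single-byte code page: every character falls outside the page -> '?'
lossy = text.encode("cp1252", errors="replace")
print(lossy)  # b'??????'

# Chinese DBCS code page: all characters round-trip intact
restored = text.encode("gbk").decode("gbk")
print(restored == text)  # True
```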
And why does the database automagically change the update to ?s? If it's an nvarchar column and I'm passing in Unicode without N, why not raise an error?
The UPDATE is not being changed. An implicit conversion is happening because you are using the wrong datatype for the string literal when you do not prefix it with N. This is no different from doing the following:
DECLARE @Test INT;
SET @Test = 2.123;
SELECT @Test;
which simply returns 2.
Now, it might be possible to set up a Policy to trap implicit conversions, but that would be too far-reaching and would likely break lots of stuff. Even if you could narrow it down to implicit conversions going from VARCHAR to NVARCHAR, that would still break code that otherwise works in the current situation: inserting 'bob' into an NVARCHAR field is an implicit conversion, yet there is no data loss. And you can't trap any of this in a trigger, because a trigger runs after the fact of receiving the implicitly converted data.
The best way to ensure nobody forgets to insert or update without the N prefix is to create a web app or console app that would be an interface for this (which is probably a good idea anyway since that will also prevent someone from using the wrong WHERE clause or forgetting to use one altogether, both of which do happen). Creating a small .NET web or console app is pretty easy and .NET strings are all Unicode (UTF-16 Little Endian). Then the app takes the data and submits the INSERT or UPDATE statement. Be sure to use a parameter and not dynamic SQL.
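The parameterization the answer recommends can be sketched in Python; pyodbc against SQL Server would look much the same, but sqlite3 is used here only as a stdlib stand-in so the sketch is runnable, and the table and column names are illustrative:

```python
import sqlite3

# Funnel all writes through an app that uses parameters: the parameter carries
# the Unicode string as-is, so there is no string literal in the SQL text on
# which anyone could forget an N prefix.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Notes (Body TEXT)")

# Parameter placeholder, not string concatenation / dynamic SQL
conn.execute("INSERT INTO Notes (Body) VALUES (?)", ("随机字符中国",))

stored = conn.execute("SELECT Body FROM Notes").fetchone()[0]
print(stored == "随机字符中国")  # True: no ? mangling
```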
Given that the ? character is valid in this field, if it can be determined that multiple ?s would never naturally occur, then you can probably prevent the cases involving more than a single converted character by creating an INSERT, UPDATE trigger that cancels the operation if multiple ?s in a row are present. Using a trigger as opposed to a check constraint allows a little more control, especially over the error message:
CREATE TRIGGER tr_PreventLosingUnicodeCharacters
ON SchemaName.TableName
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF (EXISTS (SELECT *
                FROM INSERTED ins
                WHERE ins.column1 LIKE N'%??%')
       )
    BEGIN
        ROLLBACK; -- cancel the INSERT or UPDATE operation

        DECLARE @Message NVARCHAR(1000);
        SET @Message =
            N'INSERT or UPDATE of [column1] without "N" prefix results in data loss. '
            + NCHAR(13) + NCHAR(10)
            + N'Please try again using N''string'' instead of just ''string''.';

        RAISERROR(@Message, 16, 1);
        RETURN;
    END;
END;
And if 2 ?s can naturally happen, then search for ??? instead, and then only 1- or 2-character items might slip by. In either case, this should catch enough erroneous entries that you only need to fix things on rare occasions (hopefully :).
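The trigger's detection rule can be sketched outside SQL as well; here is a minimal Python version of the "run of consecutive ?s" check (the function name and default threshold, which mirrors the N'%??%' pattern, are illustrative):

```python
# Flag a value when it contains a run of consecutive '?' characters, which
# suggests Unicode text was mangled by a missing N prefix.
def would_lose_unicode(value: str, run_length: int = 2) -> bool:
    return "?" * run_length in value

print(would_lose_unicode("test - ??????"))  # True: looks like mangled Unicode
print(would_lose_unicode("really?"))        # False: a lone ? is fine
```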
Is there something that can be done on either the database (SQL Server 2012) or SQL Server Management Studio 2012 to stop or warn people?
Not to my knowledge. About the only thing I can think of would be:
ALTER TABLE some_table ADD CONSTRAINT stop_messing_it_up CHECK (column1 NOT LIKE '%?%');
but you can't tell the difference between a question mark that came from prior content-mangling and a real question mark, so that would only be workable if it were also invalid to put a question mark in the database.
why does the database automagically change the update to ?s, if it's an nvarchar column
It doesn't matter what the column is, it's the type of the string literal in the query expression. In SQL Server (only), non-NATIONAL string literals can only contain characters in the locale-specific (“ANSI”) code page, so the data loss occurs before the content gets anywhere near your table:
SELECT '随机字符中国';
??????
SELECT N'随机字符中国';
随机字符中国

Implicit conversion from data type ntext to varchar is not allowed. Use the CONVERT function to run this query

I get the above error in one of my SQL Server 2000 stored procedures. I am not using any variables of type ntext here. I do not know why I get this error. Can someone help?
It appears that the problem is not with the stored procedure at all. As you said in your comment, the error happens when the input exceeds 8000 characters. SQL Server 2000 doesn't have VARCHAR(MAX); the maximum length for VARCHAR is 8000. So, if you try to pass a longer string to your stored procedure, it needs to be converted to TEXT, but that can't be an implicit conversion, so you need a parameter of type TEXT. Of course, you would then need to change your stored procedure, and there are many operations that can't be performed on a column of this datatype, so you may be unable to actually do what you want.

SQL Server Collation / ADO.NET DataTable.Locale with different languages

We have a WinForms app which stores data in SQL Server (2000; we are working on porting it to 2008) through ADO.NET (1.1, which we are porting to 4.0). Everything works fine if I read data previously written in a Western-European locale (e.g. "test", "test ù"), but now we have to be able to mix Western and non-Western alphabets as well (e.g. "test - ۓےۑ" - these are just random Arabic chars).
On the SQL Server side, the database has been set up with the Latin1_General collation, and the field is an nvarchar(80). If I run a SQL SELECT statement (e.g. "SELECT * FROM MyTable WHERE field = 'test - ۓےۑ'"; don't mind the "*" or the actual names) from Query Analyzer, I get no results; the same happens if I pass the SQL statement to an ADO.NET DataAdapter to fill a DataTable. My guess is that it has something to do with collation, but I don't know how to correct this: do I have to change the collation (SQL Server) to a different one? Or do I have to set the locale on the DataAdapter/DataTable (ADO.NET)?
Thanks in advance to anyone who can help.
Shouldn't you use N when comparing nvarchar with an extended character set?
SELECT * From TestTable WHERE GreekColCaseInsensitive = N'test - ۓےۑ'
Yes, the problem is most likely the collation. The Latin1_General collation does not include the rules to sort and compare non-Latin characters.
MSDN claims:
If you must store character data that reflects multiple languages, you can minimize collation compatibility issues by always using the Unicode nchar, nvarchar, and ntext data types instead of the char, varchar, text data types. Using the Unicode data types eliminates code page conversion issues.
Since you have already complied with this, you should read further on the info about Mixed Collation Environments here.
Additionally, I want to add that just changing a collation is not something done easily; check MSDN for SQL 2000:
When you set up SQL Server 2000, it is important to use the correct collation settings. You can change collation settings after running Setup, but you must rebuild the databases and reload the data. It is recommended that you develop a standard within your organization for these options. Many server-to-server activities can fail if the collation settings are not consistent across servers.
You can specify a collation on a per-column basis, however:
CREATE TABLE TestTable (
    id int,
    GreekColCaseInsensitive nvarchar(10) COLLATE Greek_CI_AS,
    LatinColCaseSensitive nvarchar(10) COLLATE Latin1_General_CS_AS
)
Have a look at the different binary multilingual collations here. Depending on the charset you use, you should find one that fits your purpose.
If you are not able or willing to change the collation of a column, you can also specify the collation to be used directly in the query, like:
SELECT * From TestTable
WHERE GreekColCaseInsensitive = N'test - ۓےۑ'
COLLATE latin1_general_cs_as
As jfrobishow pointed out, the use of N in front of the string you want to compare is essential. What does it do?
It denotes that the subsequent string is in Unicode (the N actually stands for National language character set). Which means that you are passing an NCHAR, NVARCHAR or NTEXT value, as opposed to CHAR, VARCHAR or TEXT. See Article #2354 for a comparison of these data types.
You can find a quick rundown here.
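The reason a non-N literal can never match the stored nvarchar value can also be sketched in Python, with cp1252 standing in for a Western single-byte code page (the code-page choice is illustrative):

```python
# Distinct Unicode strings collapse to the same '?' bytes when squeezed
# through a single-byte Western code page, so an equality test against the
# real Unicode value can never succeed.
stored = "test - ۓےۑ"  # what the nvarchar column actually holds
literal = stored.encode("cp1252", errors="replace").decode("cp1252")
print(literal)             # 'test - ???'
print(literal == stored)   # False: the WHERE clause matches no rows
```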

Varchar with trailing spaces as a Primary Key in SQL Server 2008

Is it possible to have a varchar column as a primary key with values like 'a ' and 'a'? It always gives the error "Violation of PRIMARY KEY constraint" in MS SQL Server 2008.
In Oracle it doesn't give any error.
BTW, I'm not implementing it this way; I'm only trying to migrate the data from Oracle to SQL Server.
Regards
The SQL-92 standard dictates that for character string comparison purposes, the strings are padded to be the same length prior to comparison: typically the pad character is a space.
Therefore 'a' and 'a ' compare EQUAL, and this violates the PK constraint.
http://support.microsoft.com/kb/316626
I could find nothing to indicate this behaviour has changed since then.
You may get away with using varbinary instead of varchar, but this may not do what you want either.
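The SQL-92 padded-comparison rule described above can be sketched in Python (the function name is illustrative):

```python
# SQL-92 character comparison: pad both strings with spaces to a common
# length before comparing, so trailing spaces are not significant.
def ansi_padded_equal(a: str, b: str) -> bool:
    width = max(len(a), len(b))
    return a.ljust(width) == b.ljust(width)

print(ansi_padded_equal("a", "a "))  # True: why the PK sees a duplicate
print(ansi_padded_equal("a", "b "))  # False
```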
You can use a text or ntext column (which one depends on the kind of data you are importing and its length) - this will preserve spaces. char will pad with spaces, so it may not be suitable.
I thought this might have something to do with ANSI_PADDING, but my testing here indicates that for PKs (and possibly UNIQUE indexes as well; not tried) this still doesn't help, unfortunately.
So:
SET ANSI_PADDING ON
works for non-PK fields - that is, it preserves the trailing space on the insert - but for some reason not on PKs...
See :
http://support.microsoft.com/kb/154886/EN-US/
Use a datatype that doesn't strip trailing spaces.
You might try storing the value as varbinary, and then converting it to varchar when you select.
You could add another column to your primary key constraint which holds the length of the data in the Oracle column. This lets you import the data and reconstruct the Oracle values when you need to - for example, with a view that uses the stored length to add back the missing spaces for display in reports etc.
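The length-column idea can be sketched in Python: keep the trimmed value plus the original Oracle length as the composite key, and rebuild the padded value on the way out (the function names are illustrative):

```python
# Composite key: (value without trailing spaces, original length). 'a' and
# 'a ' map to distinct keys, and the original value stays recoverable.
def to_sql_server(value: str):
    return value.rstrip(" "), len(value)

def rebuild_oracle_value(trimmed: str, original_length: int) -> str:
    return trimmed.ljust(original_length)

key1 = to_sql_server("a")   # ('a', 1)
key2 = to_sql_server("a ")  # ('a', 2)
print(key1 != key2)                          # True: the composite key stays unique
print(rebuild_oracle_value(*key2) == "a ")   # True: original value recoverable
```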

Can SQL Server do a type conversion at run-time

We have a utility written in C which reads columns extracted from a database using a stored procedure and outputs a CSV file. Simple, huh? However, when reading a smallint column, it crashes, and not being the greatest C programmer on the planet, I can't nail it down. As a workaround, can you change the data type in a stored procedure, e.g. could the C program "see" the column as a varchar rather than a smallint at runtime?
This is only a monthly process so the impact of doing the type conversion is not an issue.
Yes, it can. You can do so using either the CAST or CONVERT function. For instance:
CONVERT(varchar(6), MyIntColumn) AS MyIntColumn
That will ensure that when the column goes to the client, it goes as a varchar string. (Give varchar an explicit length here; a smallint needs at most 6 characters, including the sign.)
