I am in a sticky situation. Originally my database catalog was case-insensitive. I wrote happy queries without minding the capitalization of my variable names. Everything was good. After my database was migrated to a different host, the collation at the server instance level is case-sensitive. Now my variable names need to be spell-checked and case-checked.
That's alright, as I use variables sparingly.
Recently a situation arose where I need to use temp tables to buffer some results from multiple views before referring to them in my main query. In essence:
SELECT * INTO #myview1 FROM vw_myview1
SELECT * INTO #myview2 FROM vw_myview2
and
SELECT *
FROM #myview1 v1
JOIN #myview2 v2 on v1.id = v2.id
All would be good if my database instance had a case-insensitive collation. But no. In my main queries, column name capitalization is all messed up: WorkID, workId, workid, you name it. I have more than 50 of these queries where I need the temp-table workaround. It's insane and error-prone to have to fix the capitalization for every instance that refers to columns in the new temp table. Is there any way I can flip a switch and say "ignore column name collations for this temp table"?
If you have to ensure the temporary table has the column collation you require, there are a couple of ways. It is a PITA, but not so bad. For 50+ tables, it's still a PITA.
In SSMS, use the Tools -> Options dialog to get to "SQL Server Object Explorer". The "Scripting" options are located there. Set the "Include collation" option to true, then script out the table or view and convert the script to the temp-table syntax you need.
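For example, a scripted-out view converted to a temp table might look like this (the column names and collation here are illustrative assumptions, not from the original schema):

```sql
-- Hypothetical result of scripting out vw_myview1 with "Include collation"
-- enabled, converted to a temp table. The explicit COLLATE clauses mean the
-- string columns no longer inherit the instance collation from tempdb.
CREATE TABLE #myview1
(
    id     INT NOT NULL,
    WorkID VARCHAR(20)  COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    Name   NVARCHAR(100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
);

INSERT INTO #myview1 (id, WorkID, Name)
SELECT id, WorkID, Name
FROM vw_myview1;
```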
You can also "cheat" and use a real table in the same database rather than a temp table, since its columns will inherit the database collation. It's up to you to clean it up, and if it's large, the load will land in the transaction log.
I've seen examples where a SELECT ... INTO from the source with a WHERE 1=2 predicate is used to create the table. This causes the source columns' collations to be carried over, and since the predicate is never true, it only creates the (empty) table. It seems funky to me and more tedious than scripting it out.
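For reference, the WHERE 1=2 trick sketched against the view name from the question above:

```sql
-- Create an empty temp table whose columns keep the view's collations;
-- WHERE 1 = 2 is never true, so no rows are copied.
SELECT * INTO #myview1 FROM vw_myview1 WHERE 1 = 2;

-- Then load it in a second step.
INSERT INTO #myview1
SELECT * FROM vw_myview1;
```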
I'm using Microsoft SQL Server Management Studio. After I modified a column type from varchar to int, I tried to update the table, but it threw the following error:
Saving changes is not permitted. The change you have made requires the
following table to be dropped and re-created. You have either made
changes to a table that can't be recreated or enabled the option
prevent saving changes that require the table to be re-created. (Followed by a list of 3 tables that reference this table via foreign keys.)
I tried to fix it via Tools >> Options >> Designers by unchecking "Prevent saving changes that require table re-creation",
from this question
Sql Server 'Saving changes is not permitted' error ► Prevent saving changes that require table re-creation
After that, the update no longer threw an error, but the following error appears when I open the table designer
Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) (SQLEditors)
after modifying the column type.
I tried the following as well, but got the same error:
RegSvr32 msxml3.dll
RegSvr32 msxml6.dll
Per the comments, updating here:
Revert the changes back to what they were before the error, and work on a different solution to fix the problem.
One alternative is:
1. Create a new column, with the target data type, in the table you want the data converted in.
2. Update that new column from the existing column holding the data you are trying to convert.
3. Verify the data in the new column is correct.
4. Drop the old column.
5. Rename the new column to the old column's name.
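The steps above can be sketched in T-SQL. The table and column names here are hypothetical (assume a varchar column Amount being converted to int):

```sql
-- 1. Add the new column with the target type.
ALTER TABLE dbo.MyTable ADD Amount_new INT NULL;
GO

-- 2. Copy/convert the data across.
UPDATE dbo.MyTable
SET Amount_new = CAST(Amount AS INT);

-- 3. Verify: this should return 0 rows.
SELECT *
FROM dbo.MyTable
WHERE Amount IS NOT NULL AND Amount_new IS NULL;

-- 4. Drop the old column.
ALTER TABLE dbo.MyTable DROP COLUMN Amount;

-- 5. Rename the new column to the old name.
EXEC sp_rename 'dbo.MyTable.Amount_new', 'Amount', 'COLUMN';
```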
I am working on a project creating a log table of sorts for failed jobs. Since the step_id can change depending on step order, I wanted to use the step_uid as a unique identifier for a step. With that in mind, it seems that step_uid in msdb's sysjobsteps table is a nullable column, and I am relying on that column NOT being null.
Does anyone know why or when that column would ever be null? No examples exist on my current server.
By looking at the source code of the sp_add_jobstep_internal stored procedure, we can see that step_uid will always be filled in by this procedure.
Moreover, the sp_write_sysjobstep_log stored procedure assumes that step_uid cannot be null (it copies its value into the sysjobstepslogs table, where step_uid is defined as NOT NULL).
I think the step_uid column was defined as nullable only because it did not exist in SQL Server 2000. However, since SQL Server 2005 it seems to always be filled in.
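You can check your own instance for NULLs directly, for what it's worth:

```sql
-- Look for any job steps where step_uid was never populated
-- (expected to return no rows on SQL Server 2005 and later).
SELECT job_id, step_id, step_name
FROM msdb.dbo.sysjobsteps
WHERE step_uid IS NULL;
```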
I have SQL Server 2012 installed that is used for a few different applications. One of our applications needs to be installed, but the company is saying that:
The SQL collation isn't correct, it needs to be: SQL_Latin1_General_CP1_CI_AS
You can just uninstall the SQL Server Database Engine & upon reinstall select the right collation.
What possible reason would this company have to want to change the collation of the database engine itself?
Yes, you can set the collation at the database level. Here is an example:
USE master;
GO
ALTER DATABASE <DatabaseName>
COLLATE SQL_Latin1_General_CP1_CI_AS;
GO
You can alter the database collation even after you have created the database, using the following query:
USE master;
GO
ALTER DATABASE Database_Name
COLLATE Your_New_Collation;
GO
For more information on database collation Read here
What possible reason would this company have to want to change the collation of the database engine itself?
The other two answers are speaking in terms of Database-level Collation, not Instance-level Collation (i.e. "the database engine itself"). The most likely reason the vendor wants a highly specific Collation (not just a case-insensitive one of your choosing, for example) is that, like most folks, they don't really understand how Collations work. What they do know is that their application works (i.e. does not get Collation conflict errors) when the Instance and Database both have a Collation of SQL_Latin1_General_CP1_CI_AS, which is the Collation of the Instance and Database they develop the app on, because that is the default Collation when installing on an OS having English as its language.
I'm guessing that they have probably had some customers report problems that they didn't know how to fix, but narrowed it down to those Instances not having SQL_Latin1_General_CP1_CI_AS as the Instance / Server -level Collation. The Instance-level Collation controls not just tempdb meta-data (and default column Collation when no COLLATE keyword is specified when creating local or global temporary tables), which has been mentioned by others, but also name resolution for variables / parameters, cursors, and GOTO labels. Even if unlikely that they would be using GOTO statements, they are certainly using variables / parameters, and likely enough to be using cursors.
What this means is that they likely had problems in one or more of the following areas:
Collation conflict errors related to temporary tables:
tempdb being in the Collation of the Instance does not always mean that there will be problems, even if the COLLATE keyword was never used in a CREATE TABLE #[#]... statement. Collation conflicts only occur when attempting to combine or compare two string columns. So assuming that they created a temporary table and used it in conjunction with a table in their Database, they would need to be JOINing on those string columns, or concatenating them, or combining them via UNION, or something along those lines. Under these circumstances, an error will occur if the Collations of the two columns are not identical.
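A sketch of such a conflict (table and column names here are hypothetical): if the instance collation is case-sensitive while the database uses SQL_Latin1_General_CP1_CI_AS, joining a temp-table string column to a database column fails.

```sql
-- Inherits the instance collation (no COLLATE keyword used).
CREATE TABLE #Lookup (Code VARCHAR(10));

-- dbo.Orders.Code uses the database collation; if the two collations
-- differ, this JOIN raises Msg 468: "Cannot resolve the collation
-- conflict ... in the equal to operation."
SELECT o.OrderID
FROM dbo.Orders o
JOIN #Lookup l ON l.Code = o.Code;
```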
Unexpected behavior:
Comparing a string column of a table to a variable or parameter will use the Collation of the column. Given their requirement for you to use SQL_Latin1_General_CP1_CI_AS, this vendor is clearly expecting case-insensitive comparisons. Since string columns of temp tables (that were not created using the COLLATE keyword) take on the Collation of the Instance, if the Instance is using a binary or case-sensitive Collation, then their application will not be returning all of the data that they were expecting it to return.
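To illustrate with hypothetical data:

```sql
-- The temp table's column inherits the instance collation.
CREATE TABLE #Names (LastName VARCHAR(50));

INSERT INTO #Names VALUES ('Smith'), ('SMITH'), ('smith');

-- Under a case-insensitive collation this returns 3 rows;
-- under a case-sensitive or binary collation, only 'Smith'.
SELECT LastName
FROM #Names
WHERE LastName = 'Smith';
```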
Code compilation errors:
Since the Instance-level Collation controls resolution of variable / parameter / cursor names, if they have inconsistent casing in any of their variable / parameter / cursor names, then errors will occur when attempting to execute the code. For example, doing this:
DECLARE @CustomerID INT;
SET @customerid = 5;
would get the following error:
Msg 137, Level 15, State 1, Line XXXXX
Must declare the scalar variable "@customerid".
Similarly, they would get:
Msg 16916, Level 16, State 1, Line XXXXX
A cursor with the name 'Customers' does not exist.
if they did this:
DECLARE customers CURSOR FOR SELECT 1 AS [Bob];
OPEN Customers;
These problems are easy enough to avoid, simply by doing the following:
Specify the COLLATE keyword on string columns when creating temporary tables (local or global). Using COLLATE DATABASE_DEFAULT is handy if the Database itself is not guaranteed to have a particular Collation. But if the Collation of the Database is always the same, then you can specify either DATABASE_DEFAULT or the particular Collation. Though I suppose DATABASE_DEFAULT works in both cases, so maybe it's the easier choice.
Be consistent in casing of identifiers, especially variables / parameters. And to be more complete, I should mention that Instance-level meta-data is also affected by the Instance-level Collation (e.g. names of Logins, Databases, server-Roles, SQL Agent Jobs, SQL Agent Job Steps, etc). So being consistent with casing in all areas is the safest bet.
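The first fix can be sketched as follows (the table and column names are placeholders):

```sql
-- Specify the collation explicitly so the string column does not
-- inherit the instance collation; DATABASE_DEFAULT picks up whatever
-- collation the current database uses.
CREATE TABLE #Customers
(
    CustomerID   INT NOT NULL,
    CustomerName NVARCHAR(100) COLLATE DATABASE_DEFAULT NULL
);
```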
Am I being unfair in assuming that the vendor doesn't understand how Collations work? Well, according to a comment made by the O.P. on M.Ali's answer:
I got this reply from him: "It's the other way around, you need the new SQL instance collation to match the old SQL collation when attaching databases to it. The collation is used in the functioning of the database, not just something that gets set when it's created."
the answer is "no". There are two problems here:
No, the Collations of the source and destination Instances do not need to match when attaching a Database to a new Instance. In fact, you can even attach a system DB to an Instance that has a different Collation, thereby having a mismatch between the attached system DB and the Instance and the other system DBs.
It's unclear if "database" in that last sentence means actual Database or the Instance (sometimes people use the term "database" to refer to the RDBMS as a whole). If it means actual "Database", then that is entirely irrelevant because the issue at hand is the Instance-level Collation. But, if the vendor meant the Instance, then while true that the Collation is used in normal operations (as noted above), this only shows awareness of simple cause-effect relationship and not actual understanding. Actual understanding would lead to doing those simple fixes (noted above) such that the Instance-level Collation was a non-issue.
If needing to change the Collation of the Instance, please see:
Changing the Collation of the Instance, the Databases, and All Columns in All User Databases: What Could Possibly Go Wrong?
For more info on working with Collations / encodings / Unicode / etc, please visit:
Collations.Info
Does anyone know how the Schema Compare in Visual Studio (currently using 2010) determines how to handle [SQL Server 2008 R2] database table updates (column data type, optionality, etc.)?
The options are to:
Use separate ALTER TABLE statements
Create a new table, copy the old data into the new table, rename the old table before the new one can be renamed to assume the proper name
I'm asking because we have a situation involving a TIMESTAMP column (used for optimistic locking). If Schema Compare uses the new-table approach, the TIMESTAMP column values will change and cause problems for anyone holding the old TIMESTAMP values.
I believe Schema Compare employs the same CREATE-COPY-DROP-RENAME (CCDR) strategy as VSTSDB described here: link
You should be able to confirm this by running a compare and scripting out the deployment, no?