How can I create a backup script for the diagrams in SQL Server? - sql-server

I use SQL Server 2012 and SQL Server 2008 R2.
I created a script of all the objects (tables / triggers / stored procedures / functions ...) in my database.
I generated this script from SQL Server Management Studio, and I can recreate my database with it on another server. But I lose all the diagrams of my database after running the script to create the new database.
Therefore, I need to create a backup script of all the diagrams that exist in my database.
I need to execute this script on the destination database to recreate all my diagrams.
I found this link, but I need something that creates the whole script (INSERT commands) automatically.

I have found a reasonable solution. The problem is that Management Studio cannot display more than 65535 characters for non-XML data, and cannot be configured to display more than that.
See code for documentation :)
Backup script:
-- 1. Read from DB, using XML to work around the 65535 character limit
declare @definition varbinary(max)
select @definition = definition from dbo.sysdiagrams where name = 'ReportingDBDiagram'
select
'0x' + cast('' as xml).value('xs:hexBinary(sql:variable("@definition") )', 'varchar(max)')
for xml path('')
-- 2. Open the result XML in Management Studio
-- 3. Copy the result
-- 4. Paste this into the restore script for the @definition variable
Restore script:
declare @definition varbinary(max)
set @definition = 0xD0CF -- Paste the 0x... value produced by the backup script
-- Create diagram using 'official' Stored Procedure
exec dbo.sp_creatediagram
@diagramname = 'ReportingDBDiagramCopy',
@owner_id = null,
@version = 1,
@definition = @definition

Scripting your database does not include diagrams as they are not server objects in the same way as a table or stored procedure; they exist as data in the sysdiagrams table.
A similar question on SO asked How do you migrate SQL Server Database Diagrams to another Database?
The accepted answer is to copy the contents of the sysdiagrams table to the new database, so you could include the table contents in your script. The answer with the most up-votes has a link to a way of scripting diagrams.
I've tried backing up and then restoring a database to the same server, deleting the diagram I had created (I only had one) and then running the following query:
INSERT INTO database2.dbo.sysdiagrams
(
NAME
,principal_id
,version
,DEFINITION
)
SELECT NAME
,principal_id
,version
,DEFINITION
FROM database1.dbo.sysdiagrams
The diagram was successfully restored. However, I did this on a restored backup; I should really test it with a new database generated from a script.
UPDATE:
I scripted a database and then created a new database from it. When trying to rebuild the diagrams using an INSERT statement, I got an error.
So although it seems possible, it's not trivial to create diagrams in a new database created from a script. Go with the answer given regarding scripting diagrams and modify it for your own needs.
Perhaps you can investigate further and post your own answer :)

Here's a quick & dirty method I use. Since the query window won't display the full varbinary(max) value of the definition field, but the XML editor will, I output the rows to XML as follows:
Run the following query on the server/database that contains the diagrams:
SELECT 'INSERT sysdiagrams(name,principal_id,diagram_id,version,definition) VALUES('''+name+''','
+CONVERT(varchar(2),principal_id)+','+CONVERT(varchar(2),diagram_id)+','+CONVERT(varchar(2),version)+','
+'0x' + CAST('' as xml).value('xs:hexBinary(sql:column("definition"))','varchar(max)') +')'
FROM RCSQL_ClaimStatus.dbo.sysdiagrams
FOR XML PATH
Click on the generated link to open the XML result, then press Ctrl+A and Ctrl+C to copy all generated rows.
Paste that output back into your query window. I usually paste it between a pair of IDENTITY_INSERT's like this:
--TRUNCATE TABLE sysdiagrams
SET IDENTITY_INSERT sysdiagrams ON;
<row>INSERT sysdiagrams(name,principal_id,diagram_id,version,definition) VALUES('ERD1',1,1,1,0xD0CF11E0A1B11AE100000...)</row>
<row>INSERT sysdiagrams(name,principal_id,diagram_id,version,definition) VALUES('ERD2',1,2,1,0xD0CF11E0A1B11AE100000...)</row>
<row>INSERT sysdiagrams(name,principal_id,diagram_id,version,definition) VALUES('ERD3',1,3,1,0xD0CF11E0A1B11AE100000...)</row>
SET IDENTITY_INSERT sysdiagrams OFF;
Remove the <row> and </row> XML tags from your inserts, and run them on the target server. You can truncate the sysdiagrams table if you're replacing all values with new ones.

Related

Using variables in TSQL and keep formatting in SQL Server Management Studio

I'm creating some views with a lot of references to tables in another database.
At some point the other database needs to change.
I want to make it easy for the next developer to change the scripts to use another database.
This obviously works as it should:
CREATE VIEW ViewName
AS
SELECT *
FROM AnotherDatabase.SchemaName.TableName;
But when I do:
DECLARE @DB CHAR(100)
SET @DB = 'AnotherDatabase'
GO
CREATE VIEW ViewName
AS
SELECT *
FROM @DB.SchemaName.TableName;
I get the error:
Msg 137, Level 15, State 2, Procedure ViewName, Line 3
Must declare the scalar variable "@DB".
I could do something like:
DECLARE @SQL ...
SET @SQL = ' ... FROM ' + @DB + ' ... '
EXEC (@SQL)
But that goes against the purpose of making it easier for the next developer, because this dynamic SQL approach loses the formatting in SSMS.
So my question is: how do I make it easy for the next developer to maintain T-SQL code where he needs to swap out the database reference?
Notes:
I'm using SQL Server 2008 R2
The other database is on the same server.
Consider using SQLCMD variables. This will allow you to specify the actual database name at deployment time. SQL Server tools (SSMS, SQLCMD, SSDT) will replace the SQLCMD variable names with the assigned string values when the script is run. SQLCMD mode can be turned on for the current query window from the menu option Query --> SQLCMD Mode.
:SETVAR OtherDatabaseName "AnotherDatabaseName"
CREATE VIEW ViewName AS
SELECT *
FROM $(OtherDatabaseName).SchemaName.TableName;
GO
This approach works best when SQL objects are kept under source control.
When you declare variables, they only live for the duration of the batch. You cannot have a variable as part of your DDL. You could create a bunch of synonyms, but I consider that overdoing it a bit.
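For illustration, the synonym route would look roughly like this (a sketch; the names reuse the question's placeholders):
-- One synonym per referenced table, pointing at the other database
CREATE SYNONYM SchemaName.TableName FOR AnotherDatabase.SchemaName.TableName;
GO
-- Views reference the synonym, so the view definitions never change
CREATE VIEW ViewName AS
SELECT * FROM SchemaName.TableName;
GO
-- When the target database changes, only the synonym is recreated
DROP SYNONYM SchemaName.TableName;
CREATE SYNONYM SchemaName.TableName FOR YetAnotherDatabase.SchemaName.TableName;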
The idea that your database names are going to change over time seems a bit out of the ordinary; such changes are conceivably one-time events. However, if you still require the ability to quickly point to a new database, you could consider creating a light utility directly in SQL to automatically generate the views against the new database.
An implementation may look something like this.
Assumptions
Assuming we have the below databases.
Assuming that you prefer to have the utility in SQL instead of building an application to manage it.
Code:
create database This;
create database That;
go
Configuration
Here I'm setting up some configuration tables. They will do two simple things:
Allow you to indicate the target database name for a particular configuration.
Allow you to define the DDL of the view. The idea is similar to Dan Guzman's idea, where the DDL is dynamically resolved using variables. However, this approach does not use the native SQLCMD mode and instead relies on dynamic SQL.
Here are the configuration tables.
use This;
create table dbo.SomeToolConfig (
ConfigId int identity(1, 1) primary key clustered,
TargetDatabaseName varchar(128) not null);
create table dbo.SomeToolConfigView (
ConfigId int not null
references SomeToolConfig(ConfigId),
ViewName varchar(128) not null,
Sql varchar(max) not null,
unique(ConfigId, ViewName));
Setting the Configuration
Next you set the configuration. In this case I'm setting the TargetDatabaseName to be That. The SQL that is being inserted into SomeToolConfigView is the DDL for the view. I'm using two variables, one {{ViewName}} and {{TargetDatabaseName}}. These variables are replaced with the configuration values.
insert SomeToolConfig (TargetDatabaseName)
values ('That');
insert SomeToolConfigView (ConfigId, ViewName, Sql)
values
(scope_identity(), 'dbo.my_objects', '
create view {{ViewName}}
as
select *
from {{TargetDatabaseName}}.sys.objects;'),
(scope_identity(), 'dbo.my_columns', '
create view {{ViewName}}
as
select *
from {{TargetDatabaseName}}.sys.columns;');
go
The tool
The tool is a stored procedure that takes a configuration identifier. Based on that identifier, it drops and recreates the views in the configuration.
The signature for the stored procedure may look something like this:
exec SomeTool @ConfigId;
Sorry -- I left out the implementation, because I have to scoot, but figured I would respond sooner than later.
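For what it's worth, here is a minimal sketch of what that implementation might look like, assuming the configuration tables above (untested):
create procedure dbo.SomeTool @ConfigId int
as
begin
    declare @TargetDatabaseName varchar(128), @ViewName varchar(128), @Sql varchar(max);

    select @TargetDatabaseName = TargetDatabaseName
    from dbo.SomeToolConfig
    where ConfigId = @ConfigId;

    declare views_cursor cursor local fast_forward for
        select ViewName, Sql
        from dbo.SomeToolConfigView
        where ConfigId = @ConfigId;

    open views_cursor;
    fetch next from views_cursor into @ViewName, @Sql;
    while @@fetch_status = 0
    begin
        -- drop the existing view, then resolve the {{placeholders}} and recreate it
        if object_id(@ViewName, 'V') is not null
            exec('drop view ' + @ViewName);

        set @Sql = replace(@Sql, '{{ViewName}}', @ViewName);
        set @Sql = replace(@Sql, '{{TargetDatabaseName}}', @TargetDatabaseName);
        exec(@Sql);

        fetch next from views_cursor into @ViewName, @Sql;
    end
    close views_cursor;
    deallocate views_cursor;
end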
Hope this helps.

Data loss warning adding column in the middle of a table

A new column has been added to a table, but not at the end of the table definition (rightmost column); it was added in the middle.
When I try to commit this in Redgate SQL Source Control, I get the warning "These changes may result in data loss"
Will data loss really occur?
Is there a way to preview the change script to confirm that no data will be lost?
Can I copy the script and easily turn it into a Migrations V2 script?
Will I just have to
Edit the table in SSMS and move the new column to the end
or write a migration script?
If so, are there any handy tools to do the repetitive stuff?
Up front disclosure that I work for Red Gate on SQL Source Control.
That change will need to re-create the table. By default SSMS won't let you save such a change, so that option must have been disabled; it's under Tools -> Options -> Designers -> Table and Database Designers -> Prevent saving changes that require table re-creation.
Given that that feature is disabled, SQL Source Control has picked this up as a potential data-loss situation and prompted to see if you want to add a migration script.
If other developers within your team pull this change in through a get-latest, then SQL Source Control will warn them about any potential data loss, with more details depending on the current state of their local database. If the only change is adding columns to an existing table, then this will not drop the data in columns that are unchanged.
If you are deploying to another DB (e.g. staging/UAT/prod) and you have SQL Compare you can use that to see exactly what will be applied to a DB if you try and run this against another non-local database. Choose the create deployment script option and you can sanity check the SQL before running.
As you say, adding the column to the end of the table will avoid the need for the rebuild, so that is probably the simplest way to avoid this if you don't need to worry about where the column sits.
Alternatively you can add a migration script (sketched after this list) to:
Create a new table with the new structure using a temp name
Copy the existing data to the temp table
Drop the existing table
Rename the new temp table to the original name
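In outline, such a migration script might look like this (a sketch; the table and column names are hypothetical):
-- 1. Create the new structure under a temporary name
CREATE TABLE dbo.MyTable_New (
    Id INT NOT NULL PRIMARY KEY,
    NewColumn INT NULL, -- the column being added in the middle
    ExistingColumn NVARCHAR(50) NULL);

-- 2. Copy the existing data across
INSERT INTO dbo.MyTable_New (Id, ExistingColumn)
SELECT Id, ExistingColumn FROM dbo.MyTable;

-- 3. Drop the existing table and 4. rename the new one into place
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';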
You mention Migrations v2, the beta feature that changes how migrations work in order to better support branching and merging and DVCS systems. See http://www.red-gate.com/migrations
Version 1 migration scripts will need some modifications in order to be converted to a v2 migration script. It's a fairly trivial change. We're working on documenting this at the moment, and please reach out to us on the Google Group if you'd like more information on this change. https://groups.google.com/forum/#!forum/red-gate-migrations
I moved the column to the end of the table using SSMS to negate the need for a migration script.
In a similar scenario, where it was not convenient to move the column, this is what I did to convert an SSMS script to a Migrations V2 script.
Undo the change in SSMS (deleted the column)
Redo the change in SSMS, but instead of saving the change directly to the database, I saved the change script
Modified the change script
Trimmed the SSMS transaction & environment wrapper
Added a guard clause: IF COL_LENGTH('MyTable','MyColumn') IS NULL
Wrapped the script in BEGIN TRAN - ROLLBACK TRAN to test the script without dirtying the database
Replaced GO with END BEGIN
Tested within rolled-back transaction
Removed BEGIN TRAN - ROLLBACK TRAN development wrapper
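Put together, the testing version of the script had roughly this shape (a sketch; the ALTER is a trivial stand-in for the trimmed SSMS change script, and MyTable/MyColumn are placeholders):
BEGIN TRAN -- development wrapper, removed after testing
IF COL_LENGTH('MyTable','MyColumn') IS NULL
BEGIN
    -- trimmed SSMS change script goes here; a trivial stand-in:
    ALTER TABLE dbo.MyTable ADD MyColumn INT NULL
END
ROLLBACK TRAN -- removed along with BEGIN TRAN after testing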
Here is a simple SQL script that will insert a column into a table without data loss.
Let's say CCDetails is the table into which we want to insert the column GlobaleNote, just before the column Sys_CreatedBy:
declare @str1 nvarchar(1000)
declare @tableName nvarchar(1000)
set @tableName='CCDetails'
set @str1 = ''
SELECT @str1 = @str1 + ', ' + COLUMN_NAME
FROM Information_Schema.Columns
WHERE Table_Name = @tableName
ORDER BY Ordinal_Position
set @str1 = right(@str1, len(@str1) - 2)
set @str1 = 'select ' + @str1 +' into '+@tableName+'Temp from '+@tableName+' ; Drop Table '+ @tableName + ' ; EXEC sp_rename '+@tableName+'Temp, '+@tableName
set @str1 = REPLACE(@str1,'Sys_CreatedBy','CAST('''' as nvarchar(max)) As GlobaleNote , Sys_CreatedBy' )
exec sp_executesql @str1

SQL Server equivalent of MySQL Dump to produce insert statements for all data in a table

I have an application that uses a SQL Server database with several instances of the database...test, prod, etc... I am making some application changes and one of the changes involves changing a column from a nvarchar(max) to a nvarchar(200) so that I can add a unique constraint on it. SQL Server tells me that this requires dropping the table and recreating it.
I want to put together a script that will do the table drop, recreate it with the new schema, and then reinsert the data that was there previously all in one go, if possible, just to keep things simple for use when I migrate this change to production.
There is probably a good SQL Server way to do this, but I'm just not aware of it. If I were using MySQL, I would mysqldump the table and its contents and use that as my script for applying the change to production. I can't find any export functionality in SQL Server that will give me a text file consisting of INSERTs for all the data in a table.
Use SQL Server's Generate Scripts command
right click on the database; Tasks -> Generate Scripts
select your tables, click Next
click the Advanced button
find Types of data to script - choose Schema and Data.
you can then choose to save to file, or put in new query window.
results in INSERT statements for all table data selected in bullet 2.
No need to script.
Here are two ways.
1. Use ALTER TABLE ... ALTER COLUMN. You have to do one column at a time, for example:
create table Test(SomeColumn nvarchar(max))
go
alter table Test alter column SomeColumn nvarchar(200)
go
2. Dump into a new table while converting the column:
select <columns except for the columns you want to change>,
convert(nvarchar(200),YourColumn) as YourColumn
into SomeNewTable
from OldTable
Drop the old table, then rename the new table to the old table's name:
EXEC sp_rename 'SomeNewTable', 'OldTable';
Now add your index
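And for the question's original goal, a unique constraint on the resized column, that last step might look like this (reusing the Test table from the first example; the constraint name is made up):
ALTER TABLE Test ADD CONSTRAINT UQ_Test_SomeColumn UNIQUE (SomeColumn);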

Errors: "INSERT EXEC statement cannot be nested." and "Cannot use the ROLLBACK statement within an INSERT-EXEC statement." How to solve this?

I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) will execute the second one (Sp2) and save returned data into #tempTB1 and the second one will execute the third one (Sp3) and save data into #tempTB2.
If I execute Sp2 it works and returns all my data from Sp3, but the problem is in Sp1: when I execute it, it displays this error:
INSERT EXEC statement cannot be nested
I tried to change the placement of the Sp2 execution and it displayed another error:
Cannot use the ROLLBACK statement
within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble' up data from a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns to work around this type of problem.
For example, a workaround could be to turn Sp3 into a table-valued function.
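A minimal sketch of that conversion, assuming Sp3 simply returns a result set as in the example further down (Fn3 is a made-up name):
-- Inline table-valued function standing in for Sp3
CREATE FUNCTION dbo.Fn3()
RETURNS TABLE
AS
RETURN (SELECT 1 AS ID, 'Data1' AS Data
        UNION ALL
        SELECT 2, 'Data2');
GO
-- Sp2 can now fill its temp table with a plain INSERT ... SELECT, no INSERT-EXEC needed
CREATE TABLE #tempTB2 (ID INT, Data VARCHAR(20));
INSERT INTO #tempTB2 (ID, Data)
SELECT ID, Data FROM dbo.Fn3();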
This is the only "simple" way to do this in SQL Server without some giant convoluted function or executed SQL string call, both of which are terrible solutions:
create a temp table
openrowset your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET ('SQLOLEDB','Server=(local);TRUSTED_CONNECTION=YES;','set fmtonly off EXEC [ServerName].dbo.[StoredProcedureName] 1,2,3')
Note: you MUST use 'set fmtonly off', and you CANNOT add dynamic SQL to this inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than a table variable, which would have been better, as table variables outperform temp tables in most cases.
OK, encouraged by jimhark, here is an example of the old single-hash-table approach:
CREATE PROCEDURE SP3 as
BEGIN
SELECT 1, 'Data1'
UNION ALL
SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
INSERT INTO #tmp1
EXEC SP3
else
EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My work-around for this problem has always been to use the principle that single-hash temp tables are in scope to any called procs. So, I have an option switch in the proc parameters (default set to off). If this is switched on, the called proc will insert its results into the temp table created in the calling proc. I think in the past I have taken it a step further and put some code in the called proc to check if the single-hash table exists in scope; if it does, insert the results, otherwise return the result set. Seems to work well - the best way of passing large data sets between procs.
This trick works for me.
You don't have this problem with a remote server, because on a remote server the last insert command waits for the result of the previous command to execute. That's not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permissions to create a linked server, do it: create the same server as a linked server.
in SSMS, log into your server
go to "Server Object
Right Click on "Linked Servers", then "New Linked Server"
on the dialog, give any name of your linked server : eg: THISSERVER
server type is "Other data source"
Provider : Microsoft OLE DB Provider for SQL server
Data source: your IP, it can be also just a dot (.), because it's localhost
Go to the tab "Security" and choose the 3rd one "Be made using the login's current security context"
You can edit the server options (3rd tab) if you want
Press OK, your linked server is created
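If you'd rather script it than click through the dialog, the equivalent setup is roughly this (a sketch; the provider name can vary by SQL Server version):
-- Loopback linked server named THISSERVER pointing at localhost
EXEC master.dbo.sp_addlinkedserver
    @server = N'THISSERVER',
    @srvproduct = N'',
    @provider = N'SQLOLEDB',
    @datasrc = N'.';
-- "Be made using the login's current security context"
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'THISSERVER',
    @useself = N'True';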
Now your SQL command in SP1 is:
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have a dynamic insert in SP2.
A workaround I found is to convert one of the procs into a table-valued function. I realize that is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for this. I like this solution because it doesn't introduce any "hacks".
I encountered this issue when trying to import the results of a stored proc into a temp table while that stored proc itself inserted into a temp table as part of its own operation. The issue is that SQL Server does not allow the same process to write to two different temp tables at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any Dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but no longer conflicts with my other temp table insert.
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
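Concretely, the change inside the called procedure is just this (a sketch; the query against sys.objects is a stand-in for the proc's real work):
-- Before: a temp table, which conflicts with the caller's INSERT ... EXEC
-- CREATE TABLE #work (ID INT, Data NVARCHAR(128));
-- After: a table variable, which avoids the conflict
DECLARE @work TABLE (ID INT, Data NVARCHAR(128));
INSERT INTO @work (ID, Data)
SELECT object_id, name FROM sys.objects;
SELECT ID, Data FROM @work; -- final result set, consumed by the caller's INSERT-EXEC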
I recommend reading this entire article. Below is the most relevant section of the article, which addresses your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
IF @@trancount > 0 ROLLBACK TRANSACTION
EXEC error_handler_sp
RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue, and a concern over duplicate code in two or more sprocs. I ended up adding an additional parameter for "mode". This allowed common code to exist inside one sproc, with the mode directing the flow and result set of the sproc.
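A sketch of that pattern (the parameter name, table names, and branches are illustrative, not the original code):
CREATE TABLE dbo.StagingTable (ID INT, Data VARCHAR(20)); -- hypothetical shared output table
GO
CREATE PROCEDURE dbo.SharedProc @mode TINYINT = 0
AS
BEGIN
    -- common code runs once, whatever the mode
    DECLARE @result TABLE (ID INT, Data VARCHAR(20));
    INSERT INTO @result (ID, Data)
    SELECT 1, 'Data1';

    IF @mode = 0
        SELECT ID, Data FROM @result; -- default: return the result set
    ELSE
        INSERT INTO dbo.StagingTable (ID, Data) -- alternate mode: persist it for the caller
        SELECT ID, Data FROM @result;
END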
What about just storing the output in a static table? Like this:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value
-- Return the value
SELECT @Value
-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), therefore I am now using temp tables.
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
CREATE TABLE #lastValue_spGetData (Value INT)
-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
The content of the nested spGetData stored procedure:
-- Save the output if temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
DELETE #lastValue_spGetData
INSERT INTO #lastValue_spGetData(Value)
SELECT Col1 FROM dbo.Table1
END
-- stored procedure return
IF @silentMode = 0
SELECT Col1 FROM dbo.Table1
Declare an output cursor variable in the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c to the select you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or deallocate.
Now call the inner sp from the outer one supplying a cursor parameter like:
exec sp_abc a, b, c, @cOUT OUTPUT
Once the inner sp executes, your @cOUT is ready to fetch. Loop, and then close and deallocate.
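Pieced together, the whole pattern looks something like this (a sketch; InnerSp and the sys.objects query are stand-ins):
CREATE PROCEDURE dbo.InnerSp @c CURSOR VARYING OUTPUT
AS
BEGIN
    DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
        SELECT name FROM sys.objects;
    OPEN c;
    SET @c = c; -- hand the open cursor to the caller; no CLOSE/DEALLOCATE here
END
GO
DECLARE @cOUT CURSOR;
DECLARE @name SYSNAME;
EXEC dbo.InnerSp @c = @cOUT OUTPUT;
FETCH NEXT FROM @cOUT INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name; -- do something with each row
    FETCH NEXT FROM @cOUT INTO @name;
END
CLOSE @cOUT;
DEALLOCATE @cOUT;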
If you are able to use other associated technologies such as C#, I suggest using the built-in SqlCommand with a transaction parameter.
var sqlCommand = new SqlCommand(commandText, null, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# allows you to overcome this limitation: you can inspect the output of each stored procedure and use that output however you like, for example feeding it to another stored procedure. If the output is OK, you can commit the transaction; otherwise, you can revert the changes using a rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the ROLLBACK error. It went away when I fixed the table variable populated by the INSERT-EXEC statement to match the columns returned by the stored proc; it was missing org_code. In a Windows cmd file, this loads the result of the stored procedure and selects it:
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"

Moving all non-clustered indexes to another filegroup in SQL Server

In SQL Server 2008, I want to move ALL non-clustered indexes in a DB to a secondary filegroup. What's the easiest way to do this?
Run this updated script to create a stored procedure called MoveIndexToFileGroup. This procedure moves all the non-clustered indexes on a table to a specified file group. It even supports the INCLUDE columns that some other scripts do not. In addition, it will not rebuild or move an index that is already on the desired file group. Once you've created the procedure, call it like this:
EXEC MoveIndexToFileGroup @DBName = '<your database name>',
@SchemaName = '<schema name that defaults to dbo>',
@ObjectNameList = '<a table or list of tables>',
@IndexName = '<an index or NULL for all of them>',
@FileGroupName = '<the target file group>';
To create a script that will run this for each table in your database, switch your query output to text, and run this:
SELECT 'EXEC MoveIndexToFileGroup '''
+TABLE_CATALOG+''','''
+TABLE_SCHEMA+''','''
+TABLE_NAME+''',NULL,''the target file group'';'
+char(13)+char(10)
+'GO'+char(13)+char(10)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
Please refer to the original blog for more details. I did not write this procedure, but updated it according to the blog's responses and confirmed it works on both SQL Server 2005 and 2008.
Updates
@psteffek modified the script to work on SQL Server 2012. I merged his changes.
The procedure fails when your table has the IGNORE_DUP_KEY option on. No fix for this yet.
@srutzky pointed out that the procedure does not guarantee to preserve the order of an index and made suggestions on how to fix it. I updated the procedure accordingly.
ojiNY noted the procedure left out index filters (for compatibility with SQL 2005). Per his suggestion, I added them back in.
Script them, change the ON clause, drop them, re-run the new script. There is no alternative really.
Luckily, there are scripts on the Interwebs such as this one that will deal with scripting for you.
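For a single index, the edited script boils down to this pattern; WITH (DROP_EXISTING = ON) folds the drop and re-create into one statement (a sketch with hypothetical index/table names; the target filegroup must already exist):
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn
    ON dbo.MyTable (MyColumn)
    WITH (DROP_EXISTING = ON)
    ON [SECONDARY]; -- moves the index to the SECONDARY filegroup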
Update: this will take a long time if you do step 2 manually and are using SQL Server Management Studio 2008 R2 or earlier. I used SSMS 2014, and it works well, because the way it exports the DROP and CREATE INDEX statements is easy to modify.
I tried to run the script in SQL Server 2014 and ran into some issues. I was too lazy to track down the problems, so I came up with another solution that doesn't depend on the version of SQL Server you are running.
1. Export your indexes (with DROP and CREATE).
2. Update the script: remove everything related to dropping and creating tables, keep only what belongs to the indexes, and replace the original filegroup with the new one (in my case, I replaced ON [PRIMARY] with ON [SECONDARY]).
3. Run the script and wait until it's done.
(You may want to save the script to run in some other environments.)
