Script out objects from DB with identity columns replaced by variables to copy object to other environment - sql-server

I have multiple environments for an application, like DEV, TEST, UAT, PROD.
I need to copy some objects from the database in the UAT environment into the PROD environment. Each object is stored in the DB spread across multiple tables. Most of the tables have an IDENTITY (autogenerated) column as PK. I don't have access to the PROD DB data (it is sensitive data in general).
What I need is to generate a SQL script for inserting the object that does not preserve the Id values but instead uses the Ids assigned in the target environment for the related records.
Example: let's say an object Order is composed of an [Order] row and a list of [OrderItem] rows. I would need to select one specific row in the [Order] table, specify that related rows from [OrderItem] should also be included, and generate a script that inserts a new row into [Order], gets the value of the assigned Order.Id, keeps it in a variable, and uses it when inserting the [OrderItem] rows. This is a trivial example; my object is spread across many more tables, but the concept is the same.
Is there any tool for doing this? All the scripting utilities I have tried preserve the values of identity columns.

I think you would need to write custom code to achieve this: first load the parent table, then load the child table based on the SCOPE_IDENTITY() value.
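A minimal sketch of that pattern, using the Order/OrderItem example from the question (the non-key column names are made up for illustration):

DECLARE @NewOrderID INT;

-- insert the parent row; the identity value is generated by SQL Server
INSERT INTO [Order] (CustomerName, OrderDate)
VALUES ('Some Customer', GETDATE());

-- capture the identity value assigned in this scope
SET @NewOrderID = SCOPE_IDENTITY();

-- use the captured value for all child rows
INSERT INTO [OrderItem] (OrderID, ProductCode, Quantity)
VALUES (@NewOrderID, 'ABC-1', 2),
       (@NewOrderID, 'XYZ-9', 1);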
Instead, if you don't have a huge number of rows, I would suggest following the steps below:
Load data from Production to another environment using SSIS package or another means
Add a foreign key constraint on the child table with ON UPDATE CASCADE:
ALTER TABLE ChildTable ADD CONSTRAINT FK_OrderDetail_Order FOREIGN KEY([OrderID])
REFERENCES [dbo].[Order] ([OrderID])
ON UPDATE CASCADE
Now update the identity values to other values using some logic. Note that SQL Server will not allow an UPDATE on an IDENTITY column (SET IDENTITY_INSERT only affects inserts), so the key column in this intermediate environment needs to be a plain INT rather than an IDENTITY column:
UPDATE [Order] SET OrderID = OrderID + 1000000 -- have some other logic for generating new values
Now the child table will automatically get updated with the new OrderID via the cascade.
UPDATE
This SO link talks about automating the code generation for INSERT scripts: What is the best way to auto-generate INSERT statements for a SQL Server table?
Also, I would suggest scripting out the parent tables first, followed by the child tables.
You can identify parent and child tables using the script below, and then generate the scripts in parent/child dependency order.
SELECT object_name(parent_object_id) AS childTable, object_name(referenced_object_id) AS parentTable
FROM sys.foreign_keys
WHERE object_name(parent_object_id) IN ('Table1', 'Table2') -- your comma-separated list of tables

Related

Transforms vs. Table triggers in SymmetricDS

In the source database we have a table, let's call it TableA, with primary key PK_TableA. This table has a dependent table in the source database, let's call it TableB, via a FK, let's call it FK_TableA.
We synchronize TableA from the source database to the target database, with the same table names.
We do NOT synchronize TableB from the source database to the target database, but it exists in the target database with the same name and has the same dependency relation with TableA.
When a row is deleted from TableA in the source database, TableB is updated by modifying all the rows with the deleted FK, setting the FK_TableA column to null.
We intend to produce the same behaviour in the target database without having to synchronize TableB.
So, on delete of a row from TableA in the source database we:
1) want to update, to null, the column FK_TableA of TableB in the target database, for the corresponding rows
2) want to delete the row from TableA in the target database
Is this possible?
What is the best mechanism? Transforms or Table Triggers (maybe with a Sync On Delete Condition)?
Can you please try to explain the way to do it?
Thanks.
Either a load filter or a load transform would work. The load filter is probably simpler for this case. Use sym_load_filter to configure a "before write" BeanShell script that does this:
if (data.getDataEventType().name().equals("DELETE")) {
    context.findTransaction().execute("update tableb set fk_tablea = null " +
        "where fk_tablea = " + OLD_FK_TABLEA);
}
return true;
return true;
The script checks that the event is a DELETE, then runs the SQL you need. The values of the table columns on the current row are available as upper-case variables (here OLD_FK_TABLEA is the pre-delete value of FK_TABLEA). The script returns true so the original delete will also run.
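For reference, registering the script might look roughly like this. The column names come from the sym_load_filter table, but treat this as a sketch and verify them against the load filter documentation linked below for your SymmetricDS version:

insert into sym_load_filter
    (load_filter_id, load_filter_type, source_node_group_id, target_node_group_id,
     target_table_name, filter_on_delete, before_write_script, load_filter_order, fail_on_error)
values
    ('null_fk_tablea_on_delete', 'BSH', 'source-group', 'target-group',
     'tablea', 1, '<the BeanShell script above>', 1, 1);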
See https://www.symmetricds.org/doc/3.10/html/user-guide.html#_load_filters for more details on how to use load filters.

update data when importing a duplicate record in SQL

I have a unique requirement: I have a data list in Excel format, and I import this data into SQL Server 2008 R2 once every year using SQL Server's import functionality. In the table "Patient_Info", I have a primary key set on the column "MemberID", and when I import the data without any duplicates, all is well.
But sometimes, when I get this data, some of the patients' info gets repeated with an updated address / telephone, etc., under the same MemberID. Since I set this as the primary key, that record gets left out and is not imported into the database, and thus I don't have an updated record for that patient.
EDIT
I am not sure how to achieve this (updating the rows that have existing MemberIDs), and any pointer to this is greatly appreciated.
This is not a terribly unique requirement.
One acceptable pattern for resolving this problem is to import your data into a "staging" table. The staging table has the same structure as the target table you're importing into, but it is a heap: it has no primary key.
Once the data is imported, you would then use queries to consolidate newer data records with older data records by MemberID.
Once you've consolidated all same MemberID records, there will be no duplicate MemberID values, and you can then insert all the staging table records into the target table.
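As a sketch of that consolidation step, assuming the staging table is named Patient_Info_Stage and has some column indicating which duplicate is newest (ImportDate here is a hypothetical name), this keeps only the latest row per MemberID:

;WITH Ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY MemberID
                              ORDER BY ImportDate DESC) AS rn
    FROM Patient_Info_Stage
)
DELETE FROM Ranked -- deleting through the CTE removes the older duplicates
WHERE rn > 1;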
EDIT
As @Panagiotis Kanavos suggests, you can use a SQL MERGE statement to both insert new records and update existing records from your staging table into the target table.
Assume that the Staging table is named Patient_Info_Stage, the target table is named Patient_Info, and that these tables have similar schemas. Also assume that field MemberId is the primary key of table Patient_Info.
The following MERGE statement will merge the staging table data into the target table:
BEGIN TRAN;
MERGE Patient_Info WITH (SERIALIZABLE) AS Target
USING Patient_Info_Stage AS Source
    ON Target.MemberId = Source.MemberId
WHEN MATCHED THEN UPDATE
    SET Target.FirstName = Source.FirstName
       ,Target.LastName = Source.LastName
       ,Target.Address = Source.Address
       ,Target.PhoneNumber = Source.PhoneNumber
WHEN NOT MATCHED THEN INSERT
       (MemberID
       ,FirstName
       ,LastName
       ,Address
       ,PhoneNumber)
    VALUES
       (Source.MemberId
       ,Source.FirstName
       ,Source.LastName
       ,Source.Address
       ,Source.PhoneNumber);
COMMIT TRAN;
NOTE: The T-SQL MERGE operation is not atomic, and it is possible to get into a race condition with it. To ensure it works properly, do these things:
Ensure that your SQL Server is up-to-date with service packs and patches (current rev of SQL Server 2008 R2 is SP3, version 10.50.6000.34).
Wrap your MERGE in a transaction (BEGIN TRAN;, COMMIT TRAN;)
Use the SERIALIZABLE hint to help prevent a potential race condition with the T-SQL MERGE statement.

How To change the column order of An Existing Table in SQL Server 2008

I have a situation where I need to change the order of the columns / add new columns for an existing table in SQL Server 2008.
Existing columns:
MemberName
MemberAddress
Member_ID(pk)
and I want this order:
Member_ID(pk)
MemberName
MemberAddress
I found the answer for this:
1- In SQL Server Management Studio, go to Tools → Options → Designers → Table and Database Designers and unselect "Prevent saving changes that require table re-creation".
2- Open the table in Design view, drag your columns up and down, and save your changes.
It is not possible with an ALTER statement. If you wish to have the columns in a specific order, you will have to create a new table, use INSERT INTO newtable (col-x, col-a, col-b) SELECT col-x, col-a, col-b FROM oldtable to transfer the data from the old table to the new one, drop the old table, and rename the new table to the old table's name.
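A sketch of that rebuild using the columns from the question (the table name Member and the column types are assumptions):

CREATE TABLE Member_New (
    Member_ID     INT NOT NULL PRIMARY KEY,
    MemberName    NVARCHAR(100) NULL,
    MemberAddress NVARCHAR(200) NULL
);

INSERT INTO Member_New (Member_ID, MemberName, MemberAddress)
SELECT Member_ID, MemberName, MemberAddress
FROM Member;

DROP TABLE Member;
EXEC sp_rename 'Member_New', 'Member';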
This is not necessarily recommended because it does not matter which order the columns are in the database table. When you use a SELECT statement, you can name the columns and have them returned to you in the order that you desire.
If your table doesn't have any records, you can just drop and re-create the table.
If it has records, you can do it using SQL Server Management Studio:
right-click your table, click Design, arrange the columns by dragging the fields into the order you want, then click save.
I tried this and don't see any built-in way of doing it, so here is my approach:
Right-click the table, choose Script Table as → CREATE To, and keep the script in a SQL query window.
EXEC sp_rename 'Employee', 'Employee1' -- original table name is Employee
Execute the Employee CREATE script, making sure you arrange the columns in the way you need.
Insert into the new table from the old one, listing the columns explicitly so that values map by name rather than by position:
INSERT INTO Employee (Name, Company) SELECT Name, Company FROM Employee1
DROP TABLE Employee1
Relying on column order is generally a bad idea in SQL. SQL is based on relational theory, where order is never guaranteed, by design. You should treat all your columns and rows as having no order and then write your queries to produce the correct results:
For Columns:
Try not to use SELECT *; instead, specify the order of columns in the select list, as in: SELECT Member_ID, MemberName, MemberAddress FROM TableName. This guarantees the order and eases maintenance if columns get added.
For Rows:
Row order in your result set is only guaranteed if you specify the ORDER BY clause.
If no ORDER BY clause is specified, the result set may differ because the query plan might differ or the database pages might have changed.
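For example, combining both points with the columns from the question:

SELECT Member_ID, MemberName, MemberAddress -- column order fixed by the select list
FROM TableName
ORDER BY Member_ID;                         -- row order fixed by ORDER BY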
Hope this helps...
This can be an issue when using source control and automated deployments to a shared development environment. Where I work, we have a very large sample DB on our development tier to work with (a subset of our production data).
Recently I did some work to remove one column from a table and then add some extra ones on the end. I then had to undo my column removal, so I re-added it on the end. This means the table and all references are correct in the environment, but the source control automated deployment no longer works because it complains about the table definition changing.
The real problem here is that the table + indexes are ~120GB and the environment only has ~60GB free, so I'll need to either:
a) Rename the existing columns that are in the wrong order, add new columns in the right order, update the data, then drop the old columns
OR
b) Rename the table, create a new table with the correct order, insert into the new table from the old, and delete from the old as I go along
The SSMS/TFS schema compare option of using a temp table won't work because there isn't enough room on disk to do it.
I'm not trying to say this is the best way to go about things or that column order really matters, just that I have a scenario where it is an issue, and I'm sharing the options I've thought of to fix it.
SQL query to move the id column into first position (note that this CHANGE ... FIRST / AFTER syntax is MySQL only; SQL Server's ALTER TABLE cannot reposition columns):
ALTER TABLE `student` CHANGE `id` `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT FIRST;
or, to place it after a specific column:
ALTER TABLE `student` CHANGE `id` `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT AFTER `column_name`;

Microsoft SQL server: have one auto-incrementing column update another table

I have a table of orders with an orderID. When I create a new row in orders, I want it to automatically add the same orderID to a new row in orderDetails. I got the auto-incrementing to work; however, whenever I try to link the two by adding cascade delete, it gives me an error.
'order' table saved successfully
'orderDetail' table
- Unable to create relationship 'FK_orderDetail_order'.
Cascading foreign key 'FK_orderDetail_order' cannot be created where the referencing column 'orderDetail.orderID' is an identity column.
Could not create constraint. See previous errors.
This seems to be because there is no orderID at row-creation time. Without these two linked, it's pretty hard to link an order to its information.
I am using Microsoft SQL Server Management Studio. I learned via command-line MySQL, not SQL Server, so this whole GUI stuff is throwing me off (and I'm a tad rusty).
Your problem is that orderDetail.orderID should not be an identity column (auto-incrementing). It should be based on the orderID in the Order table. You can do that in a variety of ways. If you are using stored procedures and making separate calls to the database for the orderDetail records, have the code save the order row first and return the newly created orderID value, then use that value in the calls that save the order details. If you are making one call to a stored procedure that saves the order header record and all order detail records in one call, then in the stored procedure insert the order record first and use SCOPE_IDENTITY() to extract the newly created orderID into a T-SQL variable:
DECLARE @orderId INT
INSERT Orders([Order table columns])
VALUES([Order table column values])
SET @orderId = SCOPE_IDENTITY()
and then use the value in @orderId for all inserts into the OrderDetails table:
INSERT OrderDetails(OrderId, [Other OrderDetail table columns])
VALUES(@orderId, [Other OrderDetail table column values])
An alternative is an AFTER INSERT trigger on the order table. Note that NEW.orderID is MySQL trigger syntax; in SQL Server the newly inserted rows (including their generated IDs) are exposed through the inserted pseudo-table, and from there the new orderID can easily be inserted into orderDetails.
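A minimal sketch of such a trigger for SQL Server (table and column names follow the question; any other columns of orderDetail would still need values or defaults):

CREATE TRIGGER trg_order_afterInsert
ON [order]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- 'inserted' holds the new order rows, including their identity values
    INSERT INTO orderDetail (orderID)
    SELECT orderID
    FROM inserted;
END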
Just do this via the command line. I certainly do.

Best way to move data between tables and generate mapping of old to new identity values

I need to merge data from 2 tables into a third (all having the same schema) and generate a mapping of old identity values to new ones. The obvious approach is to loop through the source tables using a cursor, inserting the old and new identity values along the way. Is there a better (possibly set-oriented) way to do this?
UPDATE: One additional bit of info: the destination table already has data.
Create your mapping table with an IDENTITY column for the new ID. Insert from your source tables into this table, creating your mapping.
SET IDENTITY_INSERT ON for your target table.
Insert into the target table from your source tables joined to the mapping table, then SET IDENTITY_INSERT OFF.
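A sketch of that approach (all table and column names below are invented for illustration; since the destination already has data, the mapping table's identity is seeded above the destination's current maximum ID):

-- Mapping table: the IDENTITY column generates the new IDs
CREATE TABLE IdMap (
    NewID INT IDENTITY(1000000, 1) PRIMARY KEY, -- seed past existing destination IDs
    OldID INT NOT NULL,
    Src   CHAR(1) NOT NULL                      -- which source table the row came from
);

INSERT INTO IdMap (OldID, Src) SELECT ID, 'A' FROM SourceA;
INSERT INTO IdMap (OldID, Src) SELECT ID, 'B' FROM SourceB;

SET IDENTITY_INSERT Destination ON;

INSERT INTO Destination (ID, SomeColumn)
SELECT m.NewID, s.SomeColumn
FROM SourceA AS s
JOIN IdMap AS m ON m.OldID = s.ID AND m.Src = 'A';

-- repeat the insert for SourceB with Src = 'B', then:
SET IDENTITY_INSERT Destination OFF;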
I created a mapping table based on the OUTPUT clause of the MERGE statement. No IDENTITY_INSERT required.
In the example below, there is RecordImportQueue and RecordDataImportQueue, and RecordDataImportQueue.RecordID is a FK to RecordImportQueue.RecordID. The data in these staging tables needs to go to Record and RecordData, and FK must be preserved.
RecordImportQueue to Record is done using a MERGE statement, producing a mapping table from its OUTPUT, and RecordDataImportQueue goes to RecordData using an INSERT from a SELECT of the source table joined to the mapping table.
DECLARE @MappingTable table ([NewRecordID] [bigint], [OldRecordID] [bigint])

MERGE [dbo].[Record] AS target
USING (SELECT [InstanceID]
             ,RecordID AS RecordID_Original
             ,[Status]
       FROM [RecordImportQueue]
      ) AS source
ON (target.RecordID = NULL) -- can never match, as RecordID is IDENTITY NOT NULL
WHEN NOT MATCHED THEN
    INSERT ([InstanceID], [Status])
    VALUES (source.[InstanceID], source.[Status])
OUTPUT inserted.RecordID, source.RecordID_Original INTO @MappingTable;
After that, you can insert the records into a referencing table as follows:
INSERT INTO [dbo].[RecordData]
       ([InstanceID]
       ,[RecordID]
       ,[Status])
SELECT [InstanceID]
      ,mt.NewRecordID -- the new RecordID from the mapping table
      ,[Status]
FROM [dbo].[RecordDataImportQueue] AS rdiq
JOIN @MappingTable AS mt
    ON rdiq.RecordID = mt.OldRecordID
Although long after the original post, I hope this can help other people, and I'm curious for any feedback.
I think I would temporarily add an extra column to the new table to hold the old ID. Once your inserts are complete, you can extract the mapping into another table and drop the column.
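A sketch of that variant (names invented for illustration; the GO separator matters because a newly added column is not visible to later statements in the same batch):

ALTER TABLE Destination ADD OldID INT NULL;
GO

-- copy the data, carrying the old identity value in the helper column
INSERT INTO Destination (SomeColumn, OldID)
SELECT SomeColumn, ID FROM SourceA;

-- extract the mapping of new identity values to old IDs
SELECT ID AS NewID, OldID
INTO IdMapping
FROM Destination
WHERE OldID IS NOT NULL;

ALTER TABLE Destination DROP COLUMN OldID;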
