[SymmetricDS]: DBCompare tool for source table without primary key - symmetricds

I have source and target databases with different structures. My source tables have no primary keys; the target tables do.
During sync, SymmetricDS assumes all source columns form the PK if I don't specify anything, so my current setup for the tables to sync correctly is a transformation that sets PK=0 on those columns during LOAD.
When I run the dbcompare command, some tables fail to compare because it complains that the source table doesn't have a PK.
Is this a known error, or can it be resolved?

Related

Insert rows into SQL Server table with a primary key with SSIS

I have an SSIS package that I use to load data from an Excel workbook into a SQL Server table.
My Excel file grows constantly with new records, so I've defined a primary key on the SQL Server table to avoid inserting duplicates, but essentially I'm inserting the whole workbook each time.
I now have a problem: either the whole package fails because it attempts to insert duplicate values into a table with a PK, or, if I set the Error Output of the destination to "Redirect row", the package executes successfully with the following message:
Data Flow Task, SSIS.Pipeline: "OLE DB Destination" wrote 90 rows
but no new rows are actually added to the table.
If I remove the PK constraint and add a trigger to remove duplicates on insert, it would work, but I would like to know the proper way to do this.
To make the design "work" with an error table, change the batch commit size to 1 in the OLE DB Destination. What's happening is that the destination tries to commit all 90 rows as one batch; since there is at least one bad row in there, the whole batch fails.
The better approach is to add a Lookup component between the data conversion and the destination. The "No Match Found" output of the Lookup is what feeds into the OLE DB Destination. The logic is that you attempt to look up each incoming key in the target table; No Match Found is exactly what it sounds like: the row doesn't exist, so you insert it and you won't get a PK conflict*.
* "I still get a PK conflict, but the key isn't in the target." In this case you have duplicate/repeated keys in your source data, and the same batch-size issue is obscuring it. Say you're adding two rows with PK 50: PK 50 doesn't exist yet, so both rows pass the lookup, and the default batch size means both are inserted in a single commit, which violates the primary key constraint and rolls the batch back.
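If you land the workbook in a staging table first, the same "insert only new keys" logic, including the duplicate-keys-within-the-source case described above, can be done in one T-SQL statement. This is only a sketch; all table and column names (dbo.StagingImport, dbo.TargetTable, PK, Col1) are illustrative, not from the original question:

```sql
-- Insert only rows whose key is absent from the target, and keep
-- just one row per key when the staging data itself has duplicates.
INSERT INTO dbo.TargetTable (PK, Col1)
SELECT s.PK, s.Col1
FROM (
    SELECT PK, Col1,
           ROW_NUMBER() OVER (PARTITION BY PK ORDER BY PK) AS rn
    FROM dbo.StagingImport
) AS s
WHERE s.rn = 1   -- collapse duplicates within the source
  AND NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.PK = s.PK);
```

This handles both failure modes in one place, at the cost of an extra staging load step in the package.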

SSIS flat file with joins

I have a flat file which has following columns
Device Name
Device Type
Device Location
Device Zone
Which I need to insert into SQL Server table called Devices.
Devices table has following structure
DeviceName
DeviceTypeId (foreign key from DeviceType table)
DeviceLocationId (foreign key from DeviceLocation table)
DeviceZoneId (foreign key from DeviceZone table)
DeviceType, DeviceLocation and DeviceZone tables are already prepopulated.
Now I need to write ETL which reads flat file and for each row get DeviceTypeId, DeviceLocationId and DeviceZoneId from corresponding tables and insert into Devices table.
I am sure this is not new, but it's been a while since I worked on such SSIS packages, and help would be appreciated.
Load the flat content into a staging table and write a stored procedure to handle the inserts and updates in T-SQL.
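The staging-table approach can be sketched in T-SQL. This assumes, for illustration only, that each lookup table has a Name column matching the text values in the flat file; adjust column names to your actual schema:

```sql
-- Staging table mirrors the flat file's text columns.
CREATE TABLE dbo.Staging_Devices (
    DeviceName     NVARCHAR(100),
    DeviceType     NVARCHAR(100),
    DeviceLocation NVARCHAR(100),
    DeviceZone     NVARCHAR(100)
);

-- After the data flow loads the staging table, resolve each FK by joining
-- the prepopulated lookup tables and insert into the final table.
INSERT INTO dbo.Devices (DeviceName, DeviceTypeId, DeviceLocationId, DeviceZoneId)
SELECT s.DeviceName, dt.DeviceTypeId, dl.DeviceLocationId, dz.DeviceZoneId
FROM dbo.Staging_Devices AS s
JOIN dbo.DeviceType      AS dt ON dt.Name = s.DeviceType
JOIN dbo.DeviceLocation  AS dl ON dl.Name = s.DeviceLocation
JOIN dbo.DeviceZone      AS dz ON dz.Name = s.DeviceZone;
```

Inner joins silently drop rows with no lookup match; use LEFT JOINs plus a check on NULL ids if you want to route unmatched rows to an error table instead.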
FK relationships between the destination tables can cause a lot of trouble with a single data flow and a multicast.
The problem is that you have no control over the order of the inserts, so a child record could be inserted before its parent.
Also, with identity columns on the tables, you cannot retrieve the identity value generated in one stream and use it in another without subsequent Merge Joins.
The simplest way to do this is to use a Lookup Transformation to get the ID for each value. Be aware that duplicates can cause problems: make sure each value appears only once in the lookup tables.
Also, make sure to redirect rows that have no match into a staging table so you can check them later.
You can refer to the following article for a step by step guide to Lookup Transformation:
An Overview of the LOOKUP TRANSFORMATION in SSIS

SQL DB Diagram Extract

I am using SQL Server 2011 and need to create a visual representation (diagram). The current structure has no relationships (Foreign Keys) between tables and there are tables without any Primary Keys.
I have tried using SQL Database Diagrams but can't add any relationship between tables without the change happening on the DB itself.
I want to draw relationships without it making any changes.
Is there any free DB diagram software I can use to achieve this? I have tried DbVisualizer but get the same issues as with the diagrams in SSMS.
In your case I would do the following:
Generate scripts for your database (schema only) (as #Serg suggested already)
You can do this in SSMS: right-click your database, then Tasks > Generate Scripts. Select all tables, and under Advanced set "Types of data to script" to "Schema only". Save the script and run it in a test environment to create a schema-only copy of your database. (If doing this on the same server, you may need to edit the script slightly to give the new database another name.)
Dynamically try to "guess" the foreign key relationships
Since you have 500+ tables, you could try to make this script work for you (of course it would need some testing and tuning to adapt it to your case), but I used it in the following scenario and it worked.
Hopefully you have a naming convention. The script assumes the referenced key columns are named the same, but this can be configured.
So, I created the following tables:
CREATE TABLE test (testid int identity(1,1) UNIQUE, description varchar(10))
CREATE TABLE test_item (id int identity(1,1) UNIQUE, testid int)
And the following indexes on their primary keys (normally, you should have them too)
CREATE CLUSTERED INDEX [ix_testid] ON [dbo].[test]([testid] ASC)
CREATE CLUSTERED INDEX [ix_testitemid] ON [dbo].[test_item]([id] ASC)
I did not create a foreign key relationship.
Next I ran the script from the Automatically guessing foreign key constraints article and I managed to get the following result:
Executing all the ALTER statements generated from this script you could get your relationships created in your new database - generate the diagram from this database and you are done! :)
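The core of such a guessing query can be sketched against the catalog views. This is a minimal illustration of the idea only, not the script from the linked article: it assumes the naming convention that an FK column has exactly the same name as the identity key it references, and that the referenced column is unique (as in the test tables above):

```sql
-- Emit candidate ALTER TABLE statements by matching column names:
-- any column in another table named like an identity key column
-- is treated as a probable foreign key.
SELECT 'ALTER TABLE ' + QUOTENAME(ct.name)
     + ' ADD FOREIGN KEY (' + QUOTENAME(cc.name) + ') REFERENCES '
     + QUOTENAME(pt.name) + ' (' + QUOTENAME(pc.name) + ');' AS fk_script
FROM sys.columns AS pc
JOIN sys.tables  AS pt ON pt.object_id = pc.object_id
JOIN sys.columns AS cc ON cc.name = pc.name
                      AND cc.object_id <> pc.object_id
JOIN sys.tables  AS ct ON ct.object_id = cc.object_id
WHERE pc.is_identity = 1;
```

Review the generated statements before running them; name matching will produce false positives in a 500+ table schema.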
ps. I would suggest you test it out step by step, for example with one table first and then adding others and checking the results.
You can try a working demo of my little test here.
Good luck!

Changing columns to identity (SQL Server)

My company has an application with a bunch of database tables that used to use a sequence table to determine the next value to use. Recently, we switched this to using an identity property. The problem is that in order to upgrade a client to the latest version of the software, we have to change about 150 tables to identity. To do this manually, you can right click on a table, choose design, change (Is Identity) to "Yes" and then save the table. From what I understand, in the background, SQL Server exports this to a temporary table, drops the table and then copies everything back into the new table. Clients may have their own unique indexes and possibly other things specific to the client, so making a generic script isn't really an option.
It would be really awesome if there was a stored procedure for scripting this task rather than doing it in the GUI (which takes FOREVER). We made a macro that can go through and do this, but even then, it takes a long time to run and is error prone. Something like: exec sp_change_to_identity 'table_name', 'column name'
Does something like this exist? If not, how would you handle this situation?
Update: This is SQL Server 2008 R2.
This is what SSMS seems to do:
1. Obtain and drop all the foreign keys pointing to the original table.
2. Obtain the indexes, triggers, foreign keys, and statistics of the original table.
3. Create a temp_table with the same schema as the original table, plus the identity field.
4. Insert all rows from the original table into temp_table (with IDENTITY_INSERT ON).
5. Drop the original table (this also drops its indexes, triggers, foreign keys, and statistics).
6. Rename temp_table to the original table name.
7. Recreate the foreign keys obtained in step 1.
8. Recreate the objects obtained in step 2.
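The core of those steps can be sketched in T-SQL for a single table. This is a hedged illustration, not a generic sp_change_to_identity: the table dbo.MyTable(Id INT, Name VARCHAR(50)) is invented for the example, and the FK/index/trigger capture-and-recreate steps are omitted:

```sql
BEGIN TRANSACTION;

-- Same schema as the original table, but with Id as an identity column.
CREATE TABLE dbo.Tmp_MyTable (
    Id   INT IDENTITY(1,1) NOT NULL,
    Name VARCHAR(50) NULL
);

-- Copy the data, preserving existing key values.
SET IDENTITY_INSERT dbo.Tmp_MyTable ON;

INSERT INTO dbo.Tmp_MyTable (Id, Name)
SELECT Id, Name FROM dbo.MyTable;

SET IDENTITY_INSERT dbo.Tmp_MyTable OFF;

-- Dropping the original also drops its indexes, triggers, and FKs,
-- so those must have been scripted out beforehand and recreated after.
DROP TABLE dbo.MyTable;

EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable';

COMMIT TRANSACTION;
```

Reseeding with DBCC CHECKIDENT may also be needed if the copied keys exceed the identity's current seed.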

Allowing individual columns not to be tracked in Merge Replication

Using Merge Replication, I have a table that is, for the most part, synchronized normally. However, the table contains one column that is used to store temporary, client-side data, which is only meaningfully edited and used on the client, and which I have no desire to replicate back to the server. For example:
CREATE TABLE MyTable (
ID UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
Name NVARCHAR(200),
ClientCode NVARCHAR(100)
)
In this case, even if subscribers make changes to the ClientCode column in the table, I don't want those changes getting back to the server. Does Merge Replication offer any means to accomplish this?
An alternate approach, which I may fall back on, would be to publish an additional table, and configure it to be "Download-only to subscriber, allow subscriber changes", and then reference MyTable.ID in that table, along with the ClientCode. But I'd rather not have to publish an additional table if I don't absolutely need to.
Yes: when you create the article in the publication, don't include this column. Then create a script that adds the column back to the table, and in the publication properties, under Snapshot, specify that this script executes after the snapshot is applied.
This means that the column will exist on both the publisher and subscriber, but will be entirely ignored by replication. Of course, you can only use this technique if the column(s) to ignore are nullable.
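The post-snapshot script for the MyTable example above could be as simple as the following sketch (the IF guard is an assumption, added so the script is safe to re-run):

```sql
-- Re-add the excluded, nullable column on the subscriber after the
-- snapshot is applied; replication will ignore it from then on.
IF COL_LENGTH('dbo.MyTable', 'ClientCode') IS NULL
    ALTER TABLE dbo.MyTable ADD ClientCode NVARCHAR(100) NULL;
```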
