I have two .sdf files, and I tried to run "Script Database diff" in "SQL Server Compact Toolbox for runtime 4.0" to get the differences between these two files. I got this as the result:
-- This database diff script contains the following objects:
-- - Tables: Any that are not in the destination
-- - (tables that are only in the destination are not dropped)
-- - Columns: Any added, deleted, changed columns for existing tables
-- - Indexes: Any added, deleted indexes for existing tables
-- - Foreign keys: Any added, deleted foreign keys for existing tables
-- ** Make sure to test against a production version of the destination database! **
-- Script Date: 30.01.2023 09:37 - ErikEJ.SqlCeScripting version 3.5.2.90
and nothing else. My database files have changed, but the differences do not show up in the script.
I tried to use ExportSQLCe40 with the diff parameter, but then I got a lot of "Alter table" lines and nothing more.
Is there any option to get the differences between two .sdf database files?
Task:
Automate database deployment (SSDT/dacpac deployment with CI/CD)
The database is a 3rd party database
It also includes our own customized tables/SP/Fn/Views in separate schemas
3rd party objects should be excluded while deploying the database project (dacpac) to Production.
Thanks to Ed Elliott for the AgileSqlClub.DeploymentFilterContributor. I used the DLL to filter out the schema successfully.
Problem:
The 3rd party schema objects (tables) are defined with unnamed constraints (default / primary key) when the tables are created. Example:
CREATE TABLE [3rdParty].[MainTable]
(ID INT IDENTITY(1,1) NOT NULL,
CreateDate DATETIME DEFAULT(GETDATE())) -- There is no name given to the default constraint
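Because no constraint name is supplied, SQL Server generates one (hence names like DF__MainTabl__Crea__59463169 in the output below), and that generated name differs from database to database. For comparison, a named version of the same default might look like this; the constraint name [DF_MainTable_CreateDate] is just an illustration:
CREATE TABLE [3rdParty].[MainTable]
(ID INT IDENTITY(1,1) NOT NULL,
CreateDate DATETIME CONSTRAINT [DF_MainTable_CreateDate] DEFAULT(GETDATE())) -- explicitly named default constraint (illustrative name)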
When I generate the script for deployment using sqlpackage.exe, I see the following statements in the generated script.
Generated the script using:
"C:\Program Files\Microsoft SQL Server\150\DAC\bin\sqlpackage.exe" /action:script /sourcefile:C:\Users\User123\source\repos\DBProject\DBProject\bin\Debug\DBProject.dacpac /TargetConnectionString:"Data Source=MyServer; Initial Catalog=MSSQLDatabase; Trusted_Connection=True" /p:AdditionalDeploymentContributorPaths="C:\Program Files\Microsoft SQL Server\150\DAC\bin\AgileSqlClub.SqlPackageFilter.dll" /p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(3rdParty)" /outputpath:"c:\temp\script_AfterDLL.sql"
Script Output:
/*
Deployment script for MyDatabase
This code was generated by a tool.
Changes to this file may cause incorrect behavior and will be lost if
the code is regenerated.
*/
...
...
GO
PRINT N'Dropping unnamed constraint on [3rdParty].[MainTable]...';
GO
ALTER TABLE [3rdParty].[MainTable] DROP CONSTRAINT [DF__MainTabl__Crea__59463169];
...
...
...(towards the end of the script)
ALTER TABLE [3rdParty].[MainTable_2] WITH CHECK CHECK CONSTRAINT [fk_518_t_44_t_9];
I cannot alter the 3rd party schema due to company restrictions.
There are many unnamed-constraint DROP lines and WITH CHECK CHECK CONSTRAINT lines generated in the script.
Question:
How can I remove the lines that DROP unnamed constraints on 3rd party schemas? Even though the DLL excludes the 3rd party schema, these unnamed constraints are still scripted/deployed. Also, the script does not add them back afterwards.
How can I skip/remove generating WITH CHECK CHECK CONSTRAINT statements on 3rd party schemas?
Any suggestions would be greatly appreciated.
EDIT:
Also, I found another issue. The deployment will not succeed due to the error "Rows were detected. The schema update is terminating because data loss might occur."
Output:
/*
The column [3rdParty].[MainTable_1].[Col1] is being dropped, data loss could occur.
The column [3rdParty].[MainTable_1].[Col2] is being dropped, data loss could occur.
The column [3rdParty].[MainTable_1].[Col3] is being dropped, data loss could occur.
The column [3rdParty].[MainTable_1].[Col4] is being dropped, data loss could occur.
*/
IF EXISTS (select top 1 1 from [3rdParty].[MainTable_1])
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO
Regarding the unnamed constraints, I couldn't find any solution using sqlpackage.exe.
But Redgate SQL Compare has an option called IgnoreSystemNamedConstraintAndIndexNames that ignores system-named constraints and indexes and generates a much cleaner script.
For example when comparing 2 dacpacs:
SQLCompare /Scripts1:"\unpacked_dacpac_source_folder" /Scripts2:"\unpacked_dacpac_dest_folder" /options:IgnoreSystemNamedConstraintAndIndexNames /scriptFile:"script_result.sql"
You can find more info here:
Handling System-named Constraints in SQL Compare
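For the data-loss error mentioned in the EDIT, sqlpackage itself has a documented publish property, BlockOnPossibleDataLoss (true by default), that controls that check. Disabling it allows the blocked columns to be dropped along with their data, so only use it if that is acceptable. For example, append the following to the sqlpackage.exe command line shown above:
/p:BlockOnPossibleDataLoss=false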
OK, I have a database with a table LOOKUP (1st), and the same database on another server, also with LOOKUP (2nd).
Is there a way I can insert into the 1st database from the 2nd: if a duplicate exists, skip it; otherwise, all values that are present in the 2nd should be inserted into the 1st? Basically I want the exact same database!
The thing that confuses me is that they are on different servers.
Can I export one to something like Excel, import it again and replace my database, or anything like that?
You will have to use two MERGE queries (one in each direction) if you want to make both databases identical, because the first MERGE will only insert the records available in DB1 into DB2; DB1 will still not contain the records present in DB2 but not in DB1.
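A minimal sketch of one direction, assuming the other server is reachable as a linked server named [Server2] and that LOOKUP has a key column LookupID and a value column LookupValue (placeholder names, adjust to your actual table):
-- Insert rows from the remote LOOKUP that are missing locally;
-- [Server2], [MyDatabase], LookupID and LookupValue are placeholder names.
MERGE INTO dbo.LOOKUP AS target
USING [Server2].[MyDatabase].dbo.LOOKUP AS source
    ON target.LookupID = source.LookupID
WHEN NOT MATCHED BY TARGET THEN
    INSERT (LookupID, LookupValue)
    VALUES (source.LookupID, source.LookupValue);
Run the equivalent statement on the other server (with source and target swapped) to cover the second direction.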
I would suggest doing this task using SSIS.
You can use two sources, DB1 and DB2, and a LOOKUP transformation on each source (LKP1 and LKP2).
Then you can insert the No Match output of LKP1 into DB2 as the destination, and the No Match output of LKP2 into DB1 as the destination.
This also solves the multi-server issue, because you can create a connection to any server in SSIS.
I need to consolidate 20 databases that have the same structure into 1 database. I saw this post:
Consolidate data from many different databases into one with minimum latency
I didn't understand all of this, so let me ask like this: there are some tables that have primary keys but don't have a SourceID, for example:
Database 1
AgencyID  Name
1         Apple
2         Microsoft
Database 2
AgencyID  Name
1         HP
2         Microsoft
It's obvious that these two tables cannot be merged like this; an additional column is needed:
Database 1
Source  AgencyID  Name
DB1     1         Apple
DB1     2         Microsoft
Database 2
Source  AgencyID  Name
DB2     1         HP
DB2     2         Microsoft
If this is the right way of doing it, can these two tables be merged into one database like this:
Source  AgencyID  Name
DB1     1         Apple
DB1     2         Microsoft
DB2     1         HP
DB2     2         Microsoft
...and is it possible to do this with transactional replication?
Thanks in advance for the answer; it would be really helpful to get the right answer for this.
Ilija
If I understand you correctly, you can do that either by creating a DTS/SSIS package (here is a basic SSIS tutorial), or by running SQL directly, like:
INSERT INTO [TargetDatabase].dbo.[MergedAgency]([Source], [AgencyID], [Name])
SELECT CAST('DB1' AS nvarchar(16)), [AgencyID], [Name]
FROM [SourceDatabase1].dbo.[Agency]
INSERT INTO [TargetDatabase].dbo.[MergedAgency]([Source], [AgencyID], [Name])
SELECT CAST('DB2' AS nvarchar(16)), [AgencyID], [Name]
FROM [SourceDatabase2].dbo.[Agency]
Then run it via a recurring SQL Server Agent Job with one job step and a schedule.
Don't forget to think about how you detect which rows have already been copied to the target database; one sketch follows below.
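A simple way, sketched here under the assumption that [Source] plus [AgencyID] uniquely identify a row in [MergedAgency], is to filter with NOT EXISTS so that re-running the job only copies new rows:
INSERT INTO [TargetDatabase].dbo.[MergedAgency]([Source], [AgencyID], [Name])
SELECT CAST('DB1' AS nvarchar(16)), s.[AgencyID], s.[Name]
FROM [SourceDatabase1].dbo.[Agency] AS s
WHERE NOT EXISTS (SELECT 1
                  FROM [TargetDatabase].dbo.[MergedAgency] AS t
                  WHERE t.[Source] = 'DB1'
                    AND t.[AgencyID] = s.[AgencyID]);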
I solved the problem. Now I am using Transactional Replication. In "Publication Properties > Article Properties" I had to set the "Action if name is in use" option to "Keep existing object unchanged". The default is "Drop existing object and create a new one".
In SQL Server 2008, even when I change the table schema, the changes are applied to the consolidation database.
SQL-Hub (http://sql-hub.com) will let you merge multiple databases with the same schema into a single database. There is a free license that will let you do this from the UI, though you might need to pay for a license if you want to schedule the process to run automatically. It's much easier to use than replication, though not quite as efficient.
I created a new database and added tables using the Import Data wizard, but the wizard didn't create the indexes and constraints on the tables. How can I import the indexes?
If your source is also SQL Server, you should be able to run Tasks -> Generate Scripts, select "Script Indexes" in the list of options for the old database, and execute the resulting script on the new database, perhaps with a change of database name.
Just manually add the indexes to your table.
Here is an example from MSDN:
This example creates an index on the au_id column of the authors table.
SET NOCOUNT OFF
USE pubs
IF EXISTS (SELECT name FROM sysindexes
WHERE name = 'au_id_ind')
DROP INDEX authors.au_id_ind
GO
USE pubs
CREATE INDEX au_id_ind
ON authors (au_id)
GO
The other way you can do this is to open the table in design mode in Management Studio, highlight the field you want to index, and look at the options at the top. One of them is for indexes, and you can simply add the index manually and give it a name right in Management Studio.
I want to update a static table on my local development database with current values from our server (accessed on a different network/domain via VPN). Using the Data Import/Export wizard would be my method of choice, however I typically run into one of two issues:
I get primary key violation errors and the whole thing quits. This is because it's trying to insert rows that I already have.
If I set the "delete from target" option in the wizard, I get foreign key violation errors because there are rows in other tables that are referencing the values.
What I want is the correct set of options so that the Import/Export wizard will update rows that exist and insert rows that do not (based on the primary key, or by asking me which columns to use as the key).
How can I make this work? This is on SQL Server 2005 and 2008 (I'm sure it used to work okay on the SQL Server 2000 DTS wizard, too).
I'm not sure you can do this in Management Studio. I have had some good experiences with Redgate SQL Data Compare for synchronising databases, but you do have to pay for it.
The SQL Server Database Publishing Wizard can export a set of SQL insert scripts for the table that you are interested in. Just tell it to export data only, not schema. It'll also create the necessary drop statements.
One option is to download the data to a new table, then use commands similar to the following to update the target:
-- Update rows that already exist in the target
update t set
    col1 = d.col1,
    col2 = d.col2
from downloaded d
inner join target t on d.pk = t.pk;

-- Insert rows that are not in the target yet
insert into target (col1, col2, ...)
select d.col1, d.col2, ...
from downloaded d
where d.pk not in (select pk from target);
If you disable the FK constraints during the 2nd option and re-enable them after finishing, it will work.
But if you are using identity columns to create the PK values that are involved in the FKs, it will cause a problem, so it only works if the PK values remain the same.
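A minimal sketch of the disable/re-enable step, assuming a referencing child table named dbo.OtherTable (the table name is just an illustration):
-- Disable all FK checks on the referencing table before the "delete from target" import
ALTER TABLE dbo.OtherTable NOCHECK CONSTRAINT ALL;

-- ... run the import here ...

-- Re-enable and revalidate the constraints afterwards
ALTER TABLE dbo.OtherTable WITH CHECK CHECK CONSTRAINT ALL;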