I have a scheduled SSIS package that loads data into the data warehouse overnight. Before loading, it drops the entire database, and with it all the tables. Now I have a situation where I don't want one table to be dropped, because I want to do an incremental load on it with a MERGE SQL statement. Since the package drops the entire database, I can't do that in the current setup. If I change the drop database step to a delete instead, I think I should be able to do the incremental load on the table I want. Are there any possible complications with doing that? Can you foresee any problems if I change drop database to delete database, or will I be missing something? Any thoughts highly appreciated. Thanks for your time.
As far as I know, with a delete you only remove the rows, whereas with a drop you remove all the tables, rows included. If your logic works, you could do the delete, then drop all tables except the one you want to keep.
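A rough sketch of that second step, assuming the table to keep is dbo.KeepMe (a placeholder name) and that no foreign keys reference the tables being dropped:

-- Build and run a DROP TABLE statement for every table except the one to preserve.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'DROP TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE NOT (s.name = N'dbo' AND t.name = N'KeepMe');
EXEC sp_executesql @sql;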
A drop/delete of the database will remove all of its contents. If the requirement is to retain a single table, you'll need to retain the schema and database that hold it as well.
If I'm understanding correctly, you're dropping the target database. Is this a STAGE database for the data warehouse? If so, you'll also have a TARGET (the main tables of the warehouse) that are loaded from STAGE. If this is the case, you should be able to run a MERGE statement from the newly STAGED table to the TARGET table.
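For illustration, a minimal MERGE sketch; the Stage/Target database names, the Customer table, and its columns are placeholders, not names from the question:

-- Upsert from the freshly staged table into the warehouse target.
MERGE Target.dbo.Customer AS tgt
USING Stage.dbo.Customer AS src
    ON tgt.CustomerId = src.CustomerId
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name) VALUES (src.CustomerId, src.Name);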
When generating a database in PowerDesigner, I have the option to drop all tables/tablespaces and so on before generating the database.
But this option isn't there when applying changes to an existing database. I need it, because some local databases could already have some of the changes I made to the model, and I can't drop all tables because of the data.
I'm not sure I understand the question: you are ready to drop some tables instead of modifying them, but you don't want to drop all tables?
It seems to me that you could use Database > Apply Model Changes to Database with the Connect to a Data Source option to compare the model with an existing database, and just generate the necessary modification scripts.
Maybe you could use a saved Selection to generate an Alter Script for most tables, and another to generate a Drop+Create script for some "transient" tables.
We have a legacy database that has dozens of schemas in it, and we're looking to split that database up into several smaller distinct databases instead.
Is there any way I can create a new database on the same physical server, and then transfer an entire schema over to the new database?
Our tables look like:
Foo.Table1
Foo.Table2
Foo.Table3
...
Bar.Table1
Bar.Table2
...
Xxx.Table1
Xxx.Table2
...
...and I want to move Foo.* to a new database.
The usual recommendation is some kind of per-table export/import, but that's quite cumbersome with the 150+ tables in the schema.
As far as my brief research goes, the options appear to be:
Export/import each table individually.
Back up the entire database, restore it to a different destination, and delete everything else (painful, since the entire database is ~900GB).
Deploy the dacpac of the single schema to the new database, and do a cross-database initial seed, i.e.:
INSERT INTO newDb.Foo.Table1 SELECT * FROM oldDb.Foo.Table1;
INSERT INTO newDb.Foo.Table2 SELECT * FROM oldDb.Foo.Table2;
INSERT INTO newDb.Foo.Table3 SELECT * FROM oldDb.Foo.Table3;
...
All of these options are a lot of effort... is there any other approach that will simply move an entire schema into a new database?
I am not aware of any fully automated way, but this can be done relatively simply with the help of Excel.
In SSMS you can use "Object Explorer Details" to easily (with a few mouse clicks) script the schema of multiple tables.
With the help of system views (sys.tables, sys.columns etc.) and Excel you should be able to generate 'INSERT INTO .... SELECT ...' scripts for all of your tables in minutes.
In Excel (or a similar application) you paste the list of your tables (obtained using sys.tables) and then write a formula to generate a script for each table.
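You can also skip Excel entirely and build the same scripts in T-SQL; a sketch using the oldDb/newDb and Foo names from the question:

-- Emit one INSERT ... SELECT per table in the Foo schema; run the generated output separately.
SELECT 'INSERT INTO newDb.' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' SELECT * FROM oldDb.' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM oldDb.sys.tables t
JOIN oldDb.sys.schemas s ON s.schema_id = t.schema_id
WHERE s.name = 'Foo';
-- Note: tables with identity columns also need SET IDENTITY_INSERT ... ON/OFF around their INSERT.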
You can create a filegroup for each schema and move each schema's tables into the related filegroup. After that, you back up each filegroup and restore it.
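A sketch of the setup, with placeholder file paths and index names; a table is usually moved by rebuilding its clustered index on the new filegroup:

ALTER DATABASE oldDb ADD FILEGROUP FooFG;
ALTER DATABASE oldDb ADD FILE (NAME = FooData, FILENAME = 'D:\Data\FooData.ndf') TO FILEGROUP FooFG;
-- Rebuild the clustered index (assumed here to be PK_Table1 on column Id) onto the new filegroup:
CREATE UNIQUE CLUSTERED INDEX PK_Table1 ON Foo.Table1 (Id) WITH (DROP_EXISTING = ON) ON FooFG;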
I have a SQL Server 2008 database. I need to capture insert/update/delete operations on every table in the DB, take the affected primary key, and insert it into another table, ChangeLog. ChangeLog needs to capture the PK, the source table, and the operation type.
I don't want to write triggers for every table. What's the simplest way to do it?
Use case: I connect to SQL Server from Solr. The change log is used for delta import.
I'd start by taking a look at SQL Server Change Tracking and see whether it'll do what you need. It's built in and simple enough to access:
Change Tracking Overview
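Enabling it takes only a couple of statements; a sketch with placeholder database, table, and key names (Change Tracking requires a primary key on each tracked table):

ALTER DATABASE MyDb SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);
-- Read all changes since version 0; the PK columns and operation (I/U/D) come back in the result.
SELECT ct.MyTableId, ct.SYS_CHANGE_OPERATION FROM CHANGETABLE(CHANGES dbo.MyTable, 0) AS ct;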
You don't want to write a trigger for every table because you think it is hard. It doesn't have to be:
Query for a list of all tables
Query for each table's primary key
Generate a trigger script for insert, update, and delete that writes to the ChangeLog table, using each table's name and primary key.
It's really not that hard to build this script and apply it to your database. If you can write it for one table, you can automatically build scripts for each table. With an error check (does trigger exist), you can run this as new tables are added.
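For illustration, here is the per-table trigger pattern such a generated script would emit; dbo.MyTable, its PK column Id, and the ChangeLog column names are placeholders:

CREATE TRIGGER trg_MyTable_ChangeLog
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted and deleted are both populated on UPDATE, only one of them otherwise.
    DECLARE @op CHAR(1) =
        CASE WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'U'
             WHEN EXISTS (SELECT 1 FROM inserted) THEN 'I'
             ELSE 'D'
        END;
    -- UNION (not UNION ALL) collapses the duplicate PK an UPDATE puts in both pseudo-tables.
    INSERT INTO dbo.ChangeLog (PK, SourceTable, OperationType)
    SELECT Id, 'dbo.MyTable', @op FROM inserted
    UNION
    SELECT Id, 'dbo.MyTable', @op FROM deleted;
END;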
I need to move my team's database changes from our development environment to our test environment.
I know that Visual Studio can diff two databases and output a script. But for tables where we have added columns, it drops the table and recreates it with the new columns.
It tries to keep the data, but that is not going to work: it will cause FK issues, and when I try to move this to production, I will lose all the statistics on the table.
Is there a way to get it to script the table with an alter script? (So that it alters the table to add the new column?)
I see this happen when columns are added to the middle of a table. If you're doing that, don't.
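If the column can go at the end of the table instead, the diff tool can usually script a plain ALTER; a sketch with placeholder names:

-- Appending a nullable column is typically a metadata-only change, no table rebuild needed.
ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;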
We recently moved our database to a new server. However, at the time, we did not allow the Code First migrations to create the database; we used another tool to migrate the tables and data. The __MigrationHistory table was not moved in the process. __MigrationHistory is a system table in our original DB.
I cannot seem to find a way to import or export the __MigrationHistory table so we can allow future migrations to take place.
The only other thought we had is to have the application recreate the database and then migrate the copied data over to the new version of the DB. The only issue is that we have millions of records to move, and it is quite a long process.
I use the following script to move the EF __MigrationHistory table from the system tables to the user tables (in the database's Object Explorer tree):
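-- Copy the table's contents out, drop the system version, then rename the copy back into place: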
SELECT * INTO [dbo].[TempMigrationHistory] FROM [dbo].[__MigrationHistory];
DROP TABLE [dbo].[__MigrationHistory];
EXEC sp_rename 'TempMigrationHistory', '__MigrationHistory';
This way I can export the table by choosing the standard script/export options in SSMS.
(The complete description of handling this issue is here)