I am using SSMS and cloning tables with the same structure by using "Script Table As -> CREATE To -> New Query Editor Window".
My database has around 100 tables, and my main task is to perform data archiving by creating a clone table (same constraints, indexes, triggers, and stats as the old table) and importing the data I want from the old table into the new table.
My issue is that inside the generated script, say I want to clone table A, there are also scripts like {create table for table B}, {create table for table K}, etc., along with their index and constraint scripts. This makes the whole script very tedious and long.
I just want to focus on the table A script so I can clone it and insert the relevant data into it. I know it has something to do with my options settings, but I am unsure which scripting options I should set to True if I just want to clone a table with the same constraints, columns, indexes, triggers, and stats. Does anyone know why there are unrelated scripts, and how do I fix it?
I imported data from Power BI into SQL Server; you can see what the imported data looks like.
Additionally, I created my own database with the commands below:
CREATE DATABASE MY_DW
GO
USE MY_DW
GO
Now I want to copy all of these tables into my database named MY_DW. Can anybody help me solve this problem and copy all the tables into my database?
Please check https://www.sqlshack.com/how-to-copy-tables-from-one-database-to-another-in-sql-server/.
This link suggests various methods to copy the data tables from one database to another.
Thanks,
Rajan
The following approach could resolve your issue:
1. Right-click the imported database and choose Tasks -> Generate Scripts
2. On the Introduction page, click Next
3. Select the database objects (tables, in your case) to script, then click Next
4. On the "Specify how scripts should be saved" page, set Advanced -> Types of data to script -> Schema and data, then click Next
5. Review your selections, then click Next
6. Script generation will take place; save the script and run it against the database you created, MY_DW
Another approach:
Assuming the databases are on the same server.
The query below will create the table in your database (without constraints) and copy the data across:
SELECT * INTO MY_DW.dbo.Table_Name
FROM ImportedDB.dbo.Table_Name
And if the destination table already exists, the query below will insert the data into it:
INSERT INTO MY_DW.dbo.Table_Name
SELECT * FROM ImportedDB.dbo.Table_Name
Final approach:
Assuming the databases are on linked servers.
With a linked server, the four-part object naming convention applies, as below.
The query below, run on the destination server, will create the table in your database (without constraints) and copy the data. Note that SELECT ... INTO cannot create a table on a remote server, so the destination stays local while the source uses four-part naming:
SELECT * INTO MY_DW.dbo.Table_Name
FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
And if the destination table already exists, the query below will insert the data; here the destination can also be addressed with a four-part name:
INSERT INTO [DestinationServer].[MY_DW].[dbo].[Table_Name]
SELECT * FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
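If the linked server is not registered yet, it can be created with sp_addlinkedserver. A minimal sketch, assuming the remote machine is another SQL Server instance reachable under the network name SourceServer (a login mapping via sp_addlinkedsrvlogin may also be needed):
EXEC sp_addlinkedserver
    @server = N'SourceServer',   -- network name of the remote SQL Server instance
    @srvproduct = N'SQL Server'; -- this product name registers another SQL Server directly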
My team is creating a high-volume data processing tool. The idea is to take a 30,000-line batch file, bulk load it into a table, and then process the records using parallel processing.
The part I'm stuck on is creating dynamic tables. For each batch file we receive, we want to create a new physical table with a unique table name. The tables will be purged from our system by a separate process after they are completed.
I have the base structure for the table, and I intend to create unique table names using a combination of a date/time stamp and a GUID (with dashes converted to underscores).
I could do this easily enough in a stored procedure but I'm wondering if there is a better way.
Here is what I have considered...
Templates in SQL Server Management Studio. This is a GUI tool built into Management Studio (opened with Ctrl+Alt+T) that allows you to define different SQL objects, including tables, and specify parameters. This seems like it would work; however, it appears to be purely a GUI tool and not something that I could call from a stored procedure.
Stored procedure. I could put everything into a stored procedure, build my table name and schema into an nvarchar(max) string, and use sp_executesql to create the table (see the sketch after this list). This might be the way to accomplish my goal, but I wonder if there is a better way.
Stored procedure with an existing table as a template. I could define a base table and then query sys.columns & sys.types to build a string representing the new table. This would allow me to add columns to the base table without having to update my stored procedure. I'm not sure if this is a better approach.
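A minimal sketch of the second option, assuming a hypothetical fixed base schema; the naming follows the timestamp-plus-GUID idea above:
DECLARE @TableName sysname =
    N'Batch_' + FORMAT(SYSUTCDATETIME(), 'yyyyMMdd_HHmmssfff')
    + N'_' + REPLACE(CAST(NEWID() AS nvarchar(36)), N'-', N'_');

DECLARE @Sql nvarchar(max) =
    N'CREATE TABLE dbo.' + QUOTENAME(@TableName) + N' (
        RecordId    int IDENTITY(1,1) PRIMARY KEY,         -- hypothetical columns
        RawLine     nvarchar(4000) NOT NULL,
        LoadedAtUtc datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );';

EXEC sp_executesql @Sql;  -- QUOTENAME guards the generated identifier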
I'm wondering if any Stack Overflow folks have solved a similar requirement. What are your recommendations?
I'm using an SSIS script task to dynamically import and create staging tables on the fly from CSVs, as there are so many (30+).
For example, a table will be created in SQL Server called 'Customer_03122018_1305', based on the name of the CSV file. How do I then insert into the actual 'real' 'Customer' table?
Please note: there are other tables, e.g. 'OrderHead_03122018_1310', that will need to go into an 'OrderHead' table, and likewise for 'OrderLines_03122018_1405', etc.
I know how to perform the SQL insert, but the staging table names will be constantly changing based on the CSV date/time stamp. I'm guessing this will be a script task?
I'm thinking of using a control table when I originally import the CSVs and then looking up the real table name?
Any help would be appreciated.
Thanks.
You can follow the process below to dynamically load all the staging tables into the main Customer table using a FOR loop:
1. While creating the staging tables dynamically, store all the staging table names in a single variable, separated by commas.
2. Also store the count of staging tables created in another variable.
3. Use a FOR Loop container and loop it by the number of staging tables created.
4. Inside the FOR loop, use a script task to fetch the first remaining staging table name into a separate variable.
5. After the script task, still inside the FOR Loop container, add a Data Flow task and build its OLE DB Source dynamically from the variable populated in step 4.
6. Load the results from the staging table into the actual table (a T-SQL sketch of this insert follows the list).
7. Remove that staging table name from the variable created in step 1 (the one containing all the staging table names separated by commas).
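For the per-table insert in step 6, a minimal T-SQL sketch that the Data Flow (or an Execute SQL task) could run; the table names here are hypothetical and assume the staging table matches the target's column layout:
DECLARE @StagingTable sysname = N'Customer_03122018_1305';  -- from the loop variable
DECLARE @TargetTable  sysname = N'Customer';                -- looked up from the control table

DECLARE @Sql nvarchar(max) =
    N'INSERT INTO dbo.' + QUOTENAME(@TargetTable) +
    N' SELECT * FROM dbo.' + QUOTENAME(@StagingTable) + N';';

EXEC sp_executesql @Sql;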
I need to clone a database automatically, and then after it's cloned I need to rename some columns.
Salesforce is dumping into Redshift, but the column names aren't matching up for another program. Redshift is our single point of truth for everything.
Just create a view with the needed column names; why clone and rename when you can have a view on top?
P.S. A view is a query that is saved in the database, so you can run the same query later by using just the name and not repeating the whole query. Let's say you have a table called my_original_table with column names that you don't like. Once you run this:
create view my_corrected_table as
select
bad_col_name_1 as good_col_name_1,
bad_col_name_2 as good_col_name_2
from my_original_table;
you will be able to run this:
select * from my_corrected_table
and it will return bad_col_name_1 renamed to good_col_name_1, and so on.
We have a large production MSSQL database (the mdf is approx. 400 GB) and I have a test database. All the tables, indexes, views, etc. are the same in both. I need to make sure that the data in the tables of these two databases stays consistent, so I need to insert all the new rows and update all the updated rows from production into the test DB every night.
I came up with the idea of using SSIS packages to keep the data consistent by checking for updated and new rows in all the tables. My SSIS flow is:
I have a separate SSIS package for each table. In order:
1. I get the timestamp value in the table so I only pull the last day's rows instead of the whole table.
2. I get those rows from the production table.
3. Then I use the Lookup component to compare this data with the test database table's data.
4. Then I use a Conditional Split to work out whether each row is new or updated.
5.1. If the row is new, I insert it into the destination.
5.2. If the row is updated, I update the row in the destination table.
The data flow is in the MTRule and STBranch packages in the picture.
The problem is, I'm re-creating this same flow for every table, and I have more than 300 tables like this. It takes hours and hours :(
What I'm asking is: is there any way in SSIS to do this dynamically?
PS: Every single table has its own columns and PK values, but my data flow schema is always the same (below).
You can look into BimlScript, which lets you create packages dynamically based on metadata.
I believe the best way to achieve this is to use Expressions. They empower you to dynamically set the source and destination.
One possible solution might be as follows:
Create a table which stores all your table names and PK columns (a sketch of this config table follows the list)
Define a package which loops through this table and builds a SQL statement per row
Call your main package and pass the statement to it
Use the statement as the data source for your Data Flow
If applicable, pass the destination table as a parameter as well (another column in your config table)
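A minimal sketch of such a config table and the kind of statement the loop would build; all names here are hypothetical:
CREATE TABLE dbo.EtlTableConfig (
    TableName   sysname       NOT NULL PRIMARY KEY,
    PkColumns   nvarchar(400) NOT NULL,  -- comma-separated PK column list
    TargetTable sysname       NOT NULL   -- optional destination override
);

INSERT INTO dbo.EtlTableConfig (TableName, PkColumns, TargetTable)
VALUES (N'MTRule',   N'RuleId',   N'MTRule'),
       (N'STBranch', N'BranchId', N'STBranch');

-- Per row, the looping package would produce a statement such as:
-- SELECT * FROM dbo.MTRule WHERE LastModified >= DATEADD(DAY, -1, GETDATE());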
This is how I processed several really huge tables: the data had to be fetched from 20 tables and moved to one single table.
Why do you need to use SSIS?
You are better off writing a stored procedure that takes the table name as a parameter and doing your CRUD there. Then call the stored procedure from a Foreach Loop container in SSIS.
In fact, you might be able to do everything with a stored procedure and schedule it in a SQL Agent job.
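A minimal sketch of such a procedure, assuming hypothetical database names (ProdDb/TestDb), matching table schemas, and a single-column primary key; it only inserts the missing rows, and the update of changed rows would be built along the same lines:
CREATE PROCEDURE dbo.SyncTableFromProduction
    @TableName sysname,
    @PkColumn  sysname
AS
BEGIN
    SET NOCOUNT ON;

    -- Copy rows that exist in production but not yet in test.
    DECLARE @Sql nvarchar(max) = N'
        INSERT INTO TestDb.dbo.' + QUOTENAME(@TableName) + N'
        SELECT p.*
        FROM ProdDb.dbo.' + QUOTENAME(@TableName) + N' AS p
        WHERE NOT EXISTS (
            SELECT 1
            FROM TestDb.dbo.' + QUOTENAME(@TableName) + N' AS t
            WHERE t.' + QUOTENAME(@PkColumn) + N' = p.' + QUOTENAME(@PkColumn) + N'
        );';

    EXEC sp_executesql @Sql;
END
The SQL Agent job (or the SSIS Foreach loop) would then simply call it once per table, e.g. EXEC dbo.SyncTableFromProduction @TableName = N'MTRule', @PkColumn = N'RuleId';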