I need to clone a database automatically, and then after it's cloned I need to rename some columns.
Salesforce is dumping into Redshift, but the column names aren't matching up for another program. Redshift is our single source of truth for everything.
Just create a view with the needed column names. Why clone and rename when you can have a view on top?
P.S. A view is a query saved in the database, so you can run the same query later by using just its name instead of repeating the whole query. Let's say you have a table called my_original_table with column names that you don't like. Once you run this:
create view my_corrected_table as
select
bad_col_name_1 as good_col_name_1,
bad_col_name_2 as good_col_name_2
from my_original_table;
you will be able to run this:
select * from my_corrected_table
and it will return bad_col_name_1 renamed to good_col_name_1, and so on.
I'm testing out a trial version of Snowflake. I created a table and want to load a local CSV called "food" but I don't see any "load" data option as shown in tutorial videos.
What am I missing? Do I need to use a PUT command somewhere?
I don't think Snowsight has that option in the UI. It's available in the classic UI, though: go to the Databases tab and select a database, then go to the Tables tab and select a table; the option will be at the top.
If the classic UI is limiting you or you are already using Snowsight and don't want to switch back, then here is another way to upload a CSV file.
A prerequisite is that you have SnowSQL installed on your device (https://docs.snowflake.com/en/user-guide/snowsql-install-config.html).
Start SnowSQL and perform the following steps:
Use the database you want to upload the file to. You need various privileges for creating a stage, a file format, and a table. E.g. USE MY_TEST_DB;
Create the file format you want to use for uploading your CSV file. E.g.
CREATE FILE FORMAT "MY_TEST_DB"."PUBLIC".MY_FILE_FORMAT TYPE = 'CSV';
If you don't configure the RECORD_DELIMITER, the FIELD_DELIMITER, and other options, Snowflake uses defaults. I suggest you have a look at https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html. Some of the auto-detection features can make your life hard, and sometimes it is better to disable them.
Create a stage using the previously created file format:
CREATE STAGE MY_STAGE file_format = "MY_TEST_DB"."PUBLIC".MY_FILE_FORMAT;
Now you can put your file onto this stage:
PUT file://<file_path>/file.csv @MY_STAGE;
You can find documentation for configuring the stage at https://docs.snowflake.com/en/sql-reference/sql/create-stage.html
You can check the upload with
SELECT d.$1, ..., d.$N FROM @MY_STAGE/file.csv d;
Then, create your table.
CREATE TABLE MY_TABLE (col1 varchar, ..., colN varchar);
Personally, I prefer creating first a table with only varchar columns and then create a view or a table with the final types. I love the try_to_* functions in snowflake (e.g. https://docs.snowflake.com/en/sql-reference/functions/try_to_decimal.html).
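To illustrate that two-step pattern, here is a minimal sketch (the table, view, and column names are hypothetical):

```sql
-- Step 1: land everything as VARCHAR so the load never fails on types.
CREATE TABLE MY_TABLE_RAW (col1 VARCHAR, col2 VARCHAR);

-- Step 2: put a typed view on top. TRY_TO_DECIMAL returns NULL
-- instead of raising an error when a value cannot be converted.
CREATE VIEW MY_TABLE_TYPED AS
SELECT
    col1                        AS customer_name,
    TRY_TO_DECIMAL(col2, 10, 2) AS order_amount
FROM MY_TABLE_RAW;
```

Rows with unconvertible values then show up as NULLs in the view, where they are easy to find and fix, instead of aborting the load.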
Then, copy the content from your stage to your table. If you want to transform your data at this point, you have to use an inner SELECT. If not, the following command is enough.
COPY INTO MY_TABLE FROM @MY_STAGE/file.csv;
I suggest doing this without the inner SELECT because then the option ERROR_ON_COLUMN_COUNT_MISMATCH works.
Be aware that the schema of the table must match the format. As mentioned above, if you go with all columns as varchars first and then transform the columns of interest in a second step, you should be fine.
You can find documentation for copying the staged file into a table at https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
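For reference, a transforming COPY with an inner SELECT would look roughly like this (the column positions and conversion are hypothetical; the stage's file format is reused):

```sql
-- $1, $2, ... refer to the CSV columns by position in the staged file.
COPY INTO MY_TABLE
FROM (
    SELECT t.$1,
           TRY_TO_DECIMAL(t.$2, 10, 2)  -- convert the second column on the way in
    FROM @MY_STAGE/file.csv t
);
```

As noted above, ERROR_ON_COLUMN_COUNT_MISMATCH does not apply in this transforming form, which is a reason to prefer the plain COPY plus a typed view.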
You can check the dropped lines as follows:
SELECT error, line, character, rejected_record FROM table(validate("MY_TEST_DB"."MY_SCHEMA"."MY_CSV_TABLE", job_id=>'xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'))
Details can be found at https://docs.snowflake.com/en/sql-reference/functions/validate.html.
If you want to add those lines to your success table, you can copy the dropped lines to a new table and transform the data until the schema matches the schema of the success table. Then you can UNION both tables.
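A minimal sketch of that repair flow, with hypothetical names (here the fixed rows are inserted straight into the success table, which has the same effect as a UNION):

```sql
-- Park the rejected raw lines somewhere you can work on them.
-- '_last' refers to the most recent COPY job in this session.
CREATE TABLE MY_CSV_TABLE_REJECTS AS
SELECT rejected_record
FROM TABLE(VALIDATE(MY_CSV_TABLE, JOB_ID => '_last'));

-- Fix them up (here: naively split the raw record on commas)
-- and merge them into the success table.
INSERT INTO MY_CSV_TABLE
SELECT SPLIT_PART(rejected_record, ',', 1),
       SPLIT_PART(rejected_record, ',', 2)
FROM MY_CSV_TABLE_REJECTS;
```

In practice the transformation step is where the real work happens; rejected lines usually need more than a simple split.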
As you can see, there is quite a lot to do just to load a simple CSV file into Snowflake. It becomes even more complicated when you take into account that every step can cause specific failures and that your file might contain erroneous lines. This is why my team and I are working at Datameer to make these kinds of tasks easier. We aim for a simple drag-and-drop solution that does most of the work for you. We would be happy if you would try it out here: https://www.datameer.com/upload-csv-to-snowflake/
I imported data from Power BI into SQL Server. You can see what the imported data looks like.
Additionally, I created my own database with the commands below:
CREATE DATABASE MY_DW
GO
USE MY_DW
GO
Now I want to copy all these tables into my database named MY_DW. Can anybody help me solve this problem and copy all the tables into my database?
Please check https://www.sqlshack.com/how-to-copy-tables-from-one-database-to-another-in-sql-server/.
This link suggests various methods to copy the data tables from one database to another.
Thanks,
Rajan
The following approach could resolve your issue:
Right-click the imported database and select Generate Scripts.
On the Introduction page, click Next.
Select the database objects to script (tables, in your case), then click Next.
Specify how the scripts should be saved. Under Advanced -> Types of data to script, choose Schema and data, then click Next.
Review your selections and click Next.
Script generation will take place and the script will be saved; run it under the database you created, MY_DW.
Another approach:
Assuming that the databases are on the same server.
The query below will create the table in your database and copy the data into it (without constraints). Note the three-part naming (database.schema.table); the dbo schema is assumed here:
SELECT * INTO MY_DW.dbo.Table_Name
FROM ImportedDB.dbo.Table_Name
If the destination table already exists, the query below will insert the data into it:
INSERT INTO MY_DW.dbo.Table_Name
SELECT * FROM ImportedDB.dbo.Table_Name
Final approach:
Assuming that the databases are in the linked server.
In case of a linked server, the four-part object naming convention applies, as below.
The query below will create the table in your database (without constraints):
SELECT * INTO [DestinationServer].[MY_DW].[dbo].[Table_Name]
FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
If the destination table already exists, the query below will insert the data into it:
INSERT INTO [DestinationServer].[MY_DW].[dbo].[Table_Name]
SELECT * FROM [SourceServer].[ImportedDB].[dbo].[Table_Name]
I'm using an SSIS script task to dynamically import and create staging tables on the fly from CSVs, as there are so many (30+).
For example, a table in SQL server will be created called 'Customer_03122018_1305' based on the name of the csv file. How do I then insert into the actual 'real' 'Customer' table?
Please note -there are other tables - e.g. 'OrderHead_03122018_1310' that will need to go into a 'OrderHead' table. Likewise for 'OrderLines_03122018_1405' etc.
I know how to perform the SQL insert, but the staging table names will be constantly changing based on the CSV date timestamp. I'm guessing this will be a script task?
I'm thinking of using a control table when I originally import the CSVs and then looking up the real table name?
Any help would be appreciated.
Thanks.
You can follow the process below to dynamically load all the staging tables into the main Customer table by using a FOR loop:
While creating the staging tables dynamically, store all the staging table names in a single variable, separated by commas.
Also store the count of staging tables created in another variable.
Use a FOR loop container and loop it by the number of staging tables created.
Inside the FOR loop, use a script task to fetch the next staging table name into a separate variable.
After the script task, still inside the FOR loop container, add a Data Flow task and build its OLE DB Source dynamically using the variable that holds the staging table name from step 4.
Load the results from the staging table into the actual table.
Remove that staging table name from the variable created in step 1 (which contains all the staging table names separated by commas).
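The per-table load step can also be sketched in plain T-SQL with dynamic SQL, which you could call from the script task. This assumes the staging names follow the pattern shown in the question (BaseName_date_time, with no underscore in the base name):

```sql
DECLARE @staging sysname = N'Customer_03122018_1305';

-- Everything before the first underscore is the real table name,
-- e.g. 'Customer_03122018_1305' -> 'Customer'.
DECLARE @target sysname = LEFT(@staging, CHARINDEX('_', @staging) - 1);

-- QUOTENAME guards against injection via odd table names.
DECLARE @sql nvarchar(max) =
    N'INSERT INTO ' + QUOTENAME(@target) +
    N' SELECT * FROM ' + QUOTENAME(@staging) + N';';

EXEC sp_executesql @sql;
```

SELECT * assumes the staging and target tables have identical column order; listing the columns explicitly is safer if that can drift.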
I am using SSMS and cloning tables with the same structure by using "Script Table as -> CREATE -> New Query Window".
My database has around 100 tables, and my main task is to perform data archiving by creating a clone table (same constraints, indexes, triggers, and stats as the old table) and importing the data I want from the old table into the new table.
My issue is that inside the generated script, say I want to clone table A, there are SQL scripts like {create table for table B}, {create table for table K}, etc., along with their index and constraint scripts. This makes the whole script very tedious and long.
I just want to focus on the table A script so I can clone it and insert the relevant data into it. I know it has something to do with my options settings, but I am unsure which options I should set to True for scripting if I just want to clone a table with the same constraints, columns, indexes, triggers, and stats. Does anyone know why there are unrelated scripts, and how do I fix it?
I am looking for some idea of whether we can generate a script for just one view and run it on another database to create that view with its data intact. Please help, thank you.
If your destination server is not linked with the source, getting this data out will take a few more steps. I am assuming that you only want to transport the data from the view, but the steps below could be applied to the source table(s), making this view instantiation part unnecessary.
First, since a view does not store data (it only references data), you will need to instantiate the view into a table.
Select *
INTO tblNewTable --this creates a new table from the data selected from the view
FROM dbTest.dbo.Tester;
Next, open SSMS, right-click the database, select Tasks, then Generate Scripts.
Then select the newly created table, and click Next.
You will need to select Advanced and change 'Types of data to script' to Schema and data (it is Schema only by default). Select Next and Finish.
SSMS will export a file or load a new query window with the code to create the new table, and it will also have the INSERT statements to load the new table exactly as it was on the source server.
Use the following as an example:
use dbNew;
go
create view dbo.ViewTest as
select * from dbTest.dbo.Tester;
The following code will create a table from another table. The new table will contain all the data of the previous table.
Select * into DBName1.SchemaName.NewTableName from DBName2.SchemaName.PreviousTableName
You can use this query to create a new table in any database and schema.