Modify Snowflake database by filling in values from Alteryx

There are some records that are missing values. I would like to modify/fill in data in Snowflake from Alteryx, and I would like to modify many records at once.
What is the best way to modify a Snowflake database from Alteryx:
deleting specific rows and appending the modified data?
modifying data using a SQL statement in the Alteryx Output tool?
cloning the original table --> creating a modified table --> replacing the table?
any other ways?
Sincerely,
knozawa

Use an UPDATE statement in Snowflake. There is no benefit to the other methods you've suggested: when you run an UPDATE in Snowflake, it recreates the micro-partitions that contain those records regardless, and your other methods would perform that same operation more than once. Stick with UPDATE.
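For example (a sketch only; my_table, my_fixes, id, and amount are hypothetical names), you could have Alteryx load the corrected records into a staging table with the Output Data tool and then run the UPDATE, e.g. as a post-SQL statement:

    -- my_fixes is a staging table loaded from Alteryx with the corrected values
    UPDATE my_table t
    SET amount = f.amount
    FROM my_fixes f
    WHERE t.id = f.id
      AND t.amount IS NULL;  -- only fill rows where the value is missing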

Related

How to perform SCD 1 on Snowflake tables using Talend

I am quite new to Talend and I need to perform the SCD 1 operation on Snowflake tables. Can anyone suggest the components I need to use in Talend to perform this operation? I tried tDBSCD, but it does not support the Snowflake database. Is there any workaround to perform the task? Any help would be appreciated.
Thanks
You need Source Component --> tMap --> Snowflake Output Component.
Connect to your normal source and add another source, your Snowflake target table, as a lookup. In the tMap, join both on the business key, add variables for the SCD column calculations, and add two target outputs: one for inserts and one for updates. In the update link of the Snowflake output, keep the key column and set the output action to merge. Then run the job.
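SCD Type 1 simply overwrites the target with the latest source values, so the insert/update pair above amounts to an upsert. In Snowflake SQL that is equivalent to a MERGE along these lines (dim_customer, stg_customer, and the columns are hypothetical):

    MERGE INTO dim_customer d
    USING stg_customer s
      ON d.customer_id = s.customer_id   -- business key
    WHEN MATCHED THEN UPDATE SET         -- SCD 1: overwrite in place
      customer_name = s.customer_name,
      city = s.city
    WHEN NOT MATCHED THEN
      INSERT (customer_id, customer_name, city)
      VALUES (s.customer_id, s.customer_name, s.city);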

How do I write an Action Query that changes the data in its data source or creates a new table?

I wanted to know how to write an Action Query that changes the data in its data source or creates a new table for SQL Server 2016.
They're two separate actions. You change the data with an UPDATE query. You create a table with a CREATE TABLE statement, then insert data with an INSERT query.
It's not good practice to create tables on the fly, unless you, for some reason, need temporary tables. You may need to elaborate on what you are trying to accomplish.
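For illustration, the three statements in SQL Server 2016 look like this (the table and column names are invented):

    -- change existing data
    UPDATE dbo.Orders
    SET Status = 'Shipped'
    WHERE OrderID = 42;

    -- create a new table...
    CREATE TABLE dbo.OrderArchive (
        OrderID int PRIMARY KEY,
        Status  varchar(20) NOT NULL
    );

    -- ...then populate it
    INSERT INTO dbo.OrderArchive (OrderID, Status)
    SELECT OrderID, Status
    FROM dbo.Orders
    WHERE OrderDate < '2016-01-01';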

Does the original database change in KNIME

When using the database manipulation nodes, does the original database data change or remain the same?
Can anyone answer, please?
If you write to or update the database, the original database will be updated. Read or query operations such as row/column filtering, row sampling, GroupBy, and Joiner do not change the underlying database.

How to run SSIS packages dynamically?

We have a large production MSSQL database (MDF approx. 400 GB) and I have a test database. All the tables, indexes, views, etc. are the same in both. I need to make sure that the data in the tables of these two databases stays consistent, so I need to insert all the new rows and update all the changed rows in the test DB from production every night.
I came up with the idea of using SSIS packages to keep the data consistent by checking for updated rows and new rows in all the tables. My SSIS flow is as follows.
I have a separate package in SSIS for each table. In order:
1. I get the timestamp value in the table so that I only fetch the last day's rows instead of the whole table.
2. I get those rows of the table from production.
3. Then I use the Lookup tool to compare this data with the test database table data.
4. Then I use a Conditional Split to determine whether each row is new or updated.
5. If the row is new, I insert it into the destination; if it is updated, I update it in the destination table.
The data flow is in the MTRule and STBranch packages in the picture.
The problem is that I'm recreating this same flow for every table, and I have more than 300 tables like this. It takes hours and hours :(
What I'm asking is: is there any way in SSIS to do this dynamically?
PS: Every table has its own columns and PK values, but my data flow schema is always the same. (Below)
You can look into BimlScript, which lets you create packages dynamically based on metadata.
I believe the best way to achieve this is to use Expressions, which let you dynamically set the source and destination. One possible solution might be as follows (a sketch of the config table follows below):
create a table which stores all your table names and PK columns
define a package which loops through this table and which builds a SQL statement
call your main package and pass the statement to it
use the statement as the data source for your Data Flow
if applicable, pass the destination table as a parameter as well (another column in your config table)
This is how I processed several really huge tables: the data had to be fetched from 20 tables and moved into one single table.
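As a sketch of that config-table idea (all names hypothetical), the loop would read rows like these and assemble the source statement per table:

    -- metadata table driving the loop
    CREATE TABLE dbo.TableConfig (
        TableName sysname NOT NULL,
        PKColumn  sysname NOT NULL
    );

    INSERT INTO dbo.TableConfig VALUES
        (N'Orders',    N'OrderID'),
        (N'Customers', N'CustomerID');

    -- for each row the package would build a statement such as
    --   SELECT * FROM dbo.Orders WHERE ModifiedDate >= DATEADD(day, -1, GETDATE())
    -- and pass it to the data flow via a variable/expression.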
Why do you need to use SSIS?
You are better off writing a stored procedure that takes the table name as a parameter and doing your CRUD there. Then call the stored procedure from a Foreach Loop container in SSIS.
In fact, you might be able to do everything with a stored procedure and schedule it in a SQL Agent job.
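A minimal sketch of such a procedure, assuming every table has a ModifiedDate column and a linked server named PROD pointing at production (a delete-then-reinsert of the last day's slice stands in for a full MERGE to keep the dynamic SQL short; all names are hypothetical):

    CREATE PROCEDURE dbo.SyncTable
        @TableName sysname,
        @PKColumn  sysname
    AS
    BEGIN
        DECLARE @sql nvarchar(max);

        SET @sql = N'
            -- remove the test copy of any row that changed in the last day...
            DELETE FROM dbo.' + QUOTENAME(@TableName) + N'
            WHERE ' + QUOTENAME(@PKColumn) + N' IN (
                SELECT ' + QUOTENAME(@PKColumn) + N'
                FROM PROD.ProdDb.dbo.' + QUOTENAME(@TableName) + N'
                WHERE ModifiedDate >= DATEADD(day, -1, GETDATE()));

            -- ...then re-insert the current production version of those rows
            INSERT INTO dbo.' + QUOTENAME(@TableName) + N'
            SELECT *
            FROM PROD.ProdDb.dbo.' + QUOTENAME(@TableName) + N'
            WHERE ModifiedDate >= DATEADD(day, -1, GETDATE());';

        EXEC sp_executesql @sql;
    END

The loop (or the Agent job) would then run EXEC dbo.SyncTable N'Orders', N'OrderID'; once per row of the config table.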

Master Data Services deleting rows from UI

We have duplicate data in entities in Master Data Services, and not in the staging tables. How can we delete it? We cannot delete each row by hand because there are more than 100.
Did you create a view for this entity? See: https://msdn.microsoft.com/en-us/library/ff487013.aspx
Do you have access to the database via SQL Server Management Studio? If so:
1. Write a query against the view that returns the value of the Code field for each record you want to delete.
2. Write a query that inserts the following into the staging table for that entity: Code (from step 1), BatchTag, and an ImportType of 4 (delete).
3. Run the import stored proc: EXEC [stg].[udp_YourEntityName_Leaf]. See: https://msdn.microsoft.com/en-us/library/hh231028.aspx
4. Run the validation stored proc. See: https://msdn.microsoft.com/en-us/library/hh231023.aspx
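In T-SQL, steps 2-4 look roughly like this (YourEntity, the view name, the batch tag, and the duplicate criteria are placeholders; check the stored-proc parameters against the linked docs):

    -- step 2: stage the members to delete (ImportType 4 = delete)
    INSERT INTO stg.YourEntity_Leaf (ImportType, BatchTag, Code)
    SELECT 4, N'DeleteDupes', Code
    FROM mdm.YourEntityView          -- subscription view from step 1
    WHERE Name LIKE N'%duplicate%';  -- hypothetical duplicate criteria

    -- step 3: load the staged batch into the entity
    EXEC stg.udp_YourEntity_Leaf
         @VersionName = N'VERSION_1',
         @LogFlag     = 1,
         @BatchTag    = N'DeleteDupes';

    -- step 4: validate the model (user/model/version IDs vary per install)
    DECLARE @UserId int = 1, @ModelId int = 2, @VersionId int = 3;
    EXEC mdm.udpValidateModel @UserId, @ModelId, @VersionId, 1;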
Use ImportType 6 instead of 4: with ImportType 4, the deletion will fail if the Code you are trying to delete is referenced by a domain-based attribute in another entity. All the other steps remain the same as Daniel described.
I deleted the duplicate data from the transaction tables, which cleared the duplicates from the UI as well.
MDS comes out of the box with two front-end UIs:
Web UI
Excel plugin
You can use both of them to easily delete multiple records. I'd suggest using the Excel plugin.
Are there any domain-based attributes linked to the entity you're deleting values from? If so, and the values are related to child entity members, you'll have to delete those values first.
