I am creating a new Azure Data Factory pipeline. In it I need to copy one table to Azure Blob and delete the data after the copy succeeds. Before deleting the data, I need to create a view of the copied data and compare it with the data in the source database that is going to be deleted. I should delete the data from the source table only when the data in the view and the source table match.
As far as I know, Azure Data Factory doesn't support creating views, so you cannot do that.
Hope this helps.
I frequently need to validate CSVs submitted by clients to make sure that the headers and values in the file meet our specifications. Typically I do this by using the Import/Export Wizard and having the wizard create the table based on the CSV (the file name becomes the table name, and the headers become the column names). Then we run a set of stored procedures that check the information_schema for said table(s) and match that up with our specs, etc.
Most of the time this involves loading multiple files at a time for a client, which becomes very time consuming and laborious very quickly when using the Import/Export Wizard. I tried using an xp_cmdshell SQL script to load everything from a path at once to get the same result, but xp_cmdshell is not supported by Azure SQL DB.
https://learn.microsoft.com/en-us/azure/azure-sql/load-from-csv-with-bcp
The above says that one can load using bcp, but it also requires the table to exist before the import... I need the table structure to mimic the CSV. Any ideas here?
Thanks
If you want to load the data into your target SQL DB, you can use Azure Data Factory [ADF] to upload your CSV files to Azure Blob Storage, and then use a Copy Data activity to load the data from the CSV files into Azure SQL DB tables - without creating those tables upfront.
ADF supports 'auto create' of sink tables. See this and this.
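Once ADF has auto-created the sink tables, the same kind of information_schema check described in the question can still be run against them. Below is a minimal T-SQL sketch, assuming a hypothetical spec table dbo.FileSpecs (TableName, ColumnName, ExpectedType) and an auto-created table named ClientFile1 - both names are placeholders, not anything from the original post.

-- Compare the columns of the auto-created table against the spec.
-- dbo.FileSpecs and ClientFile1 are hypothetical names.
SELECT s.ColumnName  AS ExpectedColumn,
       c.COLUMN_NAME AS ActualColumn,
       s.ExpectedType,
       c.DATA_TYPE   AS ActualType
FROM  (SELECT ColumnName, ExpectedType
       FROM   dbo.FileSpecs
       WHERE  TableName = 'ClientFile1') AS s
      FULL OUTER JOIN
      (SELECT COLUMN_NAME, DATA_TYPE
       FROM   INFORMATION_SCHEMA.COLUMNS
       WHERE  TABLE_NAME = 'ClientFile1') AS c
        ON c.COLUMN_NAME = s.ColumnName
WHERE c.COLUMN_NAME IS NULL             -- expected column missing from the file
   OR s.ColumnName  IS NULL             -- unexpected column in the file
   OR c.DATA_TYPE  <> s.ExpectedType;   -- type mismatch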
I have staging tables in my SQL Server database, views that transform and combine those tables, and final tables that I create from the result data of the views.
I could automate the process by creating a stored procedure that would truncate the final table and insert the data from the view.
I want to know if it's possible to do this operation with an Azure Data Factory copy activity using the view as source and the table as sink.
Thank you for your help!
ADF supports SQL Server as both source and sink.
So there are two ways:
You can use a Copy activity with the view as your source and the table as the destination.
You can use a Stored Procedure activity, where all of the data ingestion/transformation logic lives inside a stored procedure and the activity simply calls it (see the sketch after this list).
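For the second option, here is a minimal sketch of such a stored procedure, in line with the truncate-and-insert approach described in the question. The view dbo.vw_FinalData, the table dbo.FinalTable, and the column names are placeholders.

-- Hypothetical names: dbo.FinalTable, dbo.vw_FinalData, Col1..Col3.
CREATE OR ALTER PROCEDURE dbo.usp_RefreshFinalTable
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRANSACTION;

    -- Empty the final table, then reload it from the transformation view.
    TRUNCATE TABLE dbo.FinalTable;

    INSERT INTO dbo.FinalTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM   dbo.vw_FinalData;

    COMMIT TRANSACTION;
END;

The Stored Procedure activity in ADF would then just call dbo.usp_RefreshFinalTable after the staging loads complete.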
My application, Fusion BICC, is dumping data into Oracle Cloud Object Storage in the form of CSV files. I need to load this data into my target database, so I am loading it into an external table and then comparing the data in the external table and the target table using MINUS; if a row is new I insert it, and if it already exists I update it. I need a few suggestions:
1) What is the best way to compare records when there is a huge amount of data?
2) Instead of writing to an external table, is there any better way? SQL*Loader, UTL_FILE, etc.?
3) If a record gets deleted in BICC, it does not appear in the CSV file, but I have to delete those records when they are not in the file. How do I tackle that?
Other than DBMS_CLOUD, is there any other package to upload data? I am very new to this, so please advise.
Consider BICC an application that dumps data in the form of CSV files to Oracle Cloud; I am basically interested in reading data from cloud storage into DBaaS.
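For what it's worth, the compare-and-apply step described above (MINUS, then insert or update) is essentially what a single MERGE does, and the deletes in point 3 can be handled with a separate anti-join, provided each CSV extract is a full snapshot. A rough sketch, where EXT_ORDERS, ORDERS and the column names are purely hypothetical:

-- Hypothetical names: EXT_ORDERS (external table over the CSV), ORDERS (target), ORDER_ID (key).
-- Upsert: insert new rows, update existing ones, in a single pass.
MERGE INTO orders t
USING ext_orders s
ON (t.order_id = s.order_id)
WHEN MATCHED THEN
  UPDATE SET t.status = s.status,
             t.amount = s.amount
WHEN NOT MATCHED THEN
  INSERT (order_id, status, amount)
  VALUES (s.order_id, s.status, s.amount);

-- Point 3: remove rows that no longer appear in the extract.
-- Only safe if the file is a full snapshot rather than an incremental extract.
DELETE FROM orders t
WHERE NOT EXISTS (SELECT 1
                  FROM   ext_orders s
                  WHERE  s.order_id = t.order_id);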
Is it possible to set up a pipeline in Azure Data Factory that performs a MERGE between the source and the destination rather than an INSERT? I have been able to successfully select data from my source on-prem table and insert into my destination, but I'd really like to set up a pipeline that will continually update my destination with any changes in the source, e.g. copying new records that are added to the source, or updating any data that changes on an existing record.
I've seen references to the Data Sync framework, but from what I can tell that is only supported in the legacy portal. My V12 databases do not even show up in the classic Azure portal.
There is the Stored Procedure activity, which could handle this. You could use Data Factory to land the data in a staging table and then call the stored procedure to perform the MERGE. Otherwise, Data Factory logic is not that sophisticated, so you could not perform a merge in the same way you could in SSIS, for example. Custom activities are probably not suitable for this, IMHO. This is also in line with Data Factory being ELT rather than ETL. A sketch of such a procedure is below.
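Here is a minimal sketch of that staging-then-MERGE stored procedure, assuming hypothetical tables dbo.StagingCustomers and dbo.Customers keyed on CustomerId (all names are placeholders):

-- Hypothetical names: dbo.StagingCustomers, dbo.Customers, CustomerId, Name, Email.
CREATE OR ALTER PROCEDURE dbo.usp_MergeCustomers
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.Customers AS target
    USING dbo.StagingCustomers AS source
        ON target.CustomerId = source.CustomerId
    WHEN MATCHED THEN
        UPDATE SET target.Name  = source.Name,
                   target.Email = source.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerId, Name, Email)
        VALUES (source.CustomerId, source.Name, source.Email);

    -- Clear the staging table so the next Data Factory load starts clean.
    TRUNCATE TABLE dbo.StagingCustomers;
END;

Data Factory's Copy activity lands the data in dbo.StagingCustomers, and the Stored Procedure activity then calls dbo.usp_MergeCustomers.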
I'm trying to copy a database for use in testing/development. In SQL Developer I can only see the user views; the data objects are not accessible to me.
Is there any way to copy the views only and get DDL that creates some sort of phantom structure for the data objects that are not reachable but referenced in the SQL queries for those views? The problem is there are over a thousand such references.
In the example below I cannot reach the header object due to permissions.
Example:
CREATE OR REPLACE FORCE VIEW "TRADE"."EXCHANGE" ("MSGQUE", "MSGQUE2") AS
select msgque, msgque2
from head.msgqueues;
I have tried to export the views in SQL Developer, but when I import them into my Oracle test database the views contain errors and are unusable because the data objects did not get exported in the export.sql file.
Thanks in advance
I recommend using the expdp utility to perform this. You can explicitly tell it to grab views and tables.
Example parfile:
SCHEMAS=SCOTT
INCLUDE=TABLE:"IN ('DEPT')"
INCLUDE=VIEW
DIRECTORY=datapump
DUMPFILE=dept.dmp
LOGFILE=dept.log
Then you can impdp that dump file into the DB you wish and you will have the TABLE and the VIEWs that go in the schema. You can modify the IN clause to grab whatever naming scheme you need.
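For reference, the invocations would look roughly like this; the credentials, connection identifiers, and the parfile name (dept.par) are placeholders:

expdp scott/password@sourcedb parfile=dept.par
impdp scott/password@testdb directory=datapump dumpfile=dept.dmp logfile=imp_dept.log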