I want to do a simple transformation with a flat file as the source and SQL Data Warehouse as the target. In the Target Analyzer, using an ODBC connection, I am able to connect to the SQL Data Warehouse and the connection is successful, but no tables are listed and I am unable to select a table to import. (Sample tables have already been created in the SQL Data Warehouse.)
Is it possible to create a source/target as SQL Data Warehouse? If so, kindly help me solve the issue.
Thanks & Regards
Prakash
Informatica does offer support for Azure SQL Data Warehouse and is listed as a partner solution: https://azure.microsoft.com/en-gb/documentation/articles/sql-data-warehouse-integrate-solution-partners/
Here is a link to all the configuration required for Informatica Cloud for example: https://kb.informatica.com/proddocs/Product%20Documentation/5/IC_Winter2016_MicrosoftAzureSQLDataWarehouseConnectorGuide_en.pdf
What version are you using Prakash?
If you are trying to select data from SQL DW, I would also check that your connection is targeting the specific database you have created. You could be connecting to the master database on the logical server rather than the SQL DW database itself.
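A quick way to confirm which database the session actually landed in is to run a simple query over that same connection; a minimal sketch in plain T-SQL (DB_NAME() is a standard built-in function):

-- Returns the name of the database the current session is connected to.
-- If this returns 'master', the connection is pointing at the logical
-- server's default database rather than the SQL DW database itself.
SELECT DB_NAME() AS current_database;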
We have an on-prem SQL Server DB (SQL Server 2017, compatibility level 140) that is about 1.2 TB. We need to do a repeatable migration of just the data to a cloud SQL database (PaaS). The on-prem database has procedures and functions that do cross-database queries, which rules out the Data Migration Assistant. Many of the tables we need to migrate are system-versioned tables (just to make this more fun). Ideally we would like to move the data into a different schema of a different DB so we can avoid the use of external tables (we are worried about performance).
Moving the data is just the first step as we also need to do an ETL job on the data to massage it into the new table structure.
We are looking at using ADF but it has trouble with versioned tables unless we turn them off first.
What other options can we look at to do this quickly and repeatably? Do we need to change to IaaS or use a third-party tool? Did we miss options in ADF to handle this?
If I summarize your requirements, you are not just migrating a database to cloud but a complete architecture of your SQL Server, which includes:
1.2 TB of data,
Continuous data migration afterwards,
Procedures and functions for cross DB queries,
Versioned tables
Points 1, 3, and 4 can be handled by creating and exporting a .bacpac file using SQL Server Management Studio (SSMS) from on premises to Azure Blob storage and then importing that file into Azure SQL Database. The .bacpac file that we create in SSMS allows us to include all system-versioned tables, which we can then import into the destination database.
Follow this third-party tutorial by sqlshack to migrate data to Azure SQL Database.
The stored procedures can also be moved using SQL Scripts. Follow the below steps:
Go to the server in Management Studio.
Select the database and right-click on it, then go to Tasks.
Select the Generate Scripts option under Tasks.
Once the wizard starts, select the desired stored procedures you want to copy
and generate a script file from them, then run the script from that file against the Azure SQL DB, which you can log in to from SSMS.
The repeatable migration of data is the challenging part. You can try it with Change Data Capture (CDC), but I'm not sure whether that is exactly what your requirement calls for. You can enable CDC at the database level using the command below:
USE <databasename>;
EXEC sys.sp_cdc_enable_db;
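Enabling CDC at the database level alone does not capture anything; each table you want tracked also has to be enabled individually. A minimal sketch (the schema and table names are placeholders):

-- Enable change tracking for one table; repeat per table to migrate.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',       -- schema of the tracked table (placeholder)
    @source_name   = N'MyTable',   -- table name (placeholder)
    @role_name     = NULL;         -- NULL means no gating role is required to read changes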
Refer to the following to know more: https://www.qlik.com/us/change-data-capture/cdc-change-data-capture
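For the repeatable part, each subsequent run would then read only the changes captured since the previous run and apply them to the cloud database. A minimal sketch, assuming a table enabled as above (its default capture instance would be named dbo_MyTable):

-- Read all changes captured for dbo.MyTable over the full available LSN range.
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_MyTable');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');

In a real pipeline you would persist the last LSN processed and use it as the lower bound on the next run instead of the minimum.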
I'm working on a project where I need to automatically run an insert statement to insert a result set; the problem is that I need it to go from a SQL Server over to a DB2 server. I can't create a file or script and then import it or run it on the other side. I need to insert or update the DB2 side from the SQL Server side.
Is this possible? I need this to run all by itself as part of a stored procedure in SQL Server.
You're looking for the linked server feature.
Typically linked servers are configured to enable the Database Engine to execute a Transact-SQL statement that includes tables in another instance of SQL Server, or another database product such as Oracle. Many types of OLE DB data sources can be configured as linked servers, including Microsoft Access and Excel. Linked servers offer the following advantages:
The ability to access data from outside of SQL Server.
The ability to issue distributed queries, updates, commands, and transactions on heterogeneous data sources across the enterprise.
The ability to address diverse data sources similarly.
(I believe most of the major RDBMSs have a similar feature)
For the most part, this essentially allows you to treat tables or sources in the other database as if they were part of the SQL Server instance - an INSERT statement should just work "normally".
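A minimal sketch of what that can look like (the linked server name, the DB2 catalog/schema/table names, and the provider are placeholders; the exact provider string depends on which DB2 client you install, e.g. Microsoft's DB2OLEDB or IBM's IBMDADB2):

-- One-time setup: register the DB2 server as a linked server.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'DB2LINK',          -- local name for the linked server (placeholder)
    @srvproduct = N'DB2',
    @provider   = N'DB2OLEDB',         -- assumed: Microsoft OLE DB Provider for DB2
    @datasrc    = N'MyDb2DataSource';  -- data source as configured in the provider (placeholder)

-- Inside your stored procedure, four-part naming then lets you insert
-- a SQL Server result set directly into the DB2 table.
INSERT INTO DB2LINK.MYDB.MYSCHEMA.MYTABLE (COL1, COL2)
SELECT col1, col2
FROM dbo.LocalSourceTable;

If four-part naming is awkward with your DB2 provider, the same statement can usually be expressed through OPENQUERY, which pushes the work to the remote side.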
As mentioned, you can use a linked server on the SQL Server side to perform operations between the two servers. I haven't done much with running DML on DB2 from SQL Server, but in my experience SSIS performs far better than linked servers for transactions pulling data from DB2 to SQL Server using an OLE DB connection. You can read more about OLE DB connections in SSIS here, and you'll want to reference the DB2 documentation for the specific DB2 type (mainframe, LUW, etc.) that's used for details on setting up the connection there. If you set up the SSIS catalog, you can run packages using SQL Server stored procedures, which you can either use directly or execute from an existing user stored procedure.
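For reference, once a package is deployed to the SSIS catalog, running it from a stored procedure comes down to two SSISDB procedures; a minimal sketch (folder, project, and package names are placeholders):

DECLARE @execution_id BIGINT;

-- Create an execution instance for a package deployed to the SSIS catalog.
EXEC SSISDB.catalog.create_execution
    @folder_name  = N'MyFolder',          -- catalog folder (placeholder)
    @project_name = N'MyProject',         -- project name (placeholder)
    @package_name = N'LoadFromDb2.dtsx',  -- package name (placeholder)
    @execution_id = @execution_id OUTPUT;

-- Start it; this returns immediately and the package runs asynchronously.
EXEC SSISDB.catalog.start_execution @execution_id;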
I'm trying to query my Hortonworks cluster Hive tables from SQL Server. My scenario below:
HDP 2.6, Ambari, HiveServer2
SQL Server 2016 Enterprise
Kerberos configuration for secure logins in HDP
I was reading about the PolyBase service in SQL Server 2016 (and, I suppose, later versions). However, I realize that, according to the documentation, what this service provides in SQL Server is a bridge to reach my HDFS and recreate external tables based on that data source.
What I'm expecting instead is to query Hive objects as if they were SQL Server objects as well, for example through a linked server.
Does someone have an example, or know whether this is possible between SQL Server and Hive?
Thanks so much
Hive acts more as a job compiler than a database. This means every SQL statement you write will be translated into a job for Hadoop, sent over to the cluster, and executed there. From the user's perspective it looks like querying a table.
The already mentioned approach of reading the HDFS data source and re-creating the tables in SQL Server is the correct one. Since Hive and the database server are different technologies, something like a linked server does not seem technically possible to me.
Hive nowadays provides a JDBC interface which can be used to connect to it. But even with Hive JDBC, every query will end up as a cluster job for distributed computing, running over the files in HDFS, creating a result set, and presenting it to you.
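For completeness, the PolyBase route mentioned in the question looks roughly like this on the SQL Server 2016 side; a sketch only, where the namenode address, file format, and column layout are placeholders that must match the cluster (and a Kerberized cluster needs additional PolyBase configuration):

-- Point SQL Server at the Hadoop cluster (PolyBase must be installed and enabled).
CREATE EXTERNAL DATA SOURCE HdpCluster
WITH (TYPE = HADOOP,
      LOCATION = 'hdfs://namenode:8020');  -- placeholder namenode address

-- Describe how the underlying files are stored (here: delimited text).
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- Expose the HDFS data behind the Hive table as a queryable external table.
CREATE EXTERNAL TABLE dbo.tableInHadoop (
    column1 INT,             -- placeholder columns; must match the files
    column2 NVARCHAR(100)
)
WITH (LOCATION = '/apps/hive/warehouse/databaseinhadoop.db/tableinhadoop',
      DATA_SOURCE = HdpCluster,
      FILE_FORMAT = TextFileFormat);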
If you want to query Hive from SQL Server, you can download an ODBC driver (Microsoft or Hortonworks) and create a Data Source Name (DSN) for Hive. In the driver's Advanced options, check Use Native Query. Then just create a new linked server in SQL Server whose data source matches the DSN name from the ODBC driver.
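Creating that linked server from T-SQL would look something like the following (a sketch; HiveDSN stands in for whatever you named the DSN, and MSDASQL is the generic OLE DB provider for ODBC drivers):

-- Create a linked server over the Hive ODBC DSN via MSDASQL.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'HadoopLinkedServer',  -- name used in OPENQUERY below
    @srvproduct = N'HIVE',
    @provider   = N'MSDASQL',
    @datasrc    = N'HiveDSN';             -- the ODBC DSN you created (placeholder)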
Write an OPENQUERY something like:
SELECT TOP 100 *
FROM OPENQUERY(HadoopLinkedServer,
    'SELECT column1, column2 FROM databaseInHadoop.tableInHadoop');
I have some Access tables with a large number of fields. I have migrated each Access table to 6 or 7 SQL Server tables. I am using SQL Server 2008. Now I want to use Access as the front end so that I can enter the data in Access but have it stored in SQL Server. I know that I have to make an ODBC connection, but I am not sure how to create an Access form to use as the front end. I am sorry if it's a basic question...
You will probably want to start with an empty Access database (since the table structures and any existing forms and reports will not match what you created in SQL Server).
First step is to establish an ODBC connection to your SQL Server database. Then you will "link" the tables in SQL Server to your Access database.
Now, you have an Access database with all the tables that you linked from SQL Server. Those tables still "live" in SQL Server and when you edit them in Access the data will be stored in SQL Server.
You can then build Access forms and reports using these tables just as if the tables were native to Access.
The most versatile way is to use ODBC links to your SQL Server tables and views. That approach allows you the flexibility to link to other ODBC data sources, tables in other Jet/ACE database files, create Jet/ACE tables locally in your database, link to Excel spreadsheets, and so forth. You can incorporate a broad range of data sources.
If you choose ADP, you will be limited to OLE DB connection to a single SQL Server instance. And you will be essentially locked in to SQL Server. You would not be able to switch the application to a different client-server database without a major re-development effort.
Regarding deployment overhead with ODBC: although you may find it convenient to use a DSN during development, you should convert your ODBC links to DSN-less connections before deployment. That way your users won't each require the DSN. See Doug Steele's page: Using DSN-Less Connections.
Well, you can create an ODBC connection. You can also create an ADODB connection. If your objective is to update or modify a SQL Server database, both connections will do the trick.
Now, I guess you have to get familiar with the corresponding objects. These should be tables, queries, commands, etc., which will allow you, for example, to build recordsets out of SQL queries. Once you are clear on that, you can, for example, assign an open recordset to a form with Set myForm.Recordset = myRecordset.
I am using Enterprise Miner 6.2 and want to create a data source, but my only option is a SAS table. How do I go about getting SQL Server or Excel data into a SAS table?
SAS has many ways of connecting to and/or reading data from disparate sources. I haven't used Enterprise Miner, so I'm not sure which of SAS's methods are available to you directly from within EM, but it's likely there will be someone at your site who has some interface to Base SAS and who can advise on what data access products are installed and how you can use them.
For SQL Server data, SAS/Access to SQL Server or SAS/Access to OLE DB will allow you to read directly from SQL Server tables in place. Alternatively, someone could provide you with a dump of the data you need from the SQL Server database.
For Excel data, there are also SAS/Access products, but SAS also has native capabilities to read in the data if saved as, for example, a .csv or .txt file.
To help answer you further, perhaps you can come back with some details about which SAS products/interfaces are available to you?