I must decide whether to get a SQL Server Standard license or subscribe to Azure SQL Database for the needs of a small company. Basically, what I need is the ability to develop SSIS packages for data import from Excel and schedule their execution, plus develop job(s) for sending automated e-mails to customers. As I have zero administration skills, I think the Azure services would be the better option, but on the other hand I cannot find good information on how to develop SSIS directly in the Azure environment. Would I still need SQL Server for that?
To import data from Excel into an Azure SQL database with SSIS, you can refer to this tutorial: Import data from Excel or export data to Excel with SQL Server Integration Services (SSIS)
This article describes the connection information that you have to provide, and the settings that you have to configure, to import data from Excel or export data to Excel with SQL Server Integration Services (SSIS).
You also need to download SQL Server Data Tools (SSDT) to help you create the SSIS package. Reference tutorial: Create Packages in SQL Server Data Tools.
All of this needs a SQL Server environment to support it. We cannot develop the actual SSIS job in Azure without SQL Server.
You don't need SSIS to import data from Excel files to Azure SQL Database. You just need to schedule uploads of those Excel documents to an Azure Storage account, and from there you can use OPENROWSET or BULK INSERT to import them into Azure SQL Database.
First, create a DATABASE SCOPED CREDENTIAL holding a shared access signature (SAS) for the Storage Account.
CREATE DATABASE SCOPED CREDENTIAL UploadInvoices
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2018-03-28&ss=b&srt=sco&sp=rwdlac&se=2019-08-31T02:25:19Z&st=2019-07-30T18:25:19Z&spr=https&sig=KS51p%2BVnfUtLjMZtUTW1siyuyd2nlx294tL0mnmFsOk%3D';
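Note that a database scoped credential requires a database master key; if the database doesn't have one yet, create it first (the password below is a placeholder). Also, the SECRET is the SAS token without the leading '?'.
-- required once per database before any scoped credential can be created
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<EnterStrongPasswordHere>';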
Now create an external data source that maps to the Storage Account.
CREATE EXTERNAL DATA SOURCE MyAzureInvoices
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://newinvoices.blob.core.windows.net',
CREDENTIAL = UploadInvoices
);
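To sanity-check that both objects exist, you can query the catalog views:
-- lists the scoped credentials and external data sources in this database
SELECT name FROM sys.database_scoped_credentials;
SELECT name, location FROM sys.external_data_sources;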
Import the documents using OPENROWSET. Note that the example below reads a CSV file; OPENROWSET(BULK ...) can't parse .xlsx directly, so the Excel data is assumed to have been saved as CSV.
SELECT * FROM OPENROWSET(
BULK 'week3/inv-2017-01-19.csv',
DATA_SOURCE = 'MyAzureInvoices',
FORMAT = 'CSV',
FORMATFILE='invoices.fmt',
FORMATFILE_DATA_SOURCE = 'MyAzureInvoices'
) AS DataFile;
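OPENROWSET on its own just returns a rowset; to land the rows in a table, wrap it in an INSERT ... SELECT. A minimal sketch, assuming a hypothetical target table dbo.Invoices:
-- dbo.Invoices is a placeholder; its columns must match the format file
INSERT INTO dbo.Invoices
SELECT * FROM OPENROWSET(
    BULK 'week3/inv-2017-01-19.csv',
    DATA_SOURCE = 'MyAzureInvoices',
    FORMAT = 'CSV',
    FORMATFILE = 'invoices.fmt',
    FORMATFILE_DATA_SOURCE = 'MyAzureInvoices'
) AS DataFile;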
Alternatively, using BULK INSERT, specify the data source and the file path within the container:
BULK INSERT Colors2
FROM 'week3/inv-2017-01-19.csv'
WITH (DATA_SOURCE = 'MyAzureInvoices',
FORMAT = 'CSV');
You can automate this using Azure Automation to schedule execution of a stored procedure that uses OPENROWSET or BULK INSERT to import the files.
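A minimal sketch of such a procedure (the name dbo.usp_ImportInvoices is a placeholder), reusing the objects from the examples above; an Azure Automation runbook could then call it on a schedule:
CREATE PROCEDURE dbo.usp_ImportInvoices
AS
BEGIN
    -- re-run the bulk load from the storage account defined earlier
    BULK INSERT Colors2
    FROM 'week3/inv-2017-01-19.csv'
    WITH (DATA_SOURCE = 'MyAzureInvoices', FORMAT = 'CSV');
END;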
I am not able to copy data from ADLS Gen2 to SQL Server (it's not Azure SQL) using ADF.
What I have done is this:
Created datasets: an ADLS Gen2 dataset (src)
and a SQL Server dataset (tgt).
But it doesn't allow me to choose tgt as my sink, though it does list the sink options when the dataset is from Azure SQL or Data Lake.
You will have to create an Integration Runtime and configure it in your SQL Server linked service in ADF.
SQL Server is supported as a sink; you can find the details here.
As SQL Server is a different compute environment than Azure, you will have to create an IR (Integration Runtime) so that Azure and SQL Server can communicate with each other.
Integration Runtime
If you want to create an on-premises SQL Server dataset, you must install the self-hosted integration runtime manually:
A self-hosted integration runtime can run copy activities between a cloud data store and a data store in a private network. It also can dispatch transform activities against compute resources in an on-premises network or an Azure virtual network. The installation of a self-hosted integration runtime needs an on-premises machine or a virtual machine inside a private network.
If you're using Data Flow, note that Data Flow doesn't support the self-hosted integration runtime, so we can't use SQL Server as a connector there.
You must use the Copy activity instead.
HTH.
Environment: Oracle 12c
In SQL Server, a credential is a record that contains the authentication information (credentials) required to connect to a resource outside SQL Server. This information is used internally by SQL Server; most credentials contain a Windows user name and password. Here is the Microsoft doc about Credentials in SQL Server. In SQL Server, the default is to use the service account's credentials to access the resource outside SQL Server.
What is the SQL Server credential equivalent in Oracle?
There is no direct equivalent, as it depends on the resource.
For database links (which are equivalent to SQL Server linked servers), credentials are not stored as a separate object but are considered part of the database link itself.
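For illustration, a sketch of embedding the credentials in the link itself (all names and the TNS alias are placeholders):
-- the user name and password are stored as part of the link definition
CREATE DATABASE LINK sales_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'SALESDB';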
For local external jobs, remote external jobs, and remote databases used by DBMS_SCHEDULER jobs, you need to use the DBMS_CREDENTIAL.CREATE_CREDENTIAL procedure. External procedures also use DBMS_CREDENTIAL.CREATE_CREDENTIAL.
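A minimal sketch of creating such a credential (the credential name and account are placeholders):
BEGIN
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'HOST_CRED',   -- placeholder
    username        => 'oracle',      -- OS or remote database account
    password        => 'secret');
END;
/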
The view DBA_CREDENTIALS can be used for DBMS_SCHEDULER jobs and external procedures but not for database links.
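For example, to list what is registered:
SELECT owner, credential_name, username, enabled
FROM dba_credentials;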
There may be other resources that are using credentials in a different way.
I am trying to grant a user access to export data from SQL Server into an Excel file using OPENROWSET.
The user is getting the following error:
Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
We can reproduce the issue by running the following block of code, which I can run successfully and the user cannot:
INSERT INTO OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0 Xml; HDR=YES; IMEX=0; Database=\\servername\exportdirectory\exportfile.xlsx',
    'SELECT ExcelColumn FROM [TabName$]')
SELECT TOP 1 SQLColumn FROM SQLTable
The only difference I can see between the users is that those who can successfully run this command and get the data into Excel are admins on the Windows server hosting both the SQL instance and the target directory.
The user who is unable to run the code has full-control permissions on the target file directory where the Excel file resides, and has sysadmin permissions on the SQL instance.
Is there any way to allow this user to write to this file without granting full server admin rights on the Windows server itself?
According to the MS documentation, the user who is executing the command needs the ADMINISTER BULK OPERATIONS permission.
This is a server-level permission, covered by the bulkadmin fixed server role, so you have to put any user that is going to do this in that role (at the server level), not necessarily make them a DBA.
https://learn.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-2017:
OPENROWSET permissions are determined by the permissions of the user name that is being passed to the OLE DB provider. To use the BULK option requires ADMINISTER BULK OPERATIONS permission.
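A minimal sketch, assuming a Windows login DOMAIN\ExportUser (a placeholder):
-- grant the permission directly to the login
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\ExportUser];
-- or add the login to the bulkadmin fixed server role
ALTER SERVER ROLE bulkadmin ADD MEMBER [DOMAIN\ExportUser];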
If you cannot do that (put the user in the BulkAdmin role) you may want to use SSIS to create the spreadsheet.
You are experiencing the "double hop problem". You need to enable impersonation so that the server hosting the share will accept impersonated credentials from the SQL Server. Here is an excerpt from the security considerations section of this page: Import Bulk Data by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server)
SQL Server and Microsoft Windows can be configured to enable an instance of SQL Server to connect to another instance of SQL Server by forwarding the credentials of an authenticated Windows user. This arrangement is known as impersonation or delegation. Understanding how SQL Server versions handle security for user impersonation is important when you use BULK INSERT or OPENROWSET. User impersonation allows the data file to reside on a different computer than either the SQL Server process or the user. For example, if a user on Computer_A has access to a data file on Computer_B, and the delegation of credentials has been set appropriately, the user can connect to an instance of SQL Server that is running on Computer_C, access the data file on Computer_B, and bulk import data from that file into a table on Computer_C.
This page might help you get started: Kerberos Constrained Delegation Overview
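As a quick check that a session is actually authenticating with Kerberos (NTLM cannot delegate), you can inspect the current connection:
-- returns KERBEROS or NTLM for the current session
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;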
This year we moved from hosted servers to Azure VMs; we run two production servers (SQL and IIS). A vital component of our business is the bulk transfer of data files. We take customers' data from our SQL Server and write it out to a file (XLS, CSV, XML, PDF, Word, etc.), then either email these files to customers or, in most cases, push them to their FTP servers. We also have a few import procedures where we retrieve data files. All of this is currently done with SSIS packages.
We're examining a move to Azure Data Factory as a replacement for SSIS so that we can possibly move to either Azure SQL Database (if we can work out the Service Broker limitations) or an Azure SQL Managed Instance.
I've done some preliminary work with ADF but I saw a couple of posts about lack of FTP support. Is it possible to create/deliver files to FTP and retrieve/consume files from FTP using ADF? Also, almost all of these jobs are automated and we use SQL Agent to run the packages. What is the Azure equivalent for scheduling these jobs to run?
There is automation in ADF, but the scheduler is per pipeline. Azure Automation is more powerful and can automate more than one pipeline (Azure Data Factory v2), if needed.
Automation with Azure Data Factory (ADF)
You can receive files from FTP into an Azure Data Factory pipeline: Copy data from FTP server by using Azure Data Factory. The idea is that you receive a file via FTP, submit it to a particular pipeline activity, and that activity pushes the data to an Azure data store. It might be possible to reverse the flow and send data out.
Azure SQL Database Managed Instance is the most on-premises-like (PaaS) database service, but SQL Server deployed on an Azure VM still has more functionality.
We have some web services (written in .NET WCF) that currently hit a Microsoft SQL Server database to retrieve/update data. We now have a requirement to also retrieve/update data from a Microsoft Access database. The Access database is currently being used by a number of legacy systems, so we can't really convert it to a Microsoft SQL Server database; we're stuck with an Access database.
My question is: Is there a way we can communicate with the Access database "through" Microsoft SQL Server (so that we can issue T-SQL commands to it and MS SQL Server would handle all the underlying mapping to query the Access database?) Or is it better to just communicate with the Access database via ADO.NET by exposing the location of the Access database on a network share? Does anyone have any suggestions we could try out?
Thanks all.
What about keeping the legacy Access application, but moving the tables and data out of it?
Access makes a great front end for SQL Server. When you build an application with Access, just like with most other development tools, you have to choose which data engine and database system you're going to use with it. Access has native support built in for the JET data engine (now called ACE), and native support built in for SQL Server. And Access 2010 not only has support built in for SharePoint, but also for SQL Azure.
So you could consider moving the data that Access now uses to SQL Server, and very little, if anything, would need to change in the Access application. The Access application functions happily whether the tables are in a file (an mdb or accdb file) or server-based like SQL Server; in fact, for tables that reside on SharePoint, Access uses web services to update that data. However, in all cases, the standard code, forms, VBA code, and even the SQL used need not be changed.
So I don't think the solution here is to attempt to attach SQL Server to some "file" sitting in a folder, but simply to have Access attach to SQL Server to update the tables; thus no need to "transfer" data between the systems would exist anymore.
Use a linked server:
A linked server allows for access to distributed, heterogeneous queries against OLE DB data sources. After a linked server is created by using sp_addlinkedserver, distributed queries can be run against this server.
If you're only ever running on a 32-bit platform:
EXEC sp_addlinkedserver
    @server = N'SEATTLE Mktg',
    @provider = N'Microsoft.Jet.OLEDB.4.0',
    @srvproduct = N'OLE DB Provider for Jet',
    @datasrc = N'C:\MSOffice\Access\Samples\Northwind.mdb';
Or if you have to worry about 64-bit as well:
EXEC sp_addlinkedserver
    @server = N'SEATTLE Mktg',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @srvproduct = N'OLE DB Provider for ACE',
    @datasrc = N'C:\MSOffice\Access\Samples\Northwind.accdb';
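Once the linked server is created, you can query the Access tables by four-part name or via OPENQUERY; a minimal sketch, assuming the Northwind sample's Customers table:
-- four-part name; Access has no catalog/schema, so they are omitted
SELECT CustomerID, CompanyName FROM [SEATTLE Mktg]...Customers;
-- or let the ACE/Jet provider execute the query itself
SELECT * FROM OPENQUERY([SEATTLE Mktg], 'SELECT CustomerID, CompanyName FROM Customers');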