Snowflake ODBC driver vs SQL API

What are the main differences between connecting our RoR application to Snowflake using the ODBC driver vs SQL API?
The main use case is for read only access to run various custom queries against a few tables.
We've prototyped both connections. Both work well. ODBC appears to be faster when running simple queries.
One use case is to execute ~10 queries in one request. ODBC requires us to execute 10 separate SQL statements, while the SQL API allows us to submit the queries together but then requires an additional API call for each statementHandle to get the results. The API calls are fast, but that's still 11 API calls.
Is ODBC the obvious choice here? What if ~10 queries grows to 50-100? What if the result set is 50-100k+ rows of data? I do see how SQL API partitions the results. That might come in handy. Not sure how ODBC handles that offhand.
Other thoughts on security, performance, etc to think about?
Thanks!

First, it is possible to send a batch of statements with ODBC in a single request: Executing a Batch of SQL Statements (Multi-Statement Support)
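As a rough sketch (the table names here are invented), a batch sent through the driver could look like this; the MULTI_STATEMENT_COUNT parameter tells Snowflake how many statements to expect, and 0 allows any number:

ALTER SESSION SET MULTI_STATEMENT_COUNT = 0;

-- Hypothetical read-only batch, submitted as one statement text
SELECT COUNT(*) FROM orders;
SELECT COUNT(*) FROM customers;
SELECT MAX(updated_at) FROM shipments;

The driver then steps through the result sets of the batch one by one (in ODBC terms, something like SQLMoreResults), so you still issue only one request.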
Other than that, there are a few differences that might not matter for your use case. For example, the ODBC driver returns results in the Apache Arrow format, while the REST API returns the first partition of the results as JSON and subsequent partitions as gzipped JSON.
The REST API has some limitations, largely stemming from the fact that it cannot maintain a session across requests, plus a few other quirks such as not being able to PUT files into a stage (including unstructured files).
The ODBC driver (and other official connectors) will always have access to any command on Snowflake because they can maintain a session. So it will ultimately come down to personal preference and what your app needs to do.
Another thing to consider is that the ODBC driver has years of development behind it, while the SQL API is relatively new (though by all accounts it works great).
Limitations of the SQL API, from the docs:
The following commands are not supported:
The PUT command (in Snowflake SQL)
The GET command (in Snowflake SQL)
The CALL command with stored procedures that return a table (stored procedures with the RETURNS TABLE clause).
The following commands and statements are supported only within a request that specifies multiple statements:
Commands that perform explicit transactions, including:
BEGIN
COMMIT
ROLLBACK
Commands that change the context of the session, including:
USE
ALTER SESSION
Statements that set session variables.
Statements that create temporary tables and stages (tables and stages that are available only in the current session).

Related

Calling API from Azure SQL Database (as opposed to SQL Server)

So I have an Azure SQL Database instance that I need to run a nightly data import on, and I was going to schedule a stored procedure to make a basic GET request against an API endpoint, but it seems like the OLE object isn't present in the Azure version of SQL Server. Is there any other way to make an API call available in Azure SQL Database, or do I need to put in place something outside of the database to accomplish this?
There are several options. I do not know whether a PowerShell job, as suggested in the first comment on your question, can execute HTTP requests, but I do know of at least a couple of options:
Azure Data Factory allows you to create scheduled pipelines to copy/transform data from a variety of sources (like HTTP endpoints) to a variety of destinations (like Azure SQL databases). This involves little or no scripting.
Azure Logic Apps allows you to do the same:
With Azure Logic Apps, you can integrate (cloud) data into (on-premises) data storage. For instance, a logic app can store HTTP request data in a SQL Server database.
Logic apps can be triggered on a schedule as well and involve little or no scripting.
You could also write an Azure Function that is executed on a schedule, calls the HTTP endpoint, and writes the result to the database. Multiple languages are supported for writing functions, such as C# and PowerShell.
All of those options also allow you to trigger an execution manually, outside the schedule.
In my opinion, Azure Data Factory (no coding) or an Azure Function (code only) are the best options, given the need to parse a lot of JSON data. But do bear in mind that Azure Functions on a Consumption Plan have a maximum allowed execution time of 10 minutes per invocation.

What is the most secure way to allow users to execute BCP export command from SQL?

I am currently working in an environment where the ability to export a table programmatically from within a hand-run SQL script would be of great help.
Performing the exports from script will be the first step towards running the entire process from within a stored procedure, therefore I have to be able to initiate the export from SQL.
The organisation currently has the following configuration on most servers -
SQL Server 2005 or 2008
xp_cmdshell - disabled
CLR - enabled
Ultimately, I would like to be able to call a procedure passing the following parameters and have it perform the export.
table name
file path/name (on a network share)
file format
Currently BCP seems like a perfect option in terms of functionality but I am unable to invoke it via the command line due to xp_cmdshell being disabled.
The organisation is quite small and happy to work towards a secure solution, and my impression so far is that they have a good level of control over their security. They have made a blanket decision to disable xp_cmdshell, but if I could present a safe way to allow use of it I think they would be pretty receptive.
In my research I've come across both the 'EXECUTE AS' functionality as well as signing procedures with certificates, but I still can't work out if either approach can help me achieve what I want.
Also, if you have another solution that allows me to achieve the same end-result I'm all ears!
As Aaron Bertrand pointed out, the problem is that xp_cmdshell is disabled.
There are two options you may consider.
Use BULK INSERT. It requires INSERT and ADMINISTER BULK OPERATIONS permissions.
Create a SQL Agent job whose step type is "Operating System (CmdExec)" to run BCP; you may need code to create/update jobs in order to pass parameters. A sketch follows below.
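A minimal sketch of option 2, assuming SQL Server Agent is available; the job, database, table, file path, and server names below are all placeholders:

USE msdb;

-- One-off setup: a job with a single CmdExec step that shells out to bcp
EXEC dbo.sp_add_job @job_name = N'Export_MyTable';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Export_MyTable',
    @step_name = N'bcp out',
    @subsystem = N'CmdExec',
    @command   = N'bcp MyDb.dbo.MyTable out \\fileserver\exports\MyTable.dat -c -T -S MYSERVER';

EXEC dbo.sp_add_jobserver @job_name = N'Export_MyTable';

-- The hand-run script (or eventual stored procedure) then kicks off the export
EXEC dbo.sp_start_job @job_name = N'Export_MyTable';

Note that sp_start_job returns immediately and the job runs asynchronously, so the calling script has to either poll for completion or accept that the file appears shortly afterwards. The CmdExec step also runs under the Agent service account (or a proxy) rather than the caller's login, which is part of why it sidesteps the xp_cmdshell problem.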

CLR function calls a remote SQL Server

I am totally new to SQL Server CLR. I understand that we should use the CLR when business logic is too complicated to implement in SQL.
We have quite a few functions in VB.NET to process data, such as standardizing data. I am trying to implement them through CLR. The functions access a remote server first to get some reference data, then process on the local server.
However, no matter what I try, I get the error
System.NullReferenceException: Object reference not set to an instance of an object.
or it returns null from the remote server.
Can we access a remote server in the CLR routine? If yes, how?
You can access remote servers in the .Net CLR but you really shouldn't.
SQL Server operates in a cooperative multitasking environment, i.e. threads are trusted to terminate and complete their processing (one way or another) in a timely manner. If you start doing things like calling remote methods (which are liable to long delays) you are likely to end up starving SQL Server worker threads, which will ultimately result in Bad Things happening.
Also see this question
Yes you can. You can use the normal SqlCommand and SqlConnection classes in the .NET Framework to do so, if the remote servers are SQL Servers, which I assume they are.
If they are web servers, you can use web services.
On a side note: be very careful what you do in the CLR, because, as attractive as the CLR looks, you only have about 512 MB of memory under SQL 2005, and by adding some startup parameters you can push it out to 2 GB. Just be aware.
EDIT:
Based on your comments, I suggest using a linked server, re-creating the remote table locally, and then joining to it on the local server.
You will have to make sure you re-create indexes and keys on the local box, and for speed's sake, do it after you have inserted the records into the table; otherwise loading the data into an already indexed table will take a long time.
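A hedged sketch of that approach; the server, database, table, and index names are made up:

-- One-off: register the remote SQL Server as a linked server
EXEC sp_addlinkedserver
    @server     = N'REMOTESRV',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'remotehost.example.com';

-- Pull the remote reference data into a local copy first...
SELECT *
INTO   dbo.RefData_Local
FROM   REMOTESRV.RefDb.dbo.RefData;

-- ...and only then build the index, so the load is not slowed by index maintenance
CREATE CLUSTERED INDEX IX_RefData_Local_Id ON dbo.RefData_Local (Id);

Your CLR (or plain T-SQL) code can then join against dbo.RefData_Local without ever leaving the local server.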

Is there some way to access Sql server from z/OS mainframe and have the result in IBM 3270 terminal emulation?

Is there any way (possibly cheap) to access Microsoft SQL Server from a z/OS mainframe (COBOL programs) and have the result in 3270 terminal emulation?
I'm aware that 3270 is a pretty old system, but in bank data centers it is still very popular.
It depends on what you are actually trying to do. My reading of your question is that you want to have a mainframe-based process access a SQL Server database, and then do something with the result, probably involving a 3270 terminal.
If you can use Unix System Services, you can compile a TDS library like FreeTDS and then use a C program to do what you want with the result. If you want to get more complex, you can run the connection from the native z/OS environment by compiling the code with IBM C, SAS C or Dignus C/C++. I can recommend Dignus and I have used it to build code that interacts with other languages on z/OS. The Dignus headers and runtime library have (from memory) some FreeBSD lineage which helps to simplify porting.
Using this approach you can get a load module that you can call from some other part of your system to do the work, you can link the code with other parts of your system, or you can just submit a job and get the output.
If you want to use Java, you can use something like jTDS and write Java code to do what you need. I haven't used Java on z/OS, so I can't offer specific advice there, but I have used jTDS on other platforms and I've been happy with the result.
Update:
You can export a C function as an entry point to a load module and then call that from Cobol. The C/C++ implementation needs to deal with Cobol data structures; they are well defined and predictable so that isn't a problem. Depending on how flexible you need things to be, you could compile the query into the C code and just have a function which executed a predefined query and had an interface for retrieving the result, or you could have something more complex where the query was provided from the Cobol program.
I have used this approach to provide API functions to Adabas/Natural developers and it worked well. The Dignus compiler has a mechanism for callers to provide a handle to a runtime library so that you can manage the lifetime of the C runtime environment from the calling program.
For a C/C++ developer, that should be fairly straightforward. If your developers are all Cobol developers, things could be a bit more tricky.
The gateway approach is possible, and I'm sure there are gateway products around, but I can't recommend one. I have seen crappy ones that I wouldn't recommend, but that doesn't mean that there isn't a good one somewhere.
For completeness, I'll mention the possibility of implementing the TDS protocol in Cobol. Sounds like cruel and usual punishment, though.
If you have 3270 terminal emulation, what terminals are you using? PC's?
One interesting hack is using a Cisco router to do on-the-fly 3270 to vanilla TCP conversion, and then writing a simple TCP proxy for your SQL Server procedures.
Not as such - 3270 emulators connect to an IBM mainframe. In order to get at data from a SQL Server database from the mainframe, you would have to write a program running on the mainframe that reads data from the SQL Server DB. This would require you to have driver software running on the mainframe. You might be able to find a third party that makes this type of thing, but it is likely to be quite expensive.
If you need to put together a report or something combining data from a mainframe system with external data sources it may be easier to get the data off the mainframe and do the integration elsewhere - perhaps some sort of data mart.
An alternative would be to extract the data you want from the SQL Server database and upload it to the mainframe as a flat file for processing there.
Here is a possibility if you are writing COBOL programs that run in CICS.
First, wrap your SQL Server database stored procedure with a web service wrapper. Look at devx.com article 28577 for an example.
After that, call your new SQL Server hosted web service using a CICS web service call.
Lastly, use standard CICS BMS commands to present the data to the user.
Application Development for CICS Web Services
Just get the JDBC driver to access the MS-SQL server. You can then subclass it and use it in your Cobol program and access the database just as if you were using it from Java.
Once you get your results, you can present them via regular BMS functions.
No dirty hacks or fancy networking tricks needed. With IBM Enterprise Cobol, you really can just create a Java class and use it as you would in the Java space.
You might be able to do something I did in the past. I have written DB2 to MS-SQL COBOL programs/functions that make the MS-SQL table/view SELECT-only to DB2. It involved creating a running service on a network server that would accept TCP/IP connections only from the mainframe and use the credentials passed as the user ID/PW for accessing the MS-SQL table. It would then issue a select against the table/view and pass back the field name list first, along with the total number of rows. It would then pass each row, as tab-delimited fields, back to the mainframe.
The COBOL program would save the field names in a table to be used to determine which routine to use to translate each MS-SQL field to DB2. From the DB2 point of view, it looks like a function that returns fields. We have about 30 of these running.
I had to create an MS-SQL describe procedure to help create the initial defines of the field translations for the COBOL program. I also had to create a COBOL program to read the describe data and create the linkage and procedure division commands. There is one COBOL program for each MS-SQL table/view.
Here is a sample function definition.
CREATE FUNCTION
TCL.BALANCING_RECON(VARCHAR(4000))
RETURNS
TABLE(
SCOMPANY CHAR(6),
PNOTENO VARCHAR(14),
PUNIT CHAR(3),
LATEFEES DEC(11,2),
FASB_4110 DEC(11,2),
FASB_4111 DEC(11,2),
USERAMOUNT1 DEC(11,2),
USERAMOUNT2 DEC(11,2),
USERFIELD1 VARCHAR(14)
)
LANGUAGE COBOL
CONTINUE AFTER FAILURE
NOT DETERMINISTIC
READS SQL DATA
EXTERNAL NAME DB2TCL02
COLLID DB2TCL02
PARAMETER STYLE SQL
CALLED ON NULL INPUT
NO EXTERNAL ACTION
DISALLOW PARALLEL
SCRATCHPAD 8000
ASUTIME LIMIT 100
STAY RESIDENT YES
PROGRAM TYPE SUB
WLM ENVIRONMENT DB2TWLM
SECURITY DB2
DBINFO
; COMMIT;
GRANT EXECUTE ON FUNCTION TCL.BALANCING_RECON TO PUBLIC;
To call the function:
SELECT * FROM
TABLE (TCL.BALANCING_RECON('')) AS X;
You would put any MS-SQL filter commands between the quotes.
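For example, assuming the bridging service simply appends the string to its remote SELECT (PUNIT is one of the returned columns, so presumably also a column on the MS-SQL side; the value is made up):

SELECT * FROM
TABLE (TCL.BALANCING_RECON('WHERE PUNIT = ''001''')) AS X;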
I've not been asked to update any MS-SQL data, so I've not jumped that hurdle yet.
There is also a database in DB2 that keeps track of the ID/PW and the server that has the started task running. This is so that if a server becomes overloaded, different selects can be pushed to different servers. Response is quick, even for large tables. The timeout is the same as the 60-second deadlock timeout. The transport is primarily IP based. DB2 simply sees the data as an external table reference.
As dirty hacks go, have you thought about creating a simple HTTP or TCP server that returns a .csv of the table data that you need?
This means your client only needs a simple HTTP/TCP client to access the data rather than a database client library.
In my company, we use Java to connect to SQL Server,
and a CL program calls this Java program :)
Very simple...

MS Access Application - Convert data storage from Access to SQL Server

Bear in mind here, I am not an Access guru. I am proficient with SQL Server and .Net framework. Here is my situation:
A very large MS Access 2007 application was built for my company by a contractor.
The application has been split into two tiers BY ACCESS: there is a front-end portion that holds all of the MS Access forms, and a back-end part consisting of Access tables, queries, etc., which is stored on a computer on the network.
Well, of course, there is a need to convert the data storage portion to SQL Server 2005 while keeping all of the GUI forms that were built in MS Access. This is where I come in.
I have read a little, and have found that you can link the forms or maybe even the access tables to SQL Server tables, but I am still very unsure on what exactly can be done and how to do it.
Has anyone done this? Please comment on any capabilities, limitations, considerations about such an undertaking. Thanks!
Do not use the upsizing wizard from Access:
First, it won't work with SQL Server 2008.
Second, there is a much better tool for the job:
SSMA, the SQL Server Migration Assistant for Access, which is provided for free by Microsoft.
It will do a lot for you:
move your data from Access to SQL Server
automatically link the tables back into Access
give you lots of information about potential issues due to differences in the two databases
keep track of the changes so you can keep the two synchronised over time until your migration is complete.
I wrote a blog entry about it recently.
You have a couple of options. The upsizing wizard does a decent(ish) job of moving structure and data from Access to SQL Server. You can then set up linked tables so your application 'should' work pretty much as it does now. Unfortunately, the SQL dialect used by Access is different from SQL Server's, so if there are any 'raw SQL' statements in the code they may need to be changed.
Since you've linked the tables, though, all the other features of Access (the QBE, forms, and so on) should work as expected. That's the simplest and probably best approach.
Another way of approaching the issue would be to migrate the data as above, and then, rather than using linked tables, make use of ADO from within Access. That approach feels familiar if you're used to other languages/dev environments, but it's the wrong approach. Access comes with loads of built-in stuff that makes working with data really easy; if you go back to using ADO/SQL you lose many of those benefits.
I suggest starting with a small part of the application - non-essential data - and migrating a few tables to see how it goes. Of course, you back everything up first.
Good luck
Others have suggested upsizing the Jet back end to SQL Server and linking via ODBC. In an ideal world, the app will work beautifully without needing to change anything.
In the real world, you'll find that some of your front-end objects that were engineered to be efficient and fast with a Jet back end don't actually work very well with a server database. Sometimes Jet guesses wrong and sends something really inefficient to the server. This is particularly the case with mass updates of records: in order not to hog server resources (a good thing), Jet will send a single UPDATE statement for each record (which is a bad thing for your app, since it's much, much slower than one set-based UPDATE statement).
What you have to do is evaluate everything in your app after you've upsized it and where there are performance problems, move some of the logic to the server. This means you may create a few server-side views, or you may use passthrough queries (to hand off the whole SQL statement to SQL Server and not letting Jet worry about it), or you may need to create stored procedures on the server (especially for update operations).
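As a hedged illustration of moving an update to the server (the procedure, table, and column names here are invented), a single set-based call replaces the row-by-row UPDATEs Jet would otherwise send:

CREATE PROCEDURE dbo.usp_MarkInvoicesPaid
    @BatchId int
AS
BEGIN
    SET NOCOUNT ON;
    -- One statement updates the whole set instead of one UPDATE per record
    UPDATE dbo.Invoices
    SET    Status = 'PAID',
           PaidDate = GETDATE()
    WHERE  BatchId = @BatchId;
END;

From Access this would be invoked through a pass-through query whose SQL is just EXEC dbo.usp_MarkInvoicesPaid 42; so the whole statement is handed to SQL Server and Jet never touches the individual rows.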
But in general, it's actually quite safe to assume that most of it will work fine without change. It likely won't be as fast as the old Access/Jet app, but that's where you can use SQL Profiler to figure out what the holdup is and re-architect things to be more efficient with the SQL Server back end.
If the Access app was already efficiently designed (e.g., forms are never bound to full tables, but instead to recordsources with restrictive WHERE clauses returning only 1 or a few records), then it will likely work pretty well. On the other hand, if it uses a lot of the bad practices seen in the Access sample databases and templates, you could run into huge problems.
It's my opinion that every Access/Jet app should be designed from the beginning with the idea that someday it will be upsized to use a server back end. This means that the Access/Jet app will actually be quite efficient and speedy, but also that when you do upsize, it will cause a minimum of pain.
This is your lowest-cost option. You're going to want to set up an ODBC connection for your Access clients pointing to your SQL Server. You can then use the (I think) "Import" option to "link" a table to SQL Server via the ODBC source. Migrate your data from the Access tables to SQL Server, and you have your data on SQL Server in a form you can manage and back up. Importantly, queries can then be written on SQL Server as views and presented to the Access DB as linked tables as well.
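For instance, a hypothetical server-side view (names invented) that Access can link to via ODBC just like a table:

-- A restrictive view pushed to the server; Access links to dbo.vw_OpenOrders
CREATE VIEW dbo.vw_OpenOrders
AS
SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM   dbo.Orders
WHERE  Status = 'OPEN';

When you link a view, Access prompts you to pick a unique record identifier; choose one if you want the linked view to be updatable.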
Linked Access tables work fine but I've only used them with ODBC and other databases (Firebird, MySQL, Sqlite3). Information on primary or foreign keys wasn't passing through. There were also problems with datatype interpretation: a date in MySQL is not the same thing as in Access VBA. I guess these problems aren't nearly as bad when using SQL Server.
Important point: if you link the tables in Access to SQL Server, then EVERY table must have a primary key defined (Contractor? Access? Experience says that probably some tables don't have PKs). If a PK is not defined, the Access forms will not be able to update or insert rows, rendering the tables effectively read-only.
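A quick way to find the offenders before linking, plus an example of adding a key (the Orders table and column are hypothetical):

-- List user tables that have no primary key defined
SELECT t.name AS TableWithoutPK
FROM   sys.tables t
WHERE  NOT EXISTS (SELECT 1
                   FROM   sys.key_constraints k
                   WHERE  k.parent_object_id = t.object_id
                   AND    k.type = 'PK');

-- Add one where it is missing
ALTER TABLE dbo.Orders
ADD CONSTRAINT PK_Orders PRIMARY KEY (OrderID);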
Take a look at this Access to SQL Server migration tool. It might be one of the few, if not the ONLY, true peer-to-peer or server-to-server migration tools running as a pure Web Application. It uses mostly ASP 3.0, XML, the File System Object, the Data Dictionary Object, ADO, ADO Extensions (ADOX), the Dictionary Scripting Objects and a few other neat Microsoft techniques and technologies. If you have the Source Access Table on one server and the destination SQL Server on another server or even the same server and you want to run this as a Web Internet solution this is the product for you. This example discusses the VPASP Shopping Cart, but it will work for ANY version of Access and for ANY version of SQL Server from SQL 2000 to SQL 2008.
I am finishing up development for a generic Database Upgrade Conversion process involving the automated conversion of Access Table, View and Index Structures in a VPASP Shopping or any other Access System to their SQL Server 2005/2008 equivalents. It runs right from your server without the need for any outside assistance from external staff or consultants.
After creating a clone of your Access tables, indexes and views in SQL Server this data migration routine will selectively migrate all the data from your Access tables into your new SQL Server 2005/2008 tables without having to give out either your actual Access Database or the Table Contents or your passwords to anyone.
Here is the Reverse Engineering part of the process running against a system with almost 200 tables and almost 300 indexes and Views which is being done as a system acceptance test. Still a work in progress, but the core pieces are in place.
http://www.21stcenturyecommerce.com/SQLDDL/ViewDBTables.asp
I do the automated reverse engineering of the Access Table DDLs (Data Definition Language) and convert them into SQL equivalent DDL Statements, because table structures and even extra tables might be slightly different for every VPASP customer and for every version of VP-ASP out there.
I am finishing the actual data conversion routine which would migrate the data from Access to SQL Server after these new SQL Tables have been created including any views or indexes. It is written entirely in ASP, with VB Scripting, the File System Object (FSO), the Dictionary Object, XML, DHTML, JavaScript right now and runs pretty quickly as you will see against a SQL Server 2008 Database just for the sake of an example.
It takes perhaps 15-20 seconds to reverse engineer almost 500 different database objects. There might be a total of over 2,000 columns involved in this example for the 170 tables and 270 indexes involved.
I have even come up with a way for you to run both VPASP systems in parallel using 2 different database connection files on the same server just to be sure that orders entered on the Access System and the SQL Server system produce the same results before actual cutover to production.
John (a/k/a The SQL Dude)
sales#designersyles.biz
(This is a VP-ASP Demo Site)
Here is a technique I've heard one developer speak on. This is if you really want something like a Client-Server application.
Create .mdb/.mde frontend files distributed to each user (You'll see why).
For every table they need to perform CRUD on, have a local copy in the file from step 1.
The forms stay linked to the local tables.
Write VBA code to handle the CRUD from the local tables to the SQL Server database.
Reports can be based off of temp tables created from the SQL Server data (you won't be able to create temp tables in an .mde file, I don't think).
Once you decide how you want to do this with a single form, it is not too difficult to apply the same technique to the rest. The nice thing about working with the form on a local table is you can keep a lot of the existing functionality as the existing application (Which is why they used and continue to use Access I hope). You just need to address getting data back and forth to the SQL Server.
You can continue to have linked tables, and then gradually phase them out with this technique as time and performance needs dictate.
Since each user has their own local file, they can work on their local copy of the data. Only the minimum required to do their task should ever be copied locally. Example: if they are updating a single record, the table would only have that record. When a user adds a new record, you would notice that the ID field for the record is Null, so an insert statement is needed.
I guess the local table acts like a dataset in .NET? I'm sure in some way this is an imperfect analogy.
