I have a Python package that I am able to run successfully on an Azure Data Science Virtual Machine. However, when I push it to Azure as a Function, I cannot successfully make a database connection. I was getting an error that the ODBC Driver 13 for SQL Server was not supported, so I changed the driver to ODBC Driver 17 for SQL Server and now I am NOT getting an error, but no data is being returned for a query that I know should return data.
Is there any other reason that data would not be returned? Firewall issues? Do I need to add a binding? Do I need to separate out the connection string to feed each part (e.g., Driver, UID, PWD) into pyodbc.connect() separately? Right now I am feeding it in like this:
import os
import pyodbc

setting = os.environ["CONNECTIONSTRING"]  # os.environ is a mapping, so index it rather than calling it
conn = pyodbc.connect(setting)
The query returns data fine when I run this code on the VM, just not as a Function.
(Note, this is different from my previous post regarding reading the Azure App Setting. That problem has been solved).
There are many parts where this could be breaking.
I'd suggest starting with a Profiler or Extended Events trace on your SQL Server to verify whether a connection is even being established. If not, then you need to work through the various points of connectivity to find out where it breaks. The identity, firewall, NSGs, etc. might all come into play here.
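For example, a minimal Extended Events session along these lines (the session name is a placeholder, and this sketch assumes a regular SQL Server instance; Azure SQL Database would need a database-scoped session instead) will capture each login as it arrives:

-- capture logins so you can see whether the Function ever reaches the server
CREATE EVENT SESSION [trace_logins] ON SERVER
ADD EVENT sqlserver.login(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.username))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [trace_logins] ON SERVER STATE = START;

You can then watch the session via "Watch Live Data" in SSMS, or query the ring buffer target, while the Function runs.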
Once you see a connection, you can work through permissions to ensure that your query returns your data.
Without a full picture of your infrastructure and settings it is hard to pin it down further.
Turns out it was not a database connectivity issue like I thought it was; it was a code error.
I have a bunch of legacy Access-based databases that I've been using for years without issue - queries have been running between them for years using ODBC/DAO/ADO. Now, suddenly, in the last few days, I've started getting the "The database has been placed in a state by user...." error on a bunch of them.
I have tried to narrow the problem down, but it seems to be getting worse. I have tried making a local copy of the database file, opening it, and then, on the same machine, trying to create an ODBC connection to it - and I get the error. I have also tried running successive queries on the database and still get the same thing. (This is with a copy of the file on my local machine, so there is only my single connection: connect to the database, run a query, close the connection, wait 2 minutes, then try to open a new connection - FAIL. So it is definitely not a multi-user limit problem or anything like that.)
The issue is consistent across multiple platforms: directly in MS Access (2010 and 2013), with Excel (2010 and 2013) queries to the Access DB, and with Windows Forms VB.net applications trying to query the Access DB (through datasets, OLEDB, and ADO).
Until this week all of these applications were working as designed, and had been for years. I am the only dev working on this stuff, so I know that nothing in the programming has changed; it must be an external issue.
The back-end databases reside on a shared server drive (the server is running Windows Server 2008) - and we have had no other connection issues to the server or network; it is limited to connections to Access database files.
Does anyone know if something has changed lately (in the last week or so) with the ODBC drivers? Maybe an MS update?
Thanks in advance!
It seems that you can fix this issue by buffering the Access binary. Use the Binary.Buffer function in a query that defines your Access database, then reference that query in order to use the binary in a query that pulls each table. Note: I also define parameters for my folder path and file names.
For example:
//myDbBinary
let
    // Binary.Buffer takes only the binary itself; the CreateNavigationProperties
    // option belongs on Access.Database in the table query below
    Source = Binary.Buffer(File.Contents(DataFolder_param & FileName_param))
in
    Source
// Table1 Query
let
    Source = Access.Database(myDbBinary, [CreateNavigationProperties=true]),
    _Table1 = Source{[Schema="",Item="Table1"]}[Data]
in
    _Table1
The source is this
I am using SQL Server 2014 using FireDAC in Delphi XE7 to connect to the database.
We need an event to automatically open a form if some data is changed in a particular table. Therefore we found the TFDEventAlerter, which we used to create a queue and a service for each user.
UserEvent.Names.Add('QUEUE=qUserEvent');     // one queue shared by all users
UserEvent.Names.Add('SERVICE=s' + Username); // one service per user
UserEvent.Names.Add('CHANGE1=usr;SELECT ID FROM dbo.MsgBox WHERE Status = ''A'''); // doubled quotes escape the literal in Delphi
So we have got one queue and a lot of services that are listening to that queue. In general this setup is working fine.
But if a lot of users (550 in my case) are connecting to the database and adding new services to the queue, we run into bad performance caused by THREADPOOL starvation, as each service blocks a worker thread from time to time.
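One way to confirm the starvation (a sketch; run it while the slowdown is happening, via the dedicated admin connection if the server is unresponsive) is to look for THREADPOOL waits:

-- tasks currently waiting for a worker thread to become available
SELECT session_id, wait_duration_ms, wait_type
FROM sys.dm_os_waiting_tasks
WHERE wait_type = N'THREADPOOL';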
So does anybody know why there is a limitation on using services with the Service Broker in SQL Server 2014?
Is there another way to use the TFDEventAlerter with 500 users without creating 500 services? It seems to me that we are not using the TFDEventAlerter as it is meant to be used.
When I link Microsoft Access to SQL Server locally, everything works. The second I go to a computer that is on the network, I am not able to open the form I created in the Access database.
I found out that if I open the link, I get an error. If I choose where the DB is through the configuration in Access, I get the error again. If I try this a third time, it connects to the database and it is fully linked - I am able to type data into Access and it will store it in the SQL Server database as well.
My question is: how do I get it to connect to the server on the first try? I have to hit "connect" three times, which takes about 5 minutes to log in. That isn't very efficient when the people using this program know nothing about computers.
Linking Access to SQL Server uses ODBC. Make sure your ODBC settings are correct.
We have a classic ASP application that simply works, and we have been loath to modify the code lest we invoke the wrath of some long-dead Greek gods.
We recently had the requirement to add a feature to the application. The feature implementation is really just a database operation and requires minimal change to the UI.
I changed the UI and made the minor modification to submit a new data value to the sproc call (sproc1).
In sproc1 that is called directly from ASP, we added a new call to another sproc that happens to be located on another server, sproc2.
Somehow, this does not work via our ASP app, but works in SQL Management Studio.
Here are the technical details:
SQL 2005 on both database servers.
A SQL login is authenticating from the ASP application to SQL 2005 Server 1.
Linked server from Server 1 to Server 2 is working.
When executing sproc1 from SQL Management Studio - works fine. Even when credentialed as the same user our code uses (the application sql login).
sproc2 works when called independently of sproc1 from SQL Management Studio.
VBScript (ASP) captures an error, which is emitted in the XML back to the client. The error number is 0 and the error description is blank - both from the ADODB.Connection object and from whatever Err.Number/Err.Description yields in VBScript on the ASP side.
So without any errors, and without any way to reproduce it (i.e., through SQL Mgmt Studio), does anyone know what the issue is?
Our current plan is to break down and dig into the code on the ASP side and make a completely separate call to Server 2.sproc2 directly from ASP rather than trying to piggy-back through sproc1.
Have you got SET NOCOUNT ON set in both stored procedures? I had a similar issue once, and whilst I can't remember exactly how I solved it at the moment, I know that it had something to do with that!
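For example (just a sketch - the procedure name and body stand in for yours):

-- SET NOCOUNT ON suppresses the "N rows affected" messages,
-- which some ADO clients treat as extra, empty resultsets
ALTER PROCEDURE dbo.sproc1
AS
BEGIN
    SET NOCOUNT ON;
    -- ... existing body, including the call to sproc2 ...
END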
You could be suffering from the double-hop problem.
The double-hop issue arises when the ASP/ASPX page tries to use resources that are located on a server different from the IIS server.
Windows NT Challenge/Response does not support double-hop impersonations (in that once passed to the IIS server, the same credentials cannot be passed to a back-end server for authentication).
You should verify the attempted second connection using SQL Profiler.
Note that with your manual testing you are not authenticating via IIS. It's only when you initiate the SQL via the ASP/ASPX page that this problem manifests.
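One quick way to see the difference is to check the authentication scheme of each connection - run this through the ASP page's connection path and again in SQL Management Studio (a sketch):

-- NTLM credentials cannot make the second hop; Kerberos (with delegation) can
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;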
More resources:
http://support.microsoft.com/kb/910449
http://support.microsoft.com/kb/891031
http://support.microsoft.com/kb/810572
I had a similar problem and I solved it by setting NOCOUNT ON and removing PRINT commands.
My first reaction is that this might not be an issue of calling cross-server, but one of calling a second proc from a first, and that this might be what's acting differently in the two different environments.
My first question is this: what happens if you remove the cross-server aspect from the equation? If you could set up a test system where your first proc calls your second proc, but the second proc is on the same server and/or in the same database, do you still get the same problem?
Along these same lines: In my experience, when the application and SSMS have gotten different results like that, it has often been an issue of the stored procedures' settings. It could be, as Luke says, NOCOUNT. I've had this sort of thing happen from extraneous PRINT statements in the code, although I seem to remember the PRINTed value becoming part of the error description (very counterintuitively).
If anything is returned in the Messages window when you run this in SSMS, find out where it is coming from and make it stop. I would have to look up the technical terms, but my recollection is that different querying environments have different sensitivities to "errors", and that a default connection via SSMS will not throw an error at certain times when an ADO connection from a scripting language will.
One final thought: in case it is an environment thing, try different settings on your ASP page's connection string. E.g., if you have an OLEDB connection, try ODBC. Try the native and non-native SQL Server drivers. Check out what connection string options your provider supports, and try any of them that seem like they might be worth trying.
Example code might help :) Are you trying to return two tables from the stored procedure? I don't think ADO 2.6 can handle multiple tables being returned.
I did consider that (double-hop), but what is the difference between a sproc-in-a-sproc call like I am referring to vs. a typical cross-server join via INNER JOIN? Both would be executed on Server1, using the Linked Server credentials, and authenticating to Server 2.
Can anyone confirm that calling a sproc cross-server is different than doing a join on data tables? And why?
If the linked server config uses a SQL account, is that considered a double-hop (since what you refer to is NTLM double-hops)?
In terms of whether multiple resultsets are coming back - no. Both Server1.Sproc1 and Server2.Sproc2 would be "ExecuteNonQuery()" in the .net world and return nothing (no resultsets and no return values).
Try checking the permissions on the database for the user specified in the connection string.
Use the same user name from the connection string to log in to the database through SQL Mgmt Studio.
Create a temporary table to write intermediate values and exceptions to, since that can be an effective way of debugging your application.
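A minimal sketch of that debugging-table idea (the table name, step labels, and remote database name are all placeholders):

-- scratch table for tracing how far the sproc gets
CREATE TABLE dbo.DebugLog (
    LogTime DATETIME NOT NULL DEFAULT GETDATE(),
    Step    VARCHAR(100) NOT NULL
);

-- inside sproc1, around the remote call:
INSERT INTO dbo.DebugLog (Step) VALUES ('before sproc2');
EXEC [Server2].[RemoteDb].dbo.sproc2;
INSERT INTO dbo.DebugLog (Step) VALUES ('after sproc2');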
Can I just check: You made the addition of sproc2? Prior to that it was working fine for ages.
Could you not change where you call sproc2 from? Rather than calling it from inside sproc1, can you call it from the ASP? That way you control the authentication to SQL in the code, and don't have to rely on setting up any trusts or shared remote authentication on the servers.
How is your linked server set up? You generally have some options as to how it authenticates to the remote server, which include logging in as the currently logged in user or specifying a SQL login to always use. Have you tried setting it to always use a specific account? That should eliminate any possible permissions issues in calling the remote procedure...
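For example, to map every local login to one specific SQL account on the remote server (the server name, login, and password are placeholders):

-- all local logins connect to Server2 as the same SQL login
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'Server2',
    @useself     = 'FALSE',
    @locallogin  = NULL,          -- NULL means the mapping applies to all local logins
    @rmtuser     = N'LinkedUser',
    @rmtpassword = N'StrongPasswordHere';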