I have a Python package that I am able to run successfully on an Azure Data Science Virtual Machine. However, when I push it to Azure as a Function, I cannot successfully make a database connection. I was getting an error that the ODBC Driver 13 for SQL Server was not supported, so I changed the driver to ODBC Driver 17 for SQL Server and now I am NOT getting an error, but no data is being returned for a query that I know should return data.
Is there any other reason that data would not be returned? Firewall issues? Do I need to add a binding? Do I need to separate out the connection string and feed each part (e.g., Driver, UID, PWD) into pyodbc.connect() separately? Right now I am feeding it in like this:
setting = os.environ["CONNECTIONSTRING"]  # os.environ is a mapping, so use [], not ()
conn = pyodbc.connect(setting)
The query returns data just fine when I run it with this code on the VM, just not as a Function.
(Note, this is different from my previous post regarding reading the Azure App Setting. That problem has been solved).
There are many parts where this could be breaking.
I'd suggest starting with a Profiler or Extended Events trace on your SQL Server to verify whether a connection is even being established. If not, then you need to work through the various points of connectivity to find out where it breaks. The identity, firewall, NSGs, etc. might all come into play here.
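For example, here is a minimal Extended Events sketch to confirm whether logins from the Function are arriving at all (the session name is arbitrary, and this assumes a recent boxed SQL Server; on Azure SQL Database the session would need to be created ON DATABASE instead):

CREATE EVENT SESSION TraceLogins ON SERVER
ADD EVENT sqlserver.login(
    ACTION (sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.username))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION TraceLogins ON SERVER STATE = START;

If the Function's host name and app name never show up in the ring buffer, the connection is dying before it ever reaches SQL Server.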
Once you see a connection, you can look at permissions to ensure that your query returns your data.
Without a full picture of your infrastructure and settings it is hard to pin it down further.
Turns out it was not a database connectivity issue like I thought it was; it was a code error.
Platform: Google Data Studio
Data Source: MySQL
The connection was working before, meaning no issues with credentials.
All of a sudden, I'm getting the below error:
All IPs from Google Data Studio's list of IPs have been whitelisted.
The only thing that comes to mind is a limitation in GDS's ability to process the data: the data source table has around 200K+ rows. I'm not sure what the limit for GDS with MySQL is; there's no indication anywhere.
If anyone out there can help solve this or provide some info, it would be appreciated.
Thanks
If you use a firewall, be sure to double-check the Google IP addresses. They may have added new IPs (in my case, the last one was missing).
Check them here!
After doing so, I had to change the host name of the database connection to a URL alias (www.yourserver.com <- a URL pointing to your server), and then change it back to the IP to make it work.
Sounds like the connector cannot establish a new connection.
Cloud SQL Connector:
At the time of writing, the connector seems unable to establish a new connection once the existing one has timed out, and modifying the JDBC URL to include query parameters gives you an error when authenticating.
This is probably due to the connector appending its own parameters.
(There seems to be a possible bug here when a connection no longer exists.)
MySQL Connector (with IP Address):
This connector allows you to add query parameters to the JDBC URL. Enable SSL and append useSSL=true to the URL, e.g.:
jdbc:mysql://<ip>/<database>?useSSL=true
This worked as expected and established new connections when required.
(Example source setup screenshot.)
I'm suffering from this issue too; my experience is that using the MySQL connector instead of the Cloud SQL connector provides better stability, in combination with setting wait_timeout to a value above 12 hours.
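For reference, a sketch of checking and raising the timeout on the MySQL side (43200 seconds is 12 hours; the exact value is your call):

SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 43200; -- needs a suitably privileged account

On Cloud SQL, where SET GLOBAL is typically not permitted, the same change is made through the wait_timeout database flag in the console instead.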
This issue has been reported on the official Google Data Studio bug tracker. Please vote them up if you are also suffering from this!
🐛 130205306 MySQL connection does not exist Apr 9, 2019 04:36PM
🐛 118470083 Data source password not stored for MySQL sources. Oct 26, 2018 01:24PM
I am locally running an Oracle 11g database. I have a small program connecting to it in code via OLEDB in VC++ (it only runs some database tests; I'm making sure I have all the basics down before I go into the real thing). The connection information in code only includes the provider, instance name, user name, and password. All of this works fine.
// For example, both these ways of connecting work:
result = dataSource.Open(DATABASE_PROVIDER, DATABASE_NAME,
                         DATABASE_USER_NAME, DATABASE_USER_PASSWORD);
result = dataSource.OpenFromInitializationString(
    L"Provider=OraOLEDB.Oracle;Data Source=orcl;User ID=SYSTEM;Password=admin;");
I now want to send this program to other computers in my network and run it from there, connecting to my database on my local machine.
How would I go about connecting the other computers to my database in a way that the code will understand?
I have been trying to connect locally via IP instead of "localhost", figuring I could then simply use the same code and client. In that regard, I have tried a few things without success:
- I have tried modifying the connection string to change "Data Source" to my IP, but it could not connect.
- I have tried adding some parameters from other connection string examples I had seen, but they were not for Oracle and were ignored.
- I have also tried modifying tnsnames.ora and listener.ora to change localhost to an IP address, but I know that change didn't take effect, as it would still connect if I entered rubbish.
Does anyone have the knowledge to help out?
For an Oracle db, you should set up a new tnsnames entry on their individual machines that points to your local db. Then use that new tns name as the datasource. You'll also need to make sure that your local db instance is accessible to them in the first place. (This is not enabled by default in Oracle Express, by the way.)
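A sketch of what such a tnsnames.ora entry on each client machine might look like (the alias, host IP, port, and service name below are placeholders for your actual values):

REMOTEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.10)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )

You would then use REMOTEDB as the data source in the connection string.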
I've also generally had more success using msdaora as the data provider instead of OraOLEDB.
I'm trying to set up mirroring between two sql 2008 databases on different servers in my internal network, as a test run before doing the same thing with two live servers in different locations.
When I actually try to switch on mirroring on the target DB (with
ALTER DATABASE testdb SET PARTNER = N'TCP://myNetworkAddress:5022') I'm getting an error telling me that the server network address can not be reached or does not exist. A little research suggests this is a fairly unhelpful message that pops up due to a number of possible causes, some of which are not directly related to the server existing or otherwise.
So far I've checked and tried the following to solve this problem:
On the target server, I've verified in SQL Configuration Manager that "Protocols for SQLEXPRESS" (my local installation is labelled SQLEXPRESS for some reason, even though querying SERVERPROPERTY('Edition') reveals that it's 64-bit Enterprise) and the Client Protocols for SQL Native Client 10 all have TCP/IP enabled.
I'm using a utility program called CurrPorts to verify that the TCP/IP port specified by the mirroring setup (5022) is open and listening on my machine. Netstat verifies that both machines are listening on this port.
I've run SELECT type_desc, port FROM sys.tcp_endpoints; and
SELECT state_desc, role FROM sys.database_mirroring_endpoints; to ensure that everything is set up as it should be. The only thing that confused me was that "role" returns 1 - not entirely sure what that means.
I've tried to prepare the DB correctly. I've taken backups of the database and the log file from the source (principal) DB and restored them on the target database WITH NORECOVERY. I've tried turning mirroring on both while leaving them in the restoring state and after running an empty RESTORE ... neither seems to make much difference (the sequence I'm running is sketched after this list). Just as a test I also tried to mirror an inactive, nearly empty database that I created, but that didn't work either.
I've verified that neither server is behind a firewall (they're both on the same network, although on different machines)
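For reference, the backup/restore sequence mentioned above looks roughly like this (file paths are placeholders for wherever your backups live):

BACKUP DATABASE testdb TO DISK = N'C:\backups\testdb.bak';
BACKUP LOG testdb TO DISK = N'C:\backups\testdb.trn';

-- then, on the target (mirror) server:
RESTORE DATABASE testdb FROM DISK = N'C:\backups\testdb.bak' WITH NORECOVERY;
RESTORE LOG testdb FROM DISK = N'C:\backups\testdb.trn' WITH NORECOVERY;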
I've no idea where to turn next. I've seen these two troubleshooting help pages:
http://msdn.microsoft.com/en-us/library/ms189127.aspx
http://msdn.microsoft.com/en-us/library/aa337361.aspx
And as far as I can tell I've run through all the points to no avail.
One other thing I'm unsure of is the service accounts box in the wizard. For both databases I've been putting in our high-level access account name which should have full admin permissions on the database - I assumed this was the right thing to do.
I'm not sure where to turn next to try and troubleshoot this problem. Suggestions gratefully received.
Cheers,
Matt
I think that SQL Express can only act as a witness server with this SQL feature; you might get better mileage on ServerFault, though.
Mike.
Your network settings might be OK. MS SQL gives quite uninformative error messages here - the problem might be an authorization issue while the server still says that the network address can not be reached.
By the way, how is the authentication performed? The MSSQL service (on server1) must itself run as a valid db user (on server2, and vice versa) in order to make the mirroring work.
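A minimal sketch of the grants involved, assuming domain service accounts and a mirroring endpoint named Mirroring (both names are placeholders):

-- run on server2 for server1's service account, and vice versa:
CREATE LOGIN [DOMAIN\SqlSvcAccount] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\SqlSvcAccount];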
I'm getting this error when running an SSIS package through SQL Agent
Failed to acquire connection "ORACLE ADO.NET". Connection may not be configured correctly or you may not have the right permissions on this connection.
When I log on as the SQL Agent user and run the SSIS package directly, it is fine. When I then execute it through the SQL Agent job, it fails.
I've read around extensively on this topic, and it seems a lot of the advice concerns how you are logged in, configuring proxy accounts, etc., none of which has been helpful.
I am logging onto an Oracle database with an ADO.NET connection. The connection string is as follows (data source, user id and password have been changed):
Data Source=DATASOURCE;User ID=userid;Password=password;Persist Security Info=True;Unicode=True;
I'm loading this from a registry setting using package configuration. To check that I am getting the correct string, I am writing it into a temporary log table. I am definitely getting the string I need from the correct registry setting.
I've tested the Oracle login credentials through PL/SQL Developer, and it lets me log in just fine.
As far as I can tell, since I'm using an explicit user name and password for the Oracle connection, it just shouldn't matter who the SSIS package is run as. The only point of failure that I can see would be the reading of the information from the registry, but that seems fine.
I'm really quite baffled, I must confess, and would appreciate any help some of the splendid experts here can offer.
Many thanks,
James
Ok, tracked this one down after quite a lot of pain.
It was working fine on one environment, but not another, so I fired up Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and ran a package through the SQL Agent job, comparing which system entities were hit in each environment.
On the failing environment, at the point of the bulk transfer operation, the package attempted to get the Oracle 11 client DLL, and then hung.
I knew that this was installed and, moreover, that the DLL path was a system environment setting. After further investigation it was revealed that the server had not been rebooted since the Oracle client install, so the SQL Server Agent process had not been recycled.
Yes, can you believe it, the old helpdesk fix "Can you reboot your computer?" worked.
Sigh!
We had issues at a client with running packages stored on our SQL Server instance that connect to Oracle. The workaround we found was to change the package's protection level property to "DontSaveSensitive". For security purposes, we then encrypted the username and password in the package configuration and decrypted them with a UDF in SQL Server. Of course, before you try the whole encryption part, I would recommend putting the username and password in the package configuration without encrypting the values, to see if changing the protection level setting is the solution to your specific problem. I hope this helps.
I was getting this error when the tnsnames.ora file did not have a valid entry for the environment.
We have a classic ASP application that simply works, and we have been loath to modify the code lest we invoke the wrath of some long-dead Greek gods.
We recently had the requirement to add a feature to the application. The feature implementation is really just a database operation and requires minimal change to the UI.
I changed the UI and made the minor modification to submit a new data value to the sproc call (sproc1).
In sproc1, which is called directly from ASP, we added a new call to another sproc, sproc2, that happens to be located on another server.
Somehow, this does not work via our ASP app, but works in SQL Management Studio.
Here's the technical details:
SQL 2005 on both database servers.
A SQL login is used to authenticate from the ASP application to SQL 2005 Server 1.
Linked server from Server 1 to Server 2 is working.
When executing sproc1 from SQL Management Studio - works fine. Even when credentialed as the same user our code uses (the application sql login).
sproc2 works when called independently of sproc1 from SQL Management Studio.
VBScript (ASP) captures an error which is emitted in the XML back to the client. Error number is 0, error description is blank. Both from the ADODB.Connection object and from whatever Err.Number/Err.Description yields in VBScript from the ASP side.
So without any errors, nor any reproducibility (i.e. through SQL Mgmt Studio) - does anyone know the issue?
Our current plan is to break down and dig into the code on the ASP side and make a completely separate call to Server 2.sproc2 directly from ASP rather than trying to piggy-back through sproc1.
Have you got SET NOCOUNT ON set in both stored procedures? I had a similar issue once, and whilst I can't remember exactly how I solved it at the moment, I know it had something to do with that!
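For illustration, a skeleton of what that looks like (the proc, server, and database names are placeholders):

CREATE PROCEDURE dbo.sproc1
AS
BEGIN
    SET NOCOUNT ON; -- suppress the "N rows affected" messages that old ADO clients can trip over
    EXEC Server2.MyDb.dbo.sproc2; -- the cross-server call via the linked server
END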
You could be suffering from the double-hop problem
The double-hop issue is when the ASP/X page tries to use resources that are located on a server that is different from the IIS server.
Windows NT Challenge/Response does not support double-hop impersonations (in that once passed to the IIS server, the same credentials cannot be passed to a back-end server for authentication).
You should verify the attempted second connection using SQL Profiler.
Note that with your manual testing you are not authenticating via IIS. It's only when you initiate the sql via the ASP/X page that this problem manifests.
More resources:
http://support.microsoft.com/kb/910449
http://support.microsoft.com/kb/891031
http://support.microsoft.com/kb/810572
I had a similar problem and I solved it by setting nocount on and removing print commands.
My first reaction is that this might not be an issue of calling cross-server, but one of calling a second proc from a first, and that this might be what's acting differently in the two different environments.
My first question is this: what happens if you remove the cross-server aspect from the equation? If you could set up a test system where your first proc calls your second proc, but the second proc is on the same server and/or in the same database, do you still get the same problem?
Along these same lines: In my experience, when the application and SSMS have gotten different results like that, it has often been an issue of the stored procedures' settings. It could be, as Luke says, NOCOUNT. I've had this sort of thing happen from extraneous PRINT statements in the code, although I seem to remember the PRINTed value becoming part of the error description (very counterintuitively).
If anything is returned in the Messages window when you run this in SSMS, find out where it is coming from and make it stop. I would have to look up the technical terms, but my recollection is that different querying environments have different sensitivities to "errors", and that a default connection via SSMS will not throw an error at certain times when an ADO connection from a scripting language will.
One final thought: in case it is an environment thing, try different settings on your ASP page's connection string. E.g., if you have an OLEDB connection, try ODBC. Try the native and non-native SQL Server drivers. Check out what connection string options your provider supports, and try any of them that seem like they might be worth trying.
Example code might help :) Are you trying to return two tables from the stored procedure? I don't think ADO 2.6 can handle multiple tables being returned.
I did consider that (double-hop), but what is the difference between a sproc-in-a-sproc call like I am referring to vs. a typical cross-server join via INNER JOIN? Both would be executed on Server1, using the Linked Server credentials, and authenticating to Server 2.
Can anyone confirm that calling a sproc cross-server is different than doing a join on data tables? And why?
If the linked server config uses a SQL account, is that considered a double-hop (since what you refer to applies to NTLM double-hops)?
In terms of whether multiple resultsets are coming back - no. Both Server1.Sproc1 and Server2.Sproc2 would be "ExecuteNonQuery()" in the .net world and return nothing (no resultsets and no return values).
Try checking the permissions on the database for the user specified in the connection string.
Use the same user name from the connection string to log in to the database via SQL Management Studio.
Create a temporary table to write out intermediate values and exceptions, since that can be an effective way of debugging your application (see the sketch below).
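A minimal sketch of that logging approach (the table name and messages are placeholders):

CREATE TABLE dbo.DebugLog (
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    Msg nvarchar(4000) NULL
);

-- then, inside the procs at the points of interest:
INSERT INTO dbo.DebugLog (Msg) VALUES (N'sproc1: about to call sproc2 on Server2');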
Can I just check: you added sproc2, and prior to that it was working fine for ages?
Could you not change where you call sproc2 from? Rather than calling it from inside sproc1, can you call it from the ASP? That way you control the authentication to SQL in the code, and don't have to rely on setting up any trusts or shared remote authentication on the servers.
How is your linked server set up? You generally have some options as to how it authenticates to the remote server, which include logging in as the currently logged in user or specifying a SQL login to always use. Have you tried setting it to always use a specific account? That should eliminate any possible permissions issues in calling the remote procedure...
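For example, a sketch of mapping all local logins to one fixed SQL login on the remote server (the server name, user, and password are placeholders):

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'Server2',
    @useself = 'FALSE',
    @locallogin = NULL, -- NULL maps all local logins
    @rmtuser = N'remote_sql_user',
    @rmtpassword = N'remote_password';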