Transport Level Error on Ubuntu with the SQL Server extension in VS Code (and Azure Data Studio) - sql-server

When trying to execute a long query I apparently get some sort of timeout; I'm not sure what's happening. The error message I receive is the following:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 35 - An internal exception was caught).
The strange thing is that a simpler version of the query runs without any problem. I also noticed that the same long query does work when run on Windows through SQL Server Management Studio, so maybe there are some hidden configurations on Linux or inside VS Code that differ from SSMS?
I've also been having the same issue when using Azure Data Studio, which seems to use the same VS Code core.
I've tried adding the configuration below after finding it in a post, but it doesn't seem to change anything (I don't actually know whether these are legitimate VS Code settings):
"Trusted_Connection": false,
"Pooling": false
By the way, I've also posted this issue on GitHub, but haven't received an answer yet.

Related

ODBC Connection on SSIS package fails only when it goes through a loop

We've recently been trying to migrate from SQL Server 2016 to SQL Server 2019 on our servers, which includes upgrading all the SSIS packages we have in our catalog.
The migration wizard migrated all the packages with no errors, and on the surface everything seemed OK. We even tried a test run in Visual Studio, and everything worked. But once we deployed the packages to the catalog and ran them from there, we started getting the following error:
Error: 0xC0014020 at Load ODI_PaymentDevice, ODBC Source [14]: SQLSTATE: HY010, Message: [Microsoft][ODBC Driver Manager] Function sequence error;
Error: 0xC0209029 at Load ODI_PaymentDevice, ODBC Source [14]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "ODBC Source.Outputs[ODBC Source Output]" failed because error code 0xC020F450 occurred, and the error row disposition on "ODBC Source" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
Error: 0xC0047038 at Load ODI_PaymentDevice, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ODBC Source returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
This happens only on the packages that have a connection to Hive, using the ODBC connector in SSIS and then an ODBC Source in a Data Flow.
Based on the error code, it could be narrowed down to the ODBC connection we have to our Hadoop/Hive cluster. The connection itself definitely works, as we tested it in the Windows ODBC Data Sources tool, and it also works in Visual Studio. We researched this error and the different solutions to it extensively, so we tried a lot of different things:
Deleting the data source and then creating a new one (to update the metadata)
Running in 32-bit mode
Updating Microsoft's Hive ODBC driver
Switching to a different vendor's driver (CData)
Switching to an ADO.NET connection instead
Playing around with the driver's configuration, trying almost every combination possible
After trying all of this to no avail, we tried again in Visual Studio, and to our surprise it started to fail there too.
After trying a few different things, we were able to reproduce the conditions under which the package worked, and it is the strangest thing; we have not found anyone with a similar issue on the internet so far.
So, as stated before, the connection works and so does the package itself, BUT we have a Foreach Loop Container that iterates through dates to load data for the last X dates we have. If any kind of loop container (a Foreach Loop, for example) contains a query against our ODBC source, it fails on the second iteration around 100% of the time.
That is why it worked in Visual Studio: it only ran once (it had only one date to process as a test), but when deployed it had to fetch real data for a bunch of different dates.
To confirm that this is indeed the issue, we deployed the package and updated the table of dates to load so that only one day was available, and the package ran through. This also rules out any parameter issue on the deployment/server/catalog.
After this discovery we tried a few different things:
Passing NULL in every column, to see if there are issues with metadata between loops
Activating LOG_TRACE on the Hive ODBC driver, to get a very detailed log of what is happening. We see the query going out for the second iteration in the log, and it also appears in Tez (our Hive execution engine), but only very briefly, for fractions of a second, before it cancels itself. So the query is reaching the cluster, but somehow SSIS drops the connection on its own.
As mentioned before, we couldn't find anything like this elsewhere, and we cannot think of any other options for solving the issue short of directly changing the packages or not upgrading to 2019 at all, which is not ideal given that SQL Server 2016 is already outside the mainstream support cycle.
Does anyone have an idea of how this might be solved, or what may be causing this issue?
I have faced a very similar issue (if not the same) with the SSIS ODBC Source component inside a For Loop, transferring records in batches from a remote PostgreSQL server to a database on MS SQL Server 2019. My Visual Studio is 2019 and the MS SQL Server is 2019 as well. The very weird thing was that the package ran as expected in VS (both debugging and without debugging), and it also worked quite well through the SQL Server Agent of the SQL Server installed on my machine. But when deployed to the production SQL Server (with the same version and psqlodbc driver installed there), the package ran successfully for the first iteration of the For Loop component and then unexpectedly crashed, showing in the logs the same errors you have posted above: SQLSTATE: HY010, Message: [Microsoft][ODBC Driver Manager] Function sequence error;.....etc. After many hours spent on this without any success, I finally fixed it, and now it is working like a charm; hence I decided to share how I figured it out, in the hope that it may be of help to you or anyone facing this challenge.
What I found out is that for some reason the problem was happening inside the ODBC Source component, but I could not do much about it, as it is like a black box. I fixed the problem by switching to a Script Component configured as a source, so that I took control over the connection in the C# code. Below I also share the code:
#region Namespaces
using System;
using System.Data;
using System.Data.Odbc;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
#endregion

...........

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    ...........

    public override void CreateNewOutputRows()
    {
        // Read the connection string from the ODBC connection manager that was
        // added to the Script Component's connections collection.
        string connectionString = this.Connections.PostgreSQLODBCConn.ConnectionString;

        using (OdbcConnection conn = new OdbcConnection(connectionString))
        {
            using (OdbcCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT * FROM fn_transfer_records(500000);";
                cmd.CommandType = CommandType.Text;

                DataTable dt = new DataTable();
                conn.Open();

                using (OdbcDataAdapter adapter = new OdbcDataAdapter(cmd))
                {
                    // Pull the batch into a DataTable, then push each row into
                    // the Script Component's output buffer.
                    adapter.Fill(dt);
                    foreach (DataRow row in dt.Rows)
                    {
                        Output0Buffer.AddRow();
                        Output0Buffer.col1 = (Int32)row["col1"];
                        Output0Buffer.col2 = (double)row["col2"];
                    }
                }
            }
        }
    }
}
The PostgreSQLODBCConn used in the code above is the name of the connection added to the Script Component's connections collection through the component's visual editor (which opens when you double-click the component).
Hope this is of help...
Ensure your project's Target Server Version is set to SQL Server 2019; this can be found in the project properties. This error is typical of a mismatched target server version, as the issue is only present after deployment and not during development.

Classic ASP - SQL Errors Stop Code, Never Show

In 20 years of coding I have never come across this problem, and it has got me scratching my head to the point of insanity.
The platform in question is Windows Server 2016 (10.0.14393) with SQL Server 2017 on Azure infrastructure. When I get a 'normal' ASP error, it shows the error just fine, like so:
Microsoft VBScript compilation error '800a03ea'
Syntax error
/tasks/DBTest.asp, line 4
Set Conn =
----------^
However, if the error is in the SQL statement or when trying to work with a value retrieved from it, it shows nothing. No error, no warning, zip. The SQL server I connect to has no relation to this fault - I connected to SQL 2017 as well as SQL 2012 (on a different machine where errors show just fine) - in both cases the same silence results. What's worse - the code just stops executing.
As you can imagine, this is beyond frustrating when trying to debug or ascertain any kind of reason for the failure. Of course, all ASP error features are set correctly in IIS, and as mentioned, normal ASP errors show up just fine.
The issue seems to be that the moment the IIS server runs into an error originating from SQL it just stops all processing and makes no mention of the cause.
OK, so the issue turned out to be that you can NOT have server-side and client-side debugging both set to true. If you do, very confusingly, only simple ASP errors are shown, and anything involving an object (not just SQL, any object!) is hidden from view. Very logical, eh...

Entity Framework Initializer throws A network-related or instance-specific error on one machine

We discovered some very strange behavior with Entity Framework last night. It was probably the result of my misguided approach to using code first against an existing database; I solved it by turning off the initializer.
Database.SetInitializer<DataContext>(null);
But I would still like to know how such behavior was possible and if it's something we need to be worried about for future EF deployments. Maybe EF code first is not stable across all environments.
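For context, a minimal sketch of where such a call typically lives: EF runs the initializer the first time the context is used, so disabling it is commonly done in a static constructor on the context. The DataContext name simply matches the line above; the body is illustrative rather than the poster's actual code.
using System.Data.Entity;

public class DataContext : DbContext
{
    static DataContext()
    {
        // Turn off database initialization entirely: EF will not check for,
        // create, or migrate the database when the context is first used.
        Database.SetInitializer<DataContext>(null);
    }

    // DbSet properties for the existing tables go here...
}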
We have an existing database that supports the functionality of an existing legacy app. I added new functionality to the app with "code first" and ended up needing to share the same database. So we created the new table with a SQL script and turned on EF Migrations. This all worked fine for me locally, and also worked against the SQL Server instance on my colleague's machine.
However when he tried to run it all from his machine, against the same database, the EF call always timed out with the error:
A network-related or instance-specific error occurred while
establishing a connection to SQL Server. The server was not found or
was not accessible. Verify that the instance name is correct and that
SQL Server is configured to allow remote connections. (provider: Named
Pipes Provider, error: 40 - Could not open a connection to SQL Server
We even hard coded a different connection string to see if it would build its own separate database. Same error.
We finally figured out it was the initializer because, while inspecting the DbContext at a breakpoint in the debugger, we triggered the initializer. If we paused execution and expanded the DbContext in a watch window, the database would initialize and the process would continue without error!
It's also worth noting that things were working on his machine until EF noticed model changes. That's when we enabled migrations and updated the table via script and permanently broke the Initializer on his machine.
This is one of the most bizarre things I've experienced. What could cause the EF initializer to hang and throw false network errors about an otherwise functional database...but then work from the debugger? And only on one machine!

Intermittent SQL Exception - network-related or instance-specific error

We have a very strange intermittent issue, which has started coming up over the last month or so, whereby some connections to SQL Server fail with the error:
System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
The error does not bring down the site, nor does it require a DB restart - if you simply rerun the same query, it will work the second time. This means a lot of users will hit an error every now and then and have to refresh the error page for things to work.
Now, my initial knee-jerk reaction was this could be due to:
Resource-related issue - so I started running SQL Profiler and perfmon, but did not find any sign of the server struggling to keep up with the number of connections/sec. I've been looking at MSSQL:SQL Errors, MSSQL:Wait Statistics, MSSQL:Exec Statistics, MSSQL:Locks. Does anyone have any guidance on other stats I should be poking and prodding here?
Unclosed DB connections - I ruled this one out after going through all the data-tier code. We have all the fail safes in place to stop this from happening.
Connection / network-related issue: our SQL server sits on a separate server (MS SQL Server 2008 Standard) from our application server (running ASP.NET on IIS7) - both servers run on xlarge Amazon EC2 instances with all security policies configured (as per Amazon's guidance). Does anyone have guidance on how to test the connectivity between the two servers, or on whether this could be the issue? (A minimal connectivity check is sketched after this list.)
Is it a possible issue with the IIS connection string? I have not tested this but should we be fully qualifying the server with the computer name we are connecting to (just thought of it)? We use a connection string in the format: server=xxxxx;Database=xxxx;uid=xxxx;password=xxx;
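On the connectivity point, a minimal sketch of the kind of standalone check that could be run from the application server is below. It simply opens and closes a connection in a loop with a short timeout so an intermittent network or name-resolution problem fails fast; the connection string mirrors the placeholder format above, and the added Connect Timeout keyword is a standard ADO.NET option.
using System;
using System.Data.SqlClient;

class ConnectivityCheck
{
    static void Main()
    {
        // Placeholder connection string in the same format as the question,
        // with a short timeout so failures surface quickly.
        string connectionString =
            "server=xxxxx;Database=xxxx;uid=xxxx;password=xxx;Connect Timeout=5";

        for (int i = 1; i <= 20; i++)
        {
            try
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    Console.WriteLine("Attempt " + i + ": connected OK");
                }
            }
            catch (SqlException ex)
            {
                Console.WriteLine("Attempt " + i + ": failed - " + ex.Message);
            }
        }
    }
}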
Your thoughts and insights are very much appreciated!
Thanks in advance
Solved. After testing almost every possible performance metric and examining every piece of code, I discovered that the error was caused by a bit of deprecated database code. The main issue was being caused by code using:
SqlConnection.ClearPools;
For future reference, for any other developers looking to debug their code and manage connection pools, an excellent resource can be found here: http://www.codeproject.com/KB/dotnet/ADONET_ConnectionPooling.aspx
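For reference, the pool-management calls ADO.NET actually exposes are SqlConnection.ClearPool and SqlConnection.ClearAllPools; a minimal sketch is below. Clearing pools on every request effectively disables pooling and forces constant reconnects, which is presumably why that legacy call was causing trouble here.
using System.Data.SqlClient;

static class PoolMaintenance
{
    static void ResetPools(SqlConnection conn)
    {
        // Clears only the pool associated with this connection's connection string.
        SqlConnection.ClearPool(conn);

        // Clears every pool in the application domain. Both calls should be rare;
        // running them per request defeats connection pooling entirely.
        SqlConnection.ClearAllPools();
    }
}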
Try changing the connection string to the FQDN+port
server=xxxxx.domain.tld,1234;
Note: you don't need an instance name if you specify the port.
On our global corporate intranet... we had a similar issue that happened to remote clients: more often if they were further away, never in the same building as the server.
After some poking around, chatting to the DBAs and MS, it was said to be caused by timing/Kerberos/too many firewalls etc. Adding FQDN+port removed all our issues.
It may be solved by switching to TCP/IP instead of Named Pipes, if you can.
Perhaps you can test this by changing the server name to the server IP address.
I use server=tcp:servername in my connection string to force TCP.
KB313295
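Combining the two suggestions above (FQDN plus port, and the tcp: prefix to force TCP/IP), the connection string from the question might end up looking something like the following; the host, port, and credentials are placeholders:
server=tcp:sqlbox01.corp.example.com,1433;Database=xxxx;uid=xxxx;password=xxx;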
It seems like connections are not being closed correctly, and after some time you can't open any new connections, since the total number of connections allowed to the database is a fixed number.
If you are using C#/VB.net
Are you using "using" statements to open the connections?
using (System.Data.SqlClient.SqlConnection con = new System.Data.SqlClient.SqlConnection("YourConnection string"))
{
    con.Open();
    // ... run your commands here; the connection is closed and returned to the
    // pool automatically when the using block ends, even if an exception is thrown.
}

Problem running scripts against SQL Server

We have some scripts that we run as part of our unit tests.
This worked fine until today.
We have tried running the scripts with both Windows and SQL authentication.
We have no problems logging in using SQL Server Management Studio.
Does anybody have any ideas why we get the following error:
Shared Memory Provider: No process is on the other end of the pipe.
Sqlcmd: Error: Microsoft SQL Native Client : Communication link failure.
Thanks
Shiraz
EDIT
Thanks for the replies. The actual issue appears to be a password problem, which used up all the connections. The process was not listening because there were no available connections.
Look in SQL Server Configuration Manager and make sure the protocols you are using to connect are set up correctly. I suggest you enable "Shared Memory" and "TCP/IP".
Ask around and try to determine what was changed in your environment (by whom, and how) that caused a working process to stop working. If successful, you will (a) have a strong lead on discovering the details of what's going wrong, and (b) be in a position to ensure it doesn't recur. (Just solving the tech side might not prevent it from happening again...)
