I have packages deployed on a SQL Server 2008 R2 instance, and we recently migrated to a new server machine running SQL Server 2012. I configured the packages for the project deployment model, and for 10 days all packages ran smoothly, with execution times in the same range as on the older server.
For the last two days, the packages have been failing. I checked in detail and found that they are taking longer than usual and fail with "Protocol error in TDS stream", "Communication link failure", and "remote host forcibly closed the connection" errors.
When I run the packages through SSDT, they complete successfully, but data transfer is visibly slower than it used to be, so package execution takes much longer.
I am not sure what has changed. I have searched the internet for possible causes, checked the server memory and network packet size, and tried to match them with the older server's settings, but that did not solve the problem. I suspect SSIS logging may have caused this, but I am not sure how to check it.
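A quick way to compare those two settings between the old and new server (a sketch, assuming sysadmin rights on both instances; run it on each and compare the output):

```sql
-- Both settings are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Compare these values between the old and the new server
EXEC sp_configure 'network packet size (B)';
EXEC sp_configure 'max server memory (MB)';
```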
Please help to identify the cause of this problem.
**Edit: I enabled logging in SSDT and could see that the majority of the time is spent in the row-transfer steps. Since my package has lookups, I thought the lookups might somehow be slowing it down, so I copied the main query into SSMS and ran it as a normal query on the new server.
About 1.3 million (13 lakh) rows were returned in 12 minutes. I then ran the same query on the old server, where it returned the same rows in less than a minute. So this suggests the problem is related to data transfer in general and not specific to the packages themselves.
Can someone help, please?**
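One way to narrow this down further (a rough diagnostic sketch, not specific to SSIS): while the slow query is running in SSMS, check its wait type from another session. A session stuck in ASYNC_NETWORK_IO is waiting on the network/client to consume rows rather than on the database itself:

```sql
-- Run from a second session while the slow query is executing;
-- a wait_type of ASYNC_NETWORK_IO points at the network/client side
SELECT session_id, status, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions
```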
Check the solution's connection: the 'RetainSameConnection' property should be set to 'true'. This can be done both in the SSIS package, under the connection manager properties, and in the job step properties (Configuration > Connection Managers).
Link: http://www.sqlerudition.com/what-is-the-retainsameconnection-property-of-oledb-connection-in-ssis/
I'm trying to update a SQL Server project in Visual Studio 2019 by using the SSDT schema comparison. My source is a running database server, the destination is the VS SQL Server project.
When the comparison is done and I click "Update", I get the message
Source schema drift detected. Press Compare to refresh the comparison
No matter how many times I refresh the comparison, I always get the same result.
I have tried various connection tweaks (read-only intent, asynchronous processing, multiple active result sets) in the hopes that I can make the comparison run faster and update the project before the drift happens, but to no avail. I have also tried reducing the types of objects included in the comparison, but have not been able to reduce it enough to prevent drift from being detected.
I think the biggest issue I have is that aside from the "schema drift detected" message, I feel like I'm shooting in the dark. By that I mean that I have no idea what is causing SSDT to detect drift, and therefore I can't work around it.
I tried running the SQL Profiler to capture what SSDT is doing so I could find where SSDT is detecting drift. However, I haven't been able to find any query that gives different results when run multiple times within a short period.
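Assuming the drift check keys off recent object modifications (an assumption, since the exact mechanism is undocumented), a rough query like this can at least show whether anything on the source server really is changing between comparisons:

```sql
-- Rough check: list user objects modified in the last few minutes,
-- which may be what makes SSDT consider the schema "drifted"
SELECT name, type_desc, modify_date
FROM sys.objects
WHERE is_ms_shipped = 0
  AND modify_date > DATEADD(MINUTE, -10, SYSDATETIME())
ORDER BY modify_date DESC;
```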
So in conclusion, my questions are:
What does SSDT look at to determine when the database schema has drifted?
How can I update my SQL Server project when it always detects schema drift?
I also struggled for months to find the cause of the same error. I was already considering reinstalling Windows 10 on my laptop. I won't list all the dead ends. In my final desperation, I copied the SQL Server database and the VS project to another machine, and there the comparison worked without a hitch. This raised the suspicion that the error was not in VS, but rather that my SQL Server was confusing VS.
I have SQL Server 2012. I applied the latest update (SP4) and, wonder of wonders, compare and sync started working perfectly right away. Of course, now before every update I pray a little that I won't encounter the "Source schema drift detected" message.
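To confirm the patch level of an instance, a quick check:

```sql
-- SQL Server 2012 SP4 reports ProductLevel = 'SP4' (version 11.0.7001.0)
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;
```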
I have been unsuccessfully fighting this annoying error for MANY SSDT versions.
Searching for it you will see multiple places where it is claimed to be fixed, WHICH IS FALSE, as it is happening right now with VS 2022 SSDT.
In my case, it ONLY happens when comparing against ONE out of the 5 database servers I regularly use the tool with.
The only workaround I have found that usually works is to REBOOT the destination database server (NOT just cycle the SQL Server Service) and then run the SSDT compare QUICKLY!
As the server that this happens on is an integration server running on a VM in my local network, I can bounce the server, but in other scenarios this would be a show-stopper.
IMO the most onerous thing about this issue is that you cannot even generate the script to copy/paste into SSMS, which is how I often use the tool.
This issue has not been fixed for YEARS and is very intermittent, so I have no hope of seeing it actually fixed - I hope this workaround is helpful to someone.
I have googled and read many questions/answers, but only one question has ever sounded exactly the same and it did not have an answer.
The situation:
My group has several SQL Servers that are running SQL Server 2017. They are configured virtually identically.
These servers are build boxes, meaning they pull data from a data warehouse or an extract file, run some ETL processing, and then push the results to a prod box. The SSIS packages are deployed on the box where the DB resides.
Just over a month ago (with no updates having occurred), one of these servers started having an issue where every job that ran an SSIS package would "hang" on the step that runs the package. Any other step type runs fine, but a job step that runs a package (all jobs have one) will not even start the package. The SSISDB executions show no indication that anything has even tried to start it.
If the user executes the deployed package it will run successfully.
The only thing that will "fix" the issue is restarting the agent service.
I created a simple job to run a simple package every 5 minutes. It had been running for about a week; the last successful run was 4/11/2021 at 2:40 AM, and the 2:45 run hung. I could find nothing in the event logs from that time. The server was rebooted as part of a normal scheduled process at 3:15 and was back online by 3:25, because that is when the job next tried to run, and it again just hung. So even a server reboot did not fix the issue.
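Since the packages are deployed to the catalog, querying SSISDB directly around that window (a sketch; adjust the timestamp to your case) confirms whether an execution record was ever created:

```sql
-- If the job step never created an execution row, the hang happens
-- in the Agent/SSIS handoff, before the package itself starts
SELECT execution_id, folder_name, project_name, package_name,
       status, start_time, end_time
FROM SSISDB.catalog.executions
WHERE start_time >= '2021-04-11 02:00'
ORDER BY start_time DESC;
```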
I am at my wits' end. Since there is no error (the job hangs and the package does not even start) and no logging I can find that shows any issues, I am at a loss as to what might be causing this.
Thanks in advance.
Take a look at the SSISDB catalog database on each of the servers involved. Has it grown excessively, so that the history needs to be cleared down or the retention settings changed? How big are the transaction logs for those databases?
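A rough way to check both (a sketch; run against each server in question):

```sql
-- Catalog retention/cleanup settings; a large RETENTION_WINDOW with
-- cleanup disabled lets SSISDB grow unchecked
SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name IN ('RETENTION_WINDOW',
                        'OPERATION_CLEANUP_ENABLED',
                        'MAX_PROJECT_VERSIONS');

-- Transaction log size and used percentage for every database, SSISDB included
DBCC SQLPERF (LOGSPACE);
```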
I have a SQL Server Reporting Services instance that is acting strangely. I can do everything except overwrite an existing report. I can open a current report, run it, and save it as a different report with no problem, but any attempt to save over an existing one times out after 2 minutes. Looking at the Report Server logs, I can confirm it is a timeout being returned.
Has anyone experienced this before, and more importantly, figured out how to solve this?
Ended up going through a direct connection rather than the shared connection. There was contention, with one of the metadata tables being locked. This proved to be not a full solution, but at least a workaround.
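For anyone hitting the same thing, checking for blocking on the ReportServer database while the save hangs (a sketch) should show which session holds the lock:

```sql
-- Run while the report save is hanging; non-zero blocking_session_id
-- identifies the session holding the contested lock
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       DB_NAME(r.database_id) AS database_name
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```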
We have a client deployment of our software that is showing intermittent SQL server connection failures, and we are struggling to understand them.
Our system consists of a SQL Server 2012 DB and 14 identical engines, each installed on a Windows 2012 VM. Each VM was created from the same template, so they should be identical. Each engine is a Windows service that connects to the DB on startup by reading a single row from a table. If the connection fails, it waits a few seconds and tries again until it gets a connection.
In this particular case, the VMs were all rebooted due to a Windows Update. (The SQL server had the update/reboot about 12 hours before). They came online within a few minutes of each other. 12 of the engines started up without any problem. Two of them, however, failed to connect to the DB with:
"The underlying provider failed on Open."
Those two engines then started to poll, and continued to get this error for many hours. The rest of the engines had started up and were fine. We have a broker service too that was accessing the DB throughout and showed no connection issues.
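One check that may help here (a sketch; it is an open question whether these pre-login failures even reach the log): search the SQL Server error log for failed logins around the reboot window:

```sql
-- Parameters: log number (0 = current), 1 = SQL Server log, search string
EXEC sp_readerrorlog 0, 1, N'Login failed';
```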
When the client noticed this issue, they restarted the engine services on the two problem VMs, and the two engines connected to the DB just fine.
We are trying to understand what could have happened here. I guess my main questions are:
What could be an explanation of why 12 connections succeed and two fail? There's absolutely no difference as far as we know between the engines. The query itself is very simple.
Why did the connection continue to fail for those two engines until the service was restarted? This suggests to me that there is some process-level failed state that is only cleared when restarting the services. I've looked at the code to see if it was reusing the connections. It uses Entity Framework to read the single table row, and we create a fresh DbContext each time. I don't understand how this could go wrong.
We noted that there was a CheckDb operation proceeding on the DB around the time the services were coming up, and we wondered if this could be related to the issue. However, the client says that this runs every night and hasn't caused problems in the past. And it wouldn't explain why the engines didn't come back up again.
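One server-side check worth running the next time this happens (a diagnostic sketch): list current connections per client address, to see whether the two failing VMs are reaching SQL Server at all:

```sql
-- Count current connections per client IP; compare against the 14 engine VMs
SELECT c.client_net_address,
       COUNT(*) AS connection_count
FROM sys.dm_exec_connections AS c
GROUP BY c.client_net_address
ORDER BY connection_count DESC;
```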
Thanks in advance for any help.
I'm working on a project for a client that involves a pretty huge ETL process to move data from MSSQL Server to Postgres. We are using SSIS 2014 with the ODBC drivers provided on postgresql.org, and have set up an ODBC DSN. We are only using the built-in OLE DB sources and ODBC destinations, and we are running into an issue that I have not been able to find referenced anywhere else online.
The exact issue is that SSIS seems to open multiple connections for each ODBC destination data flow component, even with connection pooling enabled. This can result in 50+ idle connections being opened, which are not killed until the process completes. Previewing data from Visual Studio causes connection leaks as well, and those connections are only killed upon restarting Visual Studio. We have temporarily worked around this by increasing the maximum connections to 1000, but we are hoping to fix the underlying problem if possible.
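The leak is visible from the Postgres side with something like this (a sketch; the `state` column requires PostgreSQL 9.2 or later):

```sql
-- Run on the PostgreSQL target: count connections per client/state;
-- a pile of 'idle' rows from the SSIS host is the leak in question
SELECT application_name, client_addr, state, COUNT(*) AS conns
FROM pg_stat_activity
GROUP BY application_name, client_addr, state
ORDER BY conns DESC;
```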
I've done a decent amount of experimentation, and the issue seems to be SSIS related as opposed to an error in the driver.
Has anyone else run into a similar issue and know how to resolve this?
EDIT: I thought this wasn't going to be a big issue, but now I realize that ODBC connections are leaking when the SSIS package is run from Integration Services as well. I've played with it a bit by making some empty packages with only a single ODBC source, and also a package that only accesses the database from a C# script task; only the ODBC sources/destinations cause the leak, not the script components, so it seems like a bug in SSIS and not my script tasks or the Postgres driver :O
Anyone have any idea how to resolve this besides rewriting the whole package to not use ODBC sources/destinations (or some other weird thing like killing all connections afterwards with a shell script)?
Check the statement timeout. You must set the statement timeout (in seconds) in the ODBC destination properties; the default value of 0 means infinite in SSIS.