SQL Server service not accepting connections

I am having a strange problem that happens randomly on a server. Some mornings, our client will call in and say their website is not working with the following error message:
The underlying provider failed on Open.
The temporary fix I found for this was to manually go in and restart the SQL Server service. Once this is done it works just fine until the next random time it happens. So my question is, does anyone know what exactly is happening? If so, how can I prevent this in the future?
I have searched everywhere for this; the only explanation I found says that updates were being applied to the service and it wasn't restarted properly, but I couldn't find any fixes. Thanks in advance.

This error:
FCB::Open failed: could not open file (LDF file) for file number 2. OS error: 32 (The process cannot access the file because it is being used by another process).
is quite troubling and should not be occurring unless you have just restarted your SQL Server service. It would easily cause the problems that you are seeing. I would take this to GoDaddy.
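If it happens again, a quick first check (a sketch using the standard catalog view) is whether any database failed to come back online after the error:
-- Any state other than ONLINE (e.g. RECOVERY_PENDING) points at a startup/file problem
SELECT name, state_desc
FROM sys.databases;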

If you are getting this error through your Nagios check, make sure you deselect "Auto Close" as an option on the database.
You can find the option under Database Properties -> Options; there, set Auto Close to False.
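The same setting can also be changed with T-SQL; a minimal sketch, assuming a database named YourDatabase:
-- Turn off AUTO_CLOSE so the database stays open between connections
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;
With AUTO_CLOSE on, the database shuts down after the last user disconnects, so a monitoring check like Nagios can catch it mid-reopen and see intermittent failures.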

Related

SSIS package error: single UPDATE in an Execute SQL Task

I am troubleshooting an error in a package.
Update MYTABLE for MYCOLUMN (REF to task name): Error: Executing the query "..." failed with the following error: "Invalid column name 'MYCOLUMN'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have verified that the table and column exist, and the field length is more than sufficient: the data needs at most 14 characters and the column is declared as varchar(250).
I have verified the script works on the server in SSMS outside of the context of the package.
I have verified the connection and database in the package is as I expect.
Is there a way to verify this on the server? I did try to look at the Connection Managers tab on the package configuration itself, i.e. in the Integration Services Catalogs->SSISDB->solutionfolder->..->package.dtsx->Configure context menu, but it is empty.
Any ideas on how to troubleshoot?
Just to add more context: the package contains 27 other tasks, 9 of which are linked in a row to this task but all set to "on completion", and all seem to be doing work independently of each other. One task is a loop and the rest are single independent tasks. So I don't know at this stage if it is perhaps a cascading connection issue; I am just reading what the log says.
I kicked off the package at 9:54am and the timestamp on the error log says 11:45am, so the error was reported nearly two hours into the run.
I would suggest the following to troubleshoot the issue.
Keep just this task enabled and disable all the other tasks, so that you can focus on this issue specifically. That will tell you whether the connection is working fine without issues.
Edit the task and see whether the parameters are set properly. Different providers have different ways of setting parameters, so check that the parameters are correct (see the Execute SQL Task documentation).
One more thing: maybe you are pointing the package at a different connection than the one you used in SSMS. So it works in SSMS, while the connection being used in the package does not yet have the schema changes applied. A quick metadata query, shown below, can confirm this on the server.
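This is a minimal sketch of such a check, using the placeholder names MYTABLE and MYCOLUMN from the error message; run it against the database each connection manager points at:
-- Confirm the column exists in the database this connection actually reaches
-- (MYTABLE / MYCOLUMN are the placeholder names from the error message)
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MYTABLE'
  AND COLUMN_NAME = 'MYCOLUMN';
An empty result through the package's connection, with a row returned through the SSMS connection, would confirm the two are pointing at different servers or databases.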
I finally figured it out before I read the previously offered suggestion, so I will give some credit if I can! FYI: we have a lot of dev servers. I clicked on the overview hyperlink in the All Executions log and it said another server. I also found the connection on the job calling the package, not on the package itself, so I have learnt something today. Anyhow, the job said one server but the overview said another, so again I was back to square one scratching my head.
Then I decided to open the connection manager on the job, select the field, and make no change; rather than cancelling, I clicked OK without thinking about it, and noticed the field changed to bold face. So I am assuming that if you make a manual change to anything on the server in SSMS, it shows up in bold, which is kind of useful. I can only assume this is an MS SSMS, SSIS, or VS deployment bug: it does not overwrite the previous connection, although the SSMS interface says otherwise. Perhaps somebody can shed some light. Having not checked the server before I made a change and deployed it, I have no idea whether the previous settings were changed manually by someone or whether the connection in the package was changed and deployed. Anyhow, the job history shows it had been failing for a while, so it wasn't me; whoever made the earlier change either didn't figure it out, didn't bother, didn't know how, or didn't notice. Anyhow, it is pointing to the correct server now!

SSRS 2012: "The report execution has expired or cannot be found. (rsExecutionNotFound)"

I am using SQL Server Reporting Services 2012 and received this error without any known cause: The report execution eqaiekfzmk2snc55y0zrow55 has expired or cannot be found. (rsExecutionNotFound).
While I have found other posts describing the problem through Google searches, the resolutions did not help me:
Restarting SQL Server, SQL Server Agent, and SQL Server Reporting services
Increasing the Execution Timeout through SQL Server Management Studio when connected to the Reporting server
Adding rs:ClearSession to the URL querystring (and trying IE, Chrome, and Firefox)
Redeploying after each troubleshooting step and retesting
I looked in the Reporting Services log file folder C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\LogFiles, but the datestamp is over two months old and I could see nothing related to the symptom.
I also looked in ExecutionLog3 and did not see anything related to the symptom:
use ReportServer; select * from ExecutionLog3;
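A more targeted query can at least narrow the log down to the affected report; a sketch, assuming a hypothetical report name of MyReport:
-- Recent executions for one report, newest first ('MyReport' is a placeholder)
SELECT TimeStart, TimeEnd, Status, ItemPath
FROM ExecutionLog3
WHERE ItemPath LIKE '%MyReport%'
ORDER BY TimeStart DESC;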
To find out what did work, I verified that:
The query and results are sound, as seen in Management Studio
I can preview the report in Data Tools on the server
I can view the report when remoting into the server
I only see the error when viewing the page from outside the server. This is a relatively lightweight query and result set, so I cannot believe that this problem has anything to do with execution timeouts.
I changed the name of the file and redeployed. I am able to see that report now, but this isn't a true resolution because I still don't know what is actually causing the problem or how to fix it. If the symptom appears again, I can't keep changing the filename and redeploying.
Is there a way to get a better idea of what is happening? A specific log file or a property I need to change?
Update:
I thought I had this problem worked out, but apparently not. I found nothing useful in the error logs: only a restatement of the same error message visible in the browser. When I redeploy (using SQL Server Data Tools), the error goes away... for a few hours or until the next day, when I need to redeploy to make the error go away.
I know this is an old question, but I had this problem recently and it turned out to be a bad session cookie. The cookie's session ID matched the GUID in the error message, and once I deleted the cookie everything worked fine. The report at one point had been configured to cache a temporary copy, but that had since been turned off (however, the problem existed before that was turned off, so it may not be relevant).
Hopefully this answer will help someone else save the hour I spent figuring it out in my environment :)
This might help someone.
In my case, The report url had trailing spaces (a silly mistake) which caused this.
I've added &rs:Command=ClearSession to the end of my URL and it works fine for me.
As stated in a different answer, you can clear the session, which usually resolves this issue.
If you already have a question mark in your URL, add the following to the end:
&rs:Command=ClearSession
If you do not have one, add the following to the end:
?rs:Command=ClearSession
I just had this problem, it was for an existing report that had been working correctly. However, the Report Builder had been open for some time in another window while I was working on something else, and I hadn't saved my work (I was applying a filter, and didn't want to save my changes with my test filter). It occurred to me that since the report HAD been working, but it had been sitting idle, it might have gone stale. I opened the Dataset Properties, clicked Query Designer, then "Run Query". The Query Designer then got a fresh request from the data source. I closed the Dataset Properties window and clicked "Run", and my report was again displayed.
For me, there was no trailing space.
Some people had luck with clearing the Session key "Microsoft.Reporting.WebForms.ReportHierarchy".
I solved it by calling Session.Clear in global.asax.
For us, the error appeared when trying to run a report on an SSRS 2016 server using Internet Explorer 11. The user had created a bookmark that linked directly to the report. What may have happened: IE preserves cookies and temporary internet files for favorites to "help them load faster". The user may have initially run the report, then created the bookmark to the report, which contained session information.
To fix: delete the bookmark, then clear the browser history in IE (CTRL+SHIFT+DEL), being sure to uncheck "Preserve Favorites website data".

Problem running scripts against SQL Server

We have some scripts that we run as part of our unit tests.
This worked fine until today.
We have tried running scripts with both windows and sql authentication.
We have no problems logging in using SQL Manager.
Anybody have any ideas why we get the following error:
Shared Memory Provider: No process is on the other end of the pipe.
Sqlcmd: Error: Microsoft SQL Native Client : Communication link failure.
Thanks
Shiraz
EDIT
Thanks for the replies. The actual issue appears to be a password problem, which used up all the connections. The process was not listening because there were no available connections.
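For anyone hitting the same thing: if you suspect the connections are being used up, a quick way to count current sessions per login (a sketch using a standard DMV):
-- Sessions per login; an unexpectedly large count for one login is the smoking gun
SELECT login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
GROUP BY login_name
ORDER BY session_count DESC;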
Look in SQL Server Configuration Manager and make sure the protocols you are using to connect are set up correctly. I suggest you enable "Shared Memory" and "TCP/IP".
Ask around and try to determine what was changed in your environment--by whom, and how--that caused a working process to stop working. If successful, you will (a) have a strong lead on discovering the details of what's going wrong, and (b) be in a position to ensure it doesn't recur. (Just solving the tech side might not prevent it from happening again...)

How to solve "An attempt to attach an auto-named database for file..." SQL error?

I've got a local .mdf SQL database file that I am using for an integration testing project. Everything works fine on the initial machine I created the project, database, etc. on, but when I try to run the project on another machine I get the following:
System.Data.SqlClient.SqlException : A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)
I figured that while I am investigating this problem I would also ask the community here to see if someone has already overcome this.
The exception occurs when I instantiate the new data context. I am using LINQ-to-SQL.
m_TransLogDataContext = new TransLogDataContext ();
Let me know if any additional info is needed. Thanks.
I'm going to answer my own question as I have the solution.
I was relying on the automatic connection string which had an incorrect "AttachDbFilename" property set to a location that was fine on the original machine but which did not exist on the new machine.
I'm going to have to dynamically build the connection string since I want this to run straight out of source control with no manual tweaking necessary.
Easy enough.
That is because your application has more than one connection setting for the database. Try "Find All" on your solution and search for your connection name. For example, mine is "EnergyRetailSystemConnectionString"; you can also search by your database name.

Why do I get this error "[DBNETLIB][ConnectionRead (recv()).]General network error" with ASP pages

Occasionally, on an ASP (classic) site, users will get this error:
[DBNETLIB][ConnectionRead (recv()).]General network error.
It seems to be random and not connected to any particular page. The SQL Server is separate from the web server, and my guess is that every once in a while the "link" goes down between the two. A router/switch issue... or has someone else run into this problem before?
Using the same setup as yours (ie separate web and database server), I've seen it from time to time and it has always been a connection problem between the servers - typically when the database server is being rebooted but sometimes when there's a comms problem somewhere in the system. I've not seen it triggered by any problems with the ASP code itself, which is why you're seeing it apparently at random and not connected to a particular page.
I've seen this error many times. It can be caused by many things, including network errors :).
But one of the reasons could be a built-in feature of MS SQL Server that detects DoS attacks -- in this case, too many requests from the web server :).
But I have no idea how we fixed it :(.
In SQL Server Configuration Manager, disable TCP/IP and enable Shared Memory and Named Pipes.
Good luck!
Not a solution exactly, and not the same environment, but I get this error in a VBA/Excel program when I have a hanging transaction that has not been committed in SQL Server Management Studio (SSMS). After closing SSMS, everything works. So the lesson is that a hanging transaction can block sprocs from proceeding (an obvious fact, I know!). Hope this helps someone here.
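If you suspect a hanging transaction, you can confirm it without closing anything; a minimal sketch using built-in commands:
-- Reports the oldest active transaction in the current database
DBCC OPENTRAN;
-- Sessions holding open transactions (SQL Server 2012 and later)
SELECT session_id, login_name, open_transaction_count
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;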
Open a command prompt (Run as administrator) and type the following command on the client side:
netsh advfirewall set allprofiles state off
FWIW, I had this error from Excel, which would hang on an EXEC that worked fine within SSMS. I've seen queries with problems before that were also OK within SSMS, due to 'parameter sniffing' and unsuitable cached query plans. Making a minor edit to the SP cured the problem, and it worked OK afterwards in its original form. I'd be interested to hear if anyone else has encountered this scenario too. Try the good old OPTION (OPTIMIZE FOR UNKNOWN) :)
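For reference, the hint goes at the end of the offending statement inside the procedure; a sketch with hypothetical table and parameter names:
-- OPTIMIZE FOR UNKNOWN makes the optimizer ignore the sniffed value of
-- @CustomerId and build a plan for a statistically "average" value instead
SELECT OrderId, OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);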
