I have a SQL Server Reporting Services instance that is acting strangely. I can do everything except overwrite an existing report. I can open a current report, run it, and save it as a different report with no problem, but any attempt to save over an existing one times out after 2 minutes. Looking at the Report Server logs, I can confirm it is returning a timeout.
Has anyone experienced this before, and more importantly, figured out how to solve this?
I ended up going through a direct connection rather than the shared connection. There was contention from one of the metadata tables being locked. Going direct is not a real solution, but it is at least a work-around.
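For anyone hitting the same symptom, this is roughly how that kind of contention can be spotted. A minimal sketch, assuming the default SSRS metadata database name of ReportServer; run it while a save is hanging:

```sql
-- List object-level locks in the SSRS metadata database while a save hangs.
-- Assumes the default database name ReportServer; adjust if yours differs.
USE ReportServer;

SELECT
    l.request_session_id,                                  -- who holds/wants the lock
    l.request_mode,                                        -- e.g. S, X, IX
    l.request_status,                                      -- WAIT = blocked request
    OBJECT_NAME(l.resource_associated_entity_id) AS locked_object
FROM sys.dm_tran_locks AS l
WHERE l.resource_type = 'OBJECT'
  AND l.resource_database_id = DB_ID();
```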
I have packages deployed on SQL Server 2008 R2, and recently we migrated to a new server machine running SQL Server 2012. I configured the packages for project deployment mode, and for 10 days all packages worked smoothly, with execution times in the same range as on the older server.
For the last two days, packages have started to fail. I checked in detail and found that they are taking longer than usual and fail with "Protocol error in TDS stream, communication link failure and remote host forcibly closed the connection".
When I run the packages through SSDT, they succeed, but the data transfer is visibly slower than it used to be, so package execution takes much longer.
I am not sure what has changed. I have searched the internet for possible reasons, checked the server memory and packet size, and tried to match them with the older server, which did not solve the problem. I suspect SSIS logging may have caused this, but I am not sure how to check it.
Please help me identify the cause of this problem.
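For the "how to check SSIS logging" part: assuming project deployment mode as described above, execution history and the logging level live in SSISDB, so a sketch like this can show whether verbose logging was in play (the execution_id below is a placeholder taken from the first query):

```sql
-- Recent package executions and their durations (project deployment mode).
SELECT TOP (20)
    e.execution_id,
    e.package_name,
    e.start_time,
    e.end_time,
    e.status            -- 7 = succeeded, 4 = failed
FROM SSISDB.catalog.executions AS e
ORDER BY e.start_time DESC;

-- The logging level a given execution ran with
-- (0 = None, 1 = Basic, 2 = Performance, 3 = Verbose).
SELECT parameter_name, parameter_value
FROM SSISDB.catalog.execution_parameter_values
WHERE execution_id = 12345          -- placeholder: use an id from above
  AND parameter_name = N'LOGGING_LEVEL';
```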
**Edit: I enabled logging in SSDT and could see that the majority of the time is spent in the row-transfer steps. Since my package has lookups, I thought the lookups might be slowing it down somehow, so I copied the main query into SSMS and ran it as a normal query on this server.
About 13 lakh (1.3 million) rows were returned in 12 minutes. Then I ran the same query on the old server, where it returned the same 13 lakh rows in less than a minute. So this suggests the problem is related to data transfer and not specific to the packages themselves.
Can someone help please?**
Check the connection in the solution: its 'RetainSameConnection' property should be set to 'true'. This can be done both in the SSIS package under the connection manager properties and in the job step properties (Configuration > Connection Managers).
Link: http://www.sqlerudition.com/what-is-the-retainsameconnection-property-of-oledb-connection-in-ssis/
I'm a beginner, and I am wondering if anyone who uses Splunk to monitor SQL Server has successfully set up tracking for memory dumps.
As you may know, when a memory dump occurs in SQL Server, a .mdmp or .dmp file is created in the root of the SQL Server log directory. All we would like to do is keep track of when a memory dump happens and on which server, as indicated by the existence of these files. However, as far as I know, Splunk would not be able to track these files, since it would have to scan a folder looking for new .dmp files rather than index a log file that can then be searched.
We have indexes set up for wineventlog, perfmon, and mssql, but to my knowledge a SQL Server memory dump event is not actually logged in any of the related source types, such as the general SQL Server error log (a related event might appear there, but it would not identify itself as being related to a memory dump). I might be wrong about this, though, and perhaps someone can correct me if this is logged somewhere common that Splunk would be able to consume.
I have also considered that there is a system view (sys.dm_server_memory_dumps) that records these events, but we only know of two ways to get that into Splunk. One is to set up a SQL Agent job that queries the view and outputs the results as a file that Splunk can then ingest; the other is to use the SQL DB connection plugin with Splunk, but as far as I know that doesn't use a connection pool, which is a problem for us.
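For reference, a minimal sketch of the Agent-job route; the sqlcmd output path is an assumption, while the three columns are what the view actually exposes:

```sql
-- Step query for a SQL Agent job: list the dumps SQL Server knows about.
SELECT
    [filename],        -- full path to the .mdmp/.dmp file
    creation_time,     -- when the dump was written
    size_in_bytes
FROM sys.dm_server_memory_dumps
ORDER BY creation_time DESC;

-- A CmdExec job step could then write this to a file Splunk monitors, e.g.:
-- sqlcmd -S . -E -Q "SELECT ... FROM sys.dm_server_memory_dumps" ^
--        -o "D:\splunk_inputs\sql_dumps.csv" -s"," -W
```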
I am wondering how the community has approached this problem, any input appreciated, thank you!
I'm writing a program for a small office (fewer than 5 clients). All the computers are located at the office, and I have a server too.
I want to install SQL Server on the server and install my program on every client computer; the clients will update data on the server.
Do I need to worry about conflicts? Do I need to write another program or service to run on the server to handle the clients' requests? Or are my program alone and the SQL Server service enough?
What do I need to take into consideration when implementing this?
I'm new to this, so any additional help would be useful!
Thanks
SQL Server will generally handle this without problems. But from a functional point of view there may be things to consider, such as two people opening the same item, both making a change, and both saving their change at different points in time.
Without countermeasures, the last person to save 'wins'. If that is OK, then all is OK, but you should at least discuss it and document it.
If it is not OK, then you might need e.g. a timestamp column; saving an item could then be disallowed if the timestamp on the server changed between opening the item and saving it.
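A minimal sketch of that check, using SQL Server's rowversion type (the current name for the timestamp type); the table and variable names are hypothetical:

```sql
-- Hypothetical table: RowVer changes automatically on every update.
CREATE TABLE dbo.Items
(
    ItemId  int IDENTITY PRIMARY KEY,
    Name    nvarchar(100) NOT NULL,
    RowVer  rowversion
);

-- Values the client would have read when opening the item (placeholders).
DECLARE @ItemId int = 1,
        @NewName nvarchar(100) = N'New name',
        @RowVerAtOpen binary(8) = 0x00000000000007D1;

-- Save only if nobody changed the row since it was opened.
UPDATE dbo.Items
SET Name = @NewName
WHERE ItemId = @ItemId
  AND RowVer = @RowVerAtOpen;

IF @@ROWCOUNT = 0
    RAISERROR('Item was changed by another user; reload and retry.', 16, 1);
```

The "last person wins" case then surfaces as an explicit error the application can handle, instead of a silent overwrite.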
Another approach is 'locking' or 'checking out' items, which has its own advantages and disadvantages.
I'm using SQL Server 2012 and I'm stuck with a strange problem.
I tried to restore a database snapshot to a database. Usually it doesn't take much time, but this time it took 5 minutes and was still restoring, so I stopped the query execution. It then spent more than 5 minutes trying to stop the query execution, so I closed SSMS using Task Manager.
Then I tried to kill the restore process using KILL.
Now I am able to connect to that server, but the list of databases is not opening. I mean: whoever connects to this server is not able to get the databases. When I checked the sysprocesses table, it shows lastwaittype as LCK_M_S.
None of my users can see the databases. It looks like I kind of messed up. I cannot restart SQL Server, as others are connected to the server.
How do I solve this? Please help.
EDIT:
I tried this approach, and when I checked with

    select db_name(dbid), * from sysprocesses where blocked <> 0

I got two records. Do you think the rest of the processes are getting locked up because of these two?
I'm guessing there is still some hidden lock on the sysdatabases table in the master database. This could very well have been caused by the KILL of the restore command.
The article here might help you:
http://ellisweb.net/2012/02/clearing-out-a-mysterious-table-lock-lck_m_s-in-sql-server-2008/
It basically advises you to identify where the hidden lock is coming from, and then issue a KILL on that process ID.
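As a minimal sketch of that approach (the session id in the KILL is a placeholder; check what the blocking session is doing before killing anything):

```sql
-- Find sessions that are blocked and who is blocking them.
SELECT
    r.session_id,               -- the waiting session
    r.blocking_session_id,      -- the probable holder of the hidden lock
    r.wait_type,                -- e.g. LCK_M_S, as seen in the question
    r.wait_time                 -- ms spent waiting
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;

-- KILL 57;   -- placeholder: replace 57 with the blocking_session_id found above
```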
Try restarting the instance. Can't hurt if your users can't see any of the databases anyway.
In the admin section of our company's production site, we have a little query dumping tool, and while trying to get data from a database other than the main one, I unknowingly used the USE database command.
And here's the kicker: it then instantly made every ColdFusion page with a query fail, since it somehow caches that USE database command.
Has anyone else heard of this weird bug?
How can we stop this behavior?
If I use a "use database" command, I want it to apply only to the current query I am running; after I am done, it should go back to the normal database usage.
This is weird and a potentially damaging problem.
Any thoughts?
I imagine that this has something to do with connection pooling. When you call close, it doesn't close the connection; it just puts it back into the pool. When you call open, it doesn't have to open a new connection; it just grabs an existing one from the pool. If you change the database that the connection is pointing to, ColdFusion may be unaware of this. This is why some platforms (MySQL on .Net, for instance) reset the connection each time you retrieve it from the pool, to ensure that you are querying the correct database and that you don't have temporary tables or other session state hanging around. The downside of this behaviour is that it requires a round trip to the database even when using pooled connections, which may not really be necessary.
Kibbee is on the right track, but to extend that a little further, here are three possible workarounds:
1. Create a different DSN for use by that one query, so the "USE DATABASE" statement would only persist for queries using that DSN.
2. Uncheck "Maintain connections across client requests" in the CF admin.
3. Always remember to reset the database to the one you intend to use at the end of the request (see the sketch below). It kinda goes without saying that this is a very dangerous utility to have on your production server!
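For the third workaround, a minimal sketch with hypothetical database names; fully qualified three-part names are another way to avoid the problem entirely:

```sql
-- Switch for the one-off query, then switch back before the request ends,
-- so the pooled connection is left pointing at the expected database.
USE ReportingDb;
SELECT TOP (10) * FROM dbo.SomeTable;
USE MainAppDb;

-- Or skip USE entirely by fully qualifying the object:
SELECT TOP (10) * FROM ReportingDb.dbo.SomeTable;
```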
It's not a bug, nor is it really unexpected behavior - if the query is cached, then everything inside the cfquery block goes along for the ride. Which database platform are you using?