I need to determine whether a specific application is running from a SQL Server 2005 job. The issue is that one of the applications we use to send data will hang, causing problems with any subsequent jobs that invoke it. If I can also obtain the CPU time, I can determine whether it's likely a hung process.
A list of running applications would be good, but being able to look up a specific executable name with its CPU time would be fantastic!
Any application launched by a job step will show as being run by the same logon account as the SQL Server Agent. Use a dedicated service account for the SQL Server Agent that isn't used for any other services. This will allow you to monitor the applications launched by a job using Task Manager, Performance Monitor, etc.
Try opening the SQL Server Activity Monitor. You can also get some of the information from the stored proc sp_who2.
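If the application also opens its own connection to SQL Server, a query along these lines (a minimal sketch; 'MyDataSender' is just a hypothetical program name) shows its sessions and accumulated CPU time, similar to the CPU column in sp_who2:

-- Sketch (SQL Server 2005+): sessions for a given client application and their CPU time.
SELECT session_id,
       login_name,
       program_name,
       cpu_time,                 -- cumulative CPU time in milliseconds
       last_request_end_time
FROM sys.dm_exec_sessions
WHERE program_name = 'MyDataSender';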
Have the job run an external script (batch file, KSH script) instead of a TSQL script.
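If you go the script route, one rough way to check for a specific executable and its CPU time from a job step is to shell out to tasklist. This is only a sketch - it assumes xp_cmdshell is enabled and uses a hypothetical executable name:

-- Sketch: is a given executable running, and how much CPU time has it used?
-- Assumes xp_cmdshell is enabled; MyDataSender.exe is a hypothetical name.
DECLARE @cmd varchar(4000);
SET @cmd = 'tasklist /FI "IMAGENAME eq MyDataSender.exe" /V /FO CSV';

CREATE TABLE #tasklist (line nvarchar(4000) NULL);
INSERT INTO #tasklist (line)
EXEC master..xp_cmdshell @cmd;           -- verbose (/V) output includes a "CPU Time" column

SELECT line FROM #tasklist WHERE line LIKE '%MyDataSender.exe%';
DROP TABLE #tasklist;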
I think the best approach is to run SQL Server Profiler as well as Performance Monitor and wait for the specified job to run, then import the Perfmon stats into Profiler. You can do this from SQL Server Profiler by going to File -> Import Performance Data... and pointing it at your Performance Monitor logs.
You should be able to choose the Process counters (all instances) to get a list of all running processes, along with CPU time for each process. You can then correlate this with the application name and/or hostname in the Profiler logs to see what's going on.
I use the (free) Task Manager replacement "Process Explorer" to get a better look at executables and their dependencies.
Might be worth monitoring your issue with this.
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
I want to run SQL Profiler to look at the performance of my SQL Server 2008 database, but I'm afraid that running Profiler on the same machine will affect the performance of the server, and I don't want to slow the server down.
A long time ago I heard from a DBA that he ran Profiler from his laptop connected to the SQL Server, in a way that did not affect the performance of the server.
So basically my question is: how do I run SQL Profiler from an external computer without slowing down the SQL Server?
Any database being profiled has to do work in order for profiling to be possible - there is no way around it. Generally speaking, observing a system always puts some load on that system. However, SQL Server Profiler and other similar tools also do ADDITIONAL work outside of the target database, and this additional work can be offloaded to another computer.
To offload what you can, just run SQL Server Profiler from ANY machine that is not the database server. When you start a New Trace, you tell it to connect to the database on whatever server it is running on. That's all there is to it. Your target database will still incur some additional load (unavoidable), but you will be offloading as much work as you can to whatever machine you run Profiler on.
If you are able to connect to the server from an external computer, then there is no issue with running Profiler remotely either.
"So basically my question is how to run SQL Profiler from an external computer without causing slow performance of the SQL Server?"
When you run Profiler for long periods of time, it affects performance, since it has to keep track of all events in memory and log them before discarding them. So running Profiler for long periods is not recommended.
Starting with SQL Server 2008 you can also use Extended Events (very lightweight relative to Profiler) to track events similar to Profiler:
http://www.sqlteam.com/article/introduction-to-sql-server-2008-extended-events
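As a rough sketch (the session name and file path are placeholders, not from the original post), a SQL Server 2008 Extended Events session that captures completed batches might look like this:

-- Sketch: a minimal Extended Events session capturing completed T-SQL batches.
CREATE EVENT SESSION LightweightTrace ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.sql_text))
ADD TARGET package0.asynchronous_file_target
    (SET filename = N'C:\Traces\LightweightTrace.xel',
         metadatafile = N'C:\Traces\LightweightTrace.xem');
GO
ALTER EVENT SESSION LightweightTrace ON SERVER STATE = START;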
Profiler can be initiated from any computer with appropriate permissions and access, but it ALWAYS runs on the actual SQL Server instance. There is no way around this. You can minimize the operations that are logged and filter by a specific user to mitigate performance issues, but that's about it.
The DBA in question may have run a server-side trace, which can be less impactful, but it's still initiated ON the applicable instance.
I am a DBA and I am not aware of any performance issues from running SQL Server Profiler on the server itself. That said, when you run SQL Server Profiler it loads just like SSMS, and you can select which server to connect to.
If you have a query that is running so long that it's killing SQL resources, then yes, tracing it will still use up resources, regardless of where you run Profiler from.
See screenshot of SS Profiler
If you are concerned about performance on the SQL Server instance don't run Profiler in production during peak hours.
If you want to minimize the impact of SQL Trace, then it is best to use server-side tracing:
https://msdn.microsoft.com/en-us/library/cc293613.aspx
That way you can record the SQL commands into a trace file while SQL Profiler stays closed. When you are done collecting SQL commands, you can copy the trace file and open it with SQL Profiler on another computer. This is much better than running SQL Trace directly through SQL Profiler (which is called client-side tracing).
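A bare-bones server-side trace built with the sp_trace_* procedures might look roughly like this (the file path, size and event/column choices are placeholders - see the linked article for full details):

-- Sketch: create, start and later stop a server-side trace.
DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50;       -- MB per trace file
SET @on = 1;

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyServerSideTrace', @maxfilesize, NULL;

-- Capture TextData (1), SPID (12) and Duration (13) for SQL:BatchCompleted (event 12).
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

EXEC sp_trace_setstatus @TraceID, 1;     -- start the trace
-- Later: EXEC sp_trace_setstatus @TraceID, 0;  -- stop
--        EXEC sp_trace_setstatus @TraceID, 2;  -- close and remove the trace definition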
I'm maintaining a legacy server app that generates DMO files from SQL Server views.
Sometimes the server crashes because SQL Server consumes all CPU resources.
Using the SQL Server monitor I can see that the problem is the SQLDMO connections, which consume all the CPU time and block the server.
I don't understand why, because the DMO connection uses TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, and these SQL statements never finish - they run for weeks. The only solution is to shut down the server.
I would suggest looking into the code to find out why these connections are not closed. I'm guessing there's no proper close at the end, or something along those lines.
If that is not an option, you could consider running a scheduled job that kills off these specific sessions every so often if they have been running for longer than, say, 24 hours.
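A rough sketch of what that cleanup job could look for (the program-name filter is a hypothetical placeholder - adjust it to however the SQLDMO connections identify themselves):

-- Sketch: list KILL commands for sessions from a given client program
-- that have been connected for more than 24 hours. Review before executing them.
SELECT 'KILL ' + CAST(spid AS varchar(10)) AS kill_command,
       program_name, login_time, cpu
FROM master.dbo.sysprocesses
WHERE program_name LIKE '%SQLDMO%'       -- hypothetical filter
  AND login_time < DATEADD(hour, -24, GETDATE())
  AND spid > 50;                         -- skip system sessions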
We have an application with a SQL Server 2000 database attached to it. Every couple of days the application hangs, and we have to restart the SQL Server service before it works fine again. The SQL Server logs show nothing about the problem. Can anyone tell me how to identify this issue? Is it an application problem or a SQL Server problem?
Thanks.
Is it an application problem or a SQL Server problem?
Is it possible to connect to MS SQL Server using Query Analyzer or another instance of your application?
General tips:
Use Activity Monitor to find information about concurrent processes, locks and resource utilization.
Use SQL Server Profiler to trace server and database activity, and to capture and save data to a table or file for later analysis.
You can use Dynamic Management Views (under the \Database name\Views\System Views folder in Management Studio) to get more detailed information about SQL Server internals; see the sketch after this list.
If you have problems with performance (not your case), you can use Performance Monitor and Data Collector Sets to gather performance information.
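For example, on SQL Server 2005 or later (a minimal sketch, not specific to the poster's system), a DMV query like this shows what is currently executing and whether anything is being blocked:

-- Sketch: currently running requests, their statements and any blocking session.
SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.wait_type,
       r.cpu_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;                 -- skip system sessions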
It's hard to predict the issue; I suggest you check your application first. Check what operations you are performing against the database and whether you are taking care of connection pooling - unused open connections can create issues.
Check whether you can get any logs from your application. Without any log information we can hardly suggest anything.
Read this
The application may be hanging due to a deadlock. Check which stored procedures run at that time using Profiler, check the table manipulation (consider NOLOCK hints), check the buffer size, and consider segregating the database into two or three modules.
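On SQL Server 2000 you can also have deadlock details written to the error log by enabling trace flag 1204 (a small sketch; on 2005 and later, flag 1222 gives richer output):

-- Sketch: log deadlock participants and resources to the SQL Server error log.
DBCC TRACEON (1204, -1);    -- -1 enables the flag globally
-- DBCC TRACEOFF (1204, -1);  -- turn it back off when done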
I have created SSIS packages to move data from AS400 to SQL Server, and they are scheduled daily. Some of the packages run through SQL Agent are taking more than 9 hours to complete. If I run the same package manually in Business Intelligence Development Studio, it completes in less than 4 hours. Because of this, my scheduled packages are not completing on time. I don't understand why there is such a difference in completion time between running the package manually and running it as a scheduled job.
My environment is Windows Server 2003 with SQL Server 2005 SP3. Please help me sort out this issue.
The best way to get to the bottom of this problem is to watch the scheduled run using some debug statements and messages. For example, have some insert statements in the stored procedures the SSIS package is invoking. That way you will get to know which step is taking more time than expected. First try to isolate the task that is making the difference.
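A tiny sketch of what that debug logging might look like (the table and procedure names are hypothetical):

-- Sketch: a simple debug-log table plus inserts at the start and end of each
-- stored procedure the package calls, so you can see where the time goes.
CREATE TABLE dbo.PackageDebugLog (
    LogTime  datetime     NOT NULL DEFAULT GETDATE(),
    StepName varchar(200) NOT NULL
);

-- Inside each stored procedure the package invokes:
INSERT INTO dbo.PackageDebugLog (StepName) VALUES ('usp_LoadOrders - start');
-- ... existing procedure body ...
INSERT INTO dbo.PackageDebugLog (StepName) VALUES ('usp_LoadOrders - end');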
Also, you can invoke the package from the command prompt using:
dtexec /f filename.dtsx
This will print out all the messages in the console at each step as well.
Use SSIS logging in the package to log to a database table. Set logging to record start and end of tasks. By running the package in BIDS and comparing it to the logging when it is run on the server you will see which tasks are taking too long. See http://msdn.microsoft.com/en-us/library/ms138020.aspx for more info on SSIS logging (in sql 2008)
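Once logging to a SQL Server table is enabled, a query along these lines (assuming the default SQL Server 2005 log table dbo.sysdtslog90 and the OnPreExecute/OnPostExecute events) shows roughly how long each task took per run:

-- Sketch: per-task durations from the default SSIS 2005 log table.
SELECT executionid,
       source AS task_name,
       MIN(starttime) AS task_start,
       MAX(endtime)   AS task_end,
       DATEDIFF(second, MIN(starttime), MAX(endtime)) AS duration_seconds
FROM dbo.sysdtslog90
WHERE event IN ('OnPreExecute', 'OnPostExecute')
GROUP BY executionid, source
ORDER BY duration_seconds DESC;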
Might it be that the SQL server is less powerful than your client or has more load when you execute the package?
In Business Intelligence Development Studio the package is executed on your local client, with its CPU and RAM (I think).
Check which version of DTEXEC you are using. Maybe you are using the 32-bit version in one place and the 64-bit version in the other.
I'm currently experiencing some problems on my DotNetNuke SQL Server 2005 Express site on Win2k8 Server. It runs smoothly most of the time. However, occasionally (on the order of once or twice an hour) it runs very slowly indeed - from a user perspective it's almost as if there's a deadlock of some description when this occurs.
To try to work out what the problem is I've run SQL Profiler against the SQL Express database.
Looking at the results, some specific questions I have are:
The SQL trace shows an Audit Logon and Audit Logoff for every RPC:Completed - does this mean Connection Pooling isn't working?
When I look in Performance Monitor at ".NET CLR Data", then none of the "SQL client" counters have any instances - is this just a SQL Express lack-of-functionality problem or does it suggest I have something misconfigured?
The queries running when the slowness occurs don't seem unusual so far - they run fast at other times. What other Perfmon counters or trace/log files can you suggest as useful tools for further investigation?
Jumping straight to Profiler is probably the wrong first step. First, try checking the Perfmon stats on the server. I've got a tutorial online here:
http://www.brentozar.com/perfmon
Start capturing those metrics, and then after it's experienced one of those slowdowns, stop the collection. Look at the performance metrics around that time, and the bottleneck will show up. If you want to send me the csv output from Perfmon at brento#brentozar.com I can give you some insight as to what's going on.
You might still need to run Profiler afterwards, but I'd rule out the OS and hardware first. Also, just a thought - have you checked the server's System and Application event logs to make sure nothing's happening during those times? I've seen instances where, say, the antivirus client downloads new patches too often, and does a light scan after each update.
My spidey sense tells me that you may have SQL Server blocking issues. Read this article to help you monitor blocking on your server and check whether it's the cause.
If you think the issues may be performance related and want to see where your hardware bottleneck is, then you should gather some CPU, disk and memory stats using Perfmon and then correlate them with your Profiler trace to see if the slow response is related.
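As a quick sketch of that blocking check (SQL Server 2005 and later, including Express), this lists requests that are currently waiting on another session:

-- Sketch: requests currently blocked by another session.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,              -- milliseconds spent waiting so far
       s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
WHERE r.blocking_session_id <> 0;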
No.
Nothing wrong with that... it shows that you're not using the .NET functionality embedded in SQL Server.
You can check http://www.xsqlsoftware.com/Product/xSQL_Profiler.aspx for more detailed analysis of the Profiler trace. It has reports that show the top queries by time or CPU (not one single execution, but the sum of all executions of a query).
Some other things to check:
Make sure your data files and log files are not auto-extending (you can check the growth and auto-close settings with the query after this list).
Make sure your anti-virus is set to ignore your SQL data and log files.
When looking at the Profiler output, be sure to check the queries that finished just prior to your targets; they could have been blocking.
Make sure you've turned off auto-close on the database; re-opening after closing takes some time.
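A quick sketch for checking the auto-grow and auto-close settings on SQL Server 2005 (including Express):

-- Sketch: auto-close setting per database, and growth settings per file.
SELECT name, is_auto_close_on
FROM sys.databases;

SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files;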