What is the XE_FILE_TARGET_TVF wait type in Azure SQL Database?

I'm experiencing periodic Azure SQL Database connection slowdowns. As recommended in the Wait statistics, or please tell me where it hurts article, I ran the sys.dm_db_wait_stats (the analogue of sys.dm_os_wait_stats for Azure SQL Database) aggregation script, which showed me that the longest waits are of type XE_FILE_TARGET_TVF. The average wait time is 54 seconds.
XE_FILE_TARGET_TVF isn't mentioned in the documentation or in any other online resource that I know of. I suspect that "XE" means "Extended Events", "TVF" is "table-valued function", and "FILE_TARGET" is probably an indicator that something is being written to some file.
So, what kind of wait is it?
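For context, here's a minimal sketch of the kind of aggregation I ran (not the exact script from the article; sys.dm_db_wait_stats and its columns are documented, the TOP and ordering are just illustrative):

    -- Top waits by total wait time in this database
    -- (sys.dm_db_wait_stats is the database-scoped analogue of sys.dm_os_wait_stats)
    SELECT TOP (10)
        wait_type,
        wait_time_ms / 1000.0 AS wait_time_s,
        signal_wait_time_ms / 1000.0 AS signal_wait_s,
        waiting_tasks_count,
        wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_db_wait_stats
    ORDER BY wait_time_ms DESC;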

Use Azure SQL Database Query Performance Insight for troubleshooting SQL Azure DBs. To begin with, wait stats alone are not a correct performance troubleshooting approach. Read How to analyse SQL Server performance for a more thorough approach, and learn to analyze where the CPU is spent as well, not only where the elapsed time is spent waiting. Aggregate wait stats are often misleading; filtering out the 'benign' wait stats is a game of chasing red lights.
Unfortunately, not everything is actionable in the SQL Azure DB environment. Start with Query Performance Insight to see if you can correlate the performance issues with queries issued by your app. Use the SQL Database Index Advisor to get index recommendations for your workload.
If you cannot find an application problem and suspect this is caused by the platform, you will have to open a support case. Tweeting #AzureSupport is very effective in getting help.
To answer your question: a lot of Azure SQL DB monitoring relies on Extended Events and they store data to files. This specific wait stat is unlikely to be related to the cause of your performance problems.
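If you want to see that monitoring for yourself, here is a rough sketch (assuming the database-scoped Extended Events DMVs, sys.dm_xe_database_sessions and sys.dm_xe_database_session_targets, are visible to your login):

    -- List the Extended Events sessions running in this database and their targets;
    -- the platform's monitoring sessions typically write to file targets.
    SELECT s.name AS session_name,
           t.target_name
    FROM sys.dm_xe_database_sessions AS s
    JOIN sys.dm_xe_database_session_targets AS t
        ON t.event_session_address = s.address;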

I believe it's an async process, so it is possibly nothing to worry about and could be a red herring. However, I think this is a fabric-related task behind the DB. Does this event seem to be consistently causing you issues?

Related

How to debug slowdown on a SQL Azure server?

We've had a SQL Azure cloudapp/database in production for a long time and while its performance has been a little volatile, over the last few days it has suddenly dropped drastically. Our application is unresponsive because SQL queries and stored procedures that used to take 5-10 seconds are now taking 90 seconds or more.
What are the things I should check, given that we already do regular index rebuilds/reorgs, clear down large tables when we're finished, etc.?
We're still on the "Web" service tier and are planning to move soon, perhaps to the newer S2 tier, but we need to tackle this issue first.
1) How many active connections does your SQL Azure DB have during slow times? Things get weird once you get into the 150+ range on a shared plan.
If you have a ton of connections open, that means you're not properly clearing them in your app somewhere.
2) Does your DB have any blocking queries? DBs with a lot of blocking (or deadlocking) queries may behave much more slowly if you need access to locked resources. (A quick way to check the connection count and current blocking is sketched after this list.)
3) You should really consider switching to a dedicated SQL Azure plan. It is very quick to do and no action is required on the app-dev side. http://azure.microsoft.com/blog/2014/07/08/azure-update-sql-database-easy-upgrade-to-new-service-tiers-performance-improvements-pitr-for-basic-and-automated-export-for-all-service-tiers/
4) If none of that helps, contact support. This could be an issue on their end.
5) Once immediate problems are resolved, consider active monitoring of your SQL Azure db's (link in my profile signature)
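As a rough sketch for points 1) and 2) above (standard DMVs that are available in SQL Azure; the exact filtering is up to you):

    -- 1) How many connections does the app currently hold open?
    SELECT COUNT(*) AS open_connections
    FROM sys.dm_exec_connections;

    -- 2) Is anything blocked right now, and by whom?
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;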
http://www.developer.com/services/how-to-identify-performance-bottlenecks-on-azure-sql-database.html
You could also have a device in your network that is slowing down the performance. You might want to run some network tests to see if the problem is internal or external. For instance, someone might have changed some firewall or security settings on a rollout and messed it up a bit or a device might be ready to fail.

Performance tuning for Progress 4GL Oracle DataServer

I have read in performance tuning articles that if we use skip-schema-check, it will improve DataServer performance, which in turn will speed up query execution. I am using an Oracle database for a financial enterprise application and want to increase query performance. How much of a performance increase can be achieved using skip-schema-check?
I am assuming that you are having a performance problem with your server. I would follow the performance tuning process on the Oracle side: turn on Oracle trace files, run your application for a while, and then analyze the results. Note that your system will probably run VERY slowly while the trace is on and could generate a lot of files.
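If you do turn on tracing, here is a minimal sketch using DBMS_MONITOR for a single session (the SID and serial# values are placeholders for the DataServer session you want to trace):

    -- Enable extended SQL trace (including waits) for one session, then turn it off later.
    -- 123 / 45678 are placeholders for the target session's SID and SERIAL#.
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678,
                                           waits => TRUE, binds => FALSE);
    -- ... run the workload for a while ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);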
Analyzing the trace files will show you which statements are using the most elapsed and system time, which in turn will show you which queries are causing the most delay and whether they are "application" queries or Progress querying the schema.
Do you have the Oracle SQL optimizer running? There is always a risk that the Progress Oracle DataServer is generating queries that Oracle isn't optimizing well. When you've found the worst-offending SQL queries in Oracle, it will also tell you how Oracle is approaching each query and whether it is using sensible indexes. The optimizer is very good most of the time, but just occasionally gets it very wrong and requires a "hint" to set it right. If you are using the optimizer, make sure that the statistics are being kept up to date, as stale statistics can really throw the optimizer in the wrong direction.
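And if the statistics do turn out to be stale, a one-line sketch for refreshing them for the application schema (the schema name is a placeholder):

    -- Refresh optimizer statistics for the application schema (name is a placeholder)
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_SCHEMA', cascade => TRUE);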

SQL Server Transactional Replication Over VPN

I have transactional replication running between two servers over a dedicated VPN connection. The databases are fairly large, so I initially use the backup and restore method to get the initial snapshot over to the subscriber machine and then let it apply the incremental transactions from there.
Everything runs fine until the VPN line gets flaky (which it does occasionally), at which point the replication process is prone to locking up. When I look on the subscriber side, there are a few SQL processes which appear to be hung and have locks held on the subscriber database and tables. The crazy thing is that those processes are coming from the replication service. I can assure you (from trial and error) that no other processes are locking this database except for replication itself.
So why would the replication process trip over its own feet like that? Why would it get hung just because of a loss of network connectivity? Any suggestions for somehow making it more reliable?
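For reference, here is a sketch of one way to see which sessions hold the locks and what program they belong to (this assumes SQL Server 2005 or later DMVs on the subscriber):

    -- On the subscriber: which sessions hold locks, and what program opened them?
    -- The Distribution Agent shows up with a replication-related program_name.
    SELECT l.request_session_id,
           s.program_name,
           s.login_name,
           DB_NAME(l.resource_database_id) AS database_name,
           l.resource_type,
           l.request_mode,
           l.request_status
    FROM sys.dm_tran_locks AS l
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = l.request_session_id
    WHERE l.request_session_id > 50;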
I have heard of issues like this over VPN connections. There is a post here that might help you.
Another option, if you have persistent problems, and depending on your requirements for speed and functionality, might be to use log shipping. In my humble opinion this can provide a more resilient way of moving data - at least from a networking perspective.
With SQL Server 2005, you can replicate using a web service. This might not allow you to ditch the VPN, but since web services are less connection-driven, that might help fix the problem. I haven't tried this myself, so I don't know what the results may be.
As for the locks: we've had a scare thinking a lot of things were locked, but it turned out that the replication monitor was just locking on itself, so make sure you don't have that open when looking at the locks. That doesn't sound like your problem, though.
I'll ask some questions and maybe they can give you some ideas as I don't have a clue here either.
Is there a way for the replicator to test for connectivity before attempting to start copying? Is there a way to put a connectivity test into whatever script you're using to perform replication? Is there a way to have the script bail in case of failure?

Understanding SQL Profiler trace

I'm currently experiencing some problems on my DotNetNuke SQL Server 2005 Express site on Win2k8 Server. It runs smoothly most of the time. However, occasionally (on the order of once or twice an hour) it runs very slowly indeed - from a user perspective it's almost like there's a deadlock of some description when this occurs.
To try to work out what the problem is I've run SQL Profiler against the SQL Express database.
Looking at the results, some specific questions I have are:
The SQL trace shows an Audit Logon and Audit Logoff for every RPC:Completed - does this mean Connection Pooling isn't working?
When I look in Performance Monitor at ".NET CLR Data", then none of the "SQL client" counters have any instances - is this just a SQL Express lack-of-functionality problem or does it suggest I have something misconfigured?
The queries running when the slowness occurs don't yet seem unusual - they run fast at other times. What other perfmon counters or other trace/log files can you suggest as useful tools for my further investigation?
Jumping straight to Profiler is probably the wrong first step. First, try checking the Perfmon stats on the server. I've got a tutorial online here:
http://www.brentozar.com/perfmon
Start capturing those metrics, and then after it's experienced one of those slowdowns, stop the collection. Look at the performance metrics around that time, and the bottleneck will show up. If you want to send me the csv output from Perfmon at brento#brentozar.com I can give you some insight as to what's going on.
You might still need to run Profiler afterwards, but I'd rule out the OS and hardware first. Also, just a thought - have you checked the server's System and Application event logs to make sure nothing's happening during those times? I've seen instances where, say, the antivirus client downloads new patches too often, and does a light scan after each update.
My spidey sense tells me that you may have SQL Server blocking issues. Read this article to help you monitor blocking on your server to check if it's the cause.
If you think the issues may be performance related and want to see what your hardware bottleneck is, then you should gather some CPU, disk and memory stats using perfmon and then correlate them with your profiler trace to see if the slow response is related.
No, that by itself doesn't mean connection pooling isn't working.
There's nothing wrong with the empty counters either; it just shows that you're not using the .NET functionality embedded in SQL Server.
You can check http://www.xsqlsoftware.com/Product/xSQL_Profiler.aspx for more detailed analysis of profiler trace. It has reports that show top queries by time or CPU (Not one single query, but sum of all execution of a single query).
Some other things to check:
Make sure your data files or log files are not auto-extending.
Make sure your anti-virus is set to ignore your SQL data and log files.
When looking at the profiler output, be sure to check the queries that finished just prior to your targets; they could have been blocking.
Make sure you've turned off Auto-Close on the database; re-opening after closing takes some time. (A quick check for these last two settings is sketched below.)
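A rough sketch for checking the Auto-Close and file growth settings on the database in question (standard catalog views, SQL Server 2005 and later including Express):

    -- Is AUTO_CLOSE (or auto-shrink) enabled on this database?
    SELECT name, is_auto_close_on, is_auto_shrink_on
    FROM sys.databases
    WHERE name = DB_NAME();

    -- Current size and growth settings for the data and log files
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.database_files;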

Reducing the overhead of a SQL Trace with filters

We have a SQL 2000 server that has widely varied jobs that run at different times of day, or even different days of the month. Normally, we only use the SQL profiler to run traces for very short periods of time for performance troubleshooting, but in this case, that really wouldn't give me a good overall picture of the kinds of queries that are run against the database over the course of a day or week or month.
How can I minimize the performance overhead of a long-running SQL trace? I already know to:
Execute the trace server-side (sp_trace_create), instead of using the SQL Profiler UI.
Trace to a file, and not to a database table (which would add extra overhead to the DB server).
My question really is about filters. If I add a filter to only log queries that run more than a certain duration or reads, it still has to examine all activity on the server to decide if it needs to log it, right? So even with that filter, is the trace going to create an unacceptable level of overhead for a server that is already on the edge of unacceptable performance?
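For concreteness, here is a minimal sketch of the kind of server-side trace I mean (event and column IDs are the documented ones for sp_trace_setevent; the file path and the 5-second threshold are just examples, and on SQL 2000 the Duration filter value is in milliseconds):

    -- Server-side trace: RPC:Completed (10) and SQL:BatchCompleted (12),
    -- capturing TextData (1), Duration (13) and Reads (16), written to a rollover file.
    DECLARE @traceid int, @maxsize bigint, @on bit, @duration bigint
    SET @maxsize = 100        -- MB per rollover file
    SET @on = 1
    SET @duration = 5000      -- only keep events that ran for 5000 ms or more

    EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\LongRunning', @maxsize

    EXEC sp_trace_setevent @traceid, 10, 1,  @on   -- RPC:Completed, TextData
    EXEC sp_trace_setevent @traceid, 10, 13, @on   -- RPC:Completed, Duration
    EXEC sp_trace_setevent @traceid, 10, 16, @on   -- RPC:Completed, Reads
    EXEC sp_trace_setevent @traceid, 12, 1,  @on   -- SQL:BatchCompleted, TextData
    EXEC sp_trace_setevent @traceid, 12, 13, @on   -- SQL:BatchCompleted, Duration
    EXEC sp_trace_setevent @traceid, 12, 16, @on   -- SQL:BatchCompleted, Reads

    EXEC sp_trace_setfilter @traceid, 13, 0, 4, @duration   -- Duration >= @duration

    EXEC sp_trace_setstatus @traceid, 1   -- start the trace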
Adding Filters does minimize the overhead of event collection and also prevents the server from logging transaction entries you don't need.
As for whether the trace is going to create an unacceptable level of overhead, you'll just have to test it out and stop it if there are additional complaints. Feeding that production trace file to the DB Tuning Advisor and taking its hints could improve performance for everyone tomorrow, though.
You actually should not have the server process the trace, as that can cause problems: "When the server processes the trace, no events are dropped - even if it means sacrificing server performance to capture all the events. Whereas if Profiler is processing the trace, it will skip events if the server gets too busy." (From the SQL 70-431 exam book best practices.)
I found an article that actually measures the performance impact of a SQL profiler session vs a server-side trace:
http://sqlblog.com/blogs/linchi_shea/archive/2007/08/01/trace-profiler-test.aspx
This really was my underlying question, how to make sure that I don't bog down my production server during a trace. It appears that if you do it correctly, there is minimal overhead.
It's actually possible to collect more detailed measurements than you can collect from Profiler, and to do it 24x7 across an entire instance, without incurring any overhead. This avoids the necessity of figuring out ahead of time what you need to filter, which can be tricky.
Full disclosure: I work for one of the vendors who provide such tools… but whether you use ours or someone else’s… this may get you around the core issue here.
More info on our tool here http://bit.ly/aZKerz
