I am trying to identify the scan results and statistics for one of my failed queries in Snowflake, but the Query Profile of the query appears blank.
Can somebody please help and let me know whether there is a way to see the Query Profile of failed queries?
If a query fails during execution, you should generally still see the profile. I can think of two scenarios where you wouldn't:
The query failed during compilation, meaning it never started running on the warehouse.
The query failed in a way that made it impossible to save the profiling information.
Another point: with very complex execution profiles, the query profile may take a while to load. Either way, there are no special hidden controls that would make the profile visible if it is not currently being shown.
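Even when the profile page is blank, basic scan statistics for a failed query are usually still recorded in the query history. A minimal sketch, assuming your role has access to the SNOWFLAKE.ACCOUNT_USAGE share (this view lags real time by up to about 45 minutes):

    -- Pull scan statistics for recent failed queries from the account usage view.
    SELECT query_id,
           error_code,
           error_message,
           bytes_scanned,
           partitions_scanned,
           partitions_total,
           total_elapsed_time
    FROM snowflake.account_usage.query_history
    WHERE error_code IS NOT NULL          -- failed queries only
    ORDER BY start_time DESC
    LIMIT 10;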
Power Query tends to load previews quickly, but the very same data takes much longer to load into the Power BI model when we click 'Close and Apply'. Is there some difference in the way these two things are done? They both depend only on a SQL Server stored procedure, and I cannot share any screenshots here due to the confidentiality of company data. I am hoping someone else has had this issue, and/or understands how Power BI data loads work, and can explain the difference.
I tried multiple data loads and varied the timeout period. I expected that lengthening the timeout would make a difference, but the load still failed.
** I posted this question earlier today and got a pretty hostile reply and a down-vote, so I deleted that one, re-phrased it, and reposted.
PQ can work differently depending on the circumstances (flat files, query folding, the transformations involved). PQ typically streams data rather than loading the entire dataset, in order to be more efficient with memory. Since the preview window only shows 1000 records, if no aggregations or sorts are involved, only 1000 records are streamed, which is why you get a quick preview. When loading the entire dataset, all records have to go through the transformation steps, not just the first 1000. This is a really in-depth topic, and videos like the following give you some insight: https://www.youtube.com/watch?v=AIvneMAE50o
The following articles also explain this in detail:
https://bengribaudo.com/blog/2018/02/28/4391/power-query-m-primer-part5-paradigm
https://bengribaudo.com/blog/2019/12/10/4778/power-query-m-primer-part12-tables-table-think-i
https://bengribaudo.com/blog/2019/12/20/4805/power-query-m-primer-part13-tables-table-think-ii
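To make the difference concrete, here is a rough sketch of what the server effectively sees in each case (table and procedure names are hypothetical, and this is an illustration of the streaming behavior described above, not something captured from a trace):

    -- With a foldable source, the preview can be pushed down to the server,
    -- so only ~1000 rows ever travel over the wire:
    SELECT TOP (1000) * FROM dbo.SalesData;

    -- A stored procedure cannot be folded, so it runs on the server either way;
    -- the preview only streams the first ~1000 result rows through the applied
    -- steps, while 'Close and Apply' streams every row.
    EXEC dbo.usp_GetSalesData;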
We are having an issue where we need event relations for groups of people, and we are running into problems with one very large group that has almost 400 total event relations in the one week we are testing on. When we try to grab this large group's event relations, the query takes forever and can time out. However, if you try again right after a timeout, it completes in a couple of seconds. I was thinking Salesforce was caching the SOQL query/information, which is why it acts so quickly the second time. I tried to trick it into having this query cached and ready by running a batch job regularly that queries every member's event relations, so that the timeout issue would stop when they accessed our app.
However, this does not appear to work. Even though the batch is running correctly and querying all these event relations, if you go to the app after a while without using it, it will still time out or take very long the first time, and then be very quick after that.
Is there a way to keep this cached so it runs quickly when a user tries to see all the event relations of a large group of people? With the Developer Console we saw that the event relation query was the huge time sink in the code and the real issue. I have been looking into Salesforce's Platform Cache. Would storing this data there provide the solution I am looking for?
You should look into making your query selective by using indexed fields in the WHERE clause, and custom indexes if necessary.
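For example (a sketch only, not tested against your org; the Id below is a placeholder), restricting EventRelation by the EventId lookup, which Salesforce indexes by default, and a date range keeps the filter selective enough for the query optimizer to use an index rather than scanning the whole table:

    SELECT Id, EventId, RelationId, Status
    FROM EventRelation
    WHERE EventId IN ('00UXXXXXXXXXXXXXXX')
      AND CreatedDate = THIS_WEEK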
How does using sp_updatestats boost the system? I am using a Medium instance of Azure for one of my projects and facing regular timeout issues while fetching data. But when I run the same query directly in the DB, it works incredibly fast and the data loads in a flash.
Can anybody explain this contradiction to me?
Am I doing something wrong in my code?
Can this be because of unclosed connections?
I am using PetaPoco, EF, and ADO.NET as DB access techniques.
PLEASE HELP, THANKS IN ADVANCE #SickOfTimeOuts
The sp_updatestats procedure updates the statistics information for all statistics objects in the database. If you're loading lots of data on a regular basis, then it's perfectly normal to experience these slowdowns and to fix them by updating the statistics afterwards. But if not, then your problem most likely has nothing to do with stale statistics, but rather with parameter sniffing.
Identify the statements that run slowly, and add OPTION (RECOMPILE) to them.
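For example (a sketch with assumed table and parameter names), appending the hint makes the optimizer compile a fresh plan for the actual parameter values on every execution, instead of reusing a plan that was sniffed for an unrepresentative value:

    SELECT o.OrderId, o.CustomerId, o.Total
    FROM dbo.Orders AS o                 -- hypothetical table
    WHERE o.CustomerId = @CustomerId     -- the sniffed parameter
    OPTION (RECOMPILE);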
I don't know if you're doing anything wrong in your code, probably not, but I didn't see any of it :)
Sorry for the long introduction, but before I ask my question, I think some background will help in understanding our problem.
We are using SQL Server 2008 as the backend for our web services, and from time to time it takes too long to respond to requests that are supposed to run really fast, like more than 20 seconds for a select that queries a table with only 22 rows. We went through many potential causes, from indexes to stored procedures, triggers, etc., and tried to optimize whatever we could, like removing indexes that are rarely read but frequently written, or adding NOLOCK to our select queries to reduce table locking (we are OK with dirty reads).
We also had our DBAs review the server and benchmark the components for bottlenecks in the CPU, memory, or disk subsystem, and found that hardware-wise we are OK as well. And since the spikes occur only occasionally, it is really hard to reproduce the problem in production or development, because most of the time when we rerun the same query it yields the short response times we expect, not the long ones experienced earlier.
Having said that, I have been suspicious about I/O all along, even though it does not seem to be a bottleneck. I believe I was finally able to reproduce the problem after running an index fragmentation report for a specific table on the server, which immediately caused spikes not only in requests against that table but also in requests that query other tables. Since the DB and the server are shared with other applications, and long-running queries against them are a common scenario for us, my suspicion of an occasional I/O bottleneck is, I believe, becoming a fact.
Therefore I want to find a way to prioritize the requests coming from the web services, so that they are processed even while other resource-intensive queries are running. I have been looking for this kind of prioritization since the very beginning of the troubleshooting process, and found that SQL Server 2008 has a feature called Resource Governor that allows prioritization of requests.
However, since I am not an expert on Resource Governor, nor a DBA, I would like to hear from people who have used it, and in particular whether I can prioritize I/O for a specific login or a specific stored procedure. (For example, if an I/O-intensive process is running when we receive a web service request, can SQL Server stop, or slow down, the I/O activity of that process and give priority to the request we just received?)
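For reference, this is the kind of setup I have been reading about; a minimal sketch with names I made up, not something we have tested. One caveat I ran across: in SQL Server 2008, Resource Governor limits CPU and memory only; direct I/O limits were only added in later versions (SQL Server 2014).

    -- Pool and workload group for the web service sessions.
    CREATE RESOURCE POOL WebServicePool
        WITH (MIN_CPU_PERCENT = 50, MAX_CPU_PERCENT = 100);
    CREATE WORKLOAD GROUP WebServiceGroup USING WebServicePool;
    GO

    -- Classifier (created in master) routes sessions by login name.
    CREATE FUNCTION dbo.fnClassifier() RETURNS SYSNAME
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @grp SYSNAME = N'default';
        IF SUSER_SNAME() = N'web_service_login'   -- hypothetical login
            SET @grp = N'WebServiceGroup';
        RETURN @grp;
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;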
Thank you for anyone that spends time on reading or helping out in advance.
Some Hardware Details:
CPU: 2x Quad Core AMD Opteron 8354
Memory: 64GB
Disk Subsystem: Compaq EVA8100 series (I am not sure, but it should be RAID 0+1 across 8 HP HSV210 SCSI drives)
PS: I am almost 100 percent sure that the application servers are not causing the error, and there is no bottleneck we can identify there.
Update 1:
I'll try to answer the questions gbn asked below as best I can. Please let me know if you are looking for something else.
1) What kind of index and statistics maintenance do you have please?
We have a weekly job that defragments the indexes every Friday. In addition, Auto Create Statistics and Auto Update Statistics are enabled. And the spikes occur at times other than the fragmentation job as well.
2) What kind of write data volumes do you have?
Hard to answer. In addition to our web services, there is a front-end application that accesses the same database, and to my knowledge resource-intensive queries periodically need to be run against it; however, I don't know how to measure, let's say, the weekly or daily write volume to the DB.
3) Have you profiled Recompilation and statistics update events?
Sorry, I was not able to figure this one out. I don't understand what you are asking with this question. Can you provide more information, if possible?
My first thought is that statistics are being updated because the data-change threshold is reached, causing execution plans to be rebuilt.
1) What kind of index and statistics maintenance do you have, please? Note: index maintenance updates index stats, not column stats; you may need separate stats updates.
2) What kind of write data volumes do you have?
3) Have you profiled recompilation and statistics update events?
In response to question 3) in your update to the original question, take a look at the following reference on SQLServerPedia. It explains what query recompiles are and goes on to explain how you can monitor for these events. What I believe gbn is asking (feel free to correct me, sir :-) ) is whether you are seeing recompile events prior to the slow execution of the troublesome query. You can look for these events using SQL Server Profiler.
Reasons for Recompiling a Query Execution Plan
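As a quick first check (the table name below is hypothetical), you can also see when the statistics on the suspect table were last updated; if those timestamps move right before a slow spell, that supports the stats-update/recompile theory:

    SELECT name,
           STATS_DATE(object_id, stats_id) AS last_updated
    FROM sys.stats
    WHERE object_id = OBJECT_ID(N'dbo.YourSmallTable');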
I'm having trouble finding the right wording, but is it possible to send a SQL query to a MS SQL server and retrieve the results asynchronously?
I'd like to submit the query from a web request, but I'd like the web process to terminate while the SQL server continues processing the query and dumps the results into a temp table that I can retrieve later.
Or is there some common modifier I can append to the query to make it process the results in the background (like "&" in bash)?
More info
I manage a site that allows trusted users to run arbitrary select queries on very large data sets. I'm currently using a Java daemon to examine a "jobs" table and run the queued queries; I was just hopeful that there might be a more native solution.
Based on your clarification, I think you might consider a derived OLAP database designed for those types of queries, since they seem to be strategic to the business.
This really depends on how you are communicating with the DB. With ADO.NET you can make a command execute asynchronously. If you want to do this outside the scope of a library built for it, you could insert a record into a jobs table and then have SQL Agent poll the table and run your work as a stored procedure or something similar.
In all likelihood, though, your web request is received by ASP.NET, so you could use the ADO.NET classes.
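A sketch of the jobs-table pattern (all object names here are made up): the web request only inserts a row and returns, while a scheduled Agent job step picks up pending work:

    -- In the web request: enqueue and return immediately.
    -- (@QueryText comes from the web layer.)
    INSERT INTO dbo.QueryJobs (JobId, QueryText, Status, CreatedAt)
    VALUES (NEWID(), @QueryText, 'Pending', SYSDATETIME());

    -- In the Agent job step, run on a schedule: pick up one pending job.
    DECLARE @id UNIQUEIDENTIFIER, @sql NVARCHAR(MAX);
    SELECT TOP (1) @id = JobId, @sql = QueryText
    FROM dbo.QueryJobs
    WHERE Status = 'Pending'
    ORDER BY CreatedAt;

    IF @id IS NOT NULL
    BEGIN
        UPDATE dbo.QueryJobs SET Status = 'Running' WHERE JobId = @id;
        EXEC sp_executesql @sql;   -- results would go to a per-job results table
        UPDATE dbo.QueryJobs SET Status = 'Done' WHERE JobId = @id;
    END;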
See this question
Start stored procedures sequentially or in parallel
In effect, you would have the web page start a job. The job would execute asynchronously.
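Concretely, kicking off a pre-created Agent job from the web request's connection returns immediately ('LongQueryJob' is a hypothetical job name):

    EXEC msdb.dbo.sp_start_job @job_name = N'LongQueryJob';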
Since HTTP is connectionless, the only way to associate the retrieval with the query would be with sessions. Then you'd have all these result sets waiting around for someone to claim them, and no way to know whether the connection (which doesn't exist) has been broken.
In a web page, it's pretty much use-it-or-lose-it.
Some of the other answers might work with a lot of effort, but I don't get the sense that you're looking for an edge-case, high-tech option.
Executing a stored procedure and then asynchronously retrieving the result is a complicated topic. It's not really for the faint of heart, and my first recommendation would be to re-examine your design and be certain that you do in fact need to process your request asynchronously in the data tier.
Depending on what precisely you are doing, you should look at two technologies. The first is SQL Service Broker, which basically allows you to queue requests and receive responses asynchronously. It was introduced in SQL Server 2005, and it sounds like it may be the best bet given the way you phrased your question.
Take a look at the tutorial for same database service broker conversations on MSDN: http://msdn.microsoft.com/en-us/library/bb839495(SQL.90).aspx
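A heavily condensed sketch of the same-database pattern from that tutorial (the queue and service names here are my own; error handling and the activated reader procedure are omitted):

    -- One-time setup: queues and services bound to the default contract.
    CREATE QUEUE RequestQueue;
    CREATE QUEUE ReplyQueue;
    CREATE SERVICE RequestService ON QUEUE RequestQueue ([DEFAULT]);
    CREATE SERVICE ReplyService ON QUEUE ReplyQueue ([DEFAULT]);
    GO

    -- Per web request: enqueue the work and return at once.
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE ReplyService
        TO SERVICE 'RequestService'
        ON CONTRACT [DEFAULT]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h (N'...query text or job payload...');

    -- A background reader RECEIVEs from RequestQueue, runs the work, and
    -- stores the results keyed by the conversation handle for later pickup.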
For longer-running or larger processing tasks, I'd look at something like BizTalk or Windows Workflow. These frameworks (they're closely related; they came from the same team at MS) allow you to start an asynchronous workflow that may not return for hours, days, weeks, or even months.