I'm using the Snowsight query editor.
I have a worksheet as a tile on a dashboard. Is it possible to execute all queries in the worksheet without clicking into the worksheet and highlighting all of the queries?
Unfortunately, the only option is to open the tile and highlight all statements to run them manually. At this time, the dashboard-level run button only runs one query per tile - whichever query was run last.
(Confirmed with Snowsight PM)
Snowflake caches the results of already executed queries, so subsequent executions of the same query finish much faster. As far as I know, this only works if the query text is exactly the same.
In my app (an interactive dashboard that uses Snowpark-Python 1.0 to issue Snowflake queries), this caching does not seem to work. Each time the (same Snowpark) query is fired, Snowflake runs it again:
Depending on whether the warehouse cache is active or not (blue vs. green bar), the execution time ranges from several hundred milliseconds up to 2 s. The result is not read from the cache.
I think the cause is that the generated SQL contains random components (column and table name suffixes) which are different for each query:
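For illustration (table and column names here are made up, and the exact alias format varies), two logically identical Snowpark queries can compile to SQL that differs only in generated alias suffixes, which is enough to miss the result cache because the cache requires the query text to match exactly:

```sql
-- First execution: Snowpark generates random alias suffixes
SELECT "NAME" AS "NAME_a1b2", "AMOUNT" AS "AMOUNT_a1b2"
FROM "SALES" AS SNOWPARK_TEMP_x9y8;

-- Second execution of the same DataFrame: different suffixes,
-- so the query text differs and the result cache is not reused
SELECT "NAME" AS "NAME_c3d4", "AMOUNT" AS "AMOUNT_c3d4"
FROM "SALES" AS SNOWPARK_TEMP_z7w6;
```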
How can I make use of the snowflake cache using Snowpark-generated queries?
I have SSRS report that has a single data source - SSAS Tabular cube.
The report has 15 parameters that get their values from queries (datasets).
When a user opens the report, each parameter is populated, but each query execution is serialized (confirmed by Profiler / Execution Log). Each execution takes up to 70ms. As a result, it takes 1,000-1,200 ms just to open a report.
Is there a way to populate the report parameters in parallel?
Note that
when the report is running (the user clicks "View Report"), all chart datasets are executed in parallel, so SSAS/SSRS is definitely able to execute queries in parallel.
"Use single transaction when processing the queries" checkbox is not checked for the data source.
SSRS/SSAS versions: 2016, latest SP/CU, Ent & Dev
UPDATE:
If I change the data source to SQL Server, the issue persists: SSRS is not executing the queries (for report parameters) in parallel.
Could you double-check whether the datasets were executed in parallel or not? By default, datasets in a report are executed in parallel regardless of whether they come from a single data source or multiple data sources. In your scenario, since the datasets use the same data source and the "Use single transaction when processing the queries" option is not checked for the data source, the datasets should execute in parallel.
I'm using SQL Server and SSRS 2012. Intermittently, when running reports on live environments, changing a single parameter can cause the entire report to lock up, show the loading icon, and not allow other parameter changes for minutes at a time.
I found a similar ticket on Microsoft Connect that said it was fixed in a cumulative update for 2008 R2, but I'm experiencing it in SSRS 2012. I'm not sure what to do. Because it's intermittent, it's difficult to replicate, and I haven't been able to find any solutions for this online.
EDIT: This is only when changing the parameter, the loading occurs before I get the chance to hit 'View Report.' It can occur with several of the parameters, and most of them have dependencies. It can be on the parent or the child parameter.
I have also checked the execution log - the time taken to retrieve and process the parameters from shared data sets is much less than the time the 'loading' box stays on the screen. Max data retrieval time is 20secs total, loading box lasts for minutes at a time.
Do you mean when you re-run the report after changing a parameter, or just when changing the parameter without hitting View Report? If you are just changing the parameter, is the parameter used to refresh other related parameters? Basically, we need to determine whether the issue is with a query that's executing.
If it is, it could be a parameter sniffing issue, where the query optimizer has used previous parameter values to build a query plan that is not suitable. You can test this quickly by adding OPTION (RECOMPILE) to the end of the affected dataset query (assuming it's just a SQL script).
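As a sketch (the table, columns, and parameter names are hypothetical, not from the report in question), the hint goes at the end of the dataset's SELECT:

```sql
-- OPTION (RECOMPILE) forces a fresh plan for each execution
-- instead of reusing one compiled for earlier parameter values
SELECT OrderId, OrderDate, Amount
FROM   dbo.Orders
WHERE  OrderDate BETWEEN @StartDate AND @EndDate
OPTION (RECOMPILE);
```

If the slowdown disappears with the hint in place, parameter sniffing is the likely culprit, and you can then decide between keeping the hint or tuning the query.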
I'm trying to evaluate the relative performance of using a WHERE... IN clause in my SP vs UNIONs.
I've tried looking at the execution time and using SET STATISTICS TIME ON but everything just comes back as taking 0ms all the time.
So I'm trying to use SQL Server Profiler. I selected the TSQL_SPs template but even before I run the SP the trace is filling up with garbage. How do I tell it to only capture relevant data for a specific SP?
In SQL Profiler, when you create a new trace, you can change the trace properties. Click the Events Selection tab in the trace properties and go to Column Filters.
Then under TextData, click Like, add some unique word from the SP you need, and run the trace. This way the trace will only capture data for your SP.
You can adjust the column filters according to your needs.
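The same filter can also be applied to a server-side trace from T-SQL; a sketch (the trace id and procedure name are placeholders), assuming the documented sp_trace_setfilter arguments where column 1 is TextData and comparison operator 6 is LIKE:

```sql
-- Restrict an existing server-side trace (id 2 here, hypothetical)
-- to statements whose text contains the procedure name
EXEC sp_trace_setfilter
     @traceid             = 2,
     @columnid            = 1,              -- TextData
     @logical_operator    = 0,              -- AND
     @comparison_operator = 6,              -- LIKE
     @value               = N'%usp_MyProc%';
```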
I have a reporting solution with several reports. Up to now, I have been able to add a dataset based on a SPROC with no problems. However, when I try to add the latest dataset and use a SPROC for its query type, clicking Refresh Fields gives me the following error:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I have tested the database connection in Data Source properties>Edit>Test Connection, and it's working fine.
I have increased the timeout to 100 in the following areas:
The connection string for the data source, which is Connect Timeout=100.
Tools > Options > Database Tools > Query and View Designers: "Cancel long running query" is set to 100.
Tools > Options > Database Tools > Table and Database Designers: checked 'Override connection string time-out value for table designer updates'; "Transaction time-out after" is set to 100.
The SPROC runs fine in the SQL database. It takes about 55 seconds.
Any other ideas?
Thanks.
UPDATE: I now can't add any dataset with a SPROC, even though the SPROCs are all working fine in SQL!
If you are using Report Builder, you can increase timeout also in your DataSet.
I also faced the same problem when adding a newly added column from a stored procedure.
I overcame the issue the following way:
Alter the stored procedure, commenting out every query except the final SELECT.
Once the new column has been picked up, un-comment those queries in the SP.
The thing to remember with your report is that when it is run, it will attempt to run ALL the datasets just to make sure they are runnable and the data they request can be returned. So by running each proc separately you are in fact not duplicating what SSRS is trying to do... and to be honest, don't bother.
What you could try is running sp_who while the report is running, or just manually going through the procedures to see which tables they have in common. Since your proc takes about 55 seconds to return its dataset, I'm going to assume it's doing some heavy lifting. Without the queries, nobody will be able to tell what the exact problem is.
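A quick way to spot blocking while the report runs, sketched against standard system views (no report-specific objects assumed):

```sql
-- Sessions that are currently blocked, and who is blocking them
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time
FROM   sys.dm_exec_requests
WHERE  blocking_session_id <> 0;

-- Or the classic overview of all sessions
EXEC sp_who;
```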
I suggest trying NOLOCK to see if that resolves your issue. If it does, then your procs are fighting for data and blocking each other... possibly in an endless loop. Using NOLOCK is NOT a fix. Read what it does and judge for yourself, however.
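As a diagnostic only (the table name here is hypothetical), the hint is applied per table inside the procedure's queries:

```sql
-- Reads uncommitted data and can return dirty or inconsistent rows.
-- Use only to test whether blocking is the cause, not as a fix.
SELECT OrderId, Amount
FROM   dbo.Orders WITH (NOLOCK);
```

If the timeout disappears with the hint in place, the procedures are blocking each other, and the real fix is to address the locking (indexing, transaction scope, or isolation level) rather than keeping NOLOCK.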
My solution was to go to the Dataset Properties for the given problem dataset, paste the query in the Query field, click Refresh Fields, and click Ok.