Snowflake Query Profile Interface - snowflake-cloud-data-platform

I have been playing around with the Snowflake Query Profile interface, but I am missing information about parallelism in query execution. Using a Large or XLarge warehouse, it is still only using two servers to execute the query. With an XLarge warehouse, a big sort could be divided into 16 parallel execution threads to fully exploit my warehouse and credits. Or?
Example: a Medium warehouse => 4 servers
Executing the following query:
select
    sum(o_totalprice) "order total",
    count(*) "number of orders",
    c.c_name "customer"
from
    orders o
    inner join customer c on c.c_custkey = o.o_custkey
where
    c.c_nationkey in (2, 7, 22)
group by
    c.c_name
This gives the query plan shown in the Query Profile screenshot (image omitted). In the execution details I cannot see anything about the participating servers.
Best Regards
Jan Isaksson

In an ideal situation, Snowflake will try to split your query and let every core of the warehouse process a piece of it. For example, if you have a 2XL warehouse, you have 32 x 8 = 256 cores (each node in a warehouse has 8 cores). So when a query is submitted, in an ideal situation Snowflake will try to divide it into 256 parts and have each core process one part.
In reality, it may not be possible to parallelize to that extent, either because the query itself cannot be broken down that way (for example, if you are calculating a median) or because the data itself prevents it (for example, if you are running a window function over a skewed column).
Hence, it is not always true that moving to a bigger warehouse will improve your query performance linearly.

I tested your query starting with the smallest compute size and then moving up. The linear scaling (more compute resources resulting in improved performance) stops around Medium, at which point there is no added performance benefit. This indicates your query is not big enough to take advantage of more compute resources, and size S is good enough, especially considering cost optimization.
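For reference, a minimal sketch of how such a size comparison can be scripted, assuming a hypothetical warehouse named test_wh and access to the SNOWFLAKE.ACCOUNT_USAGE share:

-- Avoid the result cache so each rerun actually executes.
alter session set use_cached_result = false;

-- Resize between runs (repeat for SMALL, MEDIUM, LARGE, ...), rerunning the query each time.
alter warehouse test_wh set warehouse_size = 'XSMALL';

-- Afterwards, compare elapsed times by warehouse size (ACCOUNT_USAGE can lag by up to ~45 minutes).
select warehouse_size,
       total_elapsed_time / 1000 as elapsed_seconds,
       query_text
from snowflake.account_usage.query_history
where query_text ilike '%o_totalprice%'
order by start_time desc;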

Related

Query profiling - Very high Synchronization rate?

While doing some query optimization work, one of our queries has a very high "synchronization" time in a table scan step (up to 98% of the total query execution time). The query joins three tables: one fact table with 100B+ rows and two small dimension tables. The virtual warehouse is size S.
The Snowflake docs are very limited in explaining what "synchronization" is: "various synchronization activities between participating processes." (Analyzing the Query History — Snowflake Documentation)
Does anyone know what this synchronization component is and how to improve it?

Snowflake multi-cluster warehouse performance vs single warehouse with large warehouse size

I am very new to Snowflake, and while working with it I ran into a conflict between the two options below.
Single warehouse with size X-Large (16 credits / hour)
Multi-cluster (with max clusters=2 & min clusters=2) with size Large (8 credits / hour)
Considering the above two options (see the DDL sketch below):
Is there any advantage I can get by choosing the 2nd option in terms of performance?
Note: I know the advantages of multi-cluster over a single warehouse. Please share your answer for this specific scenario (when min = max).
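For concreteness, the two options could be provisioned roughly like this (a sketch with hypothetical warehouse names; multi-cluster warehouses require Enterprise edition or higher):

-- Option 1: a single X-Large warehouse (16 credits/hour while running)
create warehouse xl_single
    warehouse_size = 'XLARGE';

-- Option 2: a Large warehouse pinned to exactly two clusters (2 x 8 credits/hour while running)
create warehouse l_multi
    warehouse_size = 'LARGE'
    min_cluster_count = 2
    max_cluster_count = 2;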
So here are the things that happen when running a query. Below I am going to use 'single' to mean the single X-Large instance and 'multi' to mean the multi-cluster of Large instances; note that any given query only ever runs on one cluster.
Reading/writing IO from your storage layer:
Here the single has twice the IO available to any one query compared to the multi (a query only runs on one of its two clusters), thus if your query is IO-saturated the single is the better choice.
Parallel steps:
If you have a GROUP BY over a high-cardinality column, both the single and the multi should be equally good. If you have low cardinality but billions of rows, the smaller instance might give better results, as those complex steps cannot be spread over the larger cluster size of the single instance. But this is most likely lost in the wash if you have many concurrent queries.
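Purely as an illustration of those two shapes, using the TPC-H sample tables that appear earlier on this page:

-- High-cardinality grouping key: millions of distinct customers, so the aggregation
-- splits cleanly across all the nodes available to the query.
select o_custkey, sum(o_totalprice)
from orders
group by o_custkey;

-- Low-cardinality grouping key: a handful of distinct statuses over billions of rows,
-- so the final aggregation funnels into a few groups and gains less from a wider cluster.
select o_orderstatus, sum(o_totalprice)
from orders
group by o_orderstatus;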
Many queries / noisy neighbour:
If you have hundreds of queries hitting in waves, the larger single instance is worse at starting those queries, as it simply has fewer concurrent task slots at once, and a single very large query that flushes caches or dominates the cluster means you stop handling normal/small queries. Whereas with the multi-cluster, if one "super heavy" query comes in, you only stall half of your normal queries.
Other thoughts
It also really depends on your load patterns. At my last job, we had an auto-scaling cluster of SMALL instances used to answer our read queries from dashboards and reports, and we allowed it to run a little over-provisioned so things were snappy.
Whereas our data loading ran on a second auto-scaling cluster of MEDIUM instances, which we overloaded on purpose, as we were trying to load data as fast and cheaply as possible. In non-peak times we programmatically reduced the auto-scaling MAX to almost starve the load, but we would do some expensive reprocessing on a LARGE instance with those saved credits "in the middle of the night". Our loading tasks also had the ability to spin up exclusive LARGE+ warehouses to do one-off rebuilds; as this was all IO-bound work, the smaller the "outage" window the better, and the IO scales linearly, so the total cost was the same.
Which is all to say, "what is best" really depends on what you are doing, your budget, and the trade-offs you are prepared to make. The golden thing about Snowflake is that it is not like a classic DB where you have to pick the size and get it right: pick one, watch it, and if it's struggling, change it on the fly. We did this a number of times when a release of our code, or of Snowflake itself, changed the performance of some critical SQL; we would jump in and double or triple the instance count or size to get past the situation while trying to fix or work around the issue, or waiting for Snowflake to roll a release back. For a couple of hours, spending more credits is generally not budget-breaking. This flexibility also means you can just experiment: "what happens if we try a 4x smaller instance?" "Oh, nothing... look, we just saved heaps of money."
If you have min=max=2 then you permanently have 2 clusters running (as long as the warehouse is not suspended). If you configure your multi-cluster warehouse like this you lose a lot of its advantages, but for your specific use case it might make sense, I suppose.
Based on your comment, here is my answer:
In both scenarios, you will have the same total resources to process your queries. The important difference is in running single heavy queries. As you may know, a single query cannot span multiple clusters (yet), so when you run a query in your multi-cluster warehouse, it will be processed on one of the Large clusters (and use at most 8 nodes).
If you run the same query on your single XL warehouse, it can be executed by up to 16 nodes. So if you will run heavy queries that require more memory and CPU, using a single XL warehouse would be better for you.
About concurrency, there is a parameter named "MAX_CONCURRENCY_LEVEL". Its default value is 8, and it limits the maximum number of concurrent executions per warehouse cluster. If you do not change it, your single XL warehouse will execute a maximum of 8 queries concurrently, while your multi-cluster warehouse can execute 16 queries concurrently.
https://docs.snowflake.com/en/sql-reference/parameters.html#max-concurrency-level
You may increase this parameter to get the same concurrency on both the single XL and the multi-cluster L warehouse. But in that case, you should be careful when you run heavy and light queries together, because one query may use most of the resources of the warehouse, leaving your light queries with fewer resources and longer run times. So I would recommend a multi-cluster warehouse if you will have relatively light, concurrent queries.
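A minimal sketch of adjusting and checking that parameter, assuming a hypothetical warehouse named xl_wh:

-- Raise the per-warehouse concurrency limit (default is 8).
alter warehouse xl_wh set max_concurrency_level = 16;

-- Verify the current setting.
show parameters like 'MAX_CONCURRENCY_LEVEL' in warehouse xl_wh;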

WITH in Snowflake/SQL: Comparison to Temporary Tables

Other than coder preference, is there any reason to use a series of WITH (SELECT ...) statements/clauses in Snowflake rather than creating several temporary tables? Are there performance or other issues that would lead one to prefer one method over the other?
I have some real-world experience with this. We followed a hybrid approach in our ETLs.
CTEs keep results in memory, which saves I/O cost while processing huge volumes of data. Based on our experience processing an ETL involving 10 billion rows, the CTE approach took 2 hours while the table approach took 4.5 hours. But we should choose our weapon carefully; CTEs will not perform well in all scenarios. In the scenarios below, you should do some testing before using a CTE:
When your ETL query has more than 7-8 steps
Query is a combination of large and small tables
Volume of data is low and there is an opportunity to reuse the transformation logic
Please also remember that CTEs cannot take advantage of metadata, so row count estimations for joins and scans will be inaccurate and the optimizer will not be able to optimize the query path.
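To make the comparison concrete, here is a minimal sketch of the two styles with hypothetical TPC-H-like names; the CTE version is a single statement, while the temporary tables persist for the session and can be reused by later steps:

-- Style 1: chained CTEs, evaluated within a single statement.
with filtered as (
    select o_custkey, o_totalprice
    from orders
    where o_orderdate >= '2020-01-01'
), aggregated as (
    select o_custkey, sum(o_totalprice) as total
    from filtered
    group by o_custkey
)
select * from aggregated;

-- Style 2: temporary tables, materialized and reusable for the rest of the session.
create temporary table filtered as
    select o_custkey, o_totalprice
    from orders
    where o_orderdate >= '2020-01-01';

create temporary table aggregated as
    select o_custkey, sum(o_totalprice) as total
    from filtered
    group by o_custkey;

select * from aggregated;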

Does Snowflake execute a "single" single query on multiple nodes or a single node in a Cluster?

When a "single" query is executed on a Snowflake cluster, will it use as many of the nodes as are available in parallel to execute the query, or just a single node in the cluster?
I am specifically looking for a scaling strategy to speed up the following query:
INSERT INTO x SELECT * FROM y
Most of the time, Snowflake will try to run the query in parallel and use all nodes in the cluster, but in rare cases it may run on only some of the nodes. For example, if the data source is very small, if there is only one file to ingest with the COPY command, or if you are calling a JavaScript stored procedure to process data.
Here is a simple demonstration. The following query will run on only 1 node, no matter how many nodes the cluster has:
create or replace table dummy_test (id varchar) as
select randstr(2000, random()) from table(generator(rowcount=>500000));
This is because the data source is a generator (which cannot be read in parallel). You may try running it on various-sized warehouses and you will see that it completes in around 55 seconds (assuming there is no other workload on the warehouse).
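By way of contrast (my own hedged sketch, not part of the original demonstration), a statement that reads an existing table spread over many micro-partitions can be scanned in parallel, so it should benefit from a larger warehouse:

-- Reads the table created above; the scan is split across the warehouse's nodes.
create or replace table dummy_copy as
select * from dummy_test;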
As Simeon and Mike mentioned, a query is executed on only one cluster of a multi-cluster warehouse. Multi-cluster warehouses are for increasing concurrency.
In the context of a multi-cluster warehouse, just a single cluster.
So large queries are better run on a larger-sized cluster, and large volumes of queries run best against multi-cluster warehouses of correctly sized clusters (from an average wait-time perspective), but of course this costs more. However, if you had a fixed pool of queries, the total cost should be the same running them on a wider (multi-cluster) warehouse, just with less wall-clock time.
This is a good read also on the topic of scaling

How can I tell *IF* AND *WHAT* partitions are being accessed - sql server 2008 db engine

Setup
Cost Threshold for Parallelism: 5
Max Degree of Parallelism: 4
Number of Processors: 8
SQL Server 2008 10.0.2.2757
I have a query with many joins, many records.
The design is a star (a central fact table with foreign keys to the reference tables).
The central table is partitioned on the relevant date column.
The partition scheme is split by day.
The data is very well split across the partition scheme, as judged by comparing the sizes of the files in the filegroups assigned to the partition scheme.
The queries involved have the predicate set over the partitioned column, such as (cs.dte >= @min_date and cs.dte < @max_date).
The values of the date parameters are a day apart at midnight, so 2010-02-01, 2010-02-02.
The estimated query plan shows no parallelism
a) This question is in regard to the SQL Server 2008 Database Engine. When a query in the OLTP engine is running, I would like to have the sort of insight one gets when profiling an SSAS query using the Progress End event, where one sees something like "Done reading PartitionXYZ".
b) If the estimated query plan or the actual query plan shows no parallel processing, does that mean that all partitions will be / were checked / read? * What I was trying to say here was: just because I don't see parallelism in a query plan, that doesn't guarantee the query isn't hitting multiple partitions - right? Or is there a solid relationship between parallelism and the number of partitions accessed?
c) suggestions? Is there more information that I need to provide?
d) How can I tell if a query is processing in parallel without looking at the actual query plan? * I'm really only interested in this if it is helpful in pinning down which partitions are being used.
Added Nov 10
Try this:
Create queries that should hit 1, 3, and all of your partitions
Open an SSMS query window, and run SET SHOWPLAN_XML ON
Run each query one by one in that window
Each run will kick out a chunk of XML
Compare these XML results (I use a text diff tool, “CompareIt”, but any similar tool would do)
You should see that the execution plans are significantly different. In my "3" and "All" queries, there's a chunk of text tagged as "ConstantScan" that has an entry for (respectively) 3 and all partitions in the table, and that section is not present for the "1 partition" query. I use this to infer that yes indeed, SQL is doing what it says it will do, to wit: only read as much of the table as it believes it needs to in order to resolve the query.
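As a worked sketch of those steps (hypothetical table name modeled on the question's cs alias), the comparison might look like this in SSMS:

-- Show estimated plans as XML instead of executing the statements.
SET SHOWPLAN_XML ON;
GO

-- Should touch a single partition (one day).
SELECT COUNT(*) FROM dbo.central_fact cs
WHERE cs.dte >= '2010-02-01' AND cs.dte < '2010-02-02';
GO

-- Should touch three partitions (three days).
SELECT COUNT(*) FROM dbo.central_fact cs
WHERE cs.dte >= '2010-02-01' AND cs.dte < '2010-02-04';
GO

SET SHOWPLAN_XML OFF;
GO
-- Diff the two XML plans; the partition-related sections should differ.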
Got a pretty good answer here: http://www.sqlservercentral.com/Forums/Topic1064946-391-1.aspx#bm1065048
a) I am not aware of any way to determine how a query has progressed while the query is still running. Maybe something finicky with the latching and locking system views, but I doubt it. (I am, alas, not familiar enough with SSAS to draw parallels between the two.)
b) SQL will probably use parallelism when working with multiple partitions within a single table, in which case you will see parallel processing "tokens" in your query plan. However, if for whatever reason parallelism is not invoked yet multiple partitions must be read, they will be read without the use of parallelism.
d) Another thing that perhaps cannot be done. Under very controlled circumstances, you could use System Monitor (Perfmon) to track CPU usage or perhaps disk reads during execution of the query. This won't help if the server is performing other work, or if the data is resident in memory (the buffer cache), and so it may be of limited use.
c) What is it you are actually trying to figure out? Which partitions (if any) are being accessed by users over a period of time? Is SQL generating a "smart" query plan? Without details of the data, structure, and query, it's hard to come up with advice.
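And one more hedged sketch, complementing the file-size comparison mentioned in the question: the catalog views can report row counts per partition directly (hypothetical table name; boundary alignment depends on whether the partition function is RANGE LEFT or RANGE RIGHT):

-- Row counts per partition of the (hypothetical) partitioned fact table.
SELECT p.partition_number,
       p.rows,
       prv.value AS approx_boundary   -- alignment depends on RANGE LEFT/RIGHT
FROM sys.partitions AS p
JOIN sys.indexes AS i
  ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.partition_schemes AS ps
  ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions AS pf
  ON pf.function_id = ps.function_id
LEFT JOIN sys.partition_range_values AS prv
  ON prv.function_id = pf.function_id AND prv.boundary_id = p.partition_number
WHERE p.object_id = OBJECT_ID('dbo.central_fact')
  AND i.index_id IN (0, 1)            -- heap or clustered index only
ORDER BY p.partition_number;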
