How to move queued queries to a separate Snowflake warehouse?

Snowflake Documentation says:
If the running query load is high or there’s queuing, consider starting a separate warehouse and moving queued queries to that warehouse.
I have the following questions:
How do you move the queued queries to a separate Snowflake warehouse?
Is there a SQL statement to move the queued queries, or do I have to cancel them and run them again on a separate Snowflake warehouse?
Please provide some guidance.

You can't move queued queries to a new warehouse; you would have to cancel them and restart them on the new warehouse.
Alternatively, you can use Snowflake's multi-cluster warehouse feature to allow more parallel execution of queries.
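As a rough sketch of both approaches (the query ID and warehouse names below are placeholders):

```sql
-- Cancel a queued query by its query ID (look it up in QUERY_HISTORY or the UI),
-- then re-issue the statement on another warehouse.
SELECT SYSTEM$CANCEL_QUERY('01a2b3c4-0000-1111-2222-333344445555');  -- placeholder ID

USE WAREHOUSE REPORTING_WH;   -- placeholder warehouse
-- ...re-run the original query here...

-- Or let the existing warehouse scale out so queued queries are picked up by
-- extra clusters (multi-cluster warehouses require Enterprise edition or higher).
ALTER WAREHOUSE COMPUTE_WH SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';
```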

Related

Will there be an impact on the graph database if multiple SUBMIT JOB STATS are carried out simultaneously?

NebulaGraph Version: 3.1.0
Deployment method: distributed / single machine
Does submitting multiple SUBMIT JOB STATS tasks at the same time affect the graph database?
We need to calculate the number of vertices and edges in each graph space.
But if I have multiple graph spaces and run dozens of SUBMIT JOB STATS tasks at the same time, what is NebulaGraph's mechanism for executing job tasks?
Will this affect the NebulaGraph cluster, for example, all resources being spent on the job tasks and causing other operations to freeze?
Will NebulaGraph execute all job tasks sequentially by job ID?
In the same space, jobs are executed serially. If multiple graph spaces each execute SUBMIT JOB STATS, the jobs run independently and in parallel.
Whether running multiple STATS jobs affects cluster performance depends on the cluster's architecture and the capability of each machine. If the cluster stores a very large volume of data and the machines are relatively underpowered, running multiple STATS jobs together is not recommended.
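For reference, a minimal nGQL sketch of running and checking a stats job per space (the space name and job ID are placeholders):

```ngql
-- Jobs in the same space queue serially; jobs in different spaces run in parallel.
USE my_space;            -- placeholder space name
SUBMIT JOB STATS;        -- returns a job id

SHOW JOBS;               -- list jobs and their status in this space
SHOW JOB 42;             -- check a specific job (placeholder id)

SHOW STATS;              -- vertex/edge counts once the job has finished
```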

SQL Server: Local Query Time vs. Network Query Time... and Locks

Querying from a view into a temp table can insert 800K records in < 30 seconds. However, querying from the view to my app across the network takes 6 minutes. Does the server build the dataset and then send it, releasing any locks acquired after the dataset is built? Or are the locks held for that entire 6 minutes?
Does the server build the dataset and then send it, releasing any locks acquired after the dataset is built?
If you're using READ COMMITTED SNAPSHOT or are in SNAPSHOT isolation, then readers take no row or page locks in the first place.
Past that, it depends on whether it's a streaming query plan or not. With a streaming plan, SQL Server may still be reading slowly from the tables as the results are sent across the network, so locks can be held while the data travels.
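As a quick check (the database name is a placeholder), you can see whether read committed snapshot is already on, and enable it if appropriate:

```sql
-- Check the current snapshot settings for the database.
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'MyAppDb';          -- placeholder database name

-- Enable RCSI (needs near-exclusive access; test before doing this in production).
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```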

How to update and sync database tables at exactly the same time?

I need to sync DB tables between a mobile device and a remote DB (upload to the remote DB first, then download to the mobile device); the device may insert/update/delete rows in multiple tables.
The remote DB performs other operations based on the uploaded sync data. When the sync goes on to download data to the mobile device, the remote DB is still performing those previous tasks, and the sync fails. It is something like a contention problem where both the sync and the DB operations want access to the remote database. How can I solve this issue? Is it possible to sync the DB and operate on the same DB at the same time?
I am using a SQL Server 2008 DB and MobiLink sync.
Edit:
Operations I do, in sequence:
1. An iPhone loaded with an application which uses MobiLink to sync data.
2. SYNC means UPLOAD (from device to remote DB) followed by DOWNLOAD (from remote DB to device).
3. The remote DB is the consolidated DB; the device DB is an UltraLite DB.
4. The remote DB has some triggers that fire when certain tables are updated.
5. An UPLOAD from device to remote fires those triggers when the sync upload finishes.
6. The very next moment the UPLOAD finishes, the DOWNLOAD to the device starts.
7. At exactly the same moment, those DB triggers fire.
8. Now a deadlock occurs between the sync DOWNLOAD and the trigger operations (which include UPDATE queries).
9. The sync fails with an error saying it cannot access some tables.
I did a lot of workarounds and Googling, and came up with a simple(?!) solution for the problem
(though the exact problem cannot be solved at this point; I tried my best):
Keep track of all clients who do a sync (a kind of user-details table).
Create a SQL Agent job which contains all the operations to be performed when a user syncs (see the sketch after this list).
Announce a "maintenance period" every day to execute the job's tasks against the saved user/client sync details.
Keeping track of client details on every sync is costly, but much needed!
The remote consolidated DB is "completely updated" only after the maintenance period.
Any approach better than this would be appreciated! All suggestions are welcome!
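A minimal sketch of scheduling such a maintenance job with SQL Server Agent (the job name, step body, and schedule time are placeholders; dbo.usp_ProcessPendingSyncWork is a hypothetical procedure):

```sql
USE msdb;
GO

-- Create the job (all names are placeholders).
EXEC dbo.sp_add_job @job_name = N'NightlySyncMaintenance';

-- One step that runs the deferred sync operations.
EXEC dbo.sp_add_jobstep
    @job_name  = N'NightlySyncMaintenance',
    @step_name = N'Run deferred sync operations',
    @subsystem = N'TSQL',
    @command   = N'EXEC dbo.usp_ProcessPendingSyncWork;';  -- hypothetical proc

-- Run every day at 02:00 (time is encoded as HHMMSS).
EXEC dbo.sp_add_jobschedule
    @job_name          = N'NightlySyncMaintenance',
    @name              = N'Daily 2am',
    @freq_type         = 4,     -- daily
    @freq_interval     = 1,
    @active_start_time = 20000;

-- Register the job on the local server.
EXEC dbo.sp_add_jobserver @job_name = N'NightlySyncMaintenance';
```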
My understanding of your system is the following:
The mobile application sends an UPDATE statement to the SQL Server DB.
There is an UPDATE trigger that updates around 30 tables (= at least 30 UPDATE statements in the trigger + 1 main UPDATE statement).
The UPDATE is executed in a single transaction. This transaction ends only when the trigger completes all of its updates.
The mobile application does not wait for the UPDATE to finish and sends multiple SELECT statements to get data from the database.
These SELECT statements query the same tables the trigger above is updating.
Blocking and deadlocks occur on some query for some user, because the trigger has not completed its updates before the SELECTs run and still holds locks on those tables.
When optimizing, we try to make our processes cheaper for the computer: achieve the same result in fewer iterations, and use fewer resources or resources that are more available/less overloaded.
My suggestions for your design:
Use parameterized SPs. Every time SQL Server receives an ad hoc statement, it creates an execution plan. For 1 UPDATE statement with a trigger, the DB needs at least 31 execution plans. This happens in a busy production environment for every connection, every time the app updates the DB. It is a big waste.
How would SPs help reduce blocking?
Right now you have 1 transaction for 31 queries, where locks are taken against all the tables involved and held until the transaction commits. With SPs you'll have 31 small transactions, and only 1-2 tables will be locked at a time (see the sketch below).
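A minimal sketch of one such small parameterized procedure (table, column, and procedure names are hypothetical); the idea is one short transaction per table instead of one wide trigger transaction:

```sql
-- Hypothetical names; each proc touches one table in its own short
-- auto-committed transaction, so its locks are released quickly.
CREATE PROCEDURE dbo.usp_UpdateOrderStatus
    @OrderId INT,
    @Status  NVARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Orders
    SET    Status = @Status
    WHERE  OrderId = @OrderId;   -- lock held only for this statement
END;
GO

-- Usage: a parameterized call also reuses one cached plan across callers.
EXEC dbo.usp_UpdateOrderStatus @OrderId = 42, @Status = N'Shipped';
```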
Another question I would like to address: how to do asynchronous updates to your database?
There is a feature in SQL Server called Service Broker. It lets you process a message queue (rows in a queue table) automatically: it monitors the queue, takes messages from it, does the processing you specify, and removes the processed messages from the queue.
For example, you save the parameters for your SPs as messages, and Service Broker executes the SPs with those parameters.
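A hedged sketch of the Service Broker plumbing (all object names are placeholders, and the database must have Service Broker enabled):

```sql
-- Placeholder names throughout.
CREATE MESSAGE TYPE UpdateRequest VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT UpdateContract (UpdateRequest SENT BY INITIATOR);

CREATE QUEUE dbo.UpdateQueue;
CREATE SERVICE UpdateService ON QUEUE dbo.UpdateQueue (UpdateContract);
GO

-- Enqueue work: the app (or a trigger) sends the SP parameters as XML.
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE UpdateService
    TO SERVICE 'UpdateService'
    ON CONTRACT UpdateContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
    MESSAGE TYPE UpdateRequest (N'<update proc="usp_UpdateOrderStatus" orderId="42" status="Shipped" />');
GO

-- An activation procedure attached to dbo.UpdateQueue would RECEIVE each
-- message, shred the XML, and EXEC the requested SP asynchronously.
```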

SQL Server replication for 70 databases with transformation in a small time window

We have 70+ SQL Server 2008 databases that need to be copied from an OLTP environment to a separate reporting server. Once the DBs are copied, we will do some partial data transformation: de-normalization, row-level security, etc.
SSRS Reports will be written based on these static denormalized tables and views.
We have a small nightly window for copying and transforming all 70 databases (3 hours).
Currently databases average about 10GB.
Options:
1. Transactional replication:
We would need to create 100+ static denormalized tables on each reporting database.
Doing this for all 70 databases almost reaches our nightly time limit.
As the databases grow we will exceed the time limit. We thought of mixing denormalized tables with views to speed up transformation. But then there would be some dynamic and some static data which is not a solution we can use.
Also with 70 databases using transactional replication we are concerned about bandwidth usage.
2. Snapshot replication:
Copy the entire database each night.
This means we could have a mixture of denormalized tables and views so the data transformation process is quicker.
But the snapshot is a full data copy, so as the DB grows, we will exceed our time limit for completing copy and transformation.
3. Log shipping:
In our nightly window, we could use the log shipping to update the reporting databases, then truncate and repopulate the denormalized tables and use some views.
However, I understand that with log shipping, extra tables and views cannot be added to the subscribing database.
4. Mirroring:
Mirroring is being deprecated, and the mirrored DB is not available for reporting against until failover.
5. SQL Server 2012 AlwaysOn:
We don't have SQL Server 2012 yet. Can this be configured to update once a day instead of in real time?
And can extra tables and views be created on the subscribing database (our reporting databases)?
6. Merge replication:
This is meant to be for combining multiple data sources into one database.
But it looks like it allows for a scheduled update (once per day) and only updates the subscriber DB with the latest changes rather than doing an entire snapshot.
It requires adding a rowversion column to every table, but we could handle this. Also, with this solution, could additional tables be created on the subscriber database without the updates getting out of sync?
The final option is that we use SSIS to select only the data we need from the OLTP databases. I think this option creates more risk, as we would have to handle inserts/updates/deletes to our denormalized tables rather than just dropping and recreating the denormalized tables daily.
Any help on our options would be greatly appreciated.
If I've made any incorrect assumptions, please say.
If it were me, I'd go with transactional replication that runs continuously and have views (possibly indexed) at the subscriber. This has the advantage of not having to wait for the data to come over since it's always coming over.
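For illustration, a sketch of a denormalizing view at the subscriber, made static by indexing it (schema, table, and column names are hypothetical):

```sql
-- Hypothetical schema: an order header and a customer table replicated from
-- the OLTP side. SCHEMABINDING plus a unique clustered index turns the view
-- into an indexed (materialized) view that reports can query directly.
CREATE VIEW dbo.vw_OrderReport
WITH SCHEMABINDING
AS
SELECT o.OrderId,
       o.OrderDate,
       c.CustomerName,
       c.Region
FROM dbo.Orders    AS o
JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId;
GO

CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderReport
ON dbo.vw_OrderReport (OrderId);
GO
```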

What kind of replication should I use?

I can't understand the difference between transactional replication and merge replication.
This is my scenario:
In an organization, I have a SQL Server which needs to collect information from different SQL Servers located in different parts of the organization or around the city, and some reports will be created from the gathered information.
The data in the different SQL Servers is updated every 5 or 6 minutes.
I don't know whether I should use transactional or merge replication.
Transactional replication delivers incremental changes from a single publisher to one or more subscribers.
Merge replication brings changes from multiple subscribers together into a central publisher.
It sounds like you'll want merge replication in your scenario.
Merge. Each site is a master of its own data.
Transactional is one way usually.
You need to share information so merge it is...
Edit, after comment:
In which case, yes; your question implies reporting at each location.
However, for performance, I'd consider pushing all updates into a queue using a trigger and Service broker. This way, the write to the remote server is decoupled from the local transaction.
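A rough sketch of such a trigger (table, service, contract, and message type names are placeholders; the Service Broker objects would be created much like the earlier sketch):

```sql
-- Placeholder names throughout. The trigger only enqueues a message; an
-- activation procedure on the queue forwards the change to the remote
-- server asynchronously, outside the user's critical path.
CREATE TRIGGER dbo.trg_Orders_Queue
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE ChangeFeedService
        TO SERVICE 'ChangeFeedService'
        ON CONTRACT ChangeFeedContract
        WITH ENCRYPTION = OFF;

    -- Ship the changed rows as XML.
    DECLARE @body XML =
        (SELECT OrderId, Status FROM inserted FOR XML PATH('Order'), ROOT('Changes'), TYPE);

    SEND ON CONVERSATION @handle
        MESSAGE TYPE ChangeMessage (@body);
END;
GO
```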
