Random slow queries in SQL Server during a long-running mass data migration process - sql-server

We have to migrate data from another huge database into our database using tons of SQL scripts. To make this fast, we dropped most of the indexes on our destination database.
Now, in a way that is not reproducible, some queries tend to run for hours instead of minutes. It is not always the same queries. If we stop the whole migration for a while and then repeat the query, it runs fast again.
So it seems SQL Server sometimes chooses bad execution plans or has other problems. How should we approach this problem?
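One way to start diagnosing this (a minimal sketch using standard dynamic management views; it assumes you can query the server while a slow statement is still running) is to look at the statement text, wait type and live execution plan of the active requests:

-- Show what the currently running statements are waiting on and which plan they got
SELECT
    r.session_id,
    r.status,
    r.wait_type,
    r.total_elapsed_time,
    t.text       AS statement_text,
    p.query_plan AS live_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
WHERE r.session_id <> @@SPID;

If the plan looks unreasonable for the current data volume, stale statistics after the mass inserts are a likely suspect (see the related question below).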

Related

MS SQL Server: prevent execution plans going out-of-date

My application has some queries (complex ones with full-text search). The queries usually run fast and often (let's say 30x/h). But periodically (let's say every two or three weeks) it looks like SQL Server drops the execution plan and the queries become extremely slow.
After running "EXEC sp_updatestats" the queries are fast again.
Does anyone have an idea of what I can do to find the reason for this problem?
Installed SQL Server version is 13.0.4224.16, running on Windows Server 2016. The application doesn't make use of stored procedures.
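For reference, a minimal sketch of the statistics refresh described above, plus one way to check how stale the statistics on a given table are (dbo.MyTable is a placeholder name):

-- Refresh statistics database-wide (what the poster already runs when queries slow down)
EXEC sp_updatestats;

-- Check when the statistics on one table were last updated and how many rows changed since
SELECT
    s.name AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable');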

Speeding up SQL transfer over a network

I have two SQL Server environments, a data warehouse which collects data and a datamart which people access for a subset of the data, each with its own SQL Server 2016 database. I run a script which pulls out data, transforms it and transfers it from the data warehouse to the datamart using linked servers. The entire process takes around 60+ hours to run. I want to avoid at all costs having the data warehouse data in the datamart.
I experimented to see why the whole process was taking so long. I did a backup of the data warehouse, restored it onto the datamart and ran the import script, and the entire process took around 3 hours to run. The script itself took 1.5 hours, telling me that of the 60+ hours it's the linked-server transfer of data between the two servers that is the slowest part. I've pretty much ruled out network speed or issues between the two servers; this is all SQL. I'm trying to avoid having to write an application to do all of this in .NET if I can keep it in SQL Server.
Does anyone have any suggestions on how to improve performance time between SQL Server transfers?
The slowness could be coming from the destination database.
Try disabling triggers, indexes, locks, etc.
Have a look at this link, it may help more:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/e04a8e21-54a9-46a4-8eb2-67da291dc7e1/slow-data-transfer-through-linked-server?forum=transactsql
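As a rough sketch of the "disable triggers and indexes" idea on the destination side (dbo.TargetTable and IX_TargetTable_SomeColumn are placeholder names; disable only nonclustered indexes, because disabling the clustered index makes the table inaccessible):

-- Before the load
DISABLE TRIGGER ALL ON dbo.TargetTable;
ALTER INDEX IX_TargetTable_SomeColumn ON dbo.TargetTable DISABLE;

-- ... run the transfer ...

-- After the load
ALTER INDEX ALL ON dbo.TargetTable REBUILD;  -- rebuilding also re-enables the disabled indexes
ENABLE TRIGGER ALL ON dbo.TargetTable;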

How many SQL jobs can a SQL Server handle?

I am creating a database for a medical system and have come to the point where I am building a notification feature. I will use SQL jobs for it: the job's responsibility is to check some tables, and the entities it finds that need to be notified about a change in certain data will have their ids put into an entity called Notification; a trigger will then be fired so the app checks that table and sends the notification.
What I want to ask is: how many SQL jobs can a SQL Server handle?
Does the number of SQL jobs running in the background affect the performance of my application or of the database in one way or another?
NOTE: the SQL job will run every 10 seconds
I couldn't find any useful information online.
thanks in advance.
This question really doesn't have enough background to get a definitive answer. What are the considerations?
Do the queries in your ten-second job actually complete in ten seconds, even when your DBMS is under its peak transactional workload? Obviously, if the job routinely doesn't complete in ten seconds, you'll get jobs piling up.
Do the queries in your job lock up tables and/or indexes so the transactional load can't run efficiently? (You should use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; as much as you can so database reads won't lock things unnecessarily; see the sketch after this list.)
Do the queries in your job do a lot of rows' worth of inserts and updates, and so swamp the SQL Server transaction logs?
How big is your server? (CPU cores? RAM? IO capacity?) How big is your database?
If your project succeeds and you get many users, will your answers to the above questions remain the same? (Hint: no.)
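Picking up the isolation-level point above, here is a minimal sketch of what that looks like in the job's batch (the table and column names are made up):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Example read from the job: scan for unprocessed notifications without taking shared locks.
-- Note that READ UNCOMMITTED can return dirty (uncommitted) data, which is usually acceptable
-- for a polling/notification job but not for reads that must be exact.
SELECT NotificationId, EntityId, CreatedAt
FROM dbo.Notification
WHERE ProcessedAt IS NULL;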
You should spend some time on the execution plans for the queries in your job, and try to make them as efficient as possible. Add the necessary indexes. If necessary refactor the queries to make them more efficient. SSMS will show you the execution plans and suggest appropriate indexes.
If your job is doing things like deleting expired rows, you may want to build the expiration into your data model. For example, suppose your job does
DELETE FROM readings WHERE expiration_date <= GETDATE()
and your application does this, relying on your job to avoid getting expired readings.
SELECT something FROM readings
You can refactor your application query to say
SELECT something FROM readings WHERE expiration_date > GETDATE()
and then run your job overnight, at a quiet time, rather than every ten seconds.
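If you keep the expiration filter in the application query, a supporting index on the expiration column lets SQL Server seek instead of scanning the whole table (the index name is made up; readings and expiration_date come from the example above):

-- Supports both the job's DELETE predicate and the application's range filter
CREATE NONCLUSTERED INDEX IX_readings_expiration_date
    ON readings (expiration_date);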
A ten-second job is not the greatest idea in the world. If you can rework your application so it will function correctly with a ten-second, ten-minute, or twelve-hour job, you'll have a more resilient production system. At any rate if something goes wrong with the job when your system is very busy you'll have more than ten seconds to fix it.

SQL Server Jobs Running Every 2 minutes...Bad Practice?

We have two servers, one is very slow and geographically far away. Setting up distributed queries is a headache because it does not always work (sometimes we receive a "The semaphore timeout period has expired" error) and when the query does work it can be slow.
One solution was to set up a job that populates temporary tables on the slow server with the data we need and then runs INSERT, UPDATE and DELETE statements against our server's tables from those temporary tables, so we have updated data on our faster server. The job takes about one minute and 30 seconds and is set up to run every 2 minutes. Is this bad practice, and will it hurt our slower SQL Server box?
EDIT
The transactions are happening on the slow server's agent (where the job is running), using distributed queries to connect to and update our fast server. If the job runs on the fast server we get that timeout error every now and then.
As for the specifics, if the record exists on the faster server we perform an update, if it does not exist we insert and if the record no longer exists on the slow server we delete...I can post code when I get to a computer
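The pattern described (update when the row exists, insert when it doesn't, delete when it no longer exists on the source) is what a MERGE statement expresses directly; a rough sketch with made-up table and column names, run against the staging table that the job populates:

-- Reconcile the fast server's table with the staging copy pulled from the slow server
MERGE dbo.FastServerTable AS target
USING dbo.StagingTable AS source
    ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET target.SomeColumn = source.SomeColumn
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, SomeColumn) VALUES (source.Id, source.SomeColumn)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;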

ETL Performance Problem

I have an important problem running an ETL process in the production environment. While my ETL is running, the OLAP server becomes extremely slow; I think this is because the ETL is updating several existing rows in the fact table and adding new ones. I tried to avoid this problem by setting up full database replication so that the ETL writes to DB1 and the OLAP server reads from DB2 (the replica). It doesn't work at all.
Can you give me some advice to point me toward the right solution for avoiding this problem?
I'm using SQL Server 2005 with 8 GB RAM.
Mondrian OLAP Server running in a JBoss server with 8 GB RAM.
The ETL runs every 3 hours and takes 2 hours to complete.
I'd appreciate any help.
Have the ETL run its process into a new table, then use ALTER TABLE ... SWITCH PARTITION to switch the new data into the fact table. See Transferring Data Efficiently by Using Partition Switching. I would also revisit the ETL itself: given that one can import about 2 TB/hour with a well-tuned ETL process, you can probably squeeze a bit more performance out of yours; I doubt you import 4 TB every 3 hours...
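A rough sketch of the switch step (the table names and partition number are made up; the staging table must match the fact table's columns, indexes and constraints, live on the same filegroup as the target partition, and the target partition must be empty):

-- Atomically move the freshly loaded staging table into partition 42 of the fact table
ALTER TABLE dbo.FactSales_Staging
    SWITCH TO dbo.FactSales PARTITION 42;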
As for the idea of using replication to alleviate load problems, all I can say is this: replication will always add to the load, as the replication process itself is quite expensive. Every insert, update or delete on the publisher is also an insert, update or delete on the subscriber, and there is the additional overhead of detecting changes on the publisher, distributing them, and applying them on the subscriber... it all adds up; it doesn't subtract anything.
