SqlPackage.exe takes a long time - sql-server

We are working on adding SQL DACPACs as part of our continuous integration. We do this with a PowerShell script running "sqlpackage.exe" for each DacPac file; we have about 12 DBs, so it's about 12 DacPacs.
Every time we run "sqlpackage.exe" to publish or script a DacPac, it takes between 0.5 and 1+ minutes to complete initialization. Most of this time is spent in the initialization phase when "sqlpackage.exe" begins.
I'm trying to find a way to reduce this if possible; with 12 DBs we are looking at at least 12 minutes for DB deployment, which is too much for us.
Do you know of any way to reduce this?

The time up front is, I believe, used to get the current schema from the database, compare it to what's in the dacpac, and come up with the change scripts required.
There is no way to eliminate that using a dacpac.
However, you could do the comparison ahead of time by having SqlPackage create the upgrade scripts, and then at deploy time just run those scripts. This reduces the deployment window to just the time it takes to run the scripts, and if the scripts are run in parallel across your databases the total time can be cut further.
If all your DBs are guaranteed to be in the same (schema) state you could just create a single upgrade script off the first DB and run it on all DBs.
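A minimal PowerShell sketch of this pre-generated-script approach, assuming one DacPac per database in a dacpacs folder and placeholder server, path, and database names:

```powershell
# Step 1 (ahead of time): have SqlPackage generate the upgrade script for each database.
# /Action:Script does the schema compare and writes the change script without applying it,
# so the slow comparison step is paid before the deployment window.
$server = "MYSQLSERVER"   # placeholder
foreach ($dacpac in Get-ChildItem .\dacpacs -Filter *.dacpac) {
    $dbName = $dacpac.BaseName
    & sqlpackage.exe /Action:Script `
        /SourceFile:"$($dacpac.FullName)" `
        /TargetServerName:$server `
        /TargetDatabaseName:$dbName `
        /OutputPath:".\scripts\$dbName.sql"
}

# Step 2 (deploy time): run the pre-generated scripts, one background job per database.
$jobs = foreach ($script in Get-ChildItem .\scripts -Filter *.sql) {
    Start-Job -ScriptBlock {
        param($srv, $db, $file)
        & sqlcmd -S $srv -d $db -b -i $file   # -b aborts the batch on SQL errors
    } -ArgumentList $server, $script.BaseName, $script.FullName
}
$jobs | Wait-Job | Receive-Job
```

The caveat is that the scripts are only valid as long as the target schemas don't drift between generation and deployment; and if all 12 databases share the same schema state, step 1 collapses to a single sqlpackage.exe call whose output you run everywhere.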

This might be too late but potentially useful for others - we found that opening up all ports in the firewall on the SQL server solved our problem of deployment taking a long time. Went from about 4-5 minutes to about 1 minute.
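If opening every port is broader than you want, a narrower variant is to allow only the SQL Server ports. A sketch using the built-in Windows firewall cmdlets, run on the SQL Server machine (port 1433 for a default instance and 1434/UDP for the Browser service are assumptions; adjust for your configuration):

```powershell
# Default-instance TCP endpoint
New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" `
    -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow

# SQL Server Browser, used to resolve named-instance ports
New-NetFirewallRule -DisplayName "SQL Server Browser (UDP 1434)" `
    -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow
```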

Related

How to configure Oracle Data Integrator to restart when a job errors?

My company is using Oracle Data Integrator for ETL jobs. Recently there has been an issue with a source database that leads to the extract job sometimes failing (very randomly, once or twice per 10 extract jobs). When we restart the job, most of the time it runs successfully.
So, while we work on fixing the connection to the source database, is there any way to restart that particular job 1 or 2 times if it fails? How can I configure that?
Thanks!
You can enclose the scenario in a package, then set the "Processing after failure" options on the package's Advanced tab.

Azure SQL - Running a stored procedure manually from SSMS takes 16 minutes, running it with logic apps takes > 12 hours

I have a stored procedure in an Azure SQL database that does two simple things:
Rebuilds indexes based on fragmentation level (once an index gets above a certain level of fragmentation it'll rebuild it).
Updates SQL statistics
When I run this manually in SSMS it can take anywhere from 15-30 minutes. However, when I run it from Logic Apps, sometimes it runs just fine, and other times it runs until the timeout I have set (which is 12 hours) and then fails. Why would running the procedure from Logic Apps get stuck when running it manually always works?
I'm assuming the logic app only fails when there's an index that needs rebuilding, because after I run the procedure manually the logic app seems to complete just fine until another index needs rebuilding.
Also, I never had this issue with whatever I was using to run the stored procedure before I had to move it to Logic Apps because Azure was deprecating it (I can't remember what ran the job last time).
I'd appreciate any help or troubleshooting steps. Thank you!
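For context, a fragmentation-driven maintenance procedure of this kind typically looks something like the sketch below, shown here run from PowerShell with Invoke-Sqlcmd. The 30% threshold, connection string, and object names are illustrative assumptions, not the asker's actual procedure:

```powershell
# Requires the SqlServer PowerShell module (Install-Module SqlServer).
$conn = "Server=myserver.database.windows.net;Database=MyDb;User Id=deploy;Password=..."  # placeholder

$maintenanceSql = @'
DECLARE @sql nvarchar(max) = N'';

-- Build REBUILD statements for every index above the fragmentation threshold
SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
            + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) + N' REBUILD;'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 30 AND i.name IS NOT NULL;

EXEC sp_executesql @sql;

-- Refresh statistics afterwards
EXEC sp_updatestats;
'@

# Rebuilds can run far longer than the default 30-second timeout, so raise it explicitly.
Invoke-Sqlcmd -ConnectionString $conn -Query $maintenanceSql -QueryTimeout 65535
```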

SQL Server instance hanging randomly

I have a SQL Server Agent job running every 5 minutes with an SSIS package from the SSIS Catalog. That package does the following:
DELETE all existing data on OLTP_DB
Extract data from the production DB
DELETE all existing data on OLAP_DB and then
Extract transformed data from OLTP_DB into OLAP_DB ...
PROBLEM
The job mentioned above hangs randomly for a reason I don't know.
Using Activity Monitor, I noticed that every time it hangs it shows CXPACKET and ASYNC_NETWORK_IO waits (screenshot omitted),
and if I try to run any query against that database it does not respond; it just says "Executing..." and nothing happens until I stop the job.
The average running time for the job is 5 or 6 minutes, but when it hangs it can stay running for days if I don't stop it. :(
WHAT I HAVE DONE
Set DelayValidation: True
Improved my queries
No transactions running
No locking or blocking (I guess)
Rebuild and organize index
Ran DBCC FREEPROCCACHE
Ran DBCC FREESESSIONCACHE
ETC.....
My settings:
Recovery Mode Simple
SSIS OLE DB Destination
1-Keep Identity (checked)
2-Keep Nulls (checked)
3-Table lock (checked)
4-Check constraints (unchecked)
Rows per batch (blank)
Maximum insert commit size (2147483647)
Note:
I have another (small) job running an SSIS package on the same instance but against different databases, and when the main ETL mentioned above hangs, this small one sometimes hangs too; that is why I think the problem is with the instance (I guess).
I'm open to provide more information as need it.
Any assistance or help would be really appreciated!
As Jeroen Mostert said, it's showing CXPACKET, which means it's executing some work in parallel.
It's also showing ASYNC_NETWORK_IO, which means it's also transferring data over the network.
There could be many reasons. Just a few more hints:
- Have you checked whether the network connection is slow?
- What is the size of the data being transferred vs. the speed of the network?
- Is there an antivirus running that could slow the data transfer?
My guess is that there is a lot of data to transfer and that it's simply taking a long time. I would suspect either I/O or network, but since ASYNC_NETWORK_IO takes most of the cumulative wait time, I would go for network.
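To see exactly which wait types the hung requests are sitting on while the job is stuck, a quick query against the DMVs helps; a sketch using PowerShell's Invoke-Sqlcmd with a placeholder instance name:

```powershell
# Requires the SqlServer PowerShell module. Run while the job is hanging.
$waitQuery = @'
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,            -- e.g. CXPACKET, ASYNC_NETWORK_IO
       r.wait_time,            -- milliseconds spent in the current wait
       r.blocking_session_id,  -- non-zero means another session is blocking this one
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;        -- skip most system sessions
'@
Invoke-Sqlcmd -ServerInstance "MYINSTANCE" -Query $waitQuery | Format-Table
```

If blocking_session_id is populated, the problem is blocking rather than transfer speed; if the waits are almost entirely ASYNC_NETWORK_IO, the network (or a slow consumer of the result set) is the place to look.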
As @Jeroen Mostert and @Danielle Paquette-Harvey said, by right-clicking in Activity Monitor I could figure out that I had an object that was executing in parallel (for some reason from the past). To fix the problem I removed the parallel structure and put everything to run in one batch.
Now it is working like a charm!!
(Before/after Activity Monitor screenshots omitted.)

How many round trips to the database does Flyway make to determine which migrations have to be run?

I have 10 migration scripts (V1 to V10) in my folder "db/migration". When I previously launched my application, the first 5 were run.
So, the next time I launch it, I expect the scripts from V6 to V10 to be run.
My questions are:
How does Flyway determine which scripts have to be run?
If it has to check information in the database:
How many round trips to the database are necessary?
It is really important for me that the number of round trips is the minimum possible.
Flyway executes one round trip per migration. This means that every time it applies a migration, it queries the schema_version table again before applying the next one (this is necessary to support multiple nodes attempting to migrate the DB in parallel).
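If you want to see what Flyway has already recorded (and therefore which scripts it will run next) without triggering a migration, you can query its history table in a single round trip. A sketch against SQL Server; the table is named schema_version in older Flyway versions and flyway_schema_history in newer ones, and the instance and database names are placeholders:

```powershell
# One round trip: list the applied migrations in order.
$historyQuery = @'
SELECT installed_rank, version, description, installed_on, success
FROM dbo.flyway_schema_history   -- use dbo.schema_version on older Flyway versions
ORDER BY installed_rank;
'@
Invoke-Sqlcmd -ServerInstance "MYINSTANCE" -Database "MyAppDb" -Query $historyQuery
```

The flyway info command gives the same applied/pending breakdown from the command line.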

Tips for manually writing SQL Server upgrade scripts

We have some large schema changes coming down the pipe and are in need of some tips on writing upgrade scripts manually. We're using SQL Server 2000 and do not have access to automated tools, nor are they an option at this point in time. The only database tool we have is SQL Server Management Studio.
You can import the database to a local machine which has a newer version of SQL Server, then use the 'Generate Scripts' feature to script out a lot of the database objects.
Make sure to set the option in Advanced Settings to script for SQL Server 2000.
If you are having problems with the generated script, you can try breaking it up into chunks and running it in small batches. That way, if any particular generated piece fails, you can just write that SQL manually to get it to run.
While not quite what you had in mind, you can use schema comparison tools like SQL Compare and then just script the changes to a SQL file, which you can then edit by hand before running it. I guess that is as close to writing it manually as you can get without actually writing it manually.
If you really need to write it all manually, I would suggest getting some IntelliSense-type tools to speed things up.
Your upgrade strategy is probably going to be somewhat customized for your deployment scenario, but here are a few points that might help.
You're going to want to test early and often (not that you wouldn't do this anyway), so be sure to have a testing DB in your initial schema, with a backup so you can revert back to "start" and test your upgrade any number of times.
Backups & restores can be time-consuming, so it might be helpful to have a DB with no data rows (schema-only) to test your upgrade script. Remember to get a "start" backup so you can go back there on-demand.
Consider stringing a series of scripts together - you can use one per build, or feature, or whatever. This way, once you've got part of the script working, you can leave it alone.
Big data migration can get tricky. If you're doing data transformations, copying or moving rows to new tables, etc., be sure to check row counts before the move and account for all rows afterwards.
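For the row-count reconciliation, even a tiny script run before and after the migration goes a long way; a sketch using Invoke-Sqlcmd with placeholder instance, database, and table names (the embedded SELECT works the same if you run it from SSMS or Query Analyzer instead):

```powershell
# Capture row counts for the tables involved in the migration; run once before and once after,
# then account for any difference (rows moved, merged, or intentionally dropped).
$tables = @("dbo.Customers", "dbo.Orders")   # placeholder table names
foreach ($t in $tables) {
    Invoke-Sqlcmd -ServerInstance "MYINSTANCE" -Database "MyDb" `
        -Query "SELECT '$t' AS table_name, COUNT(*) AS row_count FROM $t;"
}
```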
Plan for failure. If something goes wrong, have a plan to fix it -- whether that's rolling everything back to a backup taken at the beginning of the deployment, or whatever. Just be sure you've got a plan and you understand where your go / no-go points are.
Good luck!
