Microsoft SQL Server Database Engine Tuning Advisor seems to crash constantly for me... on multiple different servers, for multiple different databases, and across multiple different versions of SQL Server (and DTA)...
I know this is probably a ridiculous question and not of the quality one would expect on stackoverflow :( but has anyone else experienced this?
I was having the same problem, as recently as SQL Server 2014 with Service Pack 2. I had to use a two-step approach to get it working again:
Installed both the latest service pack and the latest cumulative update for that service pack. This helped, but Database Engine Tuning Advisor was still crashing for me (see step 2).
I read that "hypothetical indexes" are added to your database while Database Engine Tuning Advisor is running. If it crashes and does not complete successfully, the hypothetical indexes are not dropped. It was recommended that the hypothetical indexes be dropped from your database.
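For anyone else stuck at that point, here is a rough sketch of how to find the leftovers (sys.indexes flags them with is_hypothetical = 1, and DTA usually names them _dta_index_*). The DROP INDEX line is only an example pattern, not a real index name:

    -- Find hypothetical indexes left behind by a crashed DTA session.
    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
           OBJECT_NAME(i.object_id)        AS table_name,
           i.name                          AS index_name
    FROM sys.indexes AS i
    WHERE i.is_hypothetical = 1;

    -- Then drop each one, for example:
    -- DROP INDEX [_dta_index_MyTable_5_12345] ON [dbo].[MyTable];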
The combination of installing the latest service packs and cumulative updates, along with dropping the hypothetical indexes, seems to have worked for me.
I've experienced this behavior many times, and the way to fix it was to update my instances to the latest Service packs.
Also, the initial release of the SQL 2012 Tuning Advisor was crashing for some reason, but updating to the latest SP2 fixed this issue.
Side note: using the plan cache as a workload source (a new feature in SQL 2012's tuning tools) may be helpful until you fix this problem permanently.
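If it helps in the meantime, here is a plain DMV query (nothing DTA-specific, and separate from the DTA plan-cache workload feature) to pull the heaviest statements straight out of the plan cache so they can be tuned by hand while the advisor is broken:

    -- Top 20 statements in the plan cache by total logical reads.
    SELECT TOP (20)
           qs.total_logical_reads,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;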
I was experiencing the same issue when running analysis on a database that contained encrypted stored procedures. I removed the encryption before I captured my profiler trace workload, then re-ran the analysis, and the issue was resolved.
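For anyone wanting to check whether they are in the same situation, encrypted modules can be listed like this (sys.sql_modules returns a NULL definition for anything created WITH ENCRYPTION):

    -- List modules (stored procedures, functions, triggers) created WITH ENCRYPTION.
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id)        AS object_name
    FROM sys.sql_modules AS m
    WHERE m.definition IS NULL;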
I'm trying to update a SQL Server project in Visual Studio 2019 by using the SSDT schema comparison. My source is a running database server, the destination is the VS SQL Server project.
When the comparison is done and I click "Update", I get the message
Source schema drift detected. Press Compare to refresh the comparison
No matter how many times I refresh the comparison, I always get the same result.
I have tried various connection tweaks (read-only intent, asynchronous processing, multiple active result sets) in the hopes that I can make the comparison run faster and update the project before the drift happens, but to no avail. I have also tried reducing the types of objects included in the comparison, but have not been able to reduce it enough to prevent drift from being detected.
I think the biggest issue I have is that aside from the "schema drift detected" message, I feel like I'm shooting in the dark. By that I mean that I have no idea what is causing SSDT to detect drift, and therefore I can't work around it.
I tried running the SQL Profiler to capture what SSDT is doing so I could find where SSDT is detecting drift. However, I haven't been able to find any query that gives different results when run multiple times within a short period.
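For what it's worth, one avenue I can think of is looking for concurrent DDL in the default trace, which records object created/altered/deleted events - although whether SSDT actually reads it, or treats such events as drift, is pure guesswork on my part:

    -- Recent DDL events from the default trace (enabled by default on most instances).
    DECLARE @path NVARCHAR(260);
    SELECT @path = path FROM sys.traces WHERE is_default = 1;

    SELECT t.StartTime, t.DatabaseName, t.ObjectName, te.name AS event_name, t.LoginName
    FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
    JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
    WHERE te.name IN ('Object:Created', 'Object:Altered', 'Object:Deleted')
    ORDER BY t.StartTime DESC;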
So in conclusion, my questions are:
What does SSDT look at to determine when the database schema has drifted?
How can I update my SQL Server project when it always detects schema drift?
I also struggled for months to find the cause of the same error. I was already thinking about reinstalling Windows 10 on my laptop. I won't list all the dead ends. In my final desperation, I copied the SQL Server database and VS project to another machine, and there the comparison worked without a hitch. The suspicion arose that maybe the error was not in VS, but rather that my SQL Server was confusing VS.
I have a SQL Server 2012 instance. I put the latest update on it (SP4) and, wonder of wonders, compare and sync started working perfectly right away. Of course, now before every update I pray a little that I don't encounter the "Source schema drift detected" message.
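For reference, the edition and service pack level of an instance can be checked with:

    -- Edition, version number and service pack level (e.g. RTM, SP4) of the instance.
    SELECT SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS product_level;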
I have been unsuccessfully fighting this annoying error for MANY SSDT versions.
Searching for it you will see multiple places where it is claimed to be fixed, WHICH IS FALSE, as it is happening right now with VS 2022 SSDT.
In my case, it ONLY happens when comparing against ONE out of the 5 database servers I regularly use the tool with.
The only workaround I have found that usually works is to REBOOT the destination database server (NOT just cycle the SQL Server Service) and then run the SSDT compare QUICKLY!
As the server that this happens on is an integration server running on a VM in my local network, I can bounce the server, but in other scenarios this would be a show-stopper.
IMO the most onerous thing about this issue is that you cannot even generate the script to copy/paste into SSMS, which is how I often use the tool.
This issue has not been fixed for YEARS and is very intermittent, so I have no hope of seeing it actually fixed - I hope this workaround is helpful to someone.
For some of our environments, we have upgraded our SQL version to 2017. What we do is take a SQL 2012 database backup and restore it on SQL 2017 through some automated scripts, and the same was the case when these databases were migrated from SQL 2008 R2 to SQL 2012. But the compatibility level of our databases is still 2008 and 2012, because we simply don't update the compatibility level of a database during migration. The sole reason is the difference in memory architecture between these SQL versions.
So can anyone help me understand what the possible impacts might be if we upgrade the compatibility level of our databases from 2008 or 2012 to 2017? And is there any parallel feature that we'll have to enable/use in order to keep the impact of the compatibility level change on our environment minimal?
Any help in understanding this would be appreciated.
Adding as an answer as I was running out of space as a comment:
I suspect this question is too broad to be able to provide a comprehensive answer. The best approach might be to simply try the change in a test environment and see what happens. What impact there is will depend largely on your type of workload and what features you're using.
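As a starting point for that kind of test, a minimal sketch of checking and then raising the compatibility level (the database name below is a placeholder):

    -- Current compatibility level of every database on the instance.
    SELECT name, compatibility_level
    FROM sys.databases;

    -- Raise one database to the SQL Server 2017 level in the test environment.
    -- (100 = 2008/2008 R2, 110 = 2012, 140 = 2017)
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 140;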
In my experience the biggest catch in these cases, when going from a pre-2014 version to a later one, is the change to the cardinality estimator. The average query will probably get a little faster, but you probably won't notice that because the real problems it solves will already have been optimised away. On the other hand, there will be a handful of queries that get slower for you, and you'll need to identify these and fix them. As a quick short-term fix you can use the hint OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION')).
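To illustrate, the hint is applied per statement; from SQL Server 2016 SP1 onwards there is also a database-scoped setting if you need the old estimator everywhere while you work through the regressions (the table and predicate below are placeholders):

    -- Per-query: keep the pre-2014 cardinality estimator for one problem statement.
    SELECT o.OrderID, o.CustomerID
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= '2017-01-01'
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

    -- Database-wide alternative while queries are being fixed:
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;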
If you're using MERGE statements then there's a risk those might start throwing errors, as there are a few known bugs where the query deadlocks itself; in a big ETL pipeline we had to switch to dropping some indexes before merging and rebuilding them afterwards.
I've done a migration from SQL 2008 to SQL 2014. Unfortunately, one of the SSIS packages, which takes only 6 hours to run on 2008, is now taking 8 hours on 2014.
Can somebody tell me why this is happening and how I can solve this problem? Is it something to do with a setting?
I appreciate any idea/help from you guys. Thanks in advance.
It could be one of several problems:
Check that the operating system is the same as on the SQL 2008 server.
Check the SQL Server memory settings:
Right-click the server in SSMS: Server Properties -> Memory -> Maximum Server Memory (or query it in T-SQL, as shown below).
If this is a virtual machine, the virtualisation team sometimes lowers the CPU allocation for the benefit of another machine.
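A T-SQL equivalent of that memory check, in case the SSMS dialog is not handy (plain catalog-view query, nothing version-specific):

    -- Current min/max server memory settings; values are in MB.
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name IN (N'max server memory (MB)', N'min server memory (MB)');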
What about logging?
In 2012, the concept of project deployments was born. In addition to that concept, a centralized SSIS database (the SSIS catalog) is created by default when an Integration Services server is installed. Are you deploying the packages to a server to be run? If so, then logging might slow you down - http://msdn.microsoft.com/en-us/library/hh231191.aspx - especially if the default is set to verbose and/or you're doing your own custom logging (for each event, two logging operations happen).
Your SSIS server may be drowning from the default logging on top of the standard workload of the data movements in the package. Try turning logging down or off; Basic works well for us. While the package is executing, monitor any resources that are running too high. That could give you some hints about potential bottlenecks and where else to look.
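For example, if the packages are deployed to the SSIS catalog, the server-wide default logging level can be turned down with the built-in configure_catalog procedure (this changes the default for future executions; individual executions can still override it):

    -- Lower the SSIS catalog's default logging level to Basic.
    EXEC SSISDB.catalog.configure_catalog
         @property_name  = N'SERVER_LOGGING_LEVEL',
         @property_value = 1;   -- 0 = None, 1 = Basic, 2 = Performance, 3 = Verbose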
We need to move an enterprise ERP during the upgrade (from 2005 to 2008). I have done some reading on the merits of running in compatibility mode and I know there are some differences in the SQL estimator running native vs. compatibility mode, but I was wondering if any of you have encountered performance improvements running a SQL database in compatibility mode on a newer server. That is, are there any papers or actual experience suggesting that I will get better performance running SQL 2008 vs. SQL 2014 with compatibility mode on the database? Do I actually benefit from the new server? We are licensed either way and the ERP is only "guaranteed" on 2008.
I hope to get some feedback from anyone who has run into this problem before (compatibility mode has been around for a long time, so I am sure someone has). Considering that our databases are ~400 GB, clustered, and on a SAN, a truly real-world test is difficult to do. Even more so since the SAN will "prioritise" things, which just makes the test harder. We all know that SQL 2014 performs better than 2012, but with so little hard data that may just be an assumption - hence the general request.
I have never run into any issues with compatibility mode for any version of SQL Server. I also haven't really noticed any performance benefits or drawbacks doing so, but I admit that I haven't done any real timing tests. Usually when I've had to do that, I've upgraded the hardware on the box, so a true comparison is difficult.
Having said that, are you sure that's the way you want to go?
Why not just run a test environment with the database migrated to 2008 and no compatibility? If everything works for you in the test environment, then upgrade directly.
Most SQL Server upgrades are pretty painless, unless you're trying to skip a version or two, which you aren't. Even in failover clusters they aren't that hard as long as you follow the step-by-step procedure from Microsoft.
We have been using SQL Server 2005 during development and have had no real issues.
Now, at the time of release, it has been suggested we go with SQL Server 2008. The installation of it is a bit of a nightmare with all the configuration options, and I am sure this will confuse the customers (even with documentation). PowerShell and some other applications had to be installed, as well as some Windows updates, before it could install.
Should we just release with SQL Server 2005 (if I am not wrong, the installation was easier) or would it be better to go with 2008 (and just accept the installation issues)?
JD
If you've developed and tested on 2005, then stay with 2005 to avoid complications, but I would upgrade everything to 2008 at some point.
Based on the information provided, stick to 2005.
Migrate to 2008 after you've been able to run the application in a test environment, thoroughly test it, and check documentation to be sure things line up.
I'm always wary of making last-minute changes to an application before release. If you decide to switch to SQL 2008, will your team have time to fully regression test your application? I wouldn't just assume that everything will work fine.
If you have the time for full regression testing, then the question is: what does SQL 2008 offer you? Obviously there's no critical technical reason - no function that is in SQL 2008 that isn't in 2005 but which you absolutely must have. Do you think that your customers will care one way or another? Do you have customers that already have 2008 servers and want to leverage that? Will it help your sales to be on the "latest and greatest" version?
Those are just some of the questions to ask.
You can always save the switch to 2008 for a future upgrade.
The install/config issues shouldn't really be the primary focus, because those are one-time things and a well-written guide can address the vast majority of the issues you encounter.
Here's what I would consider making my decision:
Testing - Everything should work fine out of the box, but you're responsible either way. Better the devil you know than the devil you don't. There is a time and a place for upgrades, and right before a release isn't it.
Licensing - Remember that newer software often comes with a higher price tag, and you don't want to force more expensive software on your users without some tangible benefit.
All in all, I would stay away from upgrading for upgrade's sake. Stick with what you know, until there is a legitimate gain from upgrading.
In a few years time new customers will not be willing to run with SQL Server 2005, so you will have to support SQL Server 2008 at some point.
My experience is it can be very hard to get current customers to update to a new version of a database server when you wish them to take an upgrade of your software, so you will most likely be forced to support SQL Server 2005 for a long time if you allow your customers to use it now.
So do you wish to?
Have some pain now getting SQL Server 2008 working
Or, have the cost of testing and supporting both SQL Server 2008 and SQL Server 2005 every time you do a new release for many years to come
You have to decide if a small delay in releasing is worth it to save the ongoing cost...
This is hard, as time-to-market is often more important in the real world than ongoing maintenance costs. After all, if you don't hit the market window, you have no ongoing maintenance costs!
Will this delay your release? Will it delay it enough to cost money?
I wouldn't worry too much about the installation complication. Your customers will experience the same (or worse) when they have to upgrade to 2008, which they will eventually because you won't stick with 2005 forever. I'd be more concerned about how it affects your project and sales. That would be the trade-off that I would focus on.