I am new to CRM and trying to help our CRM team make our CRM application faster. We have the on-premises 2015 version.
I ran the index statistics query on the CRM DB; there are more than 1,000 indexes with avg_fragmentation above 30%.
What is the safest way to resolve this? Can we do it on the live CRM DB, or should we create a copy of the live CRM DB, defragment that, and then apply the changes to the live one? And another question: how can we create a live copy and manage it so that whenever the main DB is down, the application fails over to the copy and keeps running?
Please help.
Creating and defragmenting indexes on the CRM database are supported operations. Microsoft recommends that indexes with a fragmentation level greater than 30% be rebuilt, while those with fragmentation between 10% and 30% should be reorganized.
Defragmenting your production indexes is perfectly safe, but you should run the operation overnight to minimize any effect on system performance.
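By way of illustration, a minimal T-SQL sketch of that pass, assuming it runs against the organization database; the table and index names in the ALTER INDEX lines are placeholders, and ONLINE = ON requires Enterprise Edition:

    -- List indexes with their average fragmentation (sampled).
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND i.name IS NOT NULL
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Reorganize indexes in the 10-30% band...
    ALTER INDEX [ndx_Example] ON [dbo].[AccountBase] REORGANIZE;

    -- ...and rebuild those above 30%, ideally off-hours.
    -- (Drop ONLINE = ON if you are not on Enterprise Edition.)
    ALTER INDEX [ndx_Example] ON [dbo].[AccountBase] REBUILD WITH (ONLINE = ON);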
Related
It's my first time using CRM as an application, though I have been using SQL Server for quite a while now. Our company is experiencing issues with emails, and we are seeing warnings in Event Viewer such as:
"Query execution time of 17.3 seconds exceeded the threshold of 10 seconds."
for almost all SELECT statements.
My main concern is that the CRM reindexing job may not be running properly. I have also checked for a reindex maintenance plan in SQL Server, and nothing has been configured for reindexing.
I hope someone can help me. Thanks!
In Dynamics 2016 On-Premises you are free to create and maintain your own indexes. What is needed really depends on how your company is using the system. Sometimes recalculating statistics does the trick (as far as I remember this is part of the built-in maintenance jobs); sometimes you may even want to partition the database.
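If you want to rule out stale statistics manually, a small sketch (sp_updatestats is the blunt instrument; the table name in the second statement is just an example):

    -- Refresh out-of-date statistics across the whole organization database.
    EXEC sp_updatestats;

    -- Or target a single table with a full scan.
    UPDATE STATISTICS [dbo].[EmailBase] WITH FULLSCAN;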
E-mail storage in MS Dynamics can quickly grow out of hand, because attachments (e.g. those colorful company footers) tend to take up a lot of valuable database space. Decide which e-mails really need to be tracked, whether attachments and images can be stripped from them, or whether they can be stored in a separate filegroup. (To mention a few options.)
Poor SQL Server management is actually often the root cause of performance issues in MS Dynamics. Your skills will be of much value!
I need to measure Azure SQL Database performance using the Database Engine Tuning Advisor (DTA). Is this possible or not? If not, what is the workaround for consuming a workload file (.trc)?
Database Engine Tuning Advisor does not support Azure SQL Database. It is also not possible to create a trace file from an Azure SQL Database using SQL Server Profiler.
SQL Azure automates the creation of indexes that may improve the performance of your workload with a feature named automatic tuning. Automatic tuning on Azure SQL can also drop duplicate or unused indexes and force the last known good execution plan for queries that have regressed.
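If you want to opt a database in explicitly rather than rely on the server defaults, the options can be set in T-SQL; a sketch using the documented option names (whether each option should be ON is a judgment call for your workload):

    -- Enable the three automatic tuning options on the current Azure SQL database.
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);

    -- Inspect the current state of each option.
    SELECT name, desired_state_desc, actual_state_desc
    FROM sys.database_automatic_tuning_options;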
Alberto is correct - there are features within SQL Azure which help watch and improve the performance of your database queries automatically in some cases. Profiler traces + DTA are not currently supported in SQL Azure. The DTA (Database Tuning Advisor) feature in SQL Server is very good for taking traces and replaying them on a different server to simulate possible index and partitioning changes which could improve your performance. The automatic tuning feature does that for you today, without you having to run DTA yourself.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-automatic-tuning
If all you want to do is explore the performance of your database, then you can use the query store in SQL Azure (and SQL Server 2016+) to do this kind of analysis.
https://azure.microsoft.com/en-us/blog/query-store-a-flight-data-recorder-for-your-database/
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
If you have not tried this using a recent release of SQL Server Management Studio (SSMS), then I highly suggest you download it and try it. You can see the top N queries by different metrics, plan changes over time, and other metrics which give you faster insight into the performance profile of your database + application.
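If you prefer to query it directly instead of going through the SSMS reports, a sketch against the Query Store catalog views that lists the top queries by total duration might look like this:

    -- Top 10 queries by total duration recorded in the Query Store.
    SELECT TOP (10)
           qt.query_sql_text,
           SUM(rs.count_executions) AS executions,
           SUM(rs.avg_duration * rs.count_executions) / 1000.0 AS total_duration_ms
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    GROUP BY qt.query_sql_text
    ORDER BY total_duration_ms DESC;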
There is no way to take a .trc file today and examine it in the Query Store, but you can enable the Query Store on an on-premises SQL Server (2016+) and then record your production workload for a while to see how it is behaving. Please understand there is an overhead to running with the Query Store on - usually it is modest, but for highly ad hoc OLTP query workloads you may see larger overhead. There are some knobs to tune this, so please just go through normal due diligence before modifying a production system. If you have problems, turn it back off and re-examine until you have the right settings to capture the relevant data from your workload and help make tuning decisions.
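For the on-premises capture described above, Query Store is enabled per database; a sketch with conservative starting values (the database name and the size/capture settings are illustrative, not recommendations):

    -- Turn on Query Store for a database (SQL Server 2016+), capturing only
    -- significant queries to limit overhead on ad hoc workloads.
    ALTER DATABASE [YourProductionDb]
    SET QUERY_STORE = ON
        (OPERATION_MODE = READ_WRITE,
         QUERY_CAPTURE_MODE = AUTO,
         MAX_STORAGE_SIZE_MB = 1024);

    -- To turn it back off if the overhead proves too high:
    ALTER DATABASE [YourProductionDb] SET QUERY_STORE = OFF;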
Hope that helps!
Sincerely,
Conor Cunningham
Architect, SQL
What is the fastest way to backup/restore Azure SQL database?
The background: we have a database of ~40 GB, and restoring it from a .bacpac file (~4 GB of compressed data) the native way, via the Azure SQL Database Import/Export Service, takes up to 6-8 hours. Creating the .bacpac is also very slow and takes ~2 hours.
UPD. Creating a database copy (which, by the way, is transactionally consistent) using CREATE DATABASE [DBBackup] AS COPY OF [DB] takes only 15 minutes for the 40 GB database, and the restore is a simple database rename.
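For completeness, the copy-and-rename steps look roughly like this; run them in the master database of the logical server, and note that the [DB_old] name is just a placeholder for parking the original:

    -- Run in the master database of the Azure SQL server.
    -- Creates a transactionally consistent copy; the copy is asynchronous,
    -- and progress can be checked in sys.dm_database_copies.
    CREATE DATABASE [DBBackup] AS COPY OF [DB];

    -- "Restore" later by swapping the names (also run in master).
    ALTER DATABASE [DB] MODIFY NAME = [DB_old];
    ALTER DATABASE [DBBackup] MODIFY NAME = [DB];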
UPD. Dec 2014. Let me share the fastest DB migration scheme we ended up with.
First of all, the data-tier application (.bacpac) approach turned out not to be viable for us once the DB grew a bit bigger, and it will also not work for you if you have at least one non-clustered index with a total size > 2 GB, unless you disable non-clustered indexes before export - this is due to the Azure SQL transaction log limit.
We stuck with the Azure Migration Wizard, which for data transfer just runs BCP for each table (the BCP parameters are configurable), and it's ~20% faster than the .bacpac approach.
Here are some pitfalls we encountered with the Migration Wizard:
We ran into encoding trouble with non-Unicode strings. Make sure that BCP import and export run with the same code page; it's the -C ... switch, and you can find the parameters BCP is called with in the .config file of the MW application.
Take into account that MW (at least the version that was current at the time of writing) runs BCP with parameters that leave constraints in a non-trusted state, so do not forget to check all non-trusted constraints after the BCP import.
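A sketch of that post-import check; the table and constraint names in the final statement are hypothetical:

    -- Find foreign keys and check constraints left untrusted after the BCP load.
    SELECT name, 'foreign key' AS constraint_type
    FROM sys.foreign_keys
    WHERE is_not_trusted = 1
    UNION ALL
    SELECT name, 'check constraint'
    FROM sys.check_constraints
    WHERE is_not_trusted = 1;

    -- Re-validate one of them; WITH CHECK CHECK makes SQL Server re-verify existing rows.
    ALTER TABLE [dbo].[Orders] WITH CHECK CHECK CONSTRAINT [FK_Orders_Customers];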
If your database is 40 GB, it's long past time to consider having a redundant database server that's ready to go as soon as the main one becomes faulty.
You should have a second server running alongside the main DB server that has no actual routines except to sync with the main server on an hourly/daily basis (depending on how often your data changes, and how long it takes to run this process). You can also consider creating backups from this database server, instead of the main one.
If your main DB server goes down - for whatever reason - you can change the host address in your application to the backup database, and spend the 8 hours debugging your other server, instead of twiddling your thumbs waiting for the Azure Portal to do its thing while your clients complain.
Your database shouldn't be taking 6-8 hours to restore from backup though. If you are including upload/download time in that estimate, then you should consider storing your data in the Azure datacenter, as well as locally.
For more info see this article on Business Continuity on MSDN:
http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx
You'll want to specifically look at the Database Copies section, but the article is worth reading in full if your DB is so large.
Azure now supports point-in-time restore, geo-restore, and geo-replication (GeoDR) features. You can use a combination of these for quick backup/restore. Point-in-time restore and geo-restore come at no additional cost, while you have to pay for a geo-replica.
There are multiple ways to do backup, restore and copy jobs on Azure.
Point-in-time restore
The Azure service automatically takes full backups, multiple differential backups, and transaction log backups every 5 minutes, which lets you restore the database to a point in time within the retention period.
Geo Restore
Same as point-in-time restore; the only difference is that it restores from a redundant copy of the backups kept in blob storage in a different region.
Geo-Replication
Similar to SQL Server Availability Groups: up to 4 asynchronous replicas with read capabilities. Select a region to become a hot standby.
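Active geo-replication can also be configured in T-SQL rather than the portal; a rough sketch, with placeholder server and database names, run against the master database of each server:

    -- On the primary server's master database: add a readable secondary
    -- on the partner server (the secondary database is created for you).
    ALTER DATABASE [SalesDb]
    ADD SECONDARY ON SERVER [partner-server]
    WITH (ALLOW_CONNECTIONS = ALL);

    -- On the secondary server's master database: promote the secondary
    -- during a failover.
    ALTER DATABASE [SalesDb] FAILOVER;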
More on Microsoft Site here. Blog here.
Azure SQL Database already has these local replicas that Liam is referring to. You can find more details on these three local replicas here http://social.technet.microsoft.com/wiki/contents/articles/1695.inside-windows-azure-sql-database.aspx#High_Availability_with_SQL_Azure
Also, SQL Database recently introduced new service tiers that include new point-in-time-restore. Full details at http://msdn.microsoft.com/en-us/library/azure/hh852669.aspx
The key is also to use the right data management strategy, one that actually serves your objective. The wrong architecture, and an approach of putting everything in the cloud, can prove disastrous... here's more to read on it - http://archdipesh.blogspot.com/2014/03/windows-azure-data-strategies-and.html
I would like to know the best way to make a copy of an on-premises SQL Server 2008 (not R2) database in SQL Azure and keep the copies synchronized.
Think of the SQL Azure copy as a kind of failover structure...
Notes:
The database runs fine in SQL Azure
I have already figured out how to get the rest of the app running on Azure
Please consider suggestions of the type "Upgrade to SQL Server 2012 because of X" if the gains (reliability, efficiency, time to replicate, etc...) are worth it
I'm looking for near-instant replication (as fast as possible)
Yes, it will have to sync back eventually. If the on-premises deployment crashes and the cloud copy gets activated and changed, syncing back will be necessary, but I think it does not need to be automatic... if it is, even better!
The database consists of 900+ tables (legacy system)
http://www.windowsazure.com/en-us/manage/services/sql-databases/getting-started-w-sql-data-sync/
http://msdn.microsoft.com/en-us/library/hh456371.aspx
I think the best bet is to use SQL Data Sync; it should give you bidirectional sync, and we currently use it to sync data around the world across datacenters plus one local on-premises database. It will only give you a 5-minute sync interval, but this will probably do; otherwise the next best option is to use SQL Server VMs and do it the old-fashioned way. We have found SQL Azure Data Sync to be reasonably reliable and have been running it for a good six months, syncing across 4 databases in four Azure data centres.
Some problems with it, though:
It uses Triggers.
It will obviously add load and connections to your current SQL database.
The new control panel in Azure is a nightmare for it, so I would use the old panel for the moment.
It was in preview last time I looked, so it might not be 100% suitable for you.
I would imagine there are better third-party solutions out there, but off the shelf and within Azure, SQL Data Sync is well worth a look for the situation you are describing.
I want to design and implement enterprise software with Silverlight, using a SQL Server database; many users will run SQL queries against that database.
How can I configure the SQL Server database for best performance?
How can I distribute the SQL Server database across several servers for best performance?
And which SQL Server technologies can I use to get the best performance?
In addition to replication you can use mirroring or log shipping for this. Note that I am talking only about scaling out reads, not writes. So reports etc. can be run from the copies of the database, but writes must go to the main copy (unless you are using merge replication, which is frightening to me). There are some caveats, of course.
With database mirroring, you can use the secondary as a read-only reporting source by taking a snapshot. There are limits here to how many databases you can mirror and there is of course maintenance to manage the snapshots. It is not quite true distribution of resources here, but it can be helpful to offload some of the load. In the next version of SQL Server (Denali), you will be able to set secondaries as read-only, so you can avoid the maintenance of snapshots.
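The snapshot step on the mirror server looks roughly like this; the database, logical file name, and path are placeholders, and reporting users connect to the snapshot rather than the mirror itself:

    -- On the mirror server: expose a point-in-time, read-only view of the
    -- mirrored database for reporting.
    -- (One NAME/FILENAME pair is needed per data file of the source database.)
    CREATE DATABASE [Sales_Reporting_Snapshot]
    ON (NAME = Sales_Data, FILENAME = N'D:\Snapshots\Sales_Reporting.ss')
    AS SNAPSHOT OF [Sales];

    -- Drop and recreate it periodically to refresh the reporting data.
    DROP DATABASE [Sales_Reporting_Snapshot];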
With log shipping, you can essentially keep a stale version of the database around for reporting, and replace it periodically by restoring logs to it. You have a lot more flexibility here compared to replication or mirroring, as you can actually define a delay (like every 6 hours or once a day, you refresh the copy) - which can also serve as a "recover from a shoot-yourself-in-the-foot" scenario. The downside is that to restore a new copy of the database you need to kick all the current users out, as the database needs to be in single user mode in order to recover.
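The periodic refresh of such a copy boils down to restoring the latest log backups WITH STANDBY so the database stays readable in between; a sketch with placeholder names and paths:

    -- Restore the next transaction log backup onto the reporting copy.
    -- STANDBY keeps the database readable between restores; users must be
    -- disconnected before each restore can proceed.
    RESTORE LOG [Sales_Reporting]
    FROM DISK = N'\\backups\Sales\Sales_20110601_0600.trn'
    WITH STANDBY = N'D:\SQLStandby\Sales_undo.tuf';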
Those are just a couple of ideas for helping scale out reads, but deep down I agree with #gbn - are you solving a problem you don't have yet? It's one thing to design for scalability, but it's very easy to step over that line and completely over-engineer.
Well, SQL Server doesn't really have a load balancing mechanism in and of itself. What it does support, however, is an active/passive node configuration and also replication.
We are using the replication strategy in one application I support. You can read more about it here:
http://msdn.microsoft.com/en-us/library/ms151198.aspx
In our configuration, we basically have a transactional database and a reporting database. We replicate the data from our transactional DB to the reporting DB. Any reporting is done against this reporting DB, so that we don't slow down work being done on the transactional DB due to some long running report.
Note that the replication isn't truly real time. In other words, there's some time involved in replicating the data from the transactional to the reporting DB, albeit a very small amount. But replication is certainly one strategy you could consider if you are trying to balance workload.
Other things you might consider are partitioning large tables for better performance.
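If partitioning does turn out to be warranted, the building blocks are a partition function, a partition scheme, and the table created on that scheme; a minimal sketch with made-up names and boundary values:

    -- Partition rows by year of OrderDate (boundary values are examples).
    CREATE PARTITION FUNCTION pf_OrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01');

    -- Map every partition to the PRIMARY filegroup for simplicity;
    -- real deployments usually spread partitions across filegroups.
    CREATE PARTITION SCHEME ps_OrderYear
    AS PARTITION pf_OrderYear ALL TO ([PRIMARY]);

    -- Create the large table on the partition scheme, with the partitioning
    -- column included in the clustered key.
    CREATE TABLE dbo.Orders
    (
        OrderId   int IDENTITY(1,1) NOT NULL,
        OrderDate datetime          NOT NULL,
        Amount    money             NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderId)
    ) ON ps_OrderYear (OrderDate);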
As gbn pointed out in his comment though, it's better to determine if you actually need these strategies before implementing them, because they add a lot of complexity and maintenance efforts, which may not even be needed. It's important to properly analyze how much data you think you will have, and how much activity will be occurring against that data to determine if strategies such as the ones I just described are even needed.
Also, you can refer to this link for some other helpful information and some links to whitepapers you may find helpful:
http://social.msdn.microsoft.com/Forums/en/sqldisasterrecovery/thread/05cf41b7-c558-44bf-86c6-12f5c2b2ffe2