Index fragmentation growing rapidly even when using fill factor - sql-server

I am using SQL Server 2012. For the past few days I have noticed that the fragmentation of some indexes is growing very rapidly. I have read various articles and applied a fill factor.
First I changed the fill factor to 95 and rebuilt the indexes; after one day fragmentation was about 50%. So I decreased the fill factor to 90 and then 80, but after one day fragmentation again reached about 50%.
I need some help finding the reason for the growing fragmentation and a way to fix it.
FYI, I am applying the fill factor at the index level. Only 4-5 indexes have this issue; I have applied a fill factor to other indexes as well and they are working fine.
Thanks in advance.

There are many things which cause index fragmentation; some of them are below:
1. Insert and update operations causing page splits
2. Delete operations
3. Initial allocation of pages from mixed extents
4. Large row size
SQL Server only uses the fill factor when you're creating, rebuilding, or reorganizing an index, so even if you specify a fill factor of 70, you may still get page splits. Furthermore, index fragmentation is an "expected" and "unavoidable" characteristic of any OLTP environment.
So with your fill factor setting, SQL Server leaves some free space when the index is rebuilt; this helps with the first scenario only, and how long that free space lasts depends on your workload.
So I recommend not worrying about fragmentation much unless your workload does a lot of range scans.
You can track page splits and deletes, which are some of the causes of fragmentation, using Perfmon counters, Extended Events, or the transaction log (a small sketch follows the links below). The links below go into more detail:
https://dba.stackexchange.com/questions/115943/index-fragmentation-am-i-interpreting-the-results-correctly
https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
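As mentioned above, one low-friction way to watch page splits is the SQLServer:Access Methods "Page Splits/sec" Perfmon counter, which is also exposed through a DMV. A small sketch (the value is cumulative since the instance started, so sample it twice and compare):
-- Page Splits/sec read through the DMV instead of the Perfmon GUI.
-- The value is cumulative since SQL Server started; sample twice and diff to get a rate.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name = 'Page Splits/sec';
Extended Events or the transaction log can then be used to narrow the splits down to specific objects.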
References:
Notes - SQL Server Index Fragmentation, Types and Solutions

Related

FILLFACTOR on empty database

I have a system which populates an empty database with many millions of records.
The database has various types of indexes, the ones I'm worried about are:
Indices on foreign keys. These are non-clustered, and not necessarily inserted in sequential order.
Indices on BINARY(32) fields. These are content hashes and not ordered at all. Basically, they are like GUIDs and not sequential.
So as the data is bulk-inserted, there is significant fragmentation of these indices.
Question 1: if I set FILLFACTOR=75 on these indices when the database is created, will it have any effect at all as the data is inserted? It seems FILLFACTOR only has an effect after the data is created, not before. Or will new index pages be allocated according to the original FILLFACTOR setting?
Question 2: what other recommended strategies can I use to make sure these indices perform optimally?
Question 1:
Fill factor is used only when indexes are created or rebuilt; SQL Server does not try to fill pages according to the fill factor while doing inserts.
Question 2:
It depends on what you consider optimal. At a minimum you can check whether your indexes are useful and whether your queries are actually using them (see the sketch below). There are tons of best practices around indexes, such as a selective leading key and small keys.
It's good to search for anything about indexes from Kimberly Tripp and DBA.SE.
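As a rough starting point for the "are my indexes useful" check, a query like this against sys.dm_db_index_usage_stats compares reads and writes per index (a sketch; the statistics reset when the instance restarts):
-- Reads (seeks/scans/lookups) versus writes (user_updates = maintenance cost) per index.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups,
       ius.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
       ON ius.object_id   = i.object_id
      AND ius.index_id    = i.index_id
      AND ius.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY ius.user_updates DESC;
Indexes with many user_updates and few or no seeks/scans are candidates for review.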
References:
http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2530-fill-factor/
http://www.sqlskills.com/blogs/kimberly/category/indexes/
Check the index fragmentation as well as the write/read ratio. If the write/read ratio is very high AND you see fragmentation, you can experiment with adding a fill factor during the index rebuild operation (see the example below).
The right fill factor really depends on the fragmentation you are seeing. If you see 0-20% fragmentation (and it built up over a period of time), you may not want any fill factor at all. If you see 20-40% fragmentation, you may try 90%.
Lastly, get a good index maintenance plan. Ola Hallengren's index maintenance script is excellent.
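For reference, this is what applying a specific fill factor as part of a rebuild looks like; the index and table names are placeholders:
-- Rebuild one index, leaving 10% free space on each leaf-level page.
ALTER INDEX IX_MyIndex ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 90);
On Enterprise Edition you can add ONLINE = ON to keep the table available during the rebuild.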
NB: The above suggestions are just suggestions - your mileage may vary depending on your workload.

Fill factor and indexes

I am confused. If I set the fill factor to 50%, the SQL engine will leave half the space empty for future growth, so data will be stored on the data pages only up to about 4 KB, because the maximum page size is 8 KB. Also, fill factor applies only when rebuilding indexes. Please clear up my doubts about the above scenario. Any help would be appreciated.
Thanks
DBAs and developers often read that lowering the fillfactor improves performance by reducing page splits. Perhaps they’re trying to fix a performance problem, or perhaps they’re feeling paranoid. They either lower fillfactor too much on some indexes, or apply a fillfactor change to all indexes.
Here’s the scoop: it’s true that the default fillfactor of 100% isn’t always good. If I fill my pages to the brim, and then go back and need to insert a row onto that page, it won’t fit. To make the data fit and preserve the logical structure of the index, SQL Server will have to do a bunch of complicated things (a “bad” type of page split), including:
1) Add a new page.
2) Move about half the data to the new page.
3) Mark the data that was moved on the old page so it's not valid anymore.
4) Update page link pointers on existing pages to point to the new page.
And yep, that’s a lot of work.
It generates log records and causes extra IO. And yes, if you have this happen a lot, you might want to lower the fillfactor in that index a bit to help make it happen less often.
BEST PRACTICES FOR SETTING FILLFACTOR
Here’s some simple advice on how to set fillfactor safely:
1)Don’t set the system wide value for fillfactor. It’s very unlikely that this will help your performance more than it hurts.
2)Get a good index maintenance solution that checks index fragmentation and only acts on indexes that are fairly heavily fragmented. Have the solution log to a table. Look for indexes that are frequently fragmented. Consider lowering the fillfactor gradually on those individual indexes using a planned change to rebuild the index. When you first lower fillfactor, consider just going to 95 and reassessing the index after a week or two of maintenance running again. (Depending on your version and edition of SQL Server, the rebuild may need to be done offline. Reorganize can’t be used to set a new fillfactor.)
This second option may sound nitpicky, but in most environments it only takes a few minutes to figure out where you need to make a change, and you can do it once a month. It's worth it, because nobody wants to watch their database performance slow down and then realize they've been causing extra IO by leaving many gigabytes of space in memory needlessly empty.
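As a quick sanity check on point 1, you can see whether anyone has changed the instance-wide fill factor, and which indexes carry an explicit fill factor of their own. A sketch (a value of 0 means the default of 100):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor (%)';          -- run_value of 0 means the default (100) is in use

SELECT OBJECT_NAME(object_id) AS table_name,
       name                   AS index_name,
       fill_factor
FROM sys.indexes
WHERE fill_factor NOT IN (0, 100);            -- indexes with a non-default fill factor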

SQL Server - will PAGLOCK table hint cause high fragmentation?

We have an application where we are parsing and loading large numbers of messages into a SQL Server 2008 R2 instance. We are using TVPs to pass collections, which enables high throughput, but it is also causing us to encounter quite a few deadlocks. We recently added the PAGLOCK hint to the areas where we were having issues and it has resolved the majority of the problems.
My concern is that this will cause a large amount of fragmentation. Is that correct? We are also working on some redesign options, but wanted to gain some insight into the impact of using the PAGLOCK hint.
Any suggestions / comments would be greatly appreciated.
Thanks,
S
PAGLOCK does not affect fragmentation. Fragmentation happens when the insert of a new row causes a page split because the new row no longer fits on the correct page, and that is independent of the lock type used. Page splits are common if the clustered index key is not an ever-increasing value such as an identity. The same thing happens in nonclustered indexes for the same reason, albeit less frequently, because more rows tend to fit on a page.
What that means is that you can't avoid fragmentation on tables with multiple indexes and a high insert frequency. Therefore you should reorganize/rebuild your indexes frequently, based on their actual fragmentation (see the sketch below).
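A minimal sketch of that kind of threshold-based maintenance for a single index, assuming a placeholder table/index name and the commonly cited 5%/30% thresholds:
DECLARE @frag float;

-- Leaf-level fragmentation of the clustered index (index_id = 1) of a hypothetical table.
SELECT @frag = MAX(ips.avg_fragmentation_in_percent)
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Messages'), 1, NULL, 'LIMITED') AS ips;

IF @frag >= 30
    ALTER INDEX PK_Messages ON dbo.Messages REBUILD;      -- heavy fragmentation: rebuild
ELSE IF @frag >= 5
    ALTER INDEX PK_Messages ON dbo.Messages REORGANIZE;   -- light fragmentation: reorganize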
The PAGLOCK hint has nothing to do with fragmentation. Fragmentation is related to the indexes on the table; it looks like you don't have the right indexes on the table that receives the heavy inserts.
PAGLOCK is a table hint which is related to concurrency. Please refer to SQL Server Books Online (a small syntax illustration follows the link):
MSDN: Table Hints (Transact-SQL)
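For context, this is roughly how the hint appears on a DML statement; the temp table and column names here are made up purely for illustration:
-- Hypothetical, self-contained illustration of the PAGLOCK table hint on an insert.
CREATE TABLE #Messages (MessageId int NOT NULL, Body nvarchar(100) NULL);

INSERT INTO #Messages WITH (PAGLOCK) (MessageId, Body)
VALUES (1, N'hello');

DROP TABLE #Messages;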

How to avoid SQL Server page fragmentation in this scenario?

I want to order SQL Inserts into a table to optimize page use by avoiding fragmentation as much as possible.
I will be running a .net Windows Service, which every 2 hours will take some data from a database and optimize it
for future queries. A varchar(6000) column is involved, though I estimate it will rarely go beyond 4000 bytes.
In fact, this column normally varies between 600 and 2400 bytes.
It's 6000 to help avoid truncation errors. Still, I can control that column's size through .net.
There won't ever be updates or deletes, just selects (and inserts every 2 hours).
There will be around 1000 inserts every 2 hours.
I'm using SQL Server 2005. The usable page size is said to be 8096 bytes.
I need to insert rows into a table. Given the size of the rows, between 4 and 12 rows could fit on a page.
So from .net I will read data from the database, store it in memory (use some clustering algorithm maybe?), and then insert around 1000 rows.
I was wondering if there is a way to avoid or minimize page fragmentation in this scenario.
Is the table a btree or a heap? Do you have a clustered index on it? If yes, then what column is the clustered index on, and how is the column value computed at insert?
Why do you care about fragmentation to start with? Space considerations or read-ahead performance? For space, you should skip SQL 2005 and go to SQL 2008 for page compression (see the sketch below). For read-ahead, it would be worth investigating why you need large read-aheads to start with.
Overall, index fragmentation is more of an overhyped brouhaha that everyone talks about but very few really understand. There are many more avenues to pursue before fragmentation becomes the real bottleneck.
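If you do move to SQL Server 2008 or later, enabling page compression is a short, per-table change; the table name here is a placeholder:
-- Page-compress the table itself (its heap or clustered index). Nonclustered
-- indexes can be compressed the same way with ALTER INDEX ... REBUILD.
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);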

Indexing a large table in SQL Server

I have a large table (more than 10 million records). This table is heavily used for searches in the application, so I had to create indexes on it. However, I experience slow performance when a record is inserted or updated in the table. This is most likely because the indexes have to be updated as well.
Is there a way to improve this?
Thanks in advance
You could try reducing the number of page splits (in your indexes) by reducing the fill factor on your indexes. By default, the fill factor is 0 (the same as 100 percent), so when you rebuild your indexes the pages are completely filled. This works great for tables that are not modified (no inserts/updates/deletes). However, when data is modified, the indexes need to change, and with a fill factor of 0 you are almost guaranteed to get page splits.
By modifying the fill factor, you should get better performance on inserts and updates because the pages won't ALWAYS have to split. I recommend you rebuild your indexes with a fill factor of 90 (see the example below). This leaves 10% of each page empty, which causes fewer page splits and therefore less I/O. It's possible that 90 is not the optimal value to use, so there may be some trial and error involved here.
By using a different value for fill factor, your select queries may become slightly slower, but with a 90% fill factor you probably won't notice it much.
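To apply that recommendation to every index on a table in one go, something like this works (the table name is a placeholder and 90 is just the starting point suggested above):
-- Rebuild all indexes on the table, leaving 10% free space on each leaf page.
ALTER INDEX ALL ON dbo.SearchTable
REBUILD WITH (FILLFACTOR = 90);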
There are a number of solutions you could choose:
a) You could partition the table (a sketch follows the link at the end of this list).
b) Consider performing updates in batches at off-peak hours (for example, at night).
c) Since engineering is a balancing act of trade-offs, you have to choose which operation is more important (select or update/insert/delete). Assuming you don't need the results in real time for an "insert", you can use SQL Server Service Broker to perform the "less important" operation asynchronously:
http://msdn.microsoft.com/en-us/library/ms166043(SQL.90).aspx
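As an illustration of option (a), a date-based partitioning sketch might look like the following; the function, scheme, table, and column names are all hypothetical:
-- Two boundary values give three partitions:
--   < 2012-01-01, [2012-01-01, 2012-07-01), and >= 2012-07-01.
CREATE PARTITION FUNCTION pf_ByDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-07-01');

CREATE PARTITION SCHEME ps_ByDate
    AS PARTITION pf_ByDate ALL TO ([PRIMARY]);

-- Create (or re-create) the clustered index on the partition scheme.
CREATE CLUSTERED INDEX CIX_SearchTable_CreatedDate
    ON dbo.SearchTable (CreatedDate)
    ON ps_ByDate (CreatedDate);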
Thanks
-RVZ
We'd need to see your indexes, but likely yes.
Some things to keep in mind are that you don't want to just put an index on every column, and you don't generally want just one column per index.
The best thing to do, if you can, is to log some actual searches, track what users are actually searching for, and create indexes targeted at those specific types of searches (something like the sketch below).
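For instance, if logging showed that most searches filter on a customer and a date range and then display an amount, a targeted index might look like this; all of the names are hypothetical:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)   -- columns used for filtering / range scans
    INCLUDE (TotalAmount);                  -- column only returned, so included at the leaf level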
This is a classic engineering trade-off... you can make the shovel lighter or stronger, never both (until a breakthrough in materials science).
More indexes mean more DML maintenance but faster queries.
Fewer indexes mean less DML maintenance but slower queries.
It could be possible that some of your indexes are redundant and could be combined.
Besides what Joel wrote, you also need to define the SLA for DML. Is it OK that it's slower? You noticed that it slowed down, but does that really matter compared to the query improvement you've achieved? In other words, is it OK to have a shovel that light but that weak?
If you have a clustered index that is not on an identity (ever-increasing) numeric field, rearranging the pages can slow you down. If this is the case for you, see if you can improve speed by making it a non-clustered index instead (faster than no index, but it tends to be a bit slower than a clustered index, so your selects may slow down while inserts improve).
I have found users more willing to tolerate a slightly slower insert or update than a slower select. That is a general rule, but if it becomes unacceptably slow (or, worse, times out), no one can deal with that well.
