I have some unique non-clustered indexes with a uniqueidentifier (GUID) column. These indexes become heavily fragmented all the time.
How should I handle this with Ola Hallengren's maintenance script?
Should I skip the reorg/rebuild of these indexes?
The problem is described here:
https://blogs.msdn.microsoft.com/sqlserverfaq/2011/08/30/another-reason-of-index-logical-fragmentation/
Here you have two options (very basic information):
DBCC DBREINDEX: locks up the tables, and users may not be able to access the data until the reindex is done. Bottom line: this drops the indexes and creates them from scratch. You have brand new indexes when this is done, so they are in the best state possible. Again, it ties up the database tables. This is an all-or-nothing action; if you stop the process, everything has to roll back.
DBCC INDEXDEFRAG: does not lock up the tables as much, and users can still access the data. The indexes still exist; they are just being 'fixed' in place. If this is stopped, it doesn't roll everything back, so the index will simply be less defragmented than if it had run to completion.
If you run DBREINDEX, you don't need to run INDEXDEFRAG. There's nothing to defrag when you have brand new indexes.
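For reference, a minimal sketch of the two (now deprecated) commands; the database, table, and index names here are hypothetical:

DBCC DBREINDEX ('dbo.MyTable', '', 90);                     -- rebuilds all indexes on the table with fill factor 90
DBCC INDEXDEFRAG (MyDb, 'dbo.MyTable', 'IX_MyTable_Guid');  -- online defrag of a single index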
Hope this helps!
I think in this instance you should exclude these indexes from Ola Hallengren's maintenance script. Also, GUIDs should not be part of any clustered index.
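If you go the exclusion route, IndexOptimize supports a minus prefix in its @Indexes parameter; a sketch based on the documented syntax, with hypothetical database, table, and index names:

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @Indexes = 'ALL_INDEXES, -MyDb.dbo.MyTable.IX_MyTable_Guid';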
I have a question about how to optimize a database. We have a SQL Server database where the main data is stored vertically, so a record has these columns:
ID, version, fieldindex, fieldvalue
and (ID, version, fieldindex) is the primary key.
So if you want to load a logical record set, you have to load all rows for an ID + version; one "record" can consist of around 60 rows. I'm afraid that this causes performance problems in the database.
There are around 10 users working in parallel in the application, and we get deadlocks very often. We even get deadlocks when inserting new rows, even though a record that isn't persisted yet normally can't be locked.
So my question is: how does SQL Server lock records? Is it possible that a record is locked even if I am not selecting that record specifically?
I would be glad if someone could explain how the database is working.
You've got EAV (entity-attribute-value), which is generally considered bad.
To make EAV work acceptably, you'll need the right indexing structure and possibly some care with lock hints and transactions.
Generally you'll want your clustered index to be (EntityID, AttributeId), so all the attributes for an entity are co-located. But to avoid deadlocks you may need to X-lock the main Entity row when modifying the attributes, as SQL Server will otherwise use row locking on the AttributeValue table, which can lead to deadlocks and logical inconsistencies. You can X-lock it by modifying the row as the first operation in your transaction, or by reading it with an XLOCK hint (see the sketch below).
Depending on the role of "version" in your system, it will be somewhere in the clustered index too: if attributes are individually versioned, then at the end; if individual entities are versioned, then in the middle; and if a version contains multiple entities, then as the leading column.
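A minimal sketch of the XLOCK pattern, assuming a hypothetical parent Entity table alongside the AttributeValue table from the question:

DECLARE @EntityID int = 1, @Version int = 1, @FieldIndex int = 5,
        @NewValue nvarchar(100) = N'new value', @dummy int;

BEGIN TRANSACTION;

-- Serialize writers per entity: take an exclusive lock on the parent row first
SELECT @dummy = ID
FROM dbo.Entity WITH (XLOCK, ROWLOCK)
WHERE ID = @EntityID;

-- Concurrent writers for the same entity now queue on the lock above instead of
-- deadlocking on individual attribute rows
UPDATE dbo.AttributeValue
SET fieldvalue = @NewValue
WHERE ID = @EntityID AND version = @Version AND fieldindex = @FieldIndex;

COMMIT TRANSACTION;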
I have an .exe that compares a vbTab-delimited .txt file with a SQL table.
Updates to the table's existing records go very fast. Inserts into the table for new records are quite slow.
As I'm new to SQL, I'm wondering if my idea is crazy talk:
I thought that maybe a solution would be to "pre-populate" the database with 10,000 empty rows (minus the primary key) and somehow have this speed up the process?
Any suggestions would be greatly appreciated.
There is no straightforward answer to your question, as many things are unknown to us (DB configuration, hardware, existing data, etc.).
But you can try the following things:
Try using DB export-import functionality
Instead of fetching records from the DB with an iterator, comparing them with a record from the file, and then doing the insert or modification, you can directly import those records into the DB using an upsert (update if present, insert if not) strategy; see the MERGE sketch after this list. Believe me, this works a lot faster than the previous approach.
If you have indexes on that table, drop the current indexes before the import or insert, do the operation, and re-apply the indexes afterwards. Indexes slow down the performance of inserts.
If the import strategy is not good for you (for example, if you need to do something with those records before insertion), then consider a stored procedure that modifies existing rows and inserts the new ones, again after dropping the indexes.
During this activity, check the DB configuration as well: use proper tuning for buffers, paging, and locking.
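A sketch of the staging-plus-MERGE upsert mentioned above; the file path and all table and column names are hypothetical:

-- Bulk-load the tab-delimited file into a staging table, then upsert in one set-based pass
BULK INSERT dbo.Staging
FROM 'C:\data\records.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

MERGE dbo.Target AS t
USING dbo.Staging AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value);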
Hope this helps :)
To answer your question we may need more information.
How many rows does your table have?
I guess it may be a lack of indexes.
Using SQL Server 2012 Enterprise.
I have a table of 12 billion rows that takes 700GB on disk, in 30 partitions.
It has only one index, clustered.
I have 500 GB free disk space.
I disabled the index (please don't ask why. If you have to know, I targeted the wrong database).
I now want to enable the index. If I do
alter index x1 on t1 rebuild
I eventually get an error because there is not enough free disk space. That was a painful lesson about disk space requirements for rebuilding a clustered index.
Ideally, I want to rebuild the index one partition at a time. If I do
alter index x1 on t1 rebuild partition = 1
I get the error: Cannot perform the specified operation on disabled index.
Any solution, besides buying more physical disks? The table has not changed since disabling the index (it can't be accessed anyway), so I am really looking for a hack that can fool SQL Server into thinking the index is enabled. Any suggestions?
Thanks
If it's a clustered index that you have disabled, you have effectively disabled the table; the only operations you can execute on it are a drop or a rebuild, as far as I am aware.
You could try the deprecated DBCC DBREINDEX command; maybe you will be lucky and it rebuilds more disk-space-efficiently. You might also squeeze out some more space if you set the fill factor to 100 when you rebuild, assuming the table is now only being read.
DBCC DBREINDEX ('Person.Address', 'PK_Address_AddressID', 100)
This allows you to reindex just the clustered index.
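For comparison, the supported ALTER INDEX form of the same rebuild, using the index and table names from the question. SORT_IN_TEMPDB is worth trying if tempdb sits on a different volume, though note it only relocates the sort runs, not the full new copy of the index:

ALTER INDEX x1 ON t1 REBUILD
WITH (FILLFACTOR = 100, SORT_IN_TEMPDB = ON);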
I am trying to insert thousands of rows into a table and performance is not acceptable: rows in one particular table take 300 ms each to insert.
I know that tools exist to profile queries run against SQL Server (SQL Server Profiler, Database Engine Tuning Advisor), but how would I profile insert and update statements to find the slow-running inserts? Am I forced to use perfmon while the queries run and deduce the issue from counters?
I would first check the query plan of a single insert to understand the cost associated with that operation; it is not clear from the question whether the insert is selecting its data from elsewhere.
I would then check the table's indexing for the following:
how many indexes are in place (apart from filtered indexes, every index on the table will be inserted into as well);
whether a clustered index is present, or whether we are inserting into a heap;
whether the clustered index key gives us a hotspot benefit at the end of the table, or instead causes a large quantity of page splits.
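A quick way to check the first two points, assuming a hypothetical target table dbo.MyTable:

-- One row per index; a type_desc of HEAP means there is no clustered index
SELECT name, type_desc, has_filter
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.MyTable');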
These are all schema-based issues. Assuming there are no problems within SQL Server itself, you can then start checking disk I/O counters for disk queue lengths and response times, not forgetting the log drive's response time, since every insert is logged.
These kinds of problems are very difficult to nail down; there is no single prescriptive fix or silver bullet to advise, just a range of things you should be checking.
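If you would rather find the slow statements server-side than trace them with Profiler, here is a sketch against the plan cache DMVs (only statements still in cache will show up):

-- Top statements by average elapsed time (microseconds), slow inserts included
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_microseconds DESC;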
I'm betting that the problem is with the selects and not necessarily the updates. Have you tried profiling the select part of the update statement first, to make sure there isn't a problem there?
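A low-tech way to time just the select half in isolation; the statement shape here is assumed:

SET STATISTICS IO, TIME ON;

-- Run only the SELECT that feeds the UPDATE, then read the elapsed time and reads
SELECT t.Id, t.Value
FROM dbo.Target AS t
WHERE t.Value = N'some value';

SET STATISTICS IO, TIME OFF;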
How do INSERT, UPDATE & DELETE work on a SQL Server table partition?
A technical explanation, please, of how the SQL Server engine handles a partitioned table vs a non-partitioned one.
The SQL Server optimiser will use the query predicates to decide how many table partitions will be affected. This makes the query run faster, as unnecessary data is not read from disk. The query is then run against the relevant data in the affected partitions only. To the user this is completely transparent.
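A minimal sketch of that partition elimination, with all object names hypothetical:

-- A table partitioned by date: three partitions split at the two boundary values
CREATE PARTITION FUNCTION pfOrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');
CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderDate date NOT NULL,
    Amount    money NOT NULL
) ON psOrderDate (OrderDate);

-- The predicate on the partitioning column lets the engine touch only the one
-- matching partition; INSERT, UPDATE and DELETE are routed the same way
SELECT SUM(Amount)
FROM dbo.Orders
WHERE OrderDate >= '2024-01-01';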
I found this article by Kimberly Tripp to be incredibly useful in figuring out the ins and outs of table partitioning. It's about 40 pages long and technically detailed, and a printout sits on my desk as a permanent reference.