Is it possible to rebuild an index without taking the instance offline? - sql-server

I have one NONCLUSTERED INDEX with 85.71% total fragmentation and 55.35% page fullness.
Can this index be rebuilt without taking my instance offline, given that I am not on Enterprise Edition?
TITLE: Microsoft SQL Server Management Studio
------------------------------
Rebuild failed for Index 'idx_last_success_download'. (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Rebuild+Index&LinkId=20476
------------------------------
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
------------------------------
Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500&EvtSrc=MSSQLServer&EvtID=1222&LinkId=20476
------------------------------
BUTTONS:
OK
------------------------------
After reorganizing with:
ALTER INDEX idx_last_success_download ON dbo.TERMINAL_SYNCH_STATS
REORGANIZE;
I'm still getting 85.71% fragmentation. Why?
I'm using DBCC SHOWCONTIG for my stats:
DBCC SHOWCONTIG scanning 'TERMINAL_SYNCH_STATS' table...
Table: 'TERMINAL_SYNCH_STATS' (331148225); index ID: 38, database ID: 7
LEAF level scan performed.
- Pages Scanned................................: 7
- Extents Scanned..............................: 5
- Extent Switches..............................: 6
- Avg. Pages per Extent........................: 1.4
- Scan Density [Best Count:Actual Count].......: 14.29% [1:7]
- Logical Scan Fragmentation ..................: 85.71%
- Extent Scan Fragmentation ...................: 40.00%
- Avg. Bytes Free per Page.....................: 3613.9
- Avg. Page Density (full).....................: 55.35%

The lock timeout is not a version issue.
Yes, it is possible to rebuild an index online.
You have a lock timeout. I suspect it is an active table and the rebuild simply cannot acquire the lock it needs.
Note also that your index spans only 7 pages; at that size fragmentation figures are essentially meaningless (the first pages of a small index are typically allocated from mixed extents), so neither REORGANIZE nor REBUILD is likely to improve them.
Try a Reorganize:
Reorganize and Rebuild Indexes

Please note that in any case you don't have to take the SQL Server database or the SQL Server instance offline to rebuild an index. However, if you have Standard Edition, an ONLINE index rebuild is not possible, and you have to make sure no application or query is accessing the table, otherwise the index rebuild will fail.
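For comparison, on Enterprise Edition an online rebuild of the index from the question would look like this (a minimal sketch; ONLINE = ON is the only essential option):
ALTER INDEX idx_last_success_download ON dbo.TERMINAL_SYNCH_STATS
REBUILD WITH (ONLINE = ON);  -- Enterprise (or Developer) Edition only; the table stays available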
What is the output of:
SELECT @@VERSION
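Edition matters as much as version here, since ONLINE rebuilds are an Enterprise Edition feature; a quick check using the built-in SERVERPROPERTY function:
SELECT SERVERPROPERTY('Edition') AS Edition,            -- must report Enterprise (or Developer) for ONLINE = ON
       SERVERPROPERTY('ProductVersion') AS ProductVersion;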
The error message
Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)
only says that when the index rebuild task tried to take an exclusive lock on the table (during a rebuild the index is dropped and recreated), it was not able to get one, hence the error message. It is not a threatening message. You can get it in both Standard and Enterprise Edition while rebuilding an index.
An index rebuild is a maintenance activity, so it should always be done when the load on the database is relatively low, or during a maintenance window.
As a solution, try rebuilding when nobody is accessing the database or the load is very light.

Try running the rebuild with the WAIT_AT_LOW_PRIORITY option (note: this option requires SQL Server 2014 or later, and ONLINE = ON requires Enterprise Edition), e.g. as below:
ALTER INDEX idx_last_success_download ON dbo.TERMINAL_SYNCH_STATS
REBUILD WITH
(
    FILLFACTOR = 80,
    SORT_IN_TEMPDB = ON,
    STATISTICS_NORECOMPUTE = ON,
    ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 4 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)),
    DATA_COMPRESSION = ROW
);
For more info refer: https://msdn.microsoft.com/en-us/library/ms188388.aspx

Related

Using Flyway - how can memory-optimized tables be deployed?

I am using Flyway Community Edition 6.3.2 by Redgate and attempting to deploy a memory-optimized table.
The content of my versioned script is...
CREATE TABLE temp_memory_optimized.test
(
id INT NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
At deploy time I am seeing this error...
ERROR: Migration of schema [dbo] to version 1.0.2 - add memory optimized objects failed! Changes successfully rolled back.
ERROR:
Migration v1.0.2__add_memory_optimized_objects.sql failed
---------------------------------------------------------
SQL State : S000109
Error Code : 12331
Message : DDL statements ALTER, DROP and CREATE inside user transactions are not supported with memory optimized tables.
Location : C:\...\v1.0.2__add_memory_optimized_objects.sql (C:\...\v1.0.2__add_memory_optimized_objects.sql)
Line : 1
Statement : CREATE TABLE temp_memory_optimized.test
(
id INT NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
The memory-optimized filegroup is configured correctly, and I can successfully deploy the table manually onto my test box.
I have set -mixed=true on the migrate command.
I know I cannot be the first person to hit this problem, but internet searches are proving fruitless in tracking down a solution.
As mentioned in issue 2062, Flyway does not automatically detect that
CREATE TABLE ... WITH (MEMORY_OPTIMIZED = ON) is not valid inside a transaction. You will need to override this behaviour on a per-script basis as detailed here: https://flywaydb.org/documentation/scriptconfigfiles, and you will need to do so for each CREATE/ALTER/DROP on in-memory objects.
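In practice that means dropping a script config file next to the affected migration. A minimal sketch (executeInTransaction is the documented setting; the filename simply mirrors the migration from the question):
# v1.0.2__add_memory_optimized_objects.sql.conf
executeInTransaction=false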

Database migration without downtime

In our organization we need to run a database migration on live site data. We want to add a column with a default value to a table with around 1000 rows. Can you suggest any method that gives us zero or minimal downtime? We are using a PostgreSQL database and an Elixir Phoenix app.
Thanks.
PS: we want minimal downtime, not exactly zero. Also, we want to run the migration using Ecto in Elixir, not through a script.
Also, if you can, tell us the expected time taken to run the migration when we have a default constraint set.
In general, ALTER TABLE requires an exclusive lock on the table, but adding a column with a default value can be very fast because only the system catalog has to be updated (and this does not depend on the table size):
For example with PostgreSQL 12, I get:
# select count(*) from t;
count
---------
1000000
(1 row)
Time: 60.003 ms
# begin;
BEGIN
Time: 0.096 ms
# alter table t add newcol int default 19;
ALTER TABLE
Time: 0.457 ms
# commit;
COMMIT
Time: 9.211 ms
You should be able to get very small downtime with PostgreSQL 11 or 12. On a lower version PostgreSQL rewrites the table, but even in that case 1000 rows is very small and the rewrite should also be very fast.
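On a pre-11 version with a larger table, the usual workaround is to split the change so the full-table rewrite never happens; a sketch using the names from the transcript above:
-- catalog-only change: no table rewrite, no long-held lock
alter table t add column newcol int;
-- the default now applies to newly inserted rows only
alter table t alter column newcol set default 19;
-- backfill existing rows (batch this with a WHERE clause on a big table)
update t set newcol = 19 where newcol is null;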

Columnstore index creation fails with this error "There is insufficient memory in resource pool 'default' to run this query"

When trying to execute this command to create a columnstore index, I get the error "There is insufficient memory in resource pool 'default' to run this query".
The server is SQL Server 2016 Enterprise with 5 TB of hard drive space and 128 GB of RAM. Max server memory is set to 120 GB, leaving 8 GB for the OS. There are no other services aside from SQL Server running on this machine.
The table I am attempting to create the columnstore index on is 36 GB, 2 billion rows, and 8 columns wide, predominantly int and date.
Am I correct in assuming that 120 GB is insufficient? Although I have also tried creating a columnstore index on a 55-million-row, 6 GB table and that failed too.
CREATE CLUSTERED COLUMNSTORE INDEX [CCI-BIG] ON [DBO].[BIG_DATA]
WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) ON [PRIMARY]
GO
The error means that you don't have enough available RAM to create the index. You need to check the amount of RAM actually allocated to SQL Server (at the very least, check SQL Server's memory utilization in Task Manager); it does not have to be less than the size of the table.
Also, try to reduce MAXDOP. For instance, run the following code:
CREATE CLUSTERED COLUMNSTORE INDEX [CCI-BIG] ON [DBO].[BIG_DATA]
WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0, MAXDOP = 1) ON [PRIMARY]
GO
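If reducing MAXDOP does not help, it is worth checking how much memory the default resource pool can actually grant to a single query; a diagnostic sketch using the standard Resource Governor DMV:
SELECT name, min_memory_percent, max_memory_percent
FROM sys.dm_resource_governor_resource_pools;  -- the 'default' pool caps this query's memory grant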

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002". After several tries I get the same result.
Script for simple table:
create table dbo.works (
work_id int not null identity(1,1) constraint PK_WORKS primary key,
client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
start_time datetime not null,
end_time datetime not null,
caption varchar(1000) null)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 sql server (local) - error growing transaction log file.
But in Azure I cannot manage this parameter.
How can I change the structure of already-populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours you can create a new table with the new schema and copy the data into it in batches.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
Look at sys.database_files after connecting to the user database. If the log file's current size has reached its max size, then you have hit this limit. At that point you either have to kill the active transactions or move to a higher tier (if killing them is not an option because of the amount of data you are modifying in a single transaction).
You can also get the same by doing:
DBCC SQLPERF(LOGSPACE);
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately; see the sketch after the link below. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view with delta_secs as a calculated field. Since this is a derived field, that is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
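A minimal sketch of idea 1, assuming the table/column names from the question and an arbitrary batch size of 50,000 rows:
ALTER TABLE dbo.user_works ADD delta_secs int NULL;
GO
-- each batch commits separately, keeping transaction log usage bounded
WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.user_works
    SET delta_secs = DATEDIFF(second, start_time, end_time)
    WHERE delta_secs IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END;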

Indexes on small table needing to be constantly rebuilt when server busy

In my SQL database (compatibility level SQL Server 2008, but running on SQL Server 2012), I have a small table called Locations: 2036 rows in 21 pages.
"Select * from Locations" returns all rows in a split second. However, after moving to a virtual environment, under heavy load that query hangs until I rebuild the indexes on the table: "ALTER INDEX ALL ON dbo.Locations REBUILD WITH (FILLFACTOR = 100)".
And then it's fine. Until it slows down again and I need to rebuild again -- sometimes 5 seconds later!
When I run "DBCC CHECKTABLE (Locations);", I get "DBCC results for 'Locations'. There are 2036 rows in 21 pages for object "Locations"."
Any ideas what this could be or where I should start looking?
A query on the whole table,
Select * from Locations
will not use any nonclustered index (it scans the entire table), so I believe it is pure coincidence that an index rebuild is 'solving' the problem. Have you checked for blocked process threads on the SQL Server? There may be some contention on the table that is locking that query.
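The next time the query hangs, a quick way to look for blocking is the standard sys.dm_exec_requests DMV:
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;  -- a nonzero value means another session holds the lock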
