When I use the 'Select Top 1000 Rows' function in SSMS on three of the tables in my database, I get an error saying the database is offline. But the database name in the error message does not match the name of the database in the query.
SELECT TOP 1000 ...
FROM [vc-live].[dbo].[Errors]
Msg 942, Level 14, State 4, Line 2
Database 'vc-live-old' cannot be opened because it is offline.
If I add an explicit USE statement -- either USE [master] or USE [vc-live] -- the query runs fine.
The only other weirdness I can find is that the vc-live-old database shows online in sys.master_files though it is offline in SSMS.
As you probably suspect, the database was renamed some time back with an ALTER DATABASE ... MODIFY NAME statement after placing it in single-user mode with ROLLBACK IMMEDIATE.
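For reference, a rename of that sort looks roughly like this (the names here are placeholders, not the statements that were actually run):

ALTER DATABASE [OldName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [OldName] MODIFY NAME = [NewName];
ALTER DATABASE [NewName] SET MULTI_USER;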
The application that accesses the database is running fine and I am not concerned with data loss due to the nature of the application. However, I am concerned about what might happen when the database engine is restarted.
The server is SQL Server 2012 SP2.
Any thoughts on this unexpected behavior?
https://dba.stackexchange.com/questions/48237/renaming-sql-server-database-unusual-result
The execution plans for the SELECT TOP 1000 queries had been cached before the rename, so they still referenced the old database name. Per the advice at the link above, I used DBCC FREEPROCCACHE to correct the issue.
This post is also good if you want to use the command with more finesse: https://sqlserverperformance.wordpress.com/2009/12/28/fun-with-dbcc-freeproccache/
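If flushing the entire cache is too heavy-handed, one option (a sketch; the LIKE filter is my assumption) is to find just the plans that still reference the old name and free those individually:

-- Find cached plans whose text mentions the renamed database
SELECT cp.plan_handle, st.[text]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.[text] LIKE '%vc-live%';

-- Then free each offending plan on its own:
-- DBCC FREEPROCCACHE (0x...);  -- paste a plan_handle from the results above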
I have a Microsoft SQL database that we have been using for several years. Starting this morning a single table in the database is throwing a time-out error whenever we attempt to insert or update any records.
I have tried to insert and update through:
Microsoft Access ODBC
a .Net Program via Entity Framework
a stored procedure run as an automatic job each morning
a custom query written this morning to test the database and executed through SQL Server Management Studio
Opening the table directly via 'Edit Top 200 Rows' and typing in the appropriate values
We have restarted the service, then restarted the entire server, and continue to get the same problem. The remainder of the database appears to be working fine: all data can be read, even from the affected table, and other tables accept updates and inserts just fine.
Looking through the data in the table, I have not found anything that appears out of the ordinary.
I am at a loss as to the next steps on finding the cause or solution.
It's not a space issue, is it? Try:
SELECT volume_mount_point AS Drive,
       CAST(SUM(available_bytes) * 100 / SUM(total_bytes) AS int) AS [Free%],
       AVG(available_bytes / 1024 / 1024 / 1024) AS FreeGB
FROM sys.master_files f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.[file_id])
GROUP BY volume_mount_point
ORDER BY volume_mount_point;
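If the volumes themselves look fine, it may also be worth checking whether one of the database's files has hit a fixed maximum size (my guess at a related cause; substitute your database name):

SELECT name,
       size / 128 AS CurrentSizeMB,   -- size is stored in 8-KB pages
       CASE max_size WHEN -1 THEN NULL ELSE max_size / 128 END AS MaxSizeMB,
       growth
FROM sys.master_files
WHERE database_id = DB_ID('YourDatabase');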
I have a database whose log file is 527 GB in size and shows almost 100% used. The database is in an Always On availability group with asynchronous commit to another SQL Server instance, uses the full recovery model, and a log backup runs every hour. I tried to shrink the log file; it didn't work and gave me the following messages.
Msg 1468, Level 16, State 2, Line 2
The operation cannot be performed on database "MYDB" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.
Msg 5069, Level 16, State 1, Line 2
ALTER DATABASE statement failed.
The log for database 'MYDB' cannot be shrunk until all secondaries have moved past the point where the log was added..
(1 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
I think it's not shrinking because, with asynchronous replication, not all log records have been sent to the secondary yet, and it has been like this the whole time. How do I solve the issue without taking the database out of the availability group? I can move it to synchronous mode, but I do not want it to leave the AG.
To find the cause of your problem, have a look at the log_truncation_holdup_reason column returned by the query below:
SELECT * FROM sys.dm_db_log_stats(DB_ID('YourDatabaseName'));
Update: on versions earlier than 2016 SP2 (where sys.dm_db_log_stats is not available), you can get the same information with:
SELECT log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabaseName'
If it is AVAILABILITY_REPLICA, check the active transactions to see which one might be causing it:
DBCC OPENTRAN;
Depending on the results, you can decide what to do next.
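If the wait really is just the asynchronous secondary lagging, then once it catches up the usual sequence is another log backup followed by the shrink (the file name, path, and target size below are placeholders):

BACKUP LOG [MYDB] TO DISK = N'X:\Backups\MYDB.trn';
USE [MYDB];
DBCC SHRINKFILE (N'MYDB_log', 10240);  -- target size in MB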
Here's my setup. Once a day, a full backup of my DB is retrieved from the production server and restored onto a local SQL Server instance. Every 15 minutes afterwards, a SQL transaction log is retrieved from production and restored locally.
RESTORE DATABASE [DBNAME] FROM DISK = @path WITH NORECOVERY, REPLACE
RESTORE LOG [DBNAME] FROM DISK = @path WITH NORECOVERY
In case of a failure of the production environment, I need to be able to use the local DB instead. This means "finishing up the restore" and changing some configuration values like this:
RESTORE DATABASE [DBNAME] with RECOVERY
UPDATE [DBNAME].dbo.[TABLE] SET [COL1] = 1
I have put this code in a stored procedure (in another DB on the same SQL Server instance). However, I am unable to execute it as the second line causes an error:
Database 'DBNAME' cannot be opened. It is in the middle of a restore.
I assume this is due to pre-validation by the SQL Server engine (since the DB is not available until the RESTORE statement has executed), but I would like to know how to work around it as cleanly as possible. I found a workaround, which I posted as an answer below, but it's definitely not a great way to solve the problem.
Thanks for the help!
You should be able to work around this by placing the second statement in an EXEC:
RESTORE DATABASE [DBNAME] with RECOVERY
EXEC('UPDATE [DBNAME].dbo.[TABLE] SET [COL1] = 1')
The issue you're likely seeing is that SQL Server wants to compile the entire stored procedure before it starts executing. In order to compile the UPDATE, it needs to, at the very least, confirm the existence of the table and column(s) involved.
So, put it in an EXEC so that it's not compiled until that part of the procedure is reached.
The workaround I have found is simple:
Create a stored procedure that executes the query on the database.
Call this SP in the first stored procedure.
However, I'd like to avoid cluttering up my SQL Server databases with unnecessary stored procedures if at all possible!
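For completeness, that workaround looks something like this (the procedure names are made up). Because SQL Server defers name resolution until a procedure is executed, the inner procedure can be created even while DBNAME is still mid-restore:

-- Inner procedure: the only place the restored database is referenced.
CREATE PROCEDURE dbo.SetPostRestoreConfig
AS
    UPDATE [DBNAME].dbo.[TABLE] SET [COL1] = 1;
GO

-- Outer procedure: runs cleanly because the UPDATE lives elsewhere and
-- the inner procedure is only compiled after the RESTORE has completed.
CREATE PROCEDURE dbo.FinishRestore
AS
BEGIN
    RESTORE DATABASE [DBNAME] WITH RECOVERY;
    EXEC dbo.SetPostRestoreConfig;
END;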
If you know roughly how long the restore takes, you could put a delay in it, maybe?
RESTORE DATABASE [DBName] WITH RECOVERY;
BEGIN
    WAITFOR DELAY '00:01';  -- one-minute delay
    UPDATE [DBTable] SET [Col1] = 1;
END;
I have a SQL Server [2012 Express with Advanced Services] database, with not much in it. I'm developing an application using EF Code First, and since my model is still in a state of flux, the database is getting dropped and re-created several times per day.
This morning, my application failed to connect to the database the first time I ran it. On investigation, it seems that the database is in "Recovery Pending" mode.
Looking in the event log, I can see that SQL Server has logged:
Starting up database (my database)
...roughly twice per second all night long. (The event log filled up, so I can't see beyond yesterday evening).
Those "information" log entries stop at about 6am this morning, and are immediately followed by an "error" log entry saying:
There is insufficient memory in resource pool 'internal' to run this query
What the heck happened to my database?
Note: it's just possible that I left my web application running in "debug" mode overnight - although without anyone "driving" it I can't imagine that there would be much database traffic, if any.
It's also worth mentioning that I have a full-text catalog in the database (though as I say, there's hardly any actual content in the DB at present).
I have to say, this is worrying - I would not be happy if this were to happen to my production database!
With AUTO_CLOSE ON, the database is closed as soon as there are no connections to it, and re-opened (running recovery, albeit a fast-paced one) every time a connection is established to it. So you were seeing the message because your application was connecting to the database every couple of seconds. You probably always had this behavior and never noticed it before. Now that your database has crashed, you investigated the log and discovered this problem. While it's good that you now know and will likely fix it, this does not address your real problem, namely the availability of the database.
So now you have a database that won't come out of recovery; what do you do? You restore from your last backup and apply your disaster recovery plan. Really, that's all there is to it, and there is no alternative.
If you want to understand why the crash happened (it could be any of a myriad of reasons...), you need to contact CSS (Product Support). They have the means to guide you through the investigation.
If you want to turn off this message in the event log:
Go to SQL Server Management Studio,
right-click on your database and select Properties,
select the Options page (in the left panel),
look in the "Automatic" section and change "Auto Close" to False,
click OK.
That's all :)
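The same change can be made from T-SQL instead of the GUI (substitute your database name):

ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;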
I had a similar problem with a SQL Express database stuck in recovery. After investigating the log, it transpired that the database was starting up every couple of minutes. Running the script
SELECT name, state_desc, is_auto_close_on FROM sys.databases WHERE name = 'mydb';
revealed that auto close was set to on.
So it appears that the database is always in recovery, but it is actually coming online for a brief moment before going offline again because there are no client connections.
I solved this with the following script:
DECLARE @state varchar(20);
WHILE 1 = 1
BEGIN
    SELECT @state = state_desc FROM sys.databases WHERE name = 'mydb';
    IF @state = 'ONLINE'
    BEGIN
        ALTER DATABASE MyDb
        SET AUTO_CLOSE OFF;
        PRINT 'Online';
        BREAK;
    END
    WAITFOR DELAY '00:00:02';
END
I have a SQL Server 2005 database that has been deleted, and I need to discover who deleted it. Is there a way of obtaining this user name?
Thanks, MagicAndi.
If there has been little or no activity since the deletion, then the out-of-the-box trace may be of help. Try running:
DECLARE @path varchar(256);

SELECT @path = path
FROM sys.traces
WHERE id = 1;

SELECT *
FROM sys.fn_trace_gettable(@path, 1);
[In addition to the out-of-the-box trace, there is also the less well-known 'black box' trace, which is useful for diagnosing intermittent server crashes. This post, SQL Server’s Built-in Traces, shows you how to configure it.]
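As a rough sketch (see the linked post for the full details), the black box trace is created with sp_trace_create using option value 8 (TRACE_PRODUCE_BLACKBOX) and then started:

DECLARE @TraceID int;
-- @tracefile must be NULL for the black box trace; SQL Server manages the file itself.
EXEC sp_trace_create @TraceID OUTPUT, @options = 8, @tracefile = NULL;
EXEC sp_trace_setstatus @TraceID, 1;  -- 1 = start the trace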
I would first ask everyone who has admin access to the SQL Server if they deleted it.
The best way to get the database back is to restore the latest backup.
Now to discuss how to avoid such problems in the future.
First, make sure your backup process is running correctly and frequently. Take a transaction log backup every 15 minutes, or every half hour if it is a highly transactional database; then the most you lose is half an hour's worth of work. Practice restoring the database until you can easily do it under stress.
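For example, the scheduled log backup can be as simple as this (database name and path are placeholders):

BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_log.trn';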
In SQL Server 2008 you can add DDL triggers (not sure if you can do this in 2005) which allow you to log who made changes to the structure. It might be worth your time to look into this.
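(Server-scoped DDL triggers are in fact available from SQL Server 2005 onwards.) A minimal sketch, with a made-up audit table, might look like this:

-- Audit table (name is made up) to record who dropped a database
CREATE TABLE master.dbo.DdlAuditLog (
    EventTime datetime NOT NULL DEFAULT GETDATE(),
    LoginName sysname  NOT NULL,
    EventData xml      NULL
);
GO
CREATE TRIGGER LogDropDatabase
ON ALL SERVER
FOR DROP_DATABASE
AS
BEGIN
    -- EVENTDATA() captures the full DDL event, including the statement text
    INSERT INTO master.dbo.DdlAuditLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
END;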
Do NOT allow more than two people admin access to your production database: a DBA and a backup person for when the DBA is out. These people should deploy all changes to the database structure and code, and all changes should be scripted out, code-reviewed, and tested on QA first. No unscripted, "run by the seat of your pants" code should ever be run on prod.
Here is a bit more precise T-SQL:
SELECT DatabaseID, NTUserName, HostName, LoginName, StartTime
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
     ( SELECT TOP 1 f.[value]
       FROM sys.fn_trace_getinfo(NULL) f
       WHERE f.property = 2              -- property 2 = trace file path
     )), DEFAULT) T
JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE TE.trace_event_id = 47             -- 47 = the Object:Deleted event
  AND T.DatabaseName = 'delete';         -- substitute the name of the dropped database
This can be used whether or not you know the database/object name.