DBCC shrinkfile gives error - sql-server

I am trying to shrink my log file using DBCC SHRINKFILE(db_2.ldf), which is the name of the log file.
It gives me this error every time:
Msg 8985, Level 16, State 1, Line 1
Could not locate file 'FIelD' for database db in sys.database_files. The file either does not exist, or was dropped.
Can you please suggest what I can do to fix it?

The file name should be the logical file name and not the physical file name. Look in the Database properties, on the Files tab for the Logical Name of the file you are trying to shrink, and use that name.
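For example, if the Logical Name shown there is db_log (a placeholder, not necessarily your file's name), you would run:
DBCC SHRINKFILE (db_log)
rather than passing the physical .ldf name.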

Are you running it in the context of the database whose log you are trying to shrink? Make sure you have the right USE statement before running DBCC commands.
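A minimal sketch of that, with placeholder names:
USE YourDB
GO
DBCC SHRINKFILE (YourDB_log, 1024)  -- target size in MB; YourDB_log is the logical file name
GO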

I had the same problem over here; the solution was to rename the logical file to match the database name. Below is an example that queries the logical file names and then renames the files:
-- retrieve the logical files for the current db
SELECT [name] AS logical_file FROM sys.database_files df
-- rename the logical file (to match the database name)
ALTER DATABASE YourDB
MODIFY FILE (NAME = 'LogicalFile1', NEWNAME='NewLogicalFile1')
GO
ALTER DATABASE YourDB
MODIFY FILE (NAME = 'LogicalFile2', NEWNAME='NewLogicalFile2')
GO
The reason for the two alters is that there are usually two files associated with each database, the data file and the log file.

The command below worked; however, I don't see it reducing the size. I see the same size before and after running it. Can you please let me know what I might have missed?
fileid groupid size maxsize growth status perf name filename
2 0 1048 268435456 10 1048642 0 PrimaryLogFileName
Thanks

From SQL Server Management Studio
Right click the database name, Tasks, Shrink, Database
Ctrl+Shift+N (or Script Action to New Query Window)
Generates the following:
USE [DataBaseName]
GO
DBCC SHRINKDATABASE(N'DataBaseName' )
GO
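If you only want to shrink the log file rather than the whole database, the same dialog (Tasks, Shrink, Files, with the log file selected) scripts out something along these lines; the file name below is a placeholder:
USE [DataBaseName]
GO
DBCC SHRINKFILE (N'DataBaseName_log', 0)
GO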

Related

What is the reason that the LOG file size of the SQL Server database does not decrease?

I used the following query to view the database log file.
declare @templTable as table
( DatabaseName nvarchar(50),
LogSizeMB nvarchar(50),
LogSpaceUsedPercent nvarchar(50),
Status bit
)
INSERT INTO @templTable
EXEC('DBCC SQLPERF(LOGSPACE)')
SELECT * FROM @templTable ORDER BY convert(float, LogSizeMB) desc
DatabaseName LogSizeMB LogSpaceUsedPercent
===============================================
MainDB 6579.93 65.8095
I also used the following code to view the amount of space used by the main database file.
with CteDbSizes
as
(
select database_id, type, size * 8.0 / 1024 size , f.physical_name
from sys.master_files f
)
select
dbFileSizes.[name] AS DatabaseName,
(select sum(size) from CteDbSizes where type = 1 and CteDbSizes.database_id = dbFileSizes.database_id) LogFileSizeMB,
(select sum(size) from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) DataFileSizeMB
--, (select physical_name from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) as PathPfFile
from sys.databases dbFileSizes ORDER BY DataFileSizeMB DESC
DatabaseName LogFileSizeMB DataFileSizeMB
===============================================
MainDB 6579.937500 7668.250000
But whatever I did, the database log would not drop below 6 GB. Do you think there is a reason the database log has not changed for more than a month? Is there a way to reduce this amount or not? I also used different methods and queries to reduce the size of the log file, and I got good results on other databases, like the following:
USE [master]
GO
ALTER DATABASE [MainDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [MainDB]
GO
DBCC SHRINKDATABASE(N'MainDB')
GO
DBCC SHRINKFILE (N'MainDB_log' , EMPTYFILE)
GO
ALTER DATABASE [MainDB] SET RECOVERY FULL WITH NO_WAIT
GO
But in this particular database, the database log is still not less than 6 GB. Please help. Thanks.
As I mentioned in the main post, after a while the size of our database log file reached about 25 GB and we could no longer even shrink the database files. After some searching, I came to the conclusion that I should back up the log and then shrink the log file. For this purpose, I defined a job that takes a transaction log backup roughly every 30 minutes; the size of these backup files usually does not exceed 150 MB. Then, after each log backup, I run the log shrink command once. With this method the size of the log file is greatly reduced, and we now have a log file of about 500 MB. Of course, due to the large number of transactions on the database, this job must always stay active; if it is not active, the log grows large again.
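A rough sketch of what such a job runs on each cycle (the backup path and file names here are placeholders, not the poster's actual script):
-- 1. back up the transaction log (scheduled roughly every 30 minutes)
BACKUP LOG [MainDB] TO DISK = N'D:\LogBackups\MainDB_log.trn'
GO
-- 2. then shrink the log file back down
USE [MainDB]
GO
DBCC SHRINKFILE (N'MainDB_log', 500)  -- target size in MB
GO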

SQL Server FileTable : delete file

I use FileTable. When I delete files from SQL Server's FileTable, I want the files to be deleted from the folder, and when I delete a file from the folder, it should be deleted from the FileTable.
And I have a second question: is FileTable a faster way to save a file on the server and read it back (files larger than 1 MB)?
For the first question, the files should be deleted as soon as you delete the corresponding row (DELETE FROM ...) and commit. The same should apply in reverse (if you delete a file, the corresponding row should disappear).
This is true for the file exposed through the network share; the physical file will be removed at a later time, depending on the recovery model and the FILESTREAM garbage collection process (see the sp_filestream_force_garbage_collection stored procedure).
For the second question, access will always be slower than pure filesystem access because of the SQL Server overhead (though the time to find a file will be orders of magnitude faster).
Compared to T-SQL access, though, it all depends on the size of the blobs you are storing.
In a nutshell, if your blobs are smaller than 1 MB, using T-SQL should be faster. Please refer to Best Practices on FILESTREAM implementations for more detailed figures.
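So on the SQL side a plain DELETE against the FileTable row is all that is needed; a minimal example, with a made-up table and file name:
-- deleting the row removes the file from the FileTable share once the transaction commits
DELETE FROM dbo.DocumentStore
WHERE [name] = N'report.pdf';
-- the on-disk FILESTREAM data is reclaimed later by the garbage collection process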
DROP TABLE [ IF EXISTS ] [ database_name . [ schema_name ] . | schema_name . ]
table_name [ ,...n ]
[ ; ]
You can use this to drop the FileTable. If you want to delete only certain specific rows, use a WHERE (or HAVING) clause in the T-SQL statement.
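For instance (the table name and filter are only examples; DROP TABLE IF EXISTS needs SQL Server 2016 or later):
-- remove the whole FileTable
DROP TABLE IF EXISTS dbo.DocumentStore;
-- or delete only specific rows (file_type is the extension column of a FileTable)
DELETE FROM dbo.DocumentStore WHERE file_type = 'tmp';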
The first thing to remember is that FileTable in SQL Server is a good choice because it is a management engine built to help developers who want a pattern for managing files.
This manager uses a physical location for saving the files and creates a table that keeps the basic information about the file data. When you delete a file, the FileTable manager has a job that runs from time to time and deletes the physical file.
If you want to delete the physical file immediately, you can use this:
checkpoint;
EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
Remember to run this some time after deleting the row from the FileTable (with a delay),
or use it in a SQL trigger (AFTER DELETE):
CREATE TRIGGER [Schema].[TriggerName]
ON [Schema].[TableName]
AFTER DELETE
AS
DECLARE @T TABLE ([file_name] varchar(50), num_col int, num_marked int, num_unproce int, last_col bigint)
BEGIN
CHECKPOINT;
INSERT INTO @T EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
END

Could not locate file 'mydatabase' for database 'mydatabase' in sys.database_files. The file either does not exist, or was dropped

dbcc shrinkfile('mydatabase', 113311) fails with the following error:
Could not locate file 'mydatabase' for database 'mydatabase' in sys.database_files. The file either does not exist, or was dropped
It fails once in a while, randomly. I have a nightly task that executes dbcc shrinkfile, and it works fine most of the time. There is no problem with the logical file name, etc.
My logical file name is 'mydatabase'. I have verified my logical name using the queries below:
DBCC FILEHEADER (mydatabase)
select * from mydatabase.dbo.sysfiles
Select * from master..sysaltfiles
This is really strange as I couldn't find any root cause. SQL Server 2008 R2 SP2.
I had the same issue in SQL Server 2012. As a quick fix, you can use the file id instead of the logical name to shrink the file. Secondly, check what the logical name is in sys.master_files and in sys.database_files; in my case it was different in master_files than in database_files. So you just need to run ALTER DATABASE to set the logical name of the file again, and then it works just fine.
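Something along these lines (the file id and names are placeholders):
USE mydatabase;
-- compare the logical names recorded at the server level and the database level
SELECT file_id, name FROM sys.master_files WHERE database_id = DB_ID('mydatabase');
SELECT file_id, name FROM sys.database_files;
-- shrink by file id instead of logical name (the log file is usually file id 2)
DBCC SHRINKFILE (2, 113311);
-- if the names differ, reset the logical name so the two views agree again
ALTER DATABASE mydatabase MODIFY FILE (NAME = 'current_logical_name', NEWNAME = 'mydatabase');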
dbcc shrinkfile('mydatabase',113311)
Note that 'mydatabase' should be the logical name.
You can find the logical name on the Files tab of the database properties window.
Note
Check whether the logical name and the database name are the same; if they are different and you pass the database name, it won't work. In my case (marked in red in the screenshot) the database name and the logical name were different, so take the Logical Name from Database -> Properties -> Files and execute with that.
I tried all of the above and still had the issue. The database was called clientdatabase and the log file clientdatabase_log.
I managed to resolve it by renaming the logical name of the log file:
USE [clientdatabase];
ALTER DATABASE clientdatabase MODIFY FILE
(NAME = clientdatabase_log, NEWNAME = clientdatabase_log_1);
Running the script
USE [clientTdatawarehouse]
GO
DBCC SHRINKFILE (clientTDataWarehouse_log_1, 1024)
GO
Now worked.
I blogged about it here:
https://hybriddbablog.com/answer-to-could-not-locate-file-xxx_log-for-database-xxx-in-sys-database_files/

SQL Server database has 2 log files and I want to remove one. HOW?

I am new to this. I have a database (created by someone else) that has 2 .ldf files (blah_log.ldf and blah_log2.ldf). My manager asked me to remove one of the log files, but I cannot. How do I do this? I tried to put it on another server, detach, delete the log files, and attach; I thought that way it would create just one, but it gives an error because it wants both files. Then I tried to right-click Properties and delete the files, but it would not let me; it said the log file was not empty. How in the heck do I achieve this? I just want the dang database to have one freaking log file, not two. This shouldn't be this complicated. I am a beginner and know nothing, so maybe it isn't really. Please HELP!
I just tried this:
empty SQL Server database transaction log file
backup log [dbname] with truncate_only
go
DBCC SHRINKDATABASE ([dbname], 10, TRUNCATEONLY)
go
Then I deleted the second log file and clicked OK. I guess this is all I need to do? I tried it on a test server from a restore.
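For reference, the GUI delete step corresponds to something like this in T-SQL (assuming the second log file's logical name is blah_log2):
USE [dbname]
GO
-- the file must be empty before it can be removed
ALTER DATABASE [dbname] REMOVE FILE blah_log2
GO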
This MSDN article describes how to accomplish this at a high level:
You cannot move transaction log data from one log file to another to empty a transaction log file. To remove inactive transactions from a transaction log file, the transaction log must be truncated or backed up. When the transaction log file no longer contains any active or inactive transactions, the log file can be removed from the database.
And this blog post shows the actual T-SQL that will accomplish this task:
USE master
IF DB_ID('rDb') IS NOT NULL DROP DATABASE rDb
GO
CREATE DATABASE rDb
ON
PRIMARY
( NAME = N'rDb', FILENAME = N'C:\rDb.mdf' , SIZE = 50MB ,
FILEGROWTH = 1024KB )
LOG ON
(NAME = N'rDb_log2', FILENAME = N'C:\rDb_log2.ldf', SIZE = 3MB,
FILEGROWTH = 2MB)
,(NAME = N'rDb_log3', FILENAME = N'C:\rDb_log3.ldf', SIZE = 3MB,
FILEGROWTH = 2MB)
,(NAME = N'rDb_log4', FILENAME = N'C:\rDb_log4.ldf', SIZE = 3MB,
FILEGROWTH = 2MB)
GO
ALTER DATABASE rDb SET RECOVERY FULL
BACKUP DATABASE rDb TO DISK = 'C:\rDb.bak' WITH INIT
CREATE TABLE rDb..t(c1 INT IDENTITY, c2 CHAR(100))
INSERT INTO rDb..t
SELECT TOP(15000) 'hello'
FROM syscolumns AS a
CROSS JOIN syscolumns AS b
--Log is now about 46% full
DBCC SQLPERF(logspace)
--Check virtual log file layout
DBCC LOGINFO(rDb)
--See that file 4 isn't used at all (Status = 0 for all 4's rows)
--We can remove file 4, it isn't used
ALTER DATABASE rDb REMOVE FILE rDb_log4
--Check virtual log file layout
DBCC LOGINFO(rDb)
--Can't remove 3 since it is in use
ALTER DATABASE rDb REMOVE FILE rDb_log3
--What if we backup log?
BACKUP LOG rDb TO DISK = 'C:\rDb.bak'
--Check virtual log file layout
DBCC LOGINFO(rDb)
--3 is still in use (status = 2)
--Can't remove 3 since it is in use
ALTER DATABASE rDb REMOVE FILE rDb_log3
--Shrink 3
USE rDb
DBCC SHRINKFILE(rDb_log3)
USE master
--... and backup log?
BACKUP LOG rDb TO DISK = 'C:\rDb.bak'
--Check virtual log file layout
DBCC LOGINFO(rDb)
--3 is no longer in use
--Can now remove 3 since it is not in use
ALTER DATABASE rDb REMOVE FILE rDb_log3
--Check explorer, we're down to 1 log file
--See what sys.database_files say?
SELECT * FROM rDb.sys.database_files
--Seems physical file is gone, but SQL Server consider the file offline
--Backup log does it:
BACKUP LOG rDb TO DISK = 'C:\rDb.bak'
SELECT * FROM rDb.sys.database_files
--Can never remove the first ("primary") log file
ALTER DATABASE rDb REMOVE FILE rDb_log2
--Note error message from above

Calling sp_rename on a table kills database connection in Sybase

I'm trying to rename a table using the following syntax
sp_rename [oldname],[newname]
but any time I run this, I get the following [using Aqua Datastudio]:
Command was executed successfully
Warnings: --->
W (1): The SQL Server is terminating this process.
<---
[Executed: 16/08/10 11:11:10 AM] [Execution: 359ms]
Then the connection is dropped (can't do anything else in the current query analyser (unique spid for each window))
Do I need to be using master when I run these commands, or am I doing something else wrong?
You shouldn't be getting the behaviour you're seeing.
It should either raise an error (e.g. If you don't have permission) or work successfully.
I suspect something is going wrong under the covers.
Have you checked the errorlog for the ASE server? Typically these sorts of problems (connections being forcibly closed) will be accompanied by an entry in the errorlog with a little bit more information.
The error log will be on the host that runs the ASE server, and will probably be in the same location that ASE is installed into. Something like
/opt/sybase/ASE-12_5/install/errorlog_MYSERVER
Try to avoid using sp_rename, because some references in the system tables keep the old name. Someday this may cause problems if you forget about the change.
I suggest:
select * into table_backup from [tableRecent]
go
select * into [tableNew] from table_backup
go
drop table [tableRecent] -- if you want to keep a backup, you can skip dropping this table
go
drop table table_backup -- if you want to keep a backup, you can skip dropping this table
go
To do that, your database needs the "select into/bulkcopy/pllsort" option enabled.
If your data is huge, check the free space on that database.
And enjoy :)
