I have moved some data from one filegroup to another; however, when I check the drive with master.sys.xp_fixeddrives, the reported free space has not changed.
I want to know how to check what drive the filegroup is physically stored on.
Thanks
Filegroups are made up of a collection of files, all of which could be on separate drives, partitions, mount points, SMB shares, etc. The filegroup itself exists only logically; what you want to know is where the files inside the filegroup reside.
The query below gives you that information for the files in a given filegroup. Optionally, omit the WHERE clause to see all files and filegroups and where each file resides. It needs to be run in the context of the database in question.
SELECT f.name AS [FGName],
       df.name AS [FileName],
       LEFT(df.physical_name, 1) AS [Drive],
       df.physical_name AS [Full_Path]
FROM sys.database_files df
INNER JOIN sys.filegroups f
    ON df.data_space_id = f.data_space_id
WHERE f.name = 'MyFilegroupNameHere';
I am in the process of migrating one of our databases, which has multiple filegroups, to Azure SQL Database. Azure does not yet seem to support multiple filegroups, according to DMA (Data Migration Assistant).
Now I would like to merge the filegroups back into a single file.
Searching the internet, I found the following commands to empty the filegroup, which I believe merge its contents into the PRIMARY filegroup so the filegroup can then be removed.
DBCC SHRINKFILE ('ndfFile', EMPTYFILE);
GO
ALTER DATABASE myDb REMOVE FILE ndfFile;
GO
When I run this I get the following error message:
Msg 2556, Level 16, State 1, Line 2
There is insufficient space in the filegroup to complete the emptyfile operation.
I have checked the PRIMARY filegroup's file, and autogrowth is enabled.
I have taken a full backup, dropped the database, and restored it again, but still no success.
I have also tried creating a new database with a single file and using the Import/Export wizard to copy the data into it. However, the wizard only creates tables; it creates neither referential integrity constraints nor primary keys, etc.
When I create the objects using the Generate Scripts feature instead, I cannot copy the data even with the "Insert Identity" option selected.
Question 1: Is there any way that I can restore MyDb.bak backup file into a single file (and log file)?
Question 2: Is there any way to merge filegroups into single file?
You need to move the existing data from the other filegroup(s) to the primary filegroup by rebuilding the indexes WITH (DROP_EXISTING = ON), as explained in this article.
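For example, a clustered index that currently lives on the secondary filegroup can be rebuilt onto PRIMARY like this (a minimal sketch; the table, column, and index names are hypothetical):
CREATE UNIQUE CLUSTERED INDEX IX_Orders_OrderID
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];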
After that you can remove the filegroup with an ALTER DATABASE statement.
ALTER DATABASE [DB1] REMOVE FILEGROUP [fg_Secondary]
At the end you will be able to migrate your database to Azure SQL Database.
Finally I got to the bottom of this issue. It was caused by full-text search indexes placed in the second filegroup, which were not easy to identify with the provided answer.
The script below identifies all the full-text indexes in the selected database and the filegroup each one is on.
USE yourdatabasename;

SELECT
    t.name  AS TableName,
    c.name  AS FTCatalogName,
    i.name  AS UniqueIdxName,
    cl.name AS ColumnName,
    f.name  AS FileGroupName
FROM sys.tables t
INNER JOIN sys.fulltext_indexes fi
    ON t.[object_id] = fi.[object_id]
INNER JOIN sys.fulltext_index_columns ic
    ON ic.[object_id] = t.[object_id]
INNER JOIN sys.columns cl
    ON ic.column_id = cl.column_id
    AND ic.[object_id] = cl.[object_id]
INNER JOIN sys.fulltext_catalogs c
    ON fi.fulltext_catalog_id = c.fulltext_catalog_id
INNER JOIN sys.indexes i
    ON fi.unique_index_id = i.index_id
    AND fi.[object_id] = i.[object_id]
INNER JOIN sys.filegroups f
    ON fi.data_space_id = f.data_space_id;
Once you have identified all the indexes in the secondary filegroup, you will need to drop those indexes and create them again; if you do not specify a filegroup name, they will be placed in the default (PRIMARY) filegroup.
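For example, a full-text index can be moved by dropping and recreating it on PRIMARY (a hedged sketch; the table, column, key index, and catalog names are hypothetical):
DROP FULLTEXT INDEX ON dbo.Documents;
GO
CREATE FULLTEXT INDEX ON dbo.Documents (DocContent)
    KEY INDEX PK_Documents
    ON (MyFTCatalog, FILEGROUP [PRIMARY]); -- place the new index in PRIMARY
GO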
Once the indexes are re-created in the PRIMARY filegroup, you can run the following script to empty the secondary filegroup's file and remove it from the database:
DBCC SHRINKFILE ('ndfFile', EMPTYFILE);
GO
ALTER DATABASE myDb REMOVE FILE ndfFile;
GO
You can also empty and remove the file using the SSMS UI by following the steps below:
Right-click the database > Tasks > Shrink > Files
Select the secondary filegroup in the Filegroup dropdown.
Then select the radio button that says "Empty file by migrating..." and hit OK. This will free the space.
Now the filegroup can be deleted:
Right-click the database > Properties
Select Filegroups in the left pane, then select the secondary filegroup and remove it.
All should then be good to go for the migration to Azure SQL Database.
I use FileTable. When I delete files from SQL Server's FileTable, I want the files to be deleted from the folder, and when I delete a file from the folder, it should be deleted from the FileTable.
And I have a second question: is FileTable the fastest way to save and read files on the server (files larger than 1 MB)?
For the first question, the files should be deleted as soon as you delete the corresponding row (DELETE FROM ...) and commit. The same should apply in reverse (if you delete a file, the corresponding row should disappear).
This is true for the file exposed through the network share; the physical file will be removed at a later time, depending on the recovery model and FILESTREAM's garbage-collection process (see the sp_filestream_force_garbage_collection stored procedure).
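For instance, the delete only takes effect on the share once the transaction commits (a minimal sketch; the FileTable name dbo.MyFileTable and the file name are hypothetical):
BEGIN TRANSACTION;
DELETE FROM dbo.MyFileTable WHERE name = N'old-report.pdf';
COMMIT; -- after the commit the file disappears from the FileTable share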
For the second question, access will always be slower than pure filesystem access because of the SQL Server overhead (though lookup time will be orders of magnitude faster).
Compared to T-SQL access, though, it all depends on the size of the blobs you are storing.
In a nutshell, if your blobs are smaller than 1 MB, using T-SQL should be faster. Please refer to Best Practices on FILESTREAM implementations for more detailed figures.
DROP TABLE [ IF EXISTS ] [ database_name . [ schema_name ] . | schema_name . ]
table_name [ ,...n ]
[ ; ]
You can use this syntax to drop the FileTable. If you want to delete only specific rows rather than the whole table, use a DELETE statement with a WHERE clause instead.
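A minimal sketch (the FileTable name dbo.MyFileTable and the file name are hypothetical):
DROP TABLE IF EXISTS dbo.MyFileTable; -- drops the whole FileTable (IF EXISTS needs SQL Server 2016+)
-- or remove only specific rows instead:
DELETE FROM dbo.MyFileTable WHERE name = N'report.pdf';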
The first thing to remember is that FileTable in SQL Server is a useful pattern for managing files: the engine stores the physical files in a managed location and creates a table holding each file's basic metadata. When you delete a row, a garbage-collection job that runs periodically deletes the physical file.
If you want to delete the physical file immediately, you can use this:
CHECKPOINT;
EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
Remember to run this some time after deleting the row from the FileTable (there can be a delay), or use it in a trigger (AFTER DELETE):
CREATE TRIGGER [Schema].[TriggerName]
ON [Schema].[TableName]
AFTER DELETE
AS
BEGIN
    -- Table variable to capture the result set returned by the garbage collector
    DECLARE @T TABLE ([file_name] VARCHAR(50), num_col INT, num_marked INT, num_unproce INT, last_col BIGINT);
    CHECKPOINT;
    INSERT INTO @T EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
END
The question: Is there somewhere on the disk where I might find human-readable files for each procedure / trigger on a Database instance?
I've looked through Program Files\Microsoft SQL Server\ on my disk but it's mostly just .dll / .rll files.
The reason I ask is because my company has decided to increase a certain VARCHAR(10) field to VARCHAR(50) and this field makes an appearance in 350+ stored procedures / triggers. And it's my job to figure out which of these scripts need to be modified to account for this field length increase. It'd be great if I could just write a script to parse out these files and identify ones that match various different regular expressions.
As shawnt00 mentioned, stored procedures (SPs), functions, table definitions, etc. are stored in the database files themselves, not in the file system. But you can use the system views to query the SP or function text for the column/variable you are looking for.
The one exception is any object created WITH ENCRYPTION: its text definition is not stored in clear text, so it will not be returned by a query like this.
SELECT
    SCHEMA_NAME(o.schema_id) AS SchemaName,
    o.name AS ObjectName,
    o.type,
    o.type_desc,
    m.definition AS ObjectText
FROM sys.sql_modules m
INNER JOIN sys.objects o
    ON m.object_id = o.object_id
WHERE m.definition LIKE '%VARCHAR(10)%';
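If you also want to list the encrypted modules that this search cannot inspect, the definition column is NULL for them; a minimal sketch:
SELECT
    SCHEMA_NAME(o.schema_id) AS SchemaName,
    o.name AS ObjectName,
    o.type_desc
FROM sys.sql_modules m
INNER JOIN sys.objects o
    ON m.object_id = o.object_id
WHERE m.definition IS NULL; -- NULL definition = encrypted module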
How can I find the physical path (so I can see it in Windows Explorer) of FILESTREAM data that I've just inserted into the DB?
There is one option for this: the PhysicalPathName() method. If you are on SQL Server 2012 or later, this code will work for you:
SELECT stream.PhysicalPathName() AS 'Path' FROM Media
OPTION (QUERYTRACEON 5556)
For SQL Server 2008/2008 R2 you will need to enable trace flag 5556, either for the whole instance:
DBCC TRACEON (5556, -1)
GO
or only for the particular connection in which you are calling the PhysicalPathName() method:
DBCC TRACEON (5556)
GO
I know this is an older post, but as it still comes up high in the Google search rankings I thought I'd post an answer. Certainly in later versions of SQL Server (I've not tried this on 2008) you can run the following query:
SELECT t.name AS 'table',
c.name AS 'column',
fg.name AS 'filegroup_name',
dbf.type_desc AS 'type_description',
dbf.physical_name AS 'physical_location'
FROM sys.filegroups fg
INNER JOIN sys.database_files dbf
ON fg.data_space_id = dbf.data_space_id
INNER JOIN sys.tables t
ON fg.data_space_id = t.filestream_data_space_id
INNER JOIN sys.columns c
ON t.object_id = c.object_id
AND c.is_filestream = 1
As Pawel has mentioned, it is not a good idea to access the FILESTREAM files using Windows Explorer. If you are still determined to go ahead and explore this, the following tip might help.
The FILESTREAM file names are actually the log sequence number from the database transaction log at the time the files were created. Paul Randal explains this in this post. So one option is to find the log sequence number and look for a file named after it in the FILESTREAM data container.
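If you want to see what current LSNs look like before hunting through the container, one exploratory sketch uses fn_dblog (an undocumented function, so treat the column names and output format as assumptions that may vary by version):
SELECT TOP (10) [Current LSN], Operation
FROM fn_dblog(NULL, NULL) -- NULL, NULL = scan the entire active log
ORDER BY [Current LSN] DESC;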
First, you need to understand that the FILESTREAM data is stored on the server hosting your SQL Server 2008 database. If you have a DBA, ask them where they created the FILESTREAM container. Of course, you'll then need rights on the server to navigate it and see the directories. You won't be able to manipulate the files in any way, but you will be able to see them. Most DBAs won't be keen on letting you know where the FILESTREAM is located.
However, you can get at the path by a few other means. One that comes to mind is selecting the PathName() of the FILESTREAM field. Assume that the FILESTREAM-enabled field is ReportData and the table in which it resides is dbo.TblReports. The following T-SQL will yield a UNC path to the location:
select top 1 ReportData.PathName(0)
from dbo.TblReports
I believe you can also get at the path by other means through Enterprise Manager, but I forget how at the moment.
--filestream file path
SELECT col.PathName() AS path FROM tbl
Is there a way to determine which MDF goes with which LDF file for SQL Server? We had a server crash, pulled these files off, and they were named only with a random integer for the file name. So now we need to work out which MDF and LDF go together to bring them back up; what is the best way to do that?
You can find your current database's MDF and LDF with this:
sp_helpdb 'YourDBName'
Or you could see everything you have in your instance:
SELECT name, physical_name AS current_file_location FROM sys.master_files
For an offline-database scenario, try this:
SELECT DB.name, MF.name, MF.type_desc, MF.physical_name
FROM sys.databases DB
INNER JOIN sys.master_files MF ON DB.database_id = MF.database_id
WHERE DB.state = 6;
DB.state = 6 means the database is in the OFFLINE state.
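Equivalently, you can filter on state_desc instead of the magic number; a small variation on the same query:
SELECT DB.name, MF.name, MF.type_desc, MF.physical_name
FROM sys.databases DB
INNER JOIN sys.master_files MF ON DB.database_id = MF.database_id
WHERE DB.state_desc = 'OFFLINE';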