I am in the process of migrating a database that has multiple filegroups to Azure SQL Database. According to the Data Migration Assistant (DMA), Azure SQL Database does not seem to support multiple filegroups yet.
Now I would like to merge the filegroups back into a single file.
Searching on the internet, I found the following commands to shrink and empty the secondary filegroup's file, which I believe merges its content into the PRIMARY filegroup so the file can then be removed:
DBCC SHRINKFILE ('ndfFile', EMPTYFILE);
GO
ALTER DATABASE myDb REMOVE file ndfFile
GO
When I run this I get the following error message:
Msg 2556, Level 16, State 1, Line 2
There is insufficient space in the filegroup to complete the emptyfile operation.
I have checked the PRIMARY filegroup's file, and auto-growth is enabled.
I have taken a full backup, dropped the database, and restored it again, but still no success.
I have also tried creating a new database with a single file and using the Import/Export Wizard to copy the data into the new database. However, the wizard only creates the tables, not the referential integrity constraints, primary keys, etc.
When I create the objects first with the Generate Scripts feature, I then cannot copy the data, even with the "Enable identity insert" option selected.
Question 1: Is there any way that I can restore MyDb.bak backup file into a single file (and log file)?
Question 2: Is there any way to merge filegroups into single file?
You need to move the existing data from the other filegroup(s) to the primary filegroup by rebuilding the indexes there using WITH (DROP_EXISTING = ON), as explained in this article.
After that, you can remove the filegroup with an ALTER DATABASE statement:
ALTER DATABASE [DB1] REMOVE FILEGROUP [fg_Secondary]
At the end you will be able to migrate your database to Azure SQL Database.
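As a sketch of that first step (the table, index, and column names below are hypothetical), rebuilding a table's clustered index onto PRIMARY moves the table's data there:

```sql
-- Rebuild the clustered index on the PRIMARY filegroup.
-- DROP_EXISTING = ON replaces the existing index in one operation
-- instead of a separate DROP INDEX followed by CREATE INDEX.
CREATE UNIQUE CLUSTERED INDEX PK_Orders
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];
```

Any nonclustered indexes that live in the secondary filegroup need the same treatment before the filegroup is empty.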
Finally I got to the bottom of this issue. It was caused by the full-text search indexes that were placed in the second filegroup. They were not easy to identify with the provided answer.
Running the script below identifies all the full-text indexes in the selected database and the filegroup each index is on.
USE yourdatabasename;

SELECT t.name  AS TableName,
       c.name  AS FTCatalogName,
       i.name  AS UniqueIdxName,
       cl.name AS ColumnName,
       f.name  AS FileGroupName
FROM sys.tables t
INNER JOIN sys.fulltext_indexes fi
    ON t.[object_id] = fi.[object_id]
INNER JOIN sys.fulltext_index_columns ic
    ON ic.[object_id] = t.[object_id]
INNER JOIN sys.columns cl
    ON ic.column_id = cl.column_id
   AND ic.[object_id] = cl.[object_id]
INNER JOIN sys.fulltext_catalogs c
    ON fi.fulltext_catalog_id = c.fulltext_catalog_id
INNER JOIN sys.indexes i
    ON fi.unique_index_id = i.index_id
   AND fi.[object_id] = i.[object_id]
INNER JOIN sys.filegroups f
    ON fi.data_space_id = f.data_space_id;
Once you have identified all the full-text indexes in the secondary filegroup, you will need to drop those indexes and create them again. If you do not specify a filegroup name, they will be placed in the default (PRIMARY) filegroup.
Once the indexes are re-created in the PRIMARY filegroup, you can run the following script to empty the secondary filegroup's file and remove it from the database:
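As a sketch (the table, column, key-index, and catalog names below are hypothetical), dropping a full-text index and re-creating it with an explicit filegroup looks like this:

```sql
-- Drop the full-text index currently sitting in the secondary filegroup
DROP FULLTEXT INDEX ON dbo.Documents;
GO

-- Re-create it, explicitly placing it on PRIMARY
CREATE FULLTEXT INDEX ON dbo.Documents (Content)
    KEY INDEX PK_Documents
    ON (ftCatalog, FILEGROUP [PRIMARY])
    WITH CHANGE_TRACKING AUTO;
GO
```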
DBCC SHRINKFILE ('ndfFile', EMPTYFILE);
GO
ALTER DATABASE myDb REMOVE file ndfFile
GO
You can also empty and remove the file through the SSMS UI by following the steps below:
Right-click the database > Tasks > Shrink > Files.
Select the secondary filegroup in the Filegroup dropdown.
Then select the radio button that says "Empty file by migrating the data..." and hit OK. This moves the data out of the file.
Now the filegroup can be deleted:
Right-click the database > Properties.
Select Filegroups in the left pane, then select the secondary filegroup and remove it.
All should be good to go to migrate to Azure SQL Database.
I use FileTable. When I delete files from SQL Server's FileTable, I want the files to be deleted from the folder, and when I delete a file from the folder, it should be deleted from the FileTable.
And I have a second question: is FileTable the faster way to save files on the server and read them back (files larger than 1 MB)?
For the first question, the files should be deleted as soon as you delete the corresponding row (DELETE FROM ...) and commit. The same should apply in reverse (if you delete a file, the corresponding row should disappear).
This is true for the file exposed through the network share; the physical file will be removed at a later time, depending on the recovery model and the FILESTREAM garbage collection process (see the sp_filestream_force_garbage_collection stored procedure).
For the second question, access will always be slower than pure filesystem access because of the SQL Server overhead (the lookup time will be orders of magnitude faster, though).
Compared to T-SQL access, though, it all depends on the size of the blobs you are storing.
In a nutshell, if your blobs are smaller than 1 MB, using T-SQL should be faster. Please refer to Best Practices on FILESTREAM implementations for more detailed figures.
DROP TABLE [ IF EXISTS ] [ database_name . [ schema_name ] . | schema_name . ]
    table_name [ ,...n ]
[ ; ]
You can use this statement to drop the FileTable. If you want to delete only specific rows instead, use a DELETE statement with a WHERE clause.
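For instance (the table name below is hypothetical), a minimal sketch of both operations:

```sql
-- Delete individual files (rows) from the FileTable
DELETE FROM dbo.MyFileTable
WHERE name = N'old-report.pdf';

-- Or drop the whole FileTable (IF EXISTS requires SQL Server 2016+)
DROP TABLE IF EXISTS dbo.MyFileTable;
```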
The first thing to remember is that FileTable in SQL Server is a useful feature because it gives developers a ready-made pattern for managing files: the engine stores the files in a physical location and maintains a table holding the basic metadata for each file. When you delete a row, a FileTable background job that runs periodically deletes the physical file.
If you want to delete the physical file immediately, you can run:
checkpoint;
EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
Remember to run this a short while after deleting the row from the FileTable, or use it in an AFTER DELETE trigger:
CREATE TRIGGER [Schema].[TriggerName]
ON [Schema].[TableName]
AFTER DELETE
AS
BEGIN
    DECLARE @T TABLE ([file_name] varchar(50), num_col int, num_marked int, num_unproce int, last_col bigint);
    CHECKPOINT;
    INSERT INTO @T EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
END
I have moved some data from one file group to another, however when I check the drive with master.sys.xp_fixeddrives the drive does not say that it has been altered in size.
I want to know how to check what drive the filegroup is physically stored on.
Thanks
Filegroups are made up of a collection of files, all of which could be on separate drives, partitions, mount points, SMB shares, etc. The filegroup only exists logically; what you want to know is where the files inside the filegroup reside.
The query below gives you that information for the files in a given filegroup. Optionally, you can omit the WHERE clause to see all files and filegroups and where each file resides. It needs to be run in the context of the database in question.
SELECT f.name AS [FGName], df.name AS [FileName], LEFT(df.physical_name, 1) AS [Drive]
     , df.physical_name AS [Full_Path]
FROM sys.database_files df
INNER JOIN sys.filegroups f
    ON df.data_space_id = f.data_space_id
WHERE f.name = 'MyFilegroupNameHere'
We ran a SQL Server 2008 Export Wizard overnight to copy 100+ tables from one server to another. The process worked fine (if a little slow over our network).
The report produced at the end does not show a start and end time for the operation, for some unknown reason.
I know the names of all the tables created (they are brand new on the target server). Is there any SQL I can run against sys.tables, or a similar view, that shows the last write time for each table?
The create_date value in sys.tables seems to imply that the Export Wizard creates empty copies of every table before starting the data insert, rather than doing each one in turn.
You can get an estimate by looking at the index usage statistics. Note that sys.dm_db_index_usage_stats is cleared when the instance restarts, so it only reflects activity since the last restart.
SELECT Object_Name(object_id) AS [object]
     , MAX(last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID()
GROUP BY Object_Name(object_id)
ORDER BY [object]
What is the difference between sys.master_files and sys.database_files? I have about 20 databases in my instance, but when I query sys.master_files I do not get any rows back. Why? When I query sys.database_files, I get information about the files of the current database.
sys.master_files :
Contains a row per file of a database
as stored in the master database. This
is a single, system-wide view.
sys.database_files :
Contains a row per file of a database as stored in the database itself. This is a per-database view.
So, SELECT * FROM sys.master_files should list the files for each database in the instance whereas SELECT * FROM sys.database_files should list the files for the specific database context.
Testing this here (SQL Server 2008), it works as described above.
Update:
If you're not seeing rows from sys.master_files, it could be a permissions issue as BOL states:
The minimum permissions that are
required to see the corresponding row
are CREATE DATABASE, ALTER ANY
DATABASE, or VIEW ANY DEFINITION.
Whereas sys.database_files just requires membership in the public role.
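If permissions turn out to be the cause, a sketch of a fix (the login name below is hypothetical) would be to grant one of those permissions:

```sql
USE master;
GO
-- Any of CREATE DATABASE, ALTER ANY DATABASE, or VIEW ANY DEFINITION
-- makes the corresponding rows visible in sys.master_files
GRANT VIEW ANY DEFINITION TO [YourLogin];
```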
How could I know the physical location (so I can see it in Windows Explorer) path of a FILESTREAM data that I've just inserted into DB?
There is one option for this: the PhysicalPathName() method. If you are on SQL Server 2012 or later, this code will work for you:
SELECT stream.PhysicalPathName() AS 'Path' FROM Media
OPTION (QUERYTRACEON 5556)
For SQL Server 2008/2008 R2 you will need to enable trace flag 5556 for the whole instance:
DBCC TRACEON (5556, -1)
GO
or enable it just for the particular connection in which you are calling the PhysicalPathName() method (omit the -1 so the flag is session-scoped):
DBCC TRACEON (5556)
GO
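Either way, you can check whether the flag is actually active with DBCC TRACESTATUS:

```sql
-- Lists the status of trace flag 5556 (session and/or global scope)
DBCC TRACESTATUS (5556);
```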
I know this is an older post, but as it still comes up high in the Google search rankings I thought I'd post an answer. Certainly in later versions of SQL Server (I've not tried this on 2008) you can run the following query:
SELECT t.name AS 'table',
c.name AS 'column',
fg.name AS 'filegroup_name',
dbf.type_desc AS 'type_description',
dbf.physical_name AS 'physical_location'
FROM sys.filegroups fg
INNER JOIN sys.database_files dbf
ON fg.data_space_id = dbf.data_space_id
INNER JOIN sys.tables t
ON fg.data_space_id = t.filestream_data_space_id
INNER JOIN sys.columns c
ON t.object_id = c.object_id
AND c.is_filestream = 1
As Pawel has mentioned, it is not a good idea to access the FILESTREAM files using Windows Explorer. If you are still determined to go ahead and explore this, the following tip might help.
The FILESTREAM file names are actually the log sequence numbers from the database transaction log at the time the files were created; Paul Randal has explained this in this post. So one option is to find out the log sequence number and look for a file named after it in the FILESTREAM data container.
First you need to understand that the FILESTREAM data is stored on the server hosting your SQL Server 2008 database. If you have a DBA, ask them where they created the FILESTREAM container. Of course, you'll then need rights on the server to navigate to and see the directories. You won't be able to manipulate the files in any way, but you will be able to see them. Most DBAs won't be keen on letting you know where the FILESTREAM data is located.
However, you can get at the path by a few other means. One way that comes to mind is selecting the PathName() of the FILESTREAM field. Assume that the FILESTREAM-enabled field is ReportData and the table in which it resides is TblReports. The following T-SQL will yield a UNC path to the location:
select top 1 ReportData.PathName(0)
from dbo.TblReports
I believe you can also get at the path by other means through the management tools, but I forget how at the moment.
--filestream file path
SELECT col.PathName() AS path FROM tbl