I use FileTable. When I delete files from SQL Server's FileTable, I want the files to be deleted from the folder, and when I delete a file from the folder, it should be deleted from the FileTable.
And I have a second question: is FileTable the fastest way to save files on the server and read them back (for files larger than 1 MB)?
For the first question, the files should be deleted as soon as you delete the corresponding row (DELETE FROM ...) and commit. The same should apply in reverse: if you delete a file, the corresponding row should disappear.
This is true for the file exposed through the network share; the physical file will be removed at a later time, depending on the recovery model and the FILESTREAM garbage collection process (see the sp_filestream_force_garbage_collection stored procedure).
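For illustration, a minimal sketch, assuming a hypothetical FileTable named dbo.Documents (name is one of the fixed FileTable columns):

BEGIN TRANSACTION;
-- deleting the row removes the file from the share once the transaction commits
DELETE FROM dbo.Documents WHERE name = N'report.docx';
COMMIT;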
For the second question, access will always be slower than pure filesystem access because of the SQL Server overhead (finding the file, though, will be orders of magnitude faster).
Compared to T-SQL access, it all depends on the size of the blobs you are storing.
In a nutshell, if your blobs are smaller than 1 MB, T-SQL access should be faster. Please refer to Best Practices on FILESTREAM implementations for more detailed figures.
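To make the two access paths concrete, here is a hedged sketch against the same hypothetical dbo.Documents FileTable (FileTableRootPath and GetFileNamespacePath are the documented helper functions):

-- 1) Get the UNC path and stream the file through the share
--    (usually the better option for large blobs):
SELECT FileTableRootPath(N'dbo.Documents') + file_stream.GetFileNamespacePath() AS unc_path
FROM dbo.Documents
WHERE name = N'report.docx';

-- 2) Read the blob directly through T-SQL
--    (often faster for blobs under roughly 1 MB):
SELECT file_stream
FROM dbo.Documents
WHERE name = N'report.docx';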
DROP TABLE [ IF EXISTS ] [ database_name . [ schema_name ] . | schema_name . ]
table_name [ ,...n ]
[ ; ]
You can use this code to drop the FileTable. If you want to delete only certain specific rows instead, use a WHERE clause in a T-SQL DELETE statement.
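A concrete instance of both, with a placeholder table name (the physical files themselves are cleaned up by the FILESTREAM garbage collector):

DROP TABLE IF EXISTS dbo.Documents;  -- removes the whole FileTable
-- or remove only specific rows (and thereby their files):
DELETE FROM dbo.Documents WHERE file_type = N'tmp';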
The first thing to remember is that FileTable in SQL Server is a good choice because it is a managed engine that gives developers a ready-made pattern for managing files.
This manager uses a physical location for saving the files and creates a table that keeps the basic information about each file. When you delete a file, the FileTable manager has a job that runs periodically and deletes the physical file.
If you want to delete the physical file immediately, you can use this:
checkpoint;
EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
Remember to run this some time after deleting the row from the FileTable (i.e., with a delay),
or use it in a SQL trigger (AFTER DELETE):
CREATE TRIGGER [Schema].[TriggerName]
ON [Schema].[TableName]
AFTER DELETE
AS
BEGIN
    -- table variable to capture the garbage collector's result set
    DECLARE @T TABLE ([file_name] varchar(50), num_col int, num_marked int, num_unproce int, last_col bigint);
    CHECKPOINT;
    INSERT INTO @T EXEC sp_filestream_force_garbage_collection @dbname = N'[DB Name]';
END
I have a database in SQL Server. Basically, the table consists of a number of XML documents that represent the same table's data at given points in time (like a backup history). What is the best method to cut off all the old (3+ months) backups, remove them from the DB, and save them archived?
There is no export out of the box in SQL Server.
Assuming that your table can be pretty big (since it looks like you save an image of the table every minute) and that you want to do it all from inside SQL Server, I'll suggest doing the cleanup in chunks.
The usual way to delete by chunks in SQL Server is to use DELETE in combination with the OUTPUT clause.
The easiest way to archive and remove would then be to have the OUTPUT go into a table in another database created for that sole purpose.
So your steps would be:
Create a new database (ArchiveDatabase)
Create an archive table in ArchiveDatabase (ArchiveTable) with the same structure as the table you want to purge (a sketch of one way to do this follows the loop below)
In a while loop perform the DELETE/OUTPUT
Backup the ArchiveDatabase
TRUNCATE the ArchiveTable in ArchiveDatabase
The DELETE/OUTPUT loop will look something like this:
declare @RowsToDelete int = 1000
declare @DeletedRowsCNT int = 1000
while @DeletedRowsCNT = @RowsToDelete
begin
    delete top (@RowsToDelete)
    from MyDataTable
    output deleted.* into ArchiveDatabase.dbo.ArchiveTable
    where dt < dateadd(month, -3, getdate())  -- only rows older than 3 months
    set @DeletedRowsCNT = @@ROWCOUNT
end
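For step 2, a minimal sketch of one way to create ArchiveTable with the same structure (the WHERE 1 = 0 predicate copies the columns but no rows; note that SELECT INTO does not copy indexes or constraints):

select *
into ArchiveDatabase.dbo.ArchiveTable
from MyDataTable
where 1 = 0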
I recently had a horrible blunder.
While attempting to fix an issue we were having with our Exact Synergy system, I was trying to replace the data in two columns for one account with NULL; instead, I replaced those two columns in ALL accounts with NULL. Completely restoring from a backup is not an option, so now I am left trying to figure out how to replace the missing data.
I have made a full restore of a recent backup for this database to a test database and have confirmed that the data I need is there. I am trying to figure out how to properly write a query that will replace the data in the two columns.
Since this is a backup of the same database, the tables and columns are all identically named.
The databases are Synergy and Synergy_TESTDB
The owner of the tables is dbo
The table is called Addresses
The columns are called textfield1 and textfield2
What I would like to do is take the data in textfield1 and textfield2 from the backup database and use it to populate the empty, or NULL, columns in the live database.
I am extremely new to SQL, and would appreciate any help.
This is obviously untested. I take no responsibility for you using this code.
That said I'd like to try and help you.
The main point is the three-part database.schema.table naming. I'm assuming you restored the backup to the same server. I'm also assuming you have a primary key on the table, and that Synergy_TESTDB is the restored database:
update target
set target.textfield1 = source.textfield1
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield1 IS NULL
update target
set target.textfield2 = source.textfield2
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield2 IS NULL
(Sure it could be done in a single update, but I'm trying to keep it simple.)
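For completeness, a sketch of the single-statement version (equally untested, same placeholder key column):

update target
set target.textfield1 = COALESCE(target.textfield1, source.textfield1),
    target.textfield2 = COALESCE(target.textfield2, source.textfield2)
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield1 IS NULL or target.textfield2 IS NULL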
I strongly suggest you try in another test database first.
A good habit to get in to is to use a pattern like this:
BEGIN TRANSACTION
-- Perform updates
-- Examine the results: select * from dbo.Blah ...
-- If results are wrong, we just rollback anyway
ROLLBACK
-- If results are what you want, uncomment the COMMIT and comment out the ROLLBACK
-- COMMIT TRANSACTION
I have a legacy db from an application that is no longer supported. I'm trying to deconstruct a stored procedure to make a modification, but I can't figure out where #tempimport is located. I've searched high and low through the schema, but I'm not finding anything close. This is how it's used:
SET NOCOUNT ON;
DECLARE @saleid UNIQUEIDENTIFIER
SELECT TOP 1 @saleid = [sale_id] FROM #tempimport;
Is this a T-SQL specific thing, or can I actually find the data in this table somewhere?
Tables with the # preceding the name are temporary tables.
You can find them in the tempdb database under System Databases
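A quick way to see this for yourself (the #demo name is just an example): SQL Server stores the table in tempdb under a padded, uniquely suffixed name.

CREATE TABLE #demo (id int);
SELECT name FROM tempdb.sys.objects WHERE name LIKE '#demo%';
DROP TABLE #demo;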
Tables that are prefixed with a # are temporary tables that are created in the tempdb system database.
Try to find the code that creates this table in your DB schema:
select * from information_schema.routines
where object_definition(object_id(routine_name)) like '%#tempimport%'
Try to find #tempimport in the schemas of neighboring DBs.
Try to find #tempimport in your application sources, if you have them.
Try profiling your application (with the SQL Profiler tool) and search for #tempimport there.
Additionally:
#tempimport can be created by any connected application, not only by your DB's own runnable code.
You can dig deeper and try to monitor the moment of #tempimport's creation. Example:
select left(name, charindex('_',name)-1)
from tempdb..sysobjects
where charindex('_',name) > 0 and
xtype = 'u' and not object_id('tempdb..'+name) is null
and left(name, charindex('_',name)-1) = '#tempimport'
source: Is there a way to get a list of all current temporary tables in SQL Server?
Let's say we have MS SQL Server database and table A in it. Then, we perform something like this:
select A.a1
into #temp1
from A
This link says: "If memory is available, both table variables and temporary tables are created and processed while in memory (data cache)."
Let's say that we have 100 rows in #temp1, which easily fit into memory, so the whole of #temp1 is in memory now. But then, we execute this statement:
UPDATE #temp1 SET a1 = a1 + 1
Does this involve some IO operations? For example, is something written into the tempdb log (which is, I think, not in RAM)? Or perhaps, since we are doing an update now, the whole #temp1 is moved to disk?
My understanding is that no log is kept for temp tables, as tempdb is cleared on every restart.
Update:
Logging occurs in tempdb, but it is reduced: http://msdn.microsoft.com/en-ca/library/ms190768.aspx;
Perhaps a table variable is suitable instead of a table in tempdb:
http://msdn.microsoft.com/en-us/library/ms175010.aspx
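For reference, a minimal sketch of the table-variable alternative for the example above (assuming A.a1 is an int); note that table variables also live in tempdb, so they are not purely in-memory either:

DECLARE @temp1 TABLE (a1 int);
INSERT INTO @temp1 (a1) SELECT A.a1 FROM A;
UPDATE @temp1 SET a1 = a1 + 1;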
I've got a monumentally tedious task: find several tables in a huge schema and generate the DDL for those tables.
Say I've got a schemaA that has 1000 tables. I need to find out whether a tableA exists in this schemaA; if it does, generate the DDL and save it to the filesystem, and if it doesn't, print its name out or write it to a file. Any ideas?
The DBMS_METADATA package (assuming you are on a reasonably recent version of Oracle) will generate the DDL for any object in the database. So
SELECT dbms_metadata.get_ddl( 'TABLE', 'TABLEA', 'SCHEMAA' )
FROM dual;
will return a CLOB with the DDL for SchemaA.TableA. You could catch the exception that is thrown when the object doesn't exist, but I would tend to suggest that you query the data dictionary (i.e. DBA_OBJECTS) to verify that there is a table named TableA in SchemaA, i.e.
SELECT COUNT(*)
FROM dba_objects
WHERE owner = 'SCHEMAA'
AND object_name = 'TABLEA'
AND object_type = 'TABLE'
Note that if you don't have access to DBA_OBJECTS, you could use ALL_OBJECTS instead. The concern there, however, is that there may be a TableA in SchemaA that you don't have SELECT access on. That table would not appear in ALL_OBJECTS (which has all the objects that you have access to) but it would appear in DBA_OBJECTS (which has all the objects in the database regardless of your ability to access them).
You can then either write the DDL to a file or, if the count is 0, indicate that the object doesn't exist. From a stored procedure, you can use the UTL_FILE package to write to a file on the database server. If you are trying to write to a file on the client file system, you would need to use a language that has access to the client operating system's resources. A small C, Java, Perl, etc. program should be able to select a CLOB and write that data to a file on the client operating system.
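As a rough sketch of the server-side approach, under assumptions rather than as a definitive implementation (it assumes an existing directory object, here called DDL_DIR, on which you have write permission; the output file name is illustrative):

DECLARE
  l_cnt   PLS_INTEGER;
  l_ddl   CLOB;
  l_len   PLS_INTEGER;
  l_pos   PLS_INTEGER := 1;
  l_file  UTL_FILE.FILE_TYPE;
BEGIN
  -- verify the table exists before asking DBMS_METADATA for its DDL
  SELECT COUNT(*)
    INTO l_cnt
    FROM dba_objects
   WHERE owner = 'SCHEMAA'
     AND object_name = 'TABLEA'
     AND object_type = 'TABLE';

  IF l_cnt = 0 THEN
    DBMS_OUTPUT.PUT_LINE('TABLEA does not exist in SCHEMAA');
  ELSE
    l_ddl  := DBMS_METADATA.GET_DDL('TABLE', 'TABLEA', 'SCHEMAA');
    l_len  := DBMS_LOB.GETLENGTH(l_ddl);
    l_file := UTL_FILE.FOPEN('DDL_DIR', 'tablea.sql', 'w', 32767);
    -- UTL_FILE works on VARCHAR2, so write the CLOB out in chunks
    WHILE l_pos <= l_len LOOP
      UTL_FILE.PUT(l_file, DBMS_LOB.SUBSTR(l_ddl, 32000, l_pos));
      l_pos := l_pos + 32000;
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
  END IF;
END;
/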