Delete from FILETABLE with foreign key constraint - sql-server

Background
I'm looking into creating a simple web app, a part of which will display Images associated with Items. I've decided to look into using the FILETABLE feature of SQL Server which will allow binary image data to be uploaded into the exposed share directly. As such there is a use case to allow the deletion of files (rows in a FILETABLE) through Windows Explorer. This example replicates the issue, which stems from having a foreign key relationship to a FILETABLE.
Structure
Having already added an image using File Explorer to the FILETABLE with the path_locator of 0xFF5354649088A1EFEE8F747CD11030F80800170620:
CREATE TABLE [dbo].[Image] AS FILETABLE WITH (FileTable_Directory = 'Images');
GO
CREATE TABLE [dbo].[ImageLink] (
[id] INT NOT NULL IDENTITY(1, 1)
,[path_locator] HIERARCHYID NOT NULL
,FOREIGN KEY ([path_locator]) REFERENCES [dbo].[Image] ([path_locator])
);
GO
INSERT INTO [dbo].[ImageLink] ([path_locator]) VALUES (0xFF5354649088A1EFEE8F747CD11030F80800170620);
Issue
Upon deleting the file through File Explorer...
... the file disappears from the directory and Windows reports the deletion as a success, but the row is not removed from the FILETABLE.
However, when trying to delete through SQL Server, the familiar reference constraint conflict error is thrown:
DELETE FROM [dbo].[Image] WHERE [path_locator] = 0xFF5354649088A1EFEE8F747CD11030F80800170620;
Msg 547, Level 16, State 0, Line 69
The DELETE statement conflicted with the REFERENCE constraint "FK__ImageLink__path___5070F446". The conflict occurred in database "FileTableTest", table "dbo.ImageLink", column 'path_locator'.
I added an AFTER DELETE trigger to the FILETABLE with the intention of removing the referencing row, but the trigger does not get executed either.
Question
How might I go about propagating the delete through the link table upon deletion through Windows Explorer?
Is there some kind of SQL Server/Windows API hook I can use to detect the deletion and execute DML code that handles it?
Update #1
From BOL, the following section partially confirms the behaviour, although it doesn't offer any further information.
Transactional Semantics
When you access the files in a FileTable by using file I/O APIs, these operations are not associated with any user transactions, and have the following additional characteristics:
Since non-transacted access to FILESTREAM data in a FileTable is not associated with any transaction, it does not have any specific isolation semantics. However SQL Server may use internal transactions to enforce locking or concurrency semantics on the FileTable data. Any internal transactions of this type are done with read-committed isolation.

The problem is the foreign key.
Use ON DELETE CASCADE in your foreign key, so when you delete through File Explorer the associated ImageLink row is deleted too.
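A sketch based on the schema in the question (same table, with only the delete rule added):
CREATE TABLE [dbo].[ImageLink] (
[id] INT NOT NULL IDENTITY(1, 1)
,[path_locator] HIERARCHYID NOT NULL
,FOREIGN KEY ([path_locator]) REFERENCES [dbo].[Image] ([path_locator]) ON DELETE CASCADE
);
That way the internal delete SQL Server performs for a file removed through the share also removes the matching [dbo].[ImageLink] rows.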

It looks like the problem is the foreign key: a foreign key references that table, so you cannot simply delete the row while the constraint is enforced. (The SET FOREIGN_KEY_CHECKS toggle is MySQL syntax and doesn't exist in SQL Server; the equivalent here is disabling the constraint on the referencing table.)
So first disable the constraint check:
ALTER TABLE [dbo].[ImageLink] NOCHECK CONSTRAINT ALL;
then try deleting, and don't forget to re-enable (and re-validate) the constraint afterwards:
ALTER TABLE [dbo].[ImageLink] WITH CHECK CHECK CONSTRAINT ALL;

Related

Deadlock while running multiple instances of a spring batch job [duplicate]

This question already has answers here:
Spring Batch Deadlock - Could not increment identity; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException
(2 answers)
Closed 7 months ago.
I have a Spring Batch job that reads from a database and writes to a file after doing some processing, in a chunk-based step.
My requirement is to run up to 16 instances of the job in parallel at the same time, just with different job parameters.
But I've been facing a couple of issues while doing so:
1. Could not open JDBC Connection for transaction. Nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available.
2. Exception: could not increment identity. Nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 124) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I've tried the solutions provided in the linked GitHub issue, by setting the isolation level and altering the metadata tables as shown below.
Set the IsolationLevelForCreate like this
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setIsolationLevelForCreate("ISOLATION_REPEATABLE_READ");
Have the DBA add indexes to each of the SEQ tables like this (JET is my schema that I put the repo tables in):
ALTER TABLE [JET].[BATCH_JOB_EXECUTION_SEQ]
ADD CONSTRAINT [BATCH_JOB_EXECUTION_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
ALTER TABLE [JET].[BATCH_JOB_SEQ]
ADD CONSTRAINT [BATCH_JOB_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
ALTER TABLE [JET].[BATCH_STEP_EXECUTION_SEQ]
ADD CONSTRAINT [BATCH_STEP_EXECUTION_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
But I am still facing the issue.
PS: The Spring Batch job is deployed to AKS (Azure Kubernetes Service) and uses Azure SQL Server as the data source.
Based on the discussion in https://github.com/spring-projects/spring-batch/issues/1448, the issue seems to be caused by the SqlServerMaxValueIncrementer from Spring Framework not using SQLServer's native sequences. Here is an excerpt from the Javadoc:
There should be one sequence table per table that needs an auto-generated key.
Example:
create table tab (id int not null primary key, text varchar(100))
create table tab_sequence (id bigint identity)
insert into tab_sequence default values
This could be due to SQL Server not supporting sequences until recently. But I guess that's why Spring Batch uses tables to emulate sequences for MS SQL Server, as sketched below.
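The table-based emulation boils down to a pattern like this (a sketch of the idea, not the exact Spring Framework code):
INSERT INTO BATCH_JOB_SEQ DEFAULT VALUES;
SELECT @@IDENTITY;
Every id fetch from every one of the 16 job instances inserts into the same small table, which makes this pattern prone to lock contention and deadlocks.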
I suggest you try to change the default DDL to use sequences instead of tables:
CREATE SEQUENCE BATCH_STEP_EXECUTION_SEQ;
CREATE SEQUENCE BATCH_JOB_EXECUTION_SEQ;
CREATE SEQUENCE BATCH_JOB_SEQ;
These are the default sequence definitions based on MS SQL Server's docs. They should work, but you can customize them if needed.
You might also need to provide a custom DataFieldMaxValueIncrementer that is based on sequences (since the one from Spring Framework uses tables) and register it in Spring Batch through a DataFieldMaxValueIncrementerFactory (See JobRepositoryFactoryBean#setIncrementerFactory).
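As a rough sketch of such an incrementer (the class name is illustrative, not something shipped by Spring Batch), extending Spring Framework's AbstractSequenceMaxValueIncrementer only requires supplying the vendor-specific query:

import javax.sql.DataSource;
import org.springframework.jdbc.support.incrementer.AbstractSequenceMaxValueIncrementer;

// Illustrative sequence-backed incrementer for SQL Server.
public class SqlServerSequenceIncrementer extends AbstractSequenceMaxValueIncrementer {

    public SqlServerSequenceIncrementer(DataSource dataSource, String sequenceName) {
        super(dataSource, sequenceName);
    }

    @Override
    protected String getSequenceQuery() {
        // SQL Server reads the next sequence value with NEXT VALUE FOR
        return "select next value for " + getIncrementerName();
    }
}

One instance per sequence (BATCH_JOB_SEQ, BATCH_JOB_EXECUTION_SEQ, BATCH_STEP_EXECUTION_SEQ) can then be returned from the custom DataFieldMaxValueIncrementerFactory you register through JobRepositoryFactoryBean#setIncrementerFactory.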

Using flyway - How can memory optimized tables be deployed

I am using Flyway Community Edition 6.3.2 by Redgate and attempting to deploy a memory optimized table.
The content of my versioned script is...
CREATE TABLE temp_memory_optimized.test
(
id INT NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
At deploy time I am seeing this error...
ERROR: Migration of schema [dbo] to version 1.0.2 - add memory optimized objects failed! Changes successfully rolled back.
ERROR:
Migration v1.0.2__add_memory_optimized_objects.sql failed
---------------------------------------------------------
SQL State : S000109
Error Code : 12331
Message : DDL statements ALTER, DROP and CREATE inside user transactions are not supported with memory optimized tables.
Location : C:\...\v1.0.2__add_memory_optimized_objects.sql (C:\...\v1.0.2__add_memory_optimized_objects.sql)
Line : 1
Statement : CREATE TABLE temp_memory_optimized.test
(
id INT NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
The memory-optimized filegroup is configured correctly and I can successfully deploy the table manually onto my test box.
I have set -mixed=true on the migrate command.
I know I cannot be the first person to hit this problem, however internet searches are proving fruitless in trying to track down a solution.
As mentioned in issue 2062, Flyway does not automatically detect that
CREATE TABLE ... WITH (MEMORY_OPTIMIZED = ON) is not valid inside a transaction. You will need to override this behaviour on a per-script basis as detailed here: https://flywaydb.org/documentation/scriptconfigfiles, and will need to do so for each CREATE/ALTER/DROP of in-memory objects.
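Concretely, a script configuration file named after the failing migration above (v1.0.2__add_memory_optimized_objects.sql.conf) placed alongside it should do the trick, containing just:
executeInTransaction=false
This tells Flyway to run only that migration outside of a transaction, so the CREATE TABLE ... MEMORY_OPTIMIZED statement is no longer wrapped in one.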

SSDT - Exclude certain schema along with unnamed constraint

Task:
Automate database deployment (SSDT/dacpac deployment with CI/CD)
The database is a 3rd party database
It also includes our own customized tables/SP/Fn/Views in separate schemas
Should exclude 3rd party objects while deploying the database project(dacpac) to Production
Thanks to Ed Elliott for the AgileSqlClub.DeploymentFilterContributor. Used the dll to filter out the schema successfully.
Problem:
The 3rd party schema objects (tables) are defined with unnamed constraints (default/primary key) when creating the tables. Example:
CREATE TABLE [3rdParty].[MainTable]
(ID INT IDENTITY(1,1) NOT NULL,
CreateDate DATETIME DEFAULT(GETDATE())) --There is no name given to default constraint
When I generate the script for deployment using sqlpackage.exe, I see the following statements in the generated script.
Generated the script using:
"C:\Program Files\Microsoft SQL Server\150\DAC\bin\sqlpackage.exe" /action:script /sourcefile:C:\Users\User123\source\repos\DBProject\DBProject\bin\Debug\DBProject.dacpac /TargetConnectionString:"Data Source=MyServer; Initial Catalog=MSSQLDatabase; Trusted_Connection=True" /p:AdditionalDeploymentContributorPaths="C:\Program Files\Microsoft SQL Server\150\DAC\bin\AgileSqlClub.SqlPackageFilter.dll" /p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(3rdParty)" /outputpath:"c:\temp\script_AfterDLL.sql"
Script Output:
/*
Deployment script for MyDatabase
This code was generated by a tool.
Changes to this file may cause incorrect behavior and will be lost if
the code is regenerated.
*/
...
...
GO
PRINT N'Dropping unnamed constraint on [3rdParty].[MainTable]...';
GO
ALTER TABLE [3rdParty].[MainTable] DROP CONSTRAINT [DF__MainTabl__Crea__59463169];
...
...
...(towards the end of the script)
ALTER TABLE [3rdParty].[MainTable_2] WITH CHECK CHECK CONSTRAINT [fk_518_t_44_t_9];
I cannot alter the 3rd party schema due to company restrictions.
There are many such unnamed-constraint and WITH CHECK CHECK CONSTRAINT lines generated in the script.
Question:
How can I remove the lines that DROP the unnamed constraints on 3rd party schemas? Even though the dll excludes the 3rd party schema, these unnamed constraints are still scripted/deployed. Also, they are not added back afterwards!
How can I skip/remove generating the WITH CHECK CHECK CONSTRAINT statements on 3rd party schemas?
Any suggestions would be greatly helpful.
EDIT:
Also, I found another issue. The deployment will not succeed due to "Rows were detected. The schema update is terminating because data loss might occur."
Output:
/*
The column [3rdParty].[MainTable_1].[Col1] is being dropped, data loss could occur.
The column [3rdParty].[MainTable_1].[Col2] is being dropped, data loss could occur.
The column [3rdParty].[MainTable_1].[Col3] is being dropped, data loss could occur.
The column [3rdParty].[MainTable_1].[Col4] is being dropped, data loss could occur.
*/
IF EXISTS (select top 1 1 from [3rdParty].[MainTable_1])
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO
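For what it's worth, this guard is generated because sqlpackage's BlockOnPossibleDataLoss publish property defaults to True. If losing the data in those columns were actually acceptable (an assumption to verify first), adding one more property to the command line above would omit the block:
/p:BlockOnPossibleDataLoss=false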
Regarding the unnamed constraints, I couldn't find any solution using sqlpackage.exe.
But Redgate SQL Compare has an option called IgnoreSystemNamedConstraintAndIndexNames that ignores system-named constraints and generates a much cleaner script.
For example when comparing 2 dacpacs:
SQLCompare /Scripts1:"\unpacked_dacpac_source_folder" /Scripts2:"\unpacked_dacpac_dest_folder" /options:IgnoreSystemNamedConstraintAndIndexNames /scriptFile:"script_result.sql"
You can find more info here:
Handling System-named Constraints in SQL Compare

Sqlbulkcopyoptions.firetriggers does not fire trigger in the table

I have a table with a trigger newly added to it.
When I try to run a batch file and upload the file using SqlBulkCopy, it does not work.
After reading that adding SqlBulkCopyOptions.FireTriggers will fire the triggers defined on the table, I added it.
But I still get the same error: 'Bulk copy failed. User does not have ALTER TABLE permission on table. ALTER TABLE permission is required on the target table of a bulk copy operation if the table has triggers or check constraints, but 'FIRE_TRIGGERS' or 'CHECK_CONSTRAINTS' bulk hints are not specified as options to the bulk copy command.'
Any idea what is to be done? Any help would be appreciated.
SQL Server requires ALTER TABLE permission to bulk insert data without the FireTriggers and CheckConstraints options, as well as with the KeepIdentity option, to safeguard against minimally-privileged users violating data integrity rules enforced by SQL Server. See What permission do I need to use SqlBulkCopy in SQL Server 2008?.
The FireTriggers and CheckConstraints options of SqlBulkCopy are off by default to maximize performance. It is the application's responsibility to ensure data integrity that would otherwise be enforced by constraints and triggers when these options are off. Foreign key and check constraints become not trusted after data are bulk copied without these options and must be re-validated with ALTER TABLE...CHECK CONSTRAINT afterwards before SQL Server will trust the constraints again.
No special permissions other than SELECT/INSERT are needed to use SqlBulkCopy when the FireTriggers and CheckConstraints options are on and the KeepIdentity option is off.
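For example, re-validating everything on a (hypothetical) dbo.TargetTable after an untrusted bulk load:
-- WITH CHECK re-validates existing rows; CHECK CONSTRAINT ALL covers every FK and check constraint
ALTER TABLE dbo.TargetTable WITH CHECK CHECK CONSTRAINT ALL;
-- confirm no foreign keys are left untrusted
SELECT name, is_not_trusted FROM sys.foreign_keys WHERE parent_object_id = OBJECT_ID('dbo.TargetTable');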
I tried adding CheckConstraints as well but I get the same error.
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.FireTriggers | SqlBulkCopyOptions.CheckConstraints))
Is there another way to do it?

SQL Server snapshot replication: error on table creation

I receive the following error (taken from replication monitor):
The option 'FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME' is only valid when used on a FileTable. Remove the option from the statement. (Source: MSSQLServer, Error number: 33411)
The command attempted is:
CREATE TABLE [dbo].[WP_CashCenter_StreamLocationLink](
[id] [bigint] NOT NULL,
[Stream_id] [int] NOT NULL,
[Location_id] [numeric](15, 0) NOT NULL,
[UID] [uniqueidentifier] NOT NULL
)
WITH
(
FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME=[UC_StreamLocation]
)
Now, there are two things unclear to me here.
The table already existed on the subscriber, and I've set @pre_creation_cmd = N'delete' for the article. So I don't expect the table to be dropped and re-created. In fact, the table still exists on the subscriber side, although the CREATE TABLE command failed to complete. What am I missing? Where does this CREATE TABLE command come from and why?
I don't understand why this FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME option appears in the creation script. I tried generating the CREATE TABLE script from the table in SSMS and indeed, it's there. But what's weird is that I can't drop and re-create the table this way - I get the very same error message.
EDIT: Ok, I guess now I know why the table is still there - I noticed a BEGIN TRAN in SQL Server Profiler.
If your table on the publisher is truly not defined as a FileTable, then the issue has to do with the column named "Stream_id". I believe there is a known issue in SQL 2012 where if you have a column named "Stream_id", which is kind of reserved for FileTable/FileStream, it will automatically add that constraint, and unfortunately break Replication. The workaround here is to rename the column to something other than "Stream_id".
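If renaming is an option, it is a one-liner on the publisher (table and column names taken from the command above; remember to update any code that references the column):
EXEC sp_rename 'dbo.WP_CashCenter_StreamLocationLink.Stream_id', 'StreamId', 'COLUMN';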
Another workaround is to set the schema option to not replicate constraints (guessing this will work). If you require constraints on the subscriber, you can then try to manually apply them on the subscriber after the fact (or script them out and use @post_snapshot_script).