So I'm running the SqlRestore utility created by AppHarbor, which can be found here:
https://github.com/appharbor/AppHarbor-SqlServerBulkCopy
The first step of the utility is to wipe out the data from the destination database, using these commands:
// http://stackoverflow.com/questions/155246/how-do-you-truncate-all-tables-in-a-database-using-tsql/156813#156813
StringBuilder commandBuilder = new StringBuilder();
commandBuilder.Append(
#"
-- disable all constraints
EXEC sp_msforeachtable ""ALTER TABLE ? NOCHECK CONSTRAINT all""
-- delete data in all tables
EXEC sp_msforeachtable ""DELETE FROM ?""
-- enable all constraints
exec sp_msforeachtable ""ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all""
");
When I run this (both via the program and manually via Management Studio), the DELETE step throws an error saying invalid column name schoolid.
At first I had no idea which table was throwing the error, so I rewrote the delete step as a cursor, shown here:
declare tableCursor cursor local forward_only for
select [name]
from sys.objects
where [type] = 'U'
declare @tableName varchar(50)
open tableCursor
fetch next from tableCursor into @tableName
while (@@FETCH_STATUS = 0) begin
    print 'trying to delete from ' + @tableName
    exec('delete from ' + @tableName)
    print 'deleted from ' + @tableName
    fetch next from tableCursor into @tableName
end
close tableCursor
deallocate tableCursor
Executing the script this way tells me that it was "trying to delete from table X", but when I look at the table definition of table X, that column DOES NOT EXIST (and never has)!
So next, I decided to just drop the table manually, then use the VS SQL Server schema comparison to re-generate the table in my destination database, since it was possibly corrupt somehow.
Once it's re-generated, I re-run that delete script, AND IT STILL THROWS THE ERROR!
Here's the table definition:
CREATE TABLE [dbo].[TourneyPoolMatchResult] (
[TourneyPoolMatchResultId] INT IDENTITY (1, 1) NOT NULL,
[TournamentTypeId] INT NOT NULL,
[WinningWrestlerId] INT NOT NULL,
[LosingWrestlerId] INT NOT NULL,
[WinType] VARCHAR (5) NOT NULL,
[Score] VARCHAR (20) NULL,
[BonusPoints] DECIMAL (2, 1) NOT NULL,
[AdvancementPoints] DECIMAL (2, 1) NOT NULL,
[PlacementPoints] INT NOT NULL,
[LosingWrestlerPlacementPoints] INT NOT NULL,
PRIMARY KEY CLUSTERED ([TourneyPoolMatchResultId] ASC)
);
GO
CREATE trigger [dbo].[trg_TourneyPoolMatchResult_Change]
on [dbo].[TourneyPoolMatchResult]
after insert, update, delete
as
exec [UpdateTeamPoints];
exec [UpdatePointsForSubmittal];
As you can see, nowhere in that table definition does it have anything about SchoolId.
What in the world is going on here?
You use a DELETE statement and you have triggers that fire on delete. Maybe one of the procedures your trigger calls (e.g. UpdateTeamPoints or UpdatePointsForSubmittal) references a column named schoolid? Please search your SQL code: is there a schoolid anywhere?
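One quick way to run that search (a hedged sketch; schoolid is the column from your error, and this only scans the current database) is to query the module definitions:
-- list every trigger, procedure, function or view whose source mentions schoolid
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%schoolid%';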
If your tables don't have FOREIGN KEY constraints, then instead of
DELETE FROM table_name;
use:
TRUNCATE TABLE table_name;
From TRUNCATE:
Removes all rows from a table or specified partitions of a table,
without logging the individual row deletions. TRUNCATE TABLE is
similar to the DELETE statement with no WHERE clause; however,
TRUNCATE TABLE is faster and uses fewer system and transaction log
resources.
and
TRUNCATE TABLE cannot activate a trigger because the operation does
not log individual row deletions. For more information, see CREATE
TRIGGER (Transact-SQL).
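If none of the tables are referenced by a FOREIGN KEY constraint, the wipe step from the question could, as an untested sketch, reuse the same sp_MSforeachtable pattern with TRUNCATE in place of DELETE (note that TRUNCATE TABLE fails on any table that is the target of a foreign key reference):
EXEC sp_MSforeachtable 'TRUNCATE TABLE ?'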
I have hit a problem with SQL Server that results in it infinitely recompiling a function.
To reproduce, create a new database with the option Parameterization = Forced or execute the following on an existing DB:
ALTER DATABASE [DatabaseName] SET PARAMETERIZATION FORCED WITH NO_WAIT
Then execute the following script:
CREATE TABLE dbo.TestTable(
ID int IDENTITY(1,1) NOT NULL,
FullTextField varchar(100) NULL,
CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED
(ID ASC)
)
GO
IF NOT EXISTS(SELECT 1 FROM sysfulltextcatalogs WHERE name = 'FullTextCat')
CREATE FULLTEXT CATALOG FullTextCat;
GO
CREATE FULLTEXT INDEX ON dbo.TestTable (FullTextField) KEY INDEX PK_TestTable
ON FullTextCat
WITH
CHANGE_TRACKING AUTO
GO
CREATE OR ALTER FUNCTION dbo.fn_TestFullTextSearch(@Filter VARCHAR(8000))
RETURNS TABLE
AS
RETURN SELECT
ID,
FullTextField
FROM dbo.TestTable
WHERE CONTAINS(FullTextField, @Filter)
GO
SELECT * FROM dbo.fn_TestFullTextSearch('"a*"')
The query will never return. Running SQL Profiler to monitor SP:CacheInsert and SP:CacheRemove will show SQL server is doing this endlessly and the SQL logs will show countless "A possible infinite recompile was detected for SQLHANDLE" messages.
Setting the Parameterization = Simple works around the issue but we need this to be set to Forced for other reasons.
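(For reference, the Simple setting mentioned above is the counterpart of the ALTER DATABASE statement shown earlier:)
ALTER DATABASE [DatabaseName] SET PARAMETERIZATION SIMPLE WITH NO_WAIT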
Has anyone come across this issue before and/or have a suggested solution?
Thanks,
Chuck
While I still experience the problem with the original code I provided, by following @Martin's approach of explicitly parameterizing the call to the function:
EXEC sys.sp_executesql N'SELECT * FROM dbo.fn_TestFullTextSearch(@Filter)', N'@Filter VARCHAR(4)', @Filter = '"a*"'
I have been able to successfully work around the problem.
I have code that works in SQL Server 2017 but not in SQL Server 2019. The problem is related to tempdb caching: it has to do with creating temporary tables in stored procedures and changing their structure using dynamic SQL. We need to do that for various dynamic reporting purposes. The first time the procedure is called, the table's structure is cached, and subsequent calls to the procedure fail or return invalid results. How do I prevent caching of such tables? Below is some sample code; how come it works in 2017? Help appreciated.
CREATE PROCEDURE [dbo].[tempDBCachingCheck]
    @yearList varchar(max)
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @yearCount int
    DECLARE @yearCounter INT
    DECLARE @yearValue INT
    DECLARE @sql nvarchar(max)

    -- With table variable
    DECLARE @tempYearList TABLE (id INT IDENTITY(1,1), rpt_yr int)
    INSERT INTO @tempYearList (rpt_yr)
    SELECT value FROM STRING_SPLIT(@yearList, ',');

    SELECT * FROM @tempYearList

    --------------------------------------------------------------------
    -- With temporary table, since we will be altering this with dynamic sql
    CREATE TABLE #returnTable (id INT IDENTITY(1,1))

    -- Tried adding a named constraint to not make it cache, but does not work
    ALTER TABLE #returnTable
    ADD CONSTRAINT UC_ID UNIQUE (id);

    SELECT @yearCount = COUNT(*) FROM @tempYearList

    -- Add the years as columns to the return table to demonstrate the problem
    SET @sql = N'ALTER TABLE #returnTable ADD '
    SET @yearCounter = 1
    WHILE @yearCounter <= @yearCount
    BEGIN
        SELECT @yearValue = rpt_yr FROM @tempYearList WHERE id = @yearCounter

        IF @yearCounter > 1
            SET @sql = @sql + N', '

        SET @sql = @sql + N' [' + convert(varchar(20), @yearValue) + N'] float'
        SET @yearCounter = @yearCounter + 1
    END
    EXECUTE sp_executesql @sql

    SELECT * FROM #returnTable

    -- No need to drop the temporary tables but doing just in case
    DROP TABLE #returnTable
END
GO
-- run these statements and you will see that the second call returns the cached #returnTable
EXEC tempDBCachingCheck '2019,2020'
EXEC tempDBCachingCheck '2017,2018,2019,2020'
GO
-- Clear temp table cache and call in reverse order, then will hit an error
-- 'A severe error occurred on the current command. The results, if any, should be discarded.'
USE tempDB
GO
DBCC FREEPROCCACHE
GO
EXEC tempDBCachingCheck '2017,2018,2019,2020'
EXEC tempDBCachingCheck '2019,2020'
GO
It seems this has been fixed in one of the cumulative updates. The description seems to match:
KB4538853:
When you repeatedly run a stored procedure that uses temporary table with indexes on SQL Server 2019, the client may receive an unexpected error with message "A severe error occurred on the current command" and an access violation exception is recorded on the SQL Server. If the same workload is executed on any previous major version of SQL Server, this issue does not occur.
Dan Guzman's recommendation to install the newest CU is the way to go.
Using EXEC tempDBCachingCheck '2017,2018,2019,2020' WITH RECOMPILE could help as well.
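To check whether an instance already carries that fix, the patch level can be read with standard SERVERPROPERTY values (a quick sketch; match the reported CU against the one that ships KB4538853):
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,             -- RTM / SPn
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel  -- e.g. CU8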
Note: the highest-voted linked question does not solve the problem for system stored procedures, but it's close. With the help of the commenters, I came to a working answer.
Trying to use statements such as the following for sp_spaceused throws an error:
SELECT * INTO #tblOutput exec sp_spaceused 'Account'
SELECT * FROM #tblOutput
The errors:
Must specify table to select from.
and:
An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name.
When I fully declare the table first, it works as expected, so it seems to me that the stored procedure does return an actual table.
CREATE TABLE #tblOutput (
name NVARCHAR(128) NOT NULL,
rows CHAR(11) NOT NULL,
reserved VARCHAR(18) NOT NULL,
data VARCHAR(18) NOT NULL,
index_size VARCHAR(18) NOT NULL,
unused VARCHAR(18) NOT NULL)
INSERT INTO #tblOutput exec sp_spaceused 'Response'
SELECT * FROM #tblOutput
Why is it not possible to use a temp table or table variable with the result set of EXECUTE sp_xxx? Or: does a more compact expression exist than having to predefine the full table each time?
(incidentally, and off-topic, Googling for the exact term SELECT * INTO #tmp exec sp_spaceused at the time of writing, returned exactly one result)
TL;DR: use SET FMTONLY OFF with OPENQUERY, details below.
It appears that the link provided by Daniel E. is only part of the solution. For instance, if you try:
-- no need to use sp_addlinkedserver
-- must fully specify sp_, because default db is master
SELECT * FROM OPENQUERY(
[SERVERNAME\SQL2008],
'exec somedb.dbo.sp_spaceused ''Account''')
you will receive the following error:
The OLE DB provider "SQLNCLI10" for linked server "LOCALSERVER\SQL2008" supplied inconsistent metadata for a column. The name was changed at execution time.
I found the solution through this post, and then a blog-post on OPENQUERY, which in turn told me that until SQL2008, you need to use SET FMTONLY OFF. The final solution, which is essentially surprisingly simple (and easier to accomplish since there is no need to specify a loopback linked server), is this:
SELECT * FROM OPENQUERY(
[SERVERNAME\SQL2008],
'SET FMTONLY OFF
EXEC somedb.dbo.sp_spaceused ''Account''')
In addition, if you haven't set DATA-ACCESS, you may get the following error:
Server 'SERVERNAME\SQL2008' is not configured for DATA ACCESS.
This can be remedied by running the following command:
EXEC sp_serveroption 'SERVERNAME\SQL2008', 'DATA ACCESS', TRUE
We cannot SELECT from a stored procedure, which is why SELECT * INTO ... EXEC sp_ will not work.
To get the result set returned from a stored procedure, we can INSERT INTO a table.
The SELECT INTO statement creates a table on the fly and inserts data from the source table/view/function. The only condition is that the source must exist and you must be able to SELECT from it.
SQL Server doesn't allow you to SELECT from a stored procedure, so when executing one you can only use the INSERT INTO statement: at run time you add the returned result set into a table and SELECT from that table at a later stage.
The INSERT INTO statement requires the destination table to already exist. Therefore, whether you use a temp table, a table variable or a persistent SQL Server table, you need to create the table first, and only then can you use the syntax
INSERT INTO #TempTable
EXECUTE sp_Proc
USE [YOUR DATABASE NAME]
CREATE TABLE dbo.[YOUR TABLE NAME]
(Database_Name Varchar(128),
DataBase_Size VarChar(128),
unallocated_Space Varchar(128),
reserved Varchar(128),
data Varchar(128),
index_size Varchar(128),
unused Varchar(128)
);
INSERT INTO dbo.[YOUR TABLE NAME]
(
Database_Name,
DataBase_Size,
unallocated_Space,
reserved,
data,
index_size,
unused
)
EXEC sp_spaceused @oneresultset = 1
-- To get it to return it all as one data set, add the @oneresultset = 1 at the end and voilà, good to go for writing to a table. :)
I have SQL Server 2008 R2. I have around 150 tables in a database and for each table I have recently created triggers. It is working fine in my local environment.
Now I want to deploy them on my live environment. The question is I want to deploy only the triggers.
I tried the Generate Scripts wizard, but it creates a script with the table schema along with the triggers, NOT the triggers only.
Is there any way to generate a drop-and-create script for all the triggers?
Forget the wizard. I think you have to get your hands dirty with code. The script below prints all triggers' code and stores it in a table. Just copy the script's print output or get it from #triggerFullText.
USE YourDatabaseName
GO
SET NOCOUNT ON;
CREATE TABLE #triggerFullText ([TriggerName] VARCHAR(500), [Text] VARCHAR(MAX))
CREATE TABLE #triggerLines ([Text] VARCHAR(MAX))
DECLARE @triggerName VARCHAR(500)
DECLARE @fullText VARCHAR(MAX)

SELECT @triggerName = MIN(name)
FROM sys.triggers

WHILE @triggerName IS NOT NULL
BEGIN
    INSERT INTO #triggerLines
    EXEC sp_helptext @triggerName

    --sp_helptext gives us one row per trigger line
    --here we join lines into one variable
    SELECT @fullText = ISNULL(@fullText, '') + CHAR(10) + [TEXT]
    FROM #triggerLines

    --adding "GO" for ease of copy paste execution
    SET @fullText = @fullText + CHAR(10) + 'GO' + CHAR(10)

    PRINT @fullText

    --accumulating result for future manipulations
    INSERT INTO #triggerFullText([TriggerName], [Text])
    VALUES(@triggerName, @fullText)

    --iterating over next trigger
    SELECT @triggerName = MIN(name)
    FROM sys.triggers
    WHERE name > @triggerName

    SET @fullText = NULL
    TRUNCATE TABLE #triggerLines
END
DROP TABLE #triggerFullText
DROP TABLE #triggerLines
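A lighter-weight alternative (a sketch, not part of the original script) is to read the trigger bodies straight from sys.sql_modules instead of looping over sp_helptext:
-- one row per DML trigger, with a generated DROP statement and the full CREATE text
SELECT t.name AS TriggerName,
       'DROP TRIGGER ' + QUOTENAME(OBJECT_SCHEMA_NAME(t.object_id)) + '.' + QUOTENAME(t.name) AS DropStatement,
       m.definition AS CreateStatement
FROM sys.triggers AS t
JOIN sys.sql_modules AS m ON m.object_id = t.object_id
WHERE t.parent_class_desc = 'OBJECT_OR_COLUMN';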
Just in the Generate Scripts wizard, in the second step ("Set Scripting Options"), press the Advanced button => Table/View Options => set Script Triggers to True.
Check also this link or this. If you want only the triggers, just select one table to proceed to the next step.
I am trying to restore a backup of a Microsoft Dynamics NAV database, which unfortunately fails as it tries to set a CLUSTERED KEY for the tables which already have clustered keys.
In NAV, every company in the database gets its own copy of the tables, prefixed with the company's name, e.g. COMPANY$User_Setup. I'd therefore like to remove any clustered key on a given company, which means on any table whose name starts with 'Company$'.
Has anybody got a SQL statement that could perform this?
You'll need to do it as a cursor. Assuming each PK constraint is named consistently and is based on the table name, you'd be able to do something like (untested, so may contain typos or vague syntax errors):
DECLARE @tablename sysname
DECLARE @sql nvarchar(max)

DECLARE mycursor CURSOR FOR SELECT name FROM sysobjects WHERE name LIKE 'Company$%' AND xtype = 'U'
OPEN mycursor

FETCH NEXT FROM mycursor INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER TABLE ' + QUOTENAME(@tablename) + N' DROP CONSTRAINT ' + QUOTENAME('PK_' + @tablename)
    EXEC sp_ExecuteSQL @sql

    FETCH NEXT FROM mycursor INTO @tablename
END

CLOSE mycursor
DEALLOCATE mycursor
If your PK's aren't named based on tablename, then you'll have to modify this to also query based on sysconstraints or sysindexes to get the actual PK name.
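For example, here is a sketch using the newer catalog views (sys.key_constraints rather than sysconstraints/sysindexes) to look up the actual PK name per table, which could then be fed into the same dynamic ALTER TABLE:
-- primary key constraint name for every table whose name starts with Company$
SELECT t.name AS table_name,
       kc.name AS pk_constraint_name
FROM sys.tables AS t
JOIN sys.key_constraints AS kc ON kc.parent_object_id = t.object_id AND kc.type = 'PK'
WHERE t.name LIKE 'Company$%';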