There are a couple of data migration scripts in my SSDT project.
The first one copies data from a table into a temporary table:
IF EXISTS
(
    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE table_name = N'DocumentEvent'
      AND column_name = N'Thumbprint'
)
BEGIN
    IF NOT EXISTS
    (
        SELECT * FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME = N'tmp_DocumentEventCertificates'
    )
    BEGIN
        CREATE TABLE tmp_DocumentEventCertificates
        (
            [EventId] UNIQUEIDENTIFIER NOT NULL,
            [Thumbprint] NVARCHAR(100)
        )
    END

    INSERT INTO tmp_DocumentEventCertificates
    SELECT [EventId], [Thumbprint]
    FROM [DocumentEvent]
    WHERE [Thumbprint] IS NOT NULL
END
The second one transfers data from the temporary table to the target table:
IF EXISTS
(
    SELECT * FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_NAME = N'tmp_DocumentEventCertificates'
)
BEGIN
    UPDATE [DocumentAttachment]
    SET [DocumentAttachment].[Certificate_Thumbprint] = tmp.[Thumbprint]
    FROM tmp_DocumentEventCertificates AS tmp
    WHERE ([DocumentAttachment].[EventId] = tmp.[EventId]) AND
          ([DocumentAttachment].[ParentDocumentAttachmentId] IS NOT NULL)

    DROP TABLE tmp_DocumentEventCertificates
END
Column [Thumbprint] is being removed from the [DocumentEvent] table.
Column [Certificate_Thumbprint] is being added to the [DocumentAttachment] table.
Data must be transferred from [DocumentEvent].[Thumbprint] to [DocumentAttachment].[Certificate_Thumbprint].
These scripts work as expected when the database is in the state that requires the migration above, that is, [DocumentEvent].[Thumbprint] exists and [DocumentAttachment].[Certificate_Thumbprint] does not.
But once the database has been migrated, every attempt to deploy the dacpac fails with an
"Invalid column name 'Thumbprint'" error.
I'm almost sure this happens because SQLCMD tries to compile the deployment script as a whole, which can only succeed while [DocumentEvent].[Thumbprint] exists.
But what is the workaround?
It looks like the IF EXISTS in the first script can't help.
Yes, you are right, it's a compilation error.
If the column does not exist, your script cannot be compiled.
IF EXISTS and other control-flow constructs are not analyzed at compile time.
You should wrap the code that produces the compilation error in EXEC:
EXEC(
    'INSERT INTO tmp_DocumentEventCertificates
     SELECT [EventId], [Thumbprint]
     FROM [DocumentEvent]
     WHERE [Thumbprint] IS NOT NULL')
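If you prefer sp_executesql over EXEC (for example, to pass parameters later), the same wrapping works. A minimal sketch of the identical statement:

-- Same idea as the EXEC above: the statement is only compiled when it runs
EXEC sp_executesql N'
    INSERT INTO tmp_DocumentEventCertificates
    SELECT [EventId], [Thumbprint]
    FROM [DocumentEvent]
    WHERE [Thumbprint] IS NOT NULL'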
Related
I am trying to execute a procedure with a parameter, and depending on the value of the parameter, three different IF conditions will be evaluated to verify which query it will execute from a linked server.
But when I execute the procedure, it seems to check whether the tables inside all of the IF blocks exist before starting the query. I know that only one of the tables exists (that is why I am using the parameter), so it shouldn't fail, but I nevertheless get the following error:
Msg 7314, Level 16, State 1, Line 25
The OLE DB provider "Microsoft.ACE.OLEDB.16.0" for linked server "LinkedServer" does not contain the table "D100". The table either does not exist or the current user does not have permissions on that table.
In the code below, assume that the parameter is 300; I then get the message above.
Do you know if there is a way to stop the query from checking all the tables, and have it check only the one whose IF condition is met?
ALTER PROCEDURE [dbo].[Import_data]
    @p1 INT = 0
AS
BEGIN
    SET NOCOUNT ON;

    IF (@p1 = 100)
    BEGIN
        DROP TABLE IF EXISTS Table1
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table1
        FROM [LinkedServer]...[D100]
    END

    IF (@p1 = 200)
    BEGIN
        DROP TABLE IF EXISTS Table2
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table2
        FROM [LinkedServer]...[D200]
    END

    IF (@p1 = 300)
    BEGIN
        DROP TABLE IF EXISTS Table3
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table3
        FROM [LinkedServer]...[D300]
    END
END
I have tried googling it, but I mostly found workarounds such as running a sub-procedure, which is not really a clean solution, I think.
Okay, it seems that I found the answer. Even with an IF statement, SQL Server validates the entire query before executing it, so the way to overcome this is to use a dynamic SQL query.
"SQL Server Dynamic SQL is a programming technique that allows you to construct SQL statements dynamically at runtime. It allows you to create more general purpose and flexible SQL statement because the full text of the SQL statements may be unknown at compilation."
This is how the query looks now. Instead of multiple IF statements, the query changes dynamically depending on the parameter.
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = N'DROP TABLE IF EXISTS Table1;
    SELECT [Field1]
          ,[Field2]
          ,[Field3]
          ,[Field4]
          ,[Field5]
          ,[Field6]
    INTO Table1
    FROM [LinkedServer]...[D' + CONVERT(NVARCHAR(3), @p1) + N']'
EXEC sp_executesql @SQL
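If the target table should still vary with the parameter (Table1/Table2/Table3, as in the original procedure), the table name can be built dynamically as well. A sketch, assuming @p1 is always exactly 100, 200 or 300:

-- Derive the table suffix from the parameter: 100 -> 1, 200 -> 2, 300 -> 3
DECLARE @SQL NVARCHAR(MAX)
DECLARE @n NVARCHAR(3) = CONVERT(NVARCHAR(3), @p1 / 100)
SET @SQL = N'DROP TABLE IF EXISTS Table' + @n + N';
    SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
    INTO Table' + @n + N'
    FROM [LinkedServer]...[D' + CONVERT(NVARCHAR(3), @p1) + N']'
EXEC sp_executesql @SQL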
We're trying to write some automated reports that execute SQL statements we have stored in a table. The table data is normally used by a stored procedure called from triggers, fed with data passed in via temp tables (created in the trigger statements); each row holds a table name, an SQL statement that works on #TempInserted and #TempDeleted (which correspond to the Inserted and Deleted objects from the trigger), and then some e-mail columns that determine where to send the output.
This all works fine from the trigger statements, as each trigger creates the temp tables once, during execution:-
SELECT * INTO #TempInserted FROM INSERTED
SELECT * INTO #TempDeleted FROM DELETED
Then the trigger calls the TriggerHandler stored procedure, passing the table name through as a parameter.
..
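For illustration, each trigger looks roughly like this; the shape follows the description above, but the trigger name and the TriggerHandler parameter name are hypothetical:

-- Sketch of one trigger: snapshot INSERTED/DELETED into temp tables,
-- then hand off to the shared handler with the table name.
CREATE TRIGGER trg_TableA_Handler ON dbo.TableA
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * INTO #TempInserted FROM INSERTED
    SELECT * INTO #TempDeleted FROM DELETED
    -- Temp tables created here are visible inside the called procedure
    EXEC dbo.TriggerHandler @TableName = N'TableA'
END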
However, when I try to create these temp tables dynamically from a general stored procedure, in order to fire off these statements as reports in a batch (so we don't duplicate the statements), I'm hitting a problem:-
SELECT * INTO #TempInserted FROM ...
works fine from a defined table or object (e.g. "FROM INSERTED"), but I've found that it can't get its schema from a dynamic query.
For example, I can do
SELECT TOP 1 * INTO #Test FROM TableA
SELECT * FROM #Test
DROP TABLE #Test
But I can't then do
EXECUTE sp_executesql N'SELECT TOP 1 * INTO #Test FROM TableA'
SELECT * FROM #Test
DROP TABLE #Test
because then #Test is local to the EXECUTE context, and not its parent.
I can, however, do the insert in the EXECUTE (or a stored procedure) because the temp table is in scope, if I've already created the table schema:-
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
EXECUTE sp_executesql N'INSERT INTO #Test SELECT TOP 10 * FROM TableA'
SELECT * FROM #Test
DROP TABLE #Test
So, that's OK, but my problem comes when I want to dynamically create that schema, depending on the table name we're running the reports for. The INSERT works:-
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
DECLARE @Table NVARCHAR(20) = 'TableA'
DECLARE @SQL NVARCHAR(200) = N'INSERT INTO #Test SELECT TOP 10 * FROM ' + @Table
EXECUTE sp_executesql @SQL
SELECT * FROM #Test
DROP TABLE #Test
But only if the temp table already has a schema. If I try to conditionally create the schema, depending on the table selected, I get a parsing error:-
DECLARE @Table NVARCHAR(20) = 'TableA'
IF @Table = 'TableA'
    SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
IF @Table = 'TableB'
    SELECT * INTO #Test FROM TableB WHERE 1 = 2 -- create an empty schema
DECLARE @SQL NVARCHAR(200) = N'INSERT INTO #Test SELECT TOP 10 * FROM ' + @Table
EXECUTE sp_executesql @SQL
SELECT * FROM #Test
DROP TABLE #Test
gives "There is already an object named '#Test' in the database." - so the query parser isn't following the structure of the query, which only actually creates the temp table once. This also holds true if you do
SELECT * INTO #Test FROM ....
DROP TABLE #Test
SELECT * INTO #Test FROM ....
So, is there a way in SQL Server 2012, of either being able to do
SELECT * INTO #Test FROM (dynamic SQL statement)
or to bypass the parser thinking you're creating the object twice
DECLARE @Table NVARCHAR(20) = 'TableA'
IF @Table = 'TableA'
    SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
IF @Table = 'TableB'
    SELECT * INTO #Test FROM TableB WHERE 1 = 2 -- create an empty schema
or to dynamically create the locally scoped temp table, from an existing database table's schema, where the table name is stored in a variable (all the examples I've found of this use the "SELECT * INTO #Test" code, which as I mentioned requires a statically defined object to create from)?
-------edit--------
For a bit of context, here's an example of why we're doing this:-
A trigger may fire producing a warning e-mail if a certain item type is transacted into a certain location. This works with our current triggers. The reason we're doing this is so that we can, in future, write a UI so the users can add other item types to this list themselves, rather than us having to update the trigger - this also means that we can control/validate the SQL being generated, behind the scenes of a point-and-click interface so that our users don't need to know any SQL and that we can be sure that nothing malicious or that will cause errors will be used.
We also can't do this in the BLL because it's from our ERP system and this would then mean we'd have to make changes to base objects, which is obviously undesirable if it can be avoided.
There is the potential for some of these e-mails to be missed/ignored/forgotten/not-actioned, so the users requested the same information on a periodic basis, as well as as-at the transaction occurring:-
So, next, we want to produce, for some of these trigger statements, daily/weekly/monthly reports. Now, obviously, it would be ideal if we could use the existing SQL trigger statements we have set up as then if one were changed it would then automatically affect the periodical reports - stay DRY. It would also mean that if we set up a new trigger, we could automatically include it in the reports by merely inserting a reference to the trigger code, along with the table name, frequency, etc, into the table that drives the periodical reports stored procedure. Again, in future, we could then write a UI, so that users can then request and schedule these reports themselves, with no intervention required from us.
I suspect I'm stuck in a catch-22 situation here. However, I've found a way around it that isn't too messy. I extract the item processing code into another stored procedure, and then compound execution of that onto the dynamic "SELECT INTO" statement - that way it runs in the same execution instance and thus has access to the temp table created in, and local to, that instance:-
SET @SQL = 'SELECT * INTO #TestTable FROM ' + @Table + ' WHERE ' + @WhereClause
SET @SQL = @SQL + '; EXEC ReportProcess'
EXECUTE sp_executesql @SQL
The ReportProcess stored procedure then has access to the temporary table and can process it accordingly.
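For clarity, a minimal sketch of what ReportProcess can look like (its body here is illustrative): because it is EXEC'd from within the same dynamic batch that created #TestTable, the temp table is still in scope inside the procedure:

CREATE PROCEDURE dbo.ReportProcess
AS
BEGIN
    SET NOCOUNT ON;
    -- #TestTable is visible here: it was created by the calling
    -- sp_executesql batch, and this procedure executes within that scope
    SELECT * FROM #TestTable
END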
I'm creating an SSIS package to load data from a CSV file into a SQL table. The sample CSV file is
EMP_ID,EMP_NAME,DEPT_ID,MANAGER_ID,SALARY
1801,SCOTT,20,1221,3000
1802,ALLEN,30,1221,3400
I need to load data into a SQL Server table, but while loading I need to load Department Name and Manager Name instead of their IDs. So I need to convert the CSV source to
1801,SCOTT,FINANCE,JOHNSON,3000
1802,ALLEN,HR,JOHNSON,3400
The values for Department Name and Manager Name come only from the SQL Server database. But how do I query and convert the IDs to text values?
I'm new to SSIS; please suggest how I can achieve this.
Thanks
John
CREATE PROCEDURE [dbo].[BulkInsert]
(
    -- Declare parameters for your CSV file here; the path parameter
    -- below is a placeholder
    @PathOfYourTextFile VARCHAR(260)
)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @query VARCHAR(MAX)

    -- Placeholder column names/types: match these to your CSV layout
    CREATE TABLE #TEMP
    (
        [FieldName1] INT NOT NULL,
        [FieldName2] INT NOT NULL
    )

    SET @query = 'BULK INSERT #TEMP FROM ''' + @PathOfYourTextFile + ''' WITH ( FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'' )'
    --PRINT @query
    --RETURN
    EXECUTE (@query)
    BEGIN TRAN;

    MERGE TableName AS Target
    -- Here you can get the Department Name and Manager Name by using Target.Id
    -- against the table you want to pull the Manager Name from
    USING (SELECT * FROM #TEMP) AS Source
        ON (Target.YourTableId = Source.YourTextFileFieldId)
    -- If the row already exists in the target table, update it;
    -- otherwise insert a new row
    WHEN MATCHED THEN
        UPDATE SET Target.SomeId = Source.SomeId
    WHEN NOT MATCHED BY TARGET THEN
        -- Placeholder insert; list your real columns here
        INSERT (SomeId) VALUES (Source.SomeId);

    COMMIT TRAN;
END
The above code is just an example; with its help you can adapt it to your own code. One more important thing: BULK INSERT is a great way to load CSV files, so try to use it. :)
In the SSIS package, on the Data Flow tab, use the Lookup transformation from the Toolbox. You specify the table to get your string values from, which columns to use for the join, and the column to substitute your IDs with.
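The Lookup transformation effectively runs a join against a reference query. A sketch of the two reference queries this scenario would need, assuming hypothetical DEPARTMENT and EMPLOYEE tables that hold the names:

-- Reference query for the department lookup: join on DEPT_ID, output DEPT_NAME
SELECT DEPT_ID, DEPT_NAME FROM dbo.DEPARTMENT
-- Reference query for the manager lookup: join MANAGER_ID to EMP_ID, output EMP_NAME
SELECT EMP_ID, EMP_NAME FROM dbo.EMPLOYEE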
I have SQL Server 2008 R2. I have around 150 tables in a database and for each table I have recently created triggers. It is working fine in my local environment.
Now I want to deploy them on my live environment. The question is I want to deploy only the triggers.
I tried the Generate Scripts wizard, but it creates a script with the table schema along with the triggers, NOT the triggers only.
Is there any way to generate a drop-and-create type script for all the triggers?
Forget the wizard. I think you have to get your hands dirty with code. The script below prints the code of all triggers and stores it in a table. Just copy the script's print output, or take it from #triggerFullText.
USE YourDatabaseName
GO
SET NOCOUNT ON;

CREATE TABLE #triggerFullText ([TriggerName] VARCHAR(500), [Text] VARCHAR(MAX))
CREATE TABLE #triggerLines ([Text] VARCHAR(MAX))

DECLARE @triggerName VARCHAR(500)
DECLARE @fullText VARCHAR(MAX)

SELECT @triggerName = MIN(name)
FROM sys.triggers

WHILE @triggerName IS NOT NULL
BEGIN
    INSERT INTO #triggerLines
    EXEC sp_helptext @triggerName

    --sp_helptext gives us one row per trigger line;
    --here we join the lines into one variable
    SELECT @fullText = ISNULL(@fullText, '') + CHAR(10) + [Text]
    FROM #triggerLines

    --adding "GO" for ease of copy/paste execution
    SET @fullText = @fullText + CHAR(10) + 'GO' + CHAR(10)
    PRINT @fullText

    --accumulating the result for future manipulation
    INSERT INTO #triggerFullText ([TriggerName], [Text])
    VALUES (@triggerName, @fullText)

    --moving on to the next trigger
    SELECT @triggerName = MIN(name)
    FROM sys.triggers
    WHERE name > @triggerName

    SET @fullText = NULL
    TRUNCATE TABLE #triggerLines
END

DROP TABLE #triggerFullText
DROP TABLE #triggerLines
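As a side note, on SQL Server 2005 and later you can also pull every trigger's source in one set-based query via OBJECT_DEFINITION, which avoids the sp_helptext loop; a minimal sketch:

-- One row per trigger, with its full source text plus a trailing GO
SELECT t.name AS TriggerName,
       OBJECT_DEFINITION(t.object_id) + CHAR(10) + 'GO' AS [Text]
FROM sys.triggers AS t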
Alternatively, in the Generate Scripts wizard, at the second step ("Set Scripting Options"), press the Advanced button => Table/View Options => set Script Triggers to True.
If you want only the triggers, just select one table to proceed to the next step.
We have a script that must support being re-run several times.
We have an MS-SQL script that updates a table if a (now obsolete) column exists, then deletes the column. To ensure that the script can be run several times, it first checks for the existence of a column before performing the updates.
The script works as expected on our dev database, updating the data on the first run, then displaying the message 'Not updating' on subsequent runs.
On our test database the script runs fine on the first run, but errors with "Invalid column name 'OldColumn'" on subsequent runs; if I comment out the UPDATE and ALTER statements it runs as expected.
Is there a way to force the script to run even if there's a potential error, or is it something to do with how the database was set up? (Fingers crossed I'm not looking like a complete noob!)
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'OldColumn')
BEGIN
    PRINT 'Updating and removing old column...'
    UPDATE MyTable SET NewColumn='X' WHERE OldColumn=1;
    ALTER TABLE MyTable DROP COLUMN OldColumn;
END
ELSE
    PRINT 'Not updating'
GO
As a workaround, you could do
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'OldColumn')
BEGIN
    PRINT 'Updating and removing old column...'
    EXEC ('UPDATE MyTable SET NewColumn=''X'' WHERE OldColumn=1;');
    ALTER TABLE MyTable DROP COLUMN OldColumn;
END
ELSE
    PRINT 'Not updating'
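The underlying rule here is SQL Server's deferred name resolution: a batch that references a missing table compiles fine (resolution is deferred to execution), but a reference to a missing column on an existing table fails at compile time, even inside an IF branch that never runs. A minimal repro sketch, assuming MyTable exists without OldColumn and NoSuchTable does not exist:

IF 1 = 0
    SELECT OldColumn FROM MyTable   -- fails at compile time: Invalid column name
GO
IF 1 = 0
    SELECT * FROM NoSuchTable       -- compiles; table resolution is deferred
GO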