I need to write a query that will run on different versions of a database. Some of them have a column named Delflag and some don't. If the column exists, it should be included in the select statement and the WHERE clause.
I tried it with the following code:
IF EXISTS
(
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Bericht'
AND column_name = 'DelFlag'
)
...
ELSE
...
In the IF block, I included the column; in the ELSE block it is not mentioned. However, if the column doesn't exist, I just get the error "Msg 207 - invalid column name". I checked that the column name is only referenced inside the IF EXISTS block.
I tried to select and filter on a column if it exists, and if not, just leave that part out. I expected the query to run on different databases where the column sometimes exists and sometimes doesn't. If the column exists, it runs, but if it doesn't exist, the query throws
Msg 207
Invalid column name
The query is failing at the binding stage because a column you are referencing doesn't exist.
You are going to have to use dynamic SQL: build a string based on your logical check and then execute the resulting string using sp_executesql.
This would look something like
DECLARE @sql NVARCHAR(MAX) = 'SELECT ColumnA, ColumnB';
IF EXISTS
(
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Bericht' AND
column_name = 'DelFlag'
)
BEGIN
SET @sql += ', DelFlag';
END
SET @sql += ' FROM Bericht;';
EXEC sp_executesql @stmt = @sql;
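If the query also needs to filter on the conditional column, the same pattern extends to the WHERE clause; sp_executesql can then bind the filter value as a parameter instead of concatenating it into the string. A sketch along those lines (the @flagValue parameter and its value are illustrative, not from the question):

```sql
-- Sketch: check once, then conditionally append both the column and the WHERE clause.
DECLARE @sql NVARCHAR(MAX) = N'SELECT ColumnA, ColumnB';
DECLARE @hasDelFlag bit = 0;

IF EXISTS
(
    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE table_name = 'Bericht' AND column_name = 'DelFlag'
)
    SET @hasDelFlag = 1;

IF @hasDelFlag = 1
    SET @sql += N', DelFlag';

SET @sql += N' FROM Bericht';

IF @hasDelFlag = 1
    SET @sql += N' WHERE DelFlag = @flagValue';  -- bound below, not concatenated

EXEC sp_executesql @stmt = @sql, @params = N'@flagValue bit', @flagValue = 0;
```

Declaring @flagValue via @params is safe even on databases where the WHERE clause is never appended; sp_executesql allows declared-but-unused parameters.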
Related
I've created a dynamic SQL script to generate statements to replace empty strings with NULL for every column in every table in my database. I've stored the script in the variable @SQL.
When I run EXEC @SQL, it generates the following results:
(No column name)
UPDATE [TableX] SET [ColumnA] = NULL WHERE [ColumnA] =''
UPDATE [TableX] SET [ColumnB] = NULL WHERE [ColumnB] =''
UPDATE [TableX] SET [ColumnC] = NULL WHERE [ColumnC] =''
UPDATE [TableY] SET [ColumnA] = NULL WHERE [ColumnA] =''
UPDATE [TableY] SET [ColumnB] = NULL WHERE [ColumnB] =''
UPDATE [TableY] SET [ColumnC] = NULL WHERE [ColumnC] =''
And so on... (there is an inconsistent/unknown number of columns and tables, and therefore an inconsistent/unknown number of results).
My problem is that rather than simply returning these statements as results, I want to actually execute all of the individual statements. I would be very grateful if someone could advise me on the easiest way to do so.
Thanks in advance!
EXEC @SQL will not return the results in your question unless @SQL is the name of a stored procedure. It seems you are actually using EXEC (@SQL) to execute a batch of SELECT statements, each of which returns a single-column, single-row result containing an UPDATE statement.
Below is example code you can add to the end of your existing script to concatenate the multiple result sets into a single batch of statements for execution.
DECLARE @UpdateStatementsBatch nvarchar(MAX);
DECLARE @UpdateStatementsTable TABLE(UpdateStatement nvarchar(MAX));
INSERT INTO @UpdateStatementsTable
EXEC(@SQL);
SELECT @UpdateStatementsBatch = STRING_AGG(UpdateStatement, N';') + N';'
FROM @UpdateStatementsTable;
EXEC (@UpdateStatementsBatch);
Note that STRING_AGG requires SQL Server 2017 or later. It's probably better to modify your existing code to build the batch of update statements rather than concatenate after the fact, but I can't provide an example without seeing your existing code.
Here's a suggestion for something you can try to build a single update statement per table.
Obviously I have no idea what you've built to construct your existing SQL, but you should be able to tweak this to your requirements.
Basically, concatenate all columns for all tables where they are a varchar datatype; if a value is an empty string, update it to NULL, otherwise retain the current value.
I don't know what version of SQL Server you have, so I've used FOR XML for compatibility; replace it with STRING_AGG if supported. Add any additional filtering, e.g. for only specific tables.
with c as (
select c.table_name, cols=Stuff(
(select concat(',',QuoteName(column_name),' = ','iif(', QuoteName(column_name), '='''', NULL, ', QuoteName(column_name), ')',Char(10))
from information_schema.columns c2
where data_type in ('varchar','nvarchar') and c2.table_name=c.table_name
for xml path('')
),1,1,'')
from information_schema.columns c
group by c.table_name
)
select concat('update ', QuoteName(c.table_name), ' set ', cols,';')
from c
where cols is not null
As this is presumably a one-off data fix, you can just cut and paste the resulting SQL into SSMS and run it.
Alternatively, you could add another level of concatenation and EXEC it if you wanted to make it something you can repeat.
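That extra level could look something like the following (a sketch assuming SQL Server 2017+ so STRING_AGG is available; on older versions, keep the FOR XML approach and EXEC the concatenated result instead):

```sql
-- Sketch: collect the generated per-table UPDATE statements into one batch and execute it.
-- The CTE is the same "c" as in the query above; STRING_AGG needs SQL Server 2017+.
DECLARE @batch nvarchar(MAX);

WITH c AS (
    SELECT c.table_name, cols = Stuff(
        (SELECT CONCAT(',', QUOTENAME(column_name), ' = ',
                'iif(', QUOTENAME(column_name), '='''', NULL, ', QUOTENAME(column_name), ')', CHAR(10))
         FROM information_schema.columns c2
         WHERE data_type IN ('varchar', 'nvarchar') AND c2.table_name = c.table_name
         FOR XML PATH('')
        ), 1, 1, '')
    FROM information_schema.columns c
    GROUP BY c.table_name
)
SELECT @batch = STRING_AGG(
    CONVERT(nvarchar(MAX), CONCAT('update ', QUOTENAME(c.table_name), ' set ', cols, ';')),
    CHAR(10))
FROM c
WHERE cols IS NOT NULL;

EXEC (@batch);
```

The CONVERT to nvarchar(MAX) inside STRING_AGG avoids truncation of the aggregated batch when it exceeds 4000 characters.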
For my SQL Server unit tests, I am trying to suppress returning all rows when counting the results of "get all rows" stored procedures. To this end I am trying to write the results to a temp table.
I begin with a linked server entry pointing to the current server and database.
IF NOT EXISTS (SELECT * FROM sys.servers WHERE NAME = 'DB_LOCAL')
BEGIN
DECLARE @DBNAME VARCHAR(MAX) = (SELECT DB_NAME())
EXEC sp_addlinkedserver @server = 'DB_LOCAL', @srvproduct = '', @provider = 'SQLOLEDB', @datasrc = @@SERVERNAME, @catalog = @DBNAME
END
This is so that I can use OPENQUERY to create a temp table with the output structure of the stored procedure on the fly without having to hard code the columns.
IF OBJECT_ID('tempdb..#ut_results') IS NOT NULL
DROP TABLE #ut_results
SELECT *
INTO #ut_results
FROM OPENQUERY (DB_LOCAL, 'EXEC p_AnzLookup_get @id = 0')
I want to use this temp table to exec my stored procs into. I can't use OPENQUERY again because I need to specify variables as part of the test; also, the unit test is in a transaction, and doing so creates locking issues. Once I have the structure I do this. I can't specify the column names without the timestamp column (which I appreciate would work), as they could be changed by 3rd parties.
TRUNCATE TABLE #ut_results
INSERT INTO #ut_results
EXEC p_AnzLookup_get @id = @record_id
This insert into is failing because I have a timestamp column returned by the stored procedure.
Msg 273, Level 16, State 1, Line 2
Cannot insert an explicit value into a timestamp column. Use INSERT with a column list to exclude the timestamp column, or insert a DEFAULT into the timestamp column.
I can't change the timestamp column in the temp table due to this error.
ALTER TABLE #ut_results
ALTER COLUMN TStamp BINARY(8)
Msg 4928, Level 16, State 1, Line 5
Cannot alter column 'TStamp' because it is 'timestamp'.
I can't drop and recreate the timestamp column in the temp table because it changes the column order.
ALTER TABLE #ut_results DROP COLUMN TStamp
ALTER TABLE #ut_results ADD TStamp BINARY(8)
Which leads to different errors when data inserts into the wrong columns:
Msg 257, Level 16, State 3, Procedure p_AnzLookup_get, Line 20
Implicit conversion from data type datetime to int is not allowed. Use the CONVERT function to run this query.
I can't make changes to the stored procs these unit tests are for, and I can't hard-code the column names. I need to write this in a way that is both resilient and reactive to changes outside of my control.
This is just a small subset of one of the unit tests that I have extracted to demonstrate this problem. Any thoughts as to how I get round this sticky bit?
I think I have something that will work: create a temp table using OPENQUERY, then mash that through XML to get a CSV list of column names into a varchar, then do a replace on this to cast the timestamp column as varbinary(8).
Using the column list from the above I can select the structure of my first temp table into the second with the fields in the correct order and the timestamp column defined as varbinary(8) that can accept timestamp data.
It isn't pretty though:
IF OBJECT_ID('tempdb..#ut_results') IS NOT NULL DROP TABLE #ut_results
IF OBJECT_ID('tempdb..#ut_results2') IS NOT NULL DROP TABLE #ut_results2
SELECT * INTO #ut_results2
FROM OPENQUERY (DB_LOCAL, 'EXEC p_AnzLookup_get @id = -1')
DECLARE @cols VARCHAR(MAX) = ''
SELECT @cols = Stuff((SELECT ',' + name FROM tempdb.sys.columns WHERE object_id =
object_id('tempdb..#ut_results2')
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
, 1, 1, '')
SET @cols = REPLACE(@cols, 'TStamp', 'CAST(TStamp AS VARBINARY(8)) AS TStamp')
DECLARE @SQL VARCHAR(MAX) = 'SELECT ' + @cols + ' INTO #ut_results FROM #ut_results2 WHERE 1=2'
EXEC (@SQL)
IF OBJECT_ID('tempdb..#ut_results2') IS NOT NULL DROP TABLE #ut_results2
EDIT:
One slight change to this: in the above I create the final temp table in dynamic SQL, but it was not accessible afterwards, so I have changed it to a regular table.
DECLARE @SQL VARCHAR(MAX) = 'SELECT ' + @cols + ' INTO ut_results FROM #ut_results2'
EXECUTE (@SQL)
I am trying to check whether a table exists in the database using AssertObjectExists. I actually have 10 tables to check. Since the test is a verification of the existence of tables, I want to put them together in one test.
When I keep all the assertions in one test, if any of the object assertions fails, the remaining assertions are not executed.
My goal is to check whether the tables from a set of, say, 10 tables are present, and to report the list of tables that don't exist. I am pasting the sample code below.
ALTER PROCEDURE [Test Tracker].[test TablesExists_01]
AS
BEGIN
-- Verify the existence of each table
EXEC tSQLt.AssertObjectExists @ObjectName = 'auth_user',
@Message = 'Unable to find auth_user Table'
EXEC tSQLt.AssertObjectExists @ObjectName = 'auth_permissions',
@Message = 'Unable to find auth_permissions Table'
EXEC tSQLt.AssertObjectExists @ObjectName = 'auth_groups',
@Message = 'Unable to find auth_groups Table'
END;
Can someone point me in the right direction?
Edit: Solution Given by Brian
IF (NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'auth_user' AND TABLE_SCHEMA = @schema))
SET @errorMessage = @errorMessage + 'Unable to find auth_user' + CHAR(10)
IF (NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'auth_group' AND TABLE_SCHEMA = @schema))
SET @errorMessage = @errorMessage + 'Unable to find auth_group' + CHAR(10)
IF LEN(@errorMessage) = 0
PRINT 'All the Tables in Authentication exist'
ELSE
EXEC tSQLt.Fail @Message = @errorMessage
In the above code, CHAR(10) is the newline character; I just added it to get nicer console output.
You might try this:
DECLARE @tableName AS VARCHAR(100)
SET @tableName = 'auth_user'
IF (EXISTS (SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = @tableName))
BEGIN
--Do Stuff
END
SET @tableName = 'auth_permissions'
...
Then just iterate through the rest of the table names. To make it very easy, make this a stored procedure that takes a single delimited string as a parameter, like:
'auth_user|auth_permission|etc.'
Then you could use a split function to separate each inbound name into a virtual table that you could cursor through to find out whether each table exists. That way your stored procedure would be useful in any situation where you wanted to check the existence of one to many tables.
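A sketch of that idea, assuming SQL Server 2016+ so the built-in STRING_SPLIT can stand in for a hand-rolled split function (the procedure name and delimiter are illustrative):

```sql
-- Sketch: report which of a delimited list of tables are missing from a schema.
-- STRING_SPLIT requires SQL Server 2016+ (database compatibility level 130).
CREATE PROCEDURE dbo.CheckTablesExist
    @tableList varchar(MAX),   -- e.g. 'auth_user|auth_permissions|auth_groups'
    @schema sysname = 'dbo'
AS
BEGIN
    -- A set-based anti-join returns every name with no matching table.
    SELECT s.value AS missing_table
    FROM STRING_SPLIT(@tableList, '|') AS s
    WHERE NOT EXISTS (
        SELECT * FROM INFORMATION_SCHEMA.TABLES t
        WHERE t.TABLE_NAME = s.value AND t.TABLE_SCHEMA = @schema
    );
END
```

The anti-join produces the full list of missing tables in one pass, so no cursor is needed at all.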
I would suggest a similar approach to Brian's, but perhaps you can declare a table variable (@Expected) with a single column of the expected table names; then your test can select into a second table variable (@Actual) all those entries from INFORMATION_SCHEMA.TABLES inner joined to @Expected (specifying schema, etc. in the WHERE clause).
Then, you can use tSQLt.AssertEqualsTable to compare the contents of @Expected with @Actual. If they are the same (all objects exist) then your test will pass; if not, the test will fail, and all mismatched rows (each indicating a missing object) will show up in the failure message.
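A sketch of that test is below. Note that tSQLt.AssertEqualsTable takes table names as strings, so the sketch uses temp tables (#Expected, #Actual) rather than table variables; the test name and table list are illustrative:

```sql
-- Sketch: compare the expected table list against INFORMATION_SCHEMA with one assertion.
CREATE PROCEDURE [Test Tracker].[test TablesExist_02]
AS
BEGIN
    CREATE TABLE #Expected (TABLE_NAME sysname);
    INSERT INTO #Expected VALUES ('auth_user'), ('auth_permissions'), ('auth_groups');

    -- Only the expected tables that actually exist end up in #Actual.
    SELECT t.TABLE_NAME
    INTO #Actual
    FROM INFORMATION_SCHEMA.TABLES t
    INNER JOIN #Expected e ON e.TABLE_NAME = t.TABLE_NAME
    WHERE t.TABLE_SCHEMA = 'dbo';

    -- Fails with the missing rows listed if any expected table is absent.
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END
```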
Below is an excerpt of a SQL Query that I am using to update a table to the correct datatypes if needed.
If NOT Exists(Select * From Information_Schema.Columns
Where Table_Name = N'RXINFO'
And Table_Schema = N'scriptassist'
And Column_Name = N'LastChanged'
And DATA_Type = N'TIMESTAMP'
AND IsNull(CHARACTER_MAXIMUM_LENGTH, 0) = 0)
BEGIN
Print 'LastChanged Field needed type updating'
Alter Table [scriptassist].[RXINFO] Alter Column LastChanged TIMESTAMP
END
Currently the problem is as follows:
If I run the statement with the ALTER TABLE present, SQL Server throws this error at me:
Msg 4927, Level 16, State 1, Line 12
Cannot alter column 'LastChanged' to be data type timestamp.
The problem isn't that it can't change the datatype; the problem is that it is attempting to compile that code block regardless of the evaluation of the condition. It should evaluate to false in this case.
If I take it out, nothing happens, the print statement doesn't even fire.
The only thing that I can think of thus far is that somehow MS SQL is evaluating the SQL beforehand and determining whether all the code paths can execute, and since they can't, it throws the error. However, this doesn't make that much sense.
SQL Server parses your SQL before it executes it. The error is raised during parsing.
To delay parsing until the line is actually run, use exec:
exec ('Alter Table [scriptassist].[RXINFO] Alter Column LastChanged TIMESTAMP')
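Applied to the block in the question, the existence check stays as-is and only the ALTER moves into a string, so it is no longer parsed unless the IF branch actually runs (if the branch does run, the ALTER itself may still fail at execution time if the conversion is not allowed):

```sql
-- The check is unchanged; the ALTER is deferred inside EXEC so the batch parses
-- even when the column is already a timestamp.
IF NOT EXISTS (SELECT * FROM Information_Schema.Columns
               WHERE Table_Name = N'RXINFO'
                 AND Table_Schema = N'scriptassist'
                 AND Column_Name = N'LastChanged'
                 AND Data_Type = N'TIMESTAMP'
                 AND IsNull(CHARACTER_MAXIMUM_LENGTH, 0) = 0)
BEGIN
    PRINT 'LastChanged Field needed type updating'
    EXEC ('Alter Table [scriptassist].[RXINFO] Alter Column LastChanged TIMESTAMP')
END
```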
I believe you're getting this error because SQL Server cannot perform a conversion from the previous datatype of your LastChanged column to an actual timestamp datatype. You'll need to drop and then re-add the column instead.
If NOT Exists(Select * From Information_Schema.Columns
Where Table_Name = N'RXINFO'
And Table_Schema = N'scriptassist'
And Column_Name = N'LastChanged'
And DATA_Type = N'TIMESTAMP'
AND IsNull(CHARACTER_MAXIMUM_LENGTH, 0) = 0)
BEGIN
Print 'LastChanged Field needed type updating'
Alter Table [scriptassist].[RXINFO] Drop Column LastChanged
Alter Table [scriptassist].[RXINFO] Add LastChanged TimeStamp
END
I need to iterate through the fields of a table and do something if a field's value does not equal its default value.
I'm in a trigger and so I know the table name. I then loop through each of
the fields using this loop:
SELECT @field = 0, @maxfield = MAX(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @TableName
WHILE @field < @maxfield
BEGIN
...
I can then get the field name on each iteration through the loop:
SELECT @fieldname = COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION = @field
And I can get the default value for that column:
SELECT @ColDefault = SUBSTRING(Column_Default, 2, LEN(Column_Default) - 2)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE Table_Name = @TableName
AND Column_Name = @fieldname
I have everything I need, but I can't see how to compare the two. Because I don't have the field name as a constant, only in a variable, I can't see how to get the value out of the 'inserted' table (remember, I'm in a trigger) in order to see if it is the same as the default value (held now in @ColDefault as a varchar).
First, remember that a trigger can be fired with multiple records coming in simultaneously. If I do this:
INSERT INTO dbo.MyTableWithTrigger
SELECT * FROM dbo.MyOtherTable
then my trigger on the MyTableWithTrigger will need to handle more than one record. The "inserted" pseudotable will have more than just one record in it.
Having said that, to compare the data, you can run a select statement like this:
DECLARE @sqlToExec VARCHAR(8000)
SET @sqlToExec = 'SELECT * FROM INSERTED WHERE [' + @fieldname + '] <> ' + @ColDefault
EXEC(@sqlToExec)
That will return all rows from the inserted pseudotable that don't match the defaults. It sounds like you want to DO something with those rows, so what you might want to do is create a temp table before you execute that @sqlToExec string, and instead of just selecting the data, insert it into the temp table. Then you can use those rows to do whatever exception handling you need.
One catch: this T-SQL only works for numeric fields. You'll probably want to build separate handling for different types of fields. You might have varchars, numerics, blobs, etc., and you'll need different ways of comparing those.
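For character columns, that separate handling could be as simple as quoting the default before concatenating it. A sketch, reusing the @TableName, @fieldname, @ColDefault and @sqlToExec variables from above:

```sql
-- Sketch: quote @ColDefault for character types so the generated comparison is valid T-SQL.
DECLARE @datatype sysname;
SELECT @datatype = DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName AND COLUMN_NAME = @fieldname;

IF @datatype IN ('char', 'varchar', 'nchar', 'nvarchar')
    -- Wraps the value in single quotes and doubles any embedded quotes.
    -- Note QUOTENAME only accepts up to 128 characters of input.
    SET @ColDefault = QUOTENAME(@ColDefault, '''');

SET @sqlToExec = 'SELECT * FROM INSERTED WHERE [' + @fieldname + '] <> ' + @ColDefault;
```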
I suspect you can do this using an EXEC.
But why not just generate the code once? It will be more performant.