I've got a data structure like this:
Parameter | Value | DateTime
----------------------------
Switch | "on" | 2019-10-13 15:01:25
Temp | 25 | 2019-10-13 15:01:37
Pressure | 1006 | 2019-10-13 15:01:53
...
Temp | 22 | 2019-10-13 15:04:41
Switch | "off" | 2019-10-13 15:04:59
...
Switch | "on" | 2019-10-13 17:14:51
Temp | 27 | 2019-10-13 17:15:07
...
Switch | "off" | 2019-10-13 17:17:43
Between each pair of Switch "on" and "off" values I have to calculate aggregates for the parameters, e.g. average or max/min and so on. How can I split the data into multiple groups for these calculations?
I think this should be solvable with
- Stored Procedure (statement?)
- SSIS package (how?)
- .NET application.
What might be the best way to solve this issue?
Thanks in advance.
Update
This is the full structure of the table.
CREATE TABLE [schema].[foo]
(
[Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
[Group] VARCHAR(20) NOT NULL,
[Parameter] VARCHAR(50) NOT NULL,
[Type] VARCHAR(50) NOT NULL,
[Timestamp] DATETIME NOT NULL,
[Value] NVARCHAR(255) NOT NULL,
[Unit] VARCHAR(10) NOT NULL,
-- Only for logging. No logic for the use case.
[InsertedTimestampUtc] DATETIME NOT NULL DEFAULT(GetUtcDate()),
[IsProcessed] INT NOT NULL DEFAULT(0)
)
If I understand your question correctly, the following approach may help to get the expected results:
Table:
CREATE TABLE #Data (
[DateTime] datetime,
[Parameter] varchar(50),
[Value] varchar(10)
)
INSERT INTO #Data
([DateTime], [Parameter], [Value])
VALUES
('2019-10-13T15:01:25', 'Switch', 'on'),
('2019-10-13T15:01:37', 'Temp', '25'),
('2019-10-13T15:01:53', 'Pressure', '1006'),
('2019-10-13T15:04:41', 'Temp', '22'),
('2019-10-13T15:04:59', 'Switch', 'off'),
('2019-10-13T17:14:51', 'Switch', 'on'),
('2019-10-13T17:15:07', 'Temp', '27'),
('2019-10-13T17:17:43', 'Switch', 'off')
Statement:
;WITH ChangesCTE AS (
SELECT
*,
CASE WHEN [Parameter] = 'Switch' AND [Value] = 'on' THEN 1 ELSE 0 END AS ChangeIndex
FROM #Data
), GroupsCTE AS (
SELECT
*,
SUM(ChangeIndex) OVER (ORDER BY [DateTime]) AS GroupIndex
FROM ChangesCTE
)
SELECT [GroupIndex], [Parameter], AVG(TRY_CONVERT(int, [Value]) * 1.0) AS [AvgValue]
FROM GroupsCTE
WHERE [Parameter] <> 'Switch'
GROUP BY [GroupIndex], [Parameter]
The running SUM over ChangeIndex increases by one at every Switch 'on' row, so all rows from one 'on' up to the next 'on' share the same GroupIndex.
Results:
GroupIndex Parameter AvgValue
1 Pressure 1006.000000
1 Temp 23.500000
2 Temp 27.000000
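Since the question also asks for max/min, the final SELECT can simply be extended with more aggregates over the same CTEs, e.g.:
SELECT
    [GroupIndex],
    [Parameter],
    AVG(TRY_CONVERT(int, [Value]) * 1.0) AS [AvgValue],
    MIN(TRY_CONVERT(int, [Value])) AS [MinValue],
    MAX(TRY_CONVERT(int, [Value])) AS [MaxValue]
FROM GroupsCTE
WHERE [Parameter] <> 'Switch'
GROUP BY [GroupIndex], [Parameter]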
I downloaded a database related to the stock market, where the values are expressed in the varchar(50) data type. I wanted to convert them to money so that I could perform actions on them.
Unfortunately, I can't do this either through the following command or through the design option.
ALTER TABLE dbo.NASDAQ100
ALTER COLUMN High money;
I get the following error:
Cannot convert a char value to money. The char value has incorrect syntax.
What can I do?
About all you can do here is identify which values won't convert to the MONEY data type, then make whatever adjustments are needed to fix them.
As an example, I'll create a table with some dummy data, setting the [low] and [high] column data types to VARCHAR(50):
CREATE TABLE dbo.NASDAQ100
(
[stock_code] CHAR(3) NULL
, [low] VARCHAR(50) NULL
, [high] VARCHAR(50) NULL
) ;
GO
INSERT
INTO dbo.NASDAQ100 ( [stock_code], [low], [high] )
VALUES ( 'ABB' , '101.3348' , '103.2577' )
, ( 'FRG' , '4.5098' , '4.5663' )
, ( 'PLA' , '40.0001' , '4O.2121' )
, ( 'RDG' , 'USD8.7890' , 'USD11.2345' )
, ( 'ZZT' , '2.8q87' , '5.6996' ) ;
GO
A simple SELECT * FROM dbo.NASDAQ100 query returns these results:
stock_code | low | high
------------------------------------
ABB | 101.3348 | 103.2577
FRG | 4.5098 | 4.5663
PLA | 40.0001 | 4O.2121
RDG | USD8.7890 | USD11.2345
ZZT | 2.8q87 | 5.6996
As you can see, the data's pretty dirty.
If I try to change the [low] and [high] column data types to MONEY now:
ALTER TABLE dbo.NASDAQ100 ALTER COLUMN [low] MONEY ;
ALTER TABLE dbo.NASDAQ100 ALTER COLUMN [high] MONEY ;
I get a Cannot convert a char value to money. The char value has incorrect syntax error.
As mentioned, the only real way to fix this is to identify which values need to be corrected, then manually correct them yourself.
The following query -- which utilises the TRY_CAST built-in function -- should identify which values won't successfully convert to a MONEY data type:
WITH cte_Nasdaq100 AS
(
SELECT [stock_code]
, [low] AS [low_original_value]
, TRY_CAST ( [low] AS MONEY ) AS [low_as_money]
, [high] AS [high_original_value]
, TRY_CAST ( [high] AS MONEY ) AS [high_as_money]
FROM dbo.NASDAQ100
)
SELECT [stock_code]
, CASE
WHEN [low_as_money] IS NULL
THEN [low_original_value]
ELSE '-'
END AS [low_values_to_be_fixed]
, CASE
WHEN [high_as_money] IS NULL
THEN [high_original_value]
ELSE '-'
END AS [high_values_to_be_fixed]
FROM cte_Nasdaq100
WHERE [high_as_money] IS NULL
OR [low_as_money] IS NULL ;
GO
Running this query over my sample data, I get the following results:
stock_code | low_values_to_be_fixed | high_values_to_be_fixed
-------------------------------------------------------------
PLA | - | 4O.2121
RDG | USD8.7890 | USD11.2345
ZZT | 2.8q87 | -
Now, although the dirty values are identified, there's no way to determine what they should be. This is where you need to do some legwork and look them up.
Once you've got the correct values, run some UPDATE statements to make the corrections:
UPDATE dbo.NASDAQ100
SET [high] = '40.2121'
WHERE [stock_code] = 'PLA' ;
UPDATE dbo.NASDAQ100
SET [low] = '8.7890'
, [high] = '11.2345'
WHERE [stock_code] = 'RDG' ;
UPDATE dbo.NASDAQ100
SET [low] = '2.8987'
WHERE [stock_code] = 'ZZT' ;
GO
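As an aside, when a whole class of bad values follows a known pattern, you can fix them in bulk instead of row by row. A minimal sketch, assuming the only patterned problem is the 'USD' prefix on the RDG row (this would replace the RDG UPDATE above):
UPDATE dbo.NASDAQ100
SET [low] = REPLACE ( [low], 'USD', '' )
  , [high] = REPLACE ( [high], 'USD', '' )
WHERE [low] LIKE 'USD%'
OR [high] LIKE 'USD%' ;
GO
One-off typos like '4O.2121' (letter O for zero) still need to be looked up and corrected by hand.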
Now a simple SELECT * FROM dbo.NASDAQ100 query returns clean data:
stock_code | low | high
--------------------------------
ABB | 101.3348 | 103.2577
FRG | 4.5098 | 4.5663
PLA | 40.0001 | 40.2121
RDG | 8.7890 | 11.2345
ZZT | 2.8987 | 5.6996
And if I run the query to identify the dirty data again (i.e. WITH cte_Nasdaq100 AS...), it returns no results.
I can now change the data types on the [low] and [high] columns to MONEY without SQL Server spitting the dummy:
ALTER TABLE dbo.NASDAQ100 ALTER COLUMN [low] MONEY ;
ALTER TABLE dbo.NASDAQ100 ALTER COLUMN [high] MONEY ;
To test, I run a query with a computation in the results:
SELECT [stock_code]
, [low]
, [high]
, [high] - [low] AS [difference]
FROM dbo.NASDAQ100 ;
GO
And here's the results:
stock_code | low | high | difference
---------------------------------------------
ABB | 101.3348 | 103.2577 | 1.9229
FRG | 4.5098 | 4.5663 | 0.0565
PLA | 40.0001 | 40.2121 | 0.2120
RDG | 8.7890 | 11.2345 | 2.4455
ZZT | 2.8987 | 5.6996 | 2.8009
Hope this helps.
This is my 1st attempt at OPENJSON, and I'm trying to set up a stored procedure that passes in multiple records and either inserts or updates each record in the table. I can set up the basic insert or update query using OPENJSON; however, my problem is that I don't know how to determine whether the current record needs to be inserted or updated based on the JSON values.
Here's a quick, basic example. I pass in json data with 2 orders that I want to insert/update in the dbo.Orders table.
The 1st order I pass in has OrderId = 123, so since I know the record already exists, I need to update it. The 2nd order, however, has OrderId = 0, so this one doesn't exist in the database and needs to be inserted.
How would I do that?
DECLARE @json NVARCHAR(2048) = N'[
{
"Order": {
"OrderId":123,
"Number":"SO43659",
"Date":"2011-05-31T00:00:00"
},
"AccountNumber":"AW29825",
"Item": {
"Price":2024.9940,
"Quantity":1
}
},
{
"Order": {
"Number":"SO43661",
"Date":"2011-06-01T00:00:00"
},
"AccountNumber":"AW73565",
"Item": {
"Price":2024.9940,
"Quantity":3
}
}
]'
SELECT * FROM OpenJson(@json);
--Here's where I need to do an insert/update. Not sure how, but here's the gist:
--If Json's Order.OrderId > 0
--BEGIN
-- UPDATE dbo.Orders WHERE OrderId = <Json's Order.OrderId>
--END
--ELSE
--BEGIN
-- INSERT INTO dbo.Orders (all values)
--END
Thanks for any help
As Larnu mentioned in his comment, MERGE is what you're looking for.
Here's an example that you can run in SSMS:
DECLARE @json nvarchar(2048) = N'[
{"Order":{"OrderId":123,"Number":"SO43659","Date":"2011-05-31T00:00:00"},"AccountNumber":"AW29825","Item":{"Price":2024.9940,"Quantity":1}},
{"Order":{"Number":"SO43661","Date":"2011-06-01T00:00:00"},"AccountNumber":"AW73565","Item":{"Price":2024.9940,"Quantity":3}}
]';
/* ORDERS TABLE MOCK-UP */
DECLARE @Orders table (
OrderId int, OrderNumber varchar(10), OrderDate datetime, AccountNumber varchar(10), ItemPrice decimal(18,4), ItemQuantity int
);
/* INSERT A RECORD THAT WILL BE UPDATED BY THE MERGE */
INSERT INTO @Orders VALUES
( 123, 'SO43659', '2011-05-31 00:00:00.000', 'AW29825', 2024.9940, 5 ); -- Quantity will be updated to 1 via the MERGE.
/* SHOW ORDERS STARTING RESULTSET */
SELECT * FROM @Orders;
/* PERFORM ORDERS MERGE TO UPDATE/INSERT ROWS */
MERGE @Orders AS ord
USING (
SELECT * FROM OPENJSON( @json ) WITH (
OrderId int '$.Order.OrderId',
OrderNumber varchar(10) '$.Order.Number',
OrderDate datetime '$.Order.Date',
AccountNumber varchar(10) '$.AccountNumber',
ItemPrice decimal(18,4) '$.Item.Price',
ItemQuantity int '$.Item.Quantity'
)
) AS jsn
ON
ord.OrderId = jsn.OrderId
WHEN MATCHED THEN
UPDATE SET
OrderNumber = jsn.OrderNumber,
OrderDate = jsn.OrderDate,
AccountNumber = jsn.AccountNumber,
ItemPrice = jsn.ItemPrice,
ItemQuantity = jsn.ItemQuantity
WHEN NOT MATCHED THEN
INSERT ( OrderId, OrderNumber, OrderDate, AccountNumber, ItemPrice, ItemQuantity )
VALUES ( jsn.OrderId, jsn.OrderNumber, jsn.OrderDate, jsn.AccountNumber, jsn.ItemPrice, jsn.ItemQuantity );
/* SHOW THE MERGED RESULTSET */
SELECT * FROM @Orders;
The initial resultset of @Orders is:
+---------+-------------+-------------------------+---------------+-----------+--------------+
| OrderId | OrderNumber | OrderDate | AccountNumber | ItemPrice | ItemQuantity |
+---------+-------------+-------------------------+---------------+-----------+--------------+
| 123 | SO43659 | 2011-05-31 00:00:00.000 | AW29825 | 2024.9940 | 5 |
+---------+-------------+-------------------------+---------------+-----------+--------------+
After performing the MERGE, the updated @Orders resultset is:
+---------+-------------+-------------------------+---------------+-----------+--------------+
| OrderId | OrderNumber | OrderDate | AccountNumber | ItemPrice | ItemQuantity |
+---------+-------------+-------------------------+---------------+-----------+--------------+
| 123 | SO43659 | 2011-05-31 00:00:00.000 | AW29825 | 2024.9940 | 1 |
| NULL | SO43661 | 2011-06-01 00:00:00.000 | AW73565 | 2024.9940 | 3 |
+---------+-------------+-------------------------+---------------+-----------+--------------+
You can see that the MERGE inserted the new record for Number SO43661 and updated OrderId 123's ItemQuantity from 5 (its initial value) to 1.
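Note that the newly inserted row has a NULL OrderId, because the table mock-up has no key generation. In a real dbo.Orders table, OrderId would typically be an IDENTITY column (an assumption about your schema); in that case you would omit OrderId from the insert branch of the MERGE above:
WHEN NOT MATCHED THEN
    INSERT ( OrderNumber, OrderDate, AccountNumber, ItemPrice, ItemQuantity )
    VALUES ( jsn.OrderNumber, jsn.OrderDate, jsn.AccountNumber, jsn.ItemPrice, jsn.ItemQuantity );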
I have an EAV table with attributes and would like to do a hybrid selection of the items based on variables that are passed into a stored procedure.
Sample table:
| group_id | item_id | key | value |
+----------+---------+--------+-------+
| 1 | AA | length | 10 |
| 1 | AA | width | 10 |
| 1 | AA | color | white |
| 1 | AA | brand | beta |
| 1 | BB | length | 25 |
| 1 | BB | brand | alpha |
| 2 | CC | brand | alpha |
Sample query:
declare @attributes nvarchar(max) = 'brand, length'
declare @attributeValues nvarchar(max) = 'alpha, beta, 25'
declare @id int = 1
select *
into #allProductsFromGroup
from items
where group_id = @id
select item_id
from #allProductsFromGroup
where [key] in (select value from string_split(@attributes, ','))
and [value] in (select value from string_split(@attributeValues, ','))
Expected output:
| item_id |
+---------+
| BB |
I could hard-code AND and OR statements for each key, but there are many, and I am looking for a more scalable solution.
Passing in and parsing JSON would be good, like:
[
{ "brand": "aplha" },
{ "brand": "beta" },
{ "length": 25 }
]
How can I write the second select to dynamically return a subset of #allProductsFromGroup that includes multiple results from the same group (multi-select brand or multi-select length) but excludes those from other groups (color, length, etc.)?
The target query might look something like this:
with q as
(
select item_id,
max( case when [key] = 'brand' then [value] end ) brand,
max( case when [key] = 'length' then cast( [value] as int ) end ) length
from #allProductsFromGroup
group by Item_id
)
select item_id
from q
where brand in ('alpha','beta') and length=25
You just have to build it from the incoming data (yuck). A simpler query form to generate might be something like
select item_id
from #allProductsFromGroup
where [key] = 'brand' and [value] in ('alpha','beta')
intersect
select item_id
from #allProductsFromGroup
where [key] = 'length' and [value] = 25
mapping AND criteria to INTERSECT and OR criteria to UNION, as sketched below. It's likely to be cheaper too, as each query can seek an index on (key, value).
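A minimal sketch of generating that INTERSECT form from JSON conditions (variable names are illustrative, and STRING_AGG assumes SQL Server 2017+):
DECLARE @conditions nvarchar(max) = N'[
    {"key": "brand", "values": ["alpha", "beta"]},
    {"key": "length", "values": ["25"]}
]';
DECLARE @sql nvarchar(max);
-- one SELECT per condition, joined with INTERSECT; QUOTENAME escapes the literals
SELECT @sql = STRING_AGG(
    'select item_id from #allProductsFromGroup where [key] = '
        + QUOTENAME(c.[key], '''')
        + ' and [value] in (' + c.valueList + ')',
    ' intersect ')
FROM (
    SELECT j.[key],
           (SELECT STRING_AGG(QUOTENAME(v.value, ''''), ',')
            FROM OPENJSON(j.[values]) v) AS valueList
    FROM OPENJSON(@conditions) WITH (
        [key] varchar(100) '$.key',
        [values] nvarchar(max) '$.values' AS JSON
    ) j
) c;
EXEC (@sql); -- temp tables remain visible inside dynamic SQL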
This is probably a late answer, but if you can pass the conditions as JSON, the following approach is also a possible solution. The JSON must be in the format shown below, and you may use more than two conditions:
Table:
CREATE TABLE Data (
group_id int,
item_id varchar(2),
[key] varchar(100),
[value] varchar(100)
)
INSERT INTO Data (group_id, item_id, [key], [value])
VALUES
(1, 'AA', 'length', '10'),
(1, 'AA', 'width', '10'),
(1, 'AA', 'color', 'white'),
(1, 'AA', 'brand', 'beta'),
(1, 'BB', 'length', '25'),
(1, 'BB', 'brand', 'alpha'),
(2, 'CC', 'brand', 'alpha')
Conditions as JSON:
DECLARE @conditions varchar(max) = N'
[
{"key": "brand", "values": ["alpha", "beta"]},
{"key": "length", "values": ["25"]}
]
'
Statement:
SELECT d.item_id
FROM Data d
JOIN (
SELECT j1.[key], j2.[value]
FROM OPENJSON(@conditions) WITH (
[key] varchar(100) '$.key',
[values] nvarchar(max) '$.values' AS JSON
) j1
CROSS APPLY OPENJSON(j1.[values]) j2
) o ON d.[key] = o.[key] AND d.[value] = o.[value]
GROUP BY d.item_id
HAVING COUNT(*) = (SELECT COUNT(*) FROM OPENJSON(@conditions))
Each joined row represents one satisfied condition, so the HAVING clause keeps only the items that satisfy every condition in the JSON.
Result:
item_id
BB
I would like to know when UserId was changed to the current value.
Say we've got a table Foo:
Foo
Id | UserId
---+-------
1 | 1
2 | 2
Now I would need to be able to execute a query like:
SELECT UserId, UserIdModifiedAt FROM Foo
Luckily I have logged all the changes in history to table FooHistory:
FooHistory
Id | FooId | UserId | FooModifiedAt
---+-------+--------+---------------
1 | 1 | NULL | 1.1.2019 02:00
2 | 1 | 2 | 1.1.2019 02:01
3 | 1 | 1 | 1.1.2019 02:02
4 | 1 | 1 | 1.1.2019 02:03
5 | 2 | 1 | 1.1.2019 02:04
6 | 2 | 2 | 1.1.2019 02:05
7 | 2 | 2 | 1.1.2019 02:06
So all the data we need is available (above, the UserId of Foo #1 was last changed at 02:02 and that of Foo #2 at 02:05). We will add a new column UserIdModifiedAt to Foo:
Foo v2
Id | UserId | UserIdModifiedAt
---+--------+-----------------
1 | 1 | NULL
2 | 2 | NULL
... and set its values using a trigger. Fine. But how to migrate the history? What script would fill UserIdModifiedAt for us?
See an example of the table structure:
DROP TABLE IF EXISTS [Foo]
DROP TABLE IF EXISTS [FooHistory]
CREATE TABLE [Foo]
(
[Id] INT NOT NULL CONSTRAINT [PK_Foo] PRIMARY KEY,
[UserId] INT,
[UserIdModifiedAt] DATETIME2 -- Automatically updated based on a trigger
)
CREATE TABLE [FooHistory]
(
[Id] INT IDENTITY NOT NULL CONSTRAINT [PK_FooHistory] PRIMARY KEY,
[FooId] INT,
[UserId] INT,
[FooModifiedAt] DATETIME2 NOT NULL CONSTRAINT [DF_FooHistory_FooModifiedAt] DEFAULT (sysutcdatetime())
)
GO
CREATE TRIGGER [trgFoo]
ON [dbo].[Foo]
AFTER INSERT, UPDATE
AS
BEGIN
IF EXISTS (SELECT [UserId] FROM inserted EXCEPT SELECT [UserId] FROM deleted)
BEGIN
UPDATE [Foo] SET [UserIdModifiedAt] = SYSUTCDATETIME() FROM [inserted] WHERE [Foo].[Id] = [inserted].[Id]
END
INSERT INTO [FooHistory] ([FooId], [UserId])
SELECT [Id], [UserId] FROM inserted
END
GO
/* Test data */
INSERT INTO [Foo] ([Id], [UserId]) VALUES (1, NULL)
WAITFOR DELAY '00:00:00.010'
UPDATE [Foo] SET [UserId] = NULL
WAITFOR DELAY '00:00:00.010'
UPDATE [Foo] SET [UserId] = 1
WAITFOR DELAY '00:00:00.010'
UPDATE [Foo] SET [UserId] = 1
WAITFOR DELAY '00:00:00.010'
SELECT * FROM [Foo]
SELECT * FROM [FooHistory]
Related question: Select first row in each GROUP BY group?
If I understand your question right, it looks like you have already answered it yourself by the way you created your trigger on dbo.Foo.
It looks like the UserIdModifiedAt is modified the first time the UserId changes and not modified when it does not change, in which case your answer is simply dbo.Foo.UserIdModifiedAt.
If you did not mean to write this trigger like that, I think it is possible to retrieve that value from FooHistory but it's much more complicated.
The code below might do what I think you were asking for:
;WITH FooHistoryRanked
AS (
SELECT FH.Id, FH.FooId, FH.FooModifiedAt, FH.UserId
, RankedASC = ROW_NUMBER() OVER(PARTITION BY FH.FooId ORDER BY FooModifiedAt ASC) -- 1 = first change to that Foo record
FROM [FooHistory] FH
)
,Matches AS
(
SELECT FHR1.*
, PreviousUserId = FHR2.UserId
, PreviousFooModifiedAt = FHR2.FooModifiedAt
, PreviousHistoryId = FHR2.Id
FROM FooHistoryRanked FHR1
-- join on Foo filters on current value
INNER JOIN [Foo] F ON F.Id = FHR1.FooId
AND ( FHR1.UserId = F.UserId
OR (FHR1.UserId IS NULL AND F.UserId IS NULL)
)
-- Find preceding changes to a different value
LEFT JOIN FooHistoryRanked FHR2 ON FHR2.FooId = FHR1.FooId
AND FHR2.RankedASC = FHR1.RankedASC - 1 -- previous change
AND ( FHR2.UserId <> FHR1.UserId
OR ( FHR2.UserId IS NULL AND FHR1.UserId IS NOT NULL )
OR ( FHR2.UserId IS NOT NULL AND FHR1.UserId IS NULL )
)
)
,MatchesRanked AS
(
-- select the modifications that had a different value before OR that are the only modification
SELECT *, MatchRanked = ROW_NUMBER() OVER(PARTITION BY FooId ORDER BY Id DESC)
FROM Matches
WHERE RankedASC = 1 OR PreviousFooModifiedAt IS NOT NULL
)
SELECT *
FROM MatchesRanked
WHERE MatchRanked = 1 -- just get the last qualifying record
ORDER BY FooId, FooModifiedAt DESC, UserId;
PS:
1) Performance could be a problem if these tables were big.
2) You could probably use LAG instead of the LEFT JOIN, but I am just used to doing things this way (a LAG-based sketch follows below).
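For reference, here is a minimal LAG-based sketch of the migration script (my own variant, assuming FooHistory holds a complete log of every insert/update, as produced by the trigger above):
;WITH H AS (
    SELECT FooId, UserId, FooModifiedAt,
           LAG(UserId) OVER (PARTITION BY FooId ORDER BY FooModifiedAt) AS PrevUserId,
           ROW_NUMBER() OVER (PARTITION BY FooId ORDER BY FooModifiedAt) AS rn
    FROM FooHistory
), Changes AS (
    SELECT FooId, UserId, FooModifiedAt,
           ROW_NUMBER() OVER (PARTITION BY FooId ORDER BY FooModifiedAt DESC) AS rnDesc
    FROM H
    -- keep the first row and every row whose value differs from the previous one
    -- (SELECT ... EXCEPT SELECT ... is a NULL-safe inequality test)
    WHERE rn = 1 OR EXISTS (SELECT UserId EXCEPT SELECT PrevUserId)
)
UPDATE F
SET UserIdModifiedAt = C.FooModifiedAt
FROM Foo F
JOIN Changes C ON C.FooId = F.Id AND C.rnDesc = 1;
-- the latest change row per FooId is, by construction, the change to the current value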
Imagine I have 200 columns in one INSERT statement, and I occasionally get a "Cannot convert" error for one of the columns. Thing is, I do not know which column causes this error.
Is there any way in T-SQL or mybatis to check WHICH column has the incorrect format? (I have just date, char, numeric.) I can use ISNUMERIC and ISDATE for every column, but this is not so elegant.
I'm using mybatis in Java, so I cannot use a PreparedStatement or the like.
You could build a query that tries to convert each of the suspected columns.
And limit the query to where one of the attempts to convert fails.
Mostly the bad data will be in CHAR or VARCHAR columns, when trying to cast or convert them to a datetime or number type.
So you can limit your research to those.
Also, from the error you should see which value failed to convert to which type. Which can also help to limit which fields you research.
A simplified example using table variables:
declare @T1 table (id int identity(1,1) primary key, field1 varchar(30), field2 varchar(30), field3 varchar(30));
declare @T2 table (id int identity(1,1) primary key, field1_int int, field2_date date, field3_dec decimal(10,2));
insert into @T1 (field1, field2, field3) values
('1','2018-01-01','1.23'),
('not an int','2018-01-01','1.23'),
('1','not a date','1.23'),
('1','2018-01-01','not a decimal'),
(null,'2018-01-01','1.23'),
('1',null,'1.23'),
('1','2018-01-01',null)
;
select top 1000
id,
case when try_convert(int, field1) is null then field1 end as field1,
case when try_convert(date, field2) is null then field2 end as field2,
case when try_convert(decimal(10,4), field3) is null then field3 end as field3
from @T1
where
try_convert(int, coalesce(field1, '0')) is null
or try_convert(date, coalesce(field2, '1900-01-01')) is null
or try_convert(decimal(10,4), coalesce(field3, '0.0')) is null;
Returns:
id field1 field2 field3
-- ---------- ----------- -------------
2 not an int NULL NULL
3 NULL not a date NULL
4 NULL NULL not a decimal
If the source data doesn't have too much bad data, you could try to fix the source data first.
Or use try_convert for the problematic columns with bad data.
For example:
insert into @T2 (field1_int, field2_date, field3_dec)
select
try_convert(int, field1),
try_convert(date, field2),
try_convert(decimal(10,4), field3)
from @T1;
With larger imports - especially when you expect issues - a two-step approach is highly recommended:
- import the data into a very tolerant staging table (all NVARCHAR(MAX))
- check, evaluate, manipulate and correct whatever is needed, then do the real insert from there
Here is a generic approach you might adapt to your needs. It will check all of the table's values against a type-map table and output every value that fails TRY_CAST (needs SQL Server 2012+).
A table to mock up the staging table (partly borrowed from LukStorms' answer - thx!):
CREATE TABLE #T1 (id INT IDENTITY(1,1) PRIMARY KEY
,fldInt VARCHAR(30)
,fldDate VARCHAR(30)
,fldDecimal VARCHAR(30));
GO
INSERT INTO #T1 (fldInt, fldDate, fldDecimal) values
('1','2018-01-01','1.23'),
('blah','2018-01-01','1.23'),
('1','blah','1.23'),
('1','2018-01-01','blah'),
(null,'2018-01-01','1.23'),
('1',null,'1.23'),
('1','2018-01-01',null);
--a type map (might be taken from INFORMATION_SCHEMA of an existing target table automatically)
DECLARE @type_map TABLE(ColumnName VARCHAR(100),ColumnType VARCHAR(100));
INSERT INTO @type_map VALUES('fldInt','int')
,('fldDate','date')
,('fldDecimal','decimal(10,2)');
--The staging table's name
DECLARE @TableName NVARCHAR(100)='#T1';
--dynamically created statements for each column
DECLARE @columnSelect NVARCHAR(MAX)=
(SELECT
' UNION ALL SELECT id ,''' + tm.ColumnName + ''',''' + tm.ColumnType + ''',' + QUOTENAME(tm.ColumnName)
+ ',CASE WHEN TRY_CAST(' + QUOTENAME(tm.ColumnName) + ' AS ' + tm.ColumnType + ') IS NULL THEN 0 ELSE 1 END ' +
'FROM ' + QUOTENAME(@TableName)
FROM @type_map AS tm
FOR XML PATH('')
);
--The final dynamically created statement
DECLARE @cmd NVARCHAR(MAX)=
'SELECT tbl.*
FROM
(
SELECT 0 AS id,'''' AS ColumnName,'''' AS ColumnType,'''' AS ColumnValue,0 AS IsValid WHERE 1=0 '
+ @columnSelect +
') AS tbl
WHERE tbl.IsValid = 0;'
--Execution with EXEC()
EXEC(@cmd);
The result:
+----+------------+---------------+-------------+---------+
| id | ColumnName | ColumnType | ColumnValue | IsValid |
+----+------------+---------------+-------------+---------+
| 2 | fldInt | int | blah | 0 |
+----+------------+---------------+-------------+---------+
| 5 | fldInt | int | NULL | 0 |
+----+------------+---------------+-------------+---------+
| 3 | fldDate | date | blah | 0 |
+----+------------+---------------+-------------+---------+
| 6 | fldDate | date | NULL | 0 |
+----+------------+---------------+-------------+---------+
| 4 | fldDecimal | decimal(10,2) | blah | 0 |
+----+------------+---------------+-------------+---------+
| 7 | fldDecimal | decimal(10,2) | NULL | 0 |
+----+------------+---------------+-------------+---------+
The generated statement looks like this:
SELECT tbl.*
FROM
(
SELECT 0 AS id,'' AS ColumnName,'' AS ColumnType,'' AS ColumnValue,0 AS IsValid WHERE 1=0
UNION ALL SELECT id
,'fldInt'
,'int'
,[fldInt]
,CASE WHEN TRY_CAST([fldInt] AS int) IS NULL THEN 0 ELSE 1 END
FROM [#T1]
UNION ALL SELECT id
,'fldDate'
,'date',[fldDate]
,CASE WHEN TRY_CAST([fldDate] AS date) IS NULL THEN 0 ELSE 1 END
FROM [#T1]
UNION ALL SELECT id
,'fldDecimal'
,'decimal(10,2)'
,[fldDecimal]
,CASE WHEN TRY_CAST([fldDecimal] AS decimal(10,2)) IS NULL THEN 0 ELSE 1 END
FROM [#T1]
) AS tbl
WHERE tbl.IsValid = 0;
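As the comment in the script above suggests, the type map could be derived from an existing target table's metadata instead of being typed by hand. A minimal sketch, assuming a hypothetical permanent target table dbo.TargetTable:
INSERT INTO @type_map (ColumnName, ColumnType)
SELECT COLUMN_NAME
      ,DATA_TYPE +
       CASE
           WHEN DATA_TYPE IN ('decimal','numeric')
               THEN '(' + CAST(NUMERIC_PRECISION AS VARCHAR(10)) + ',' + CAST(NUMERIC_SCALE AS VARCHAR(10)) + ')'
           WHEN DATA_TYPE IN ('char','nchar','varchar','nvarchar') AND CHARACTER_MAXIMUM_LENGTH > 0
               THEN '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(10)) + ')'
           ELSE ''
       END
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'TargetTable';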