I have two tables whose structures are as follows:
table_A
CREATE TABLE table_A
(
col_a varchar(100),
col_b bigint,
col_c datetime
)
table_B
--Note that the column names are the same--
CREATE TABLE table_B
(
col_a varchar(10),
col_b varchar(10),
col_c varchar(20)
)
Now I want to INSERT data into table_A from table_B with proper data type conversion.
Below is the SQL string:
INSERT INTO table_A(col_a,col_b,col_c)
SELECT CONVERT(varchar,col_a),CONVERT(INT,col_b),CONVERT(datetime,col_c) FROM table_B
So far so good.
Now I want to generate the SQL dynamically with the help of INFORMATION_SCHEMA.COLUMNS.
For this I have followed the below steps:
Step 1:
Join INFORMATION_SCHEMA.COLUMNS for the two tables, table_A and table_B, and store the result in a #TempTable. Let's assume that #TempTable has an ID column that is IDENTITY(1,1) but doesn't follow a contiguous sequence like 1,2,3... (typically this happens in Synapse SQL).
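For reference, a possible shape for #TempTable (the question doesn't show its definition, so the types here are assumptions):
CREATE TABLE #TempTable
(
ID int IDENTITY(1,1), -- assumed; per the note above, the generated values may have gaps
Src_Col sysname,
Src_dtype sysname,
Dest_Col sysname,
Dest_dtype sysname,
Modified_Col varchar(500)
);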
INSERT INTO #TempTable
SELECT S.COLUMN_NAME AS Src_Col,
S.DATA_TYPE AS Src_dtype,
D.COLUMN_NAME AS Dest_Col,
D.DATA_TYPE AS Dest_dtype,
CASE WHEN S.DATA_TYPE NOT LIKE D.DATA_TYPE THEN
'CONVERT('+ '''' + D.DATA_TYPE + '''' + ',' + '''' + S.DATA_TYPE + '''' + ')'
ELSE S.DATA_TYPE END AS Modified_Col
FROM INFORMATION_SCHEMA.COLUMNS S
JOIN INFORMATION_SCHEMA.COLUMNS D
ON S.COLUMN_NAME = D.COLUMN_NAME AND S.TABLE_NAME = REPLACE(D.TABLE_NAME,'_B','_A')
Step 2:
Iterate over #TempTable to fetch the Modified_Col values
DECLARE @Max_ID int, @Min_ID int, @ColToInsert varchar(max), @Dest_Col varchar(max);
SET @Max_ID = (SELECT MAX(ID) FROM #TempTable);
SET @Min_ID = (SELECT MIN(ID) FROM #TempTable);
SET @ColToInsert = '';
SET @Dest_Col = '';
WHILE @Min_ID <= @Max_ID
BEGIN
SET @ColToInsert = (SELECT @ColToInsert + Modified_Col FROM #TempTable T WHERE T.ID = @Min_ID);
SET @Dest_Col = (SELECT @Dest_Col + Dest_Col FROM #TempTable T WHERE T.ID = @Min_ID);
SET @Min_ID = @Min_ID + 1;
END
Step 3:
Use that @ColToInsert in the dynamic SQL below:
SET @DySQL = 'INSERT INTO Table_A(' + @Dest_Col + ') SELECT ' + @ColToInsert + ' FROM table_B';
exec (@DySQL);
Now at step 3 I am not getting the expected result: no data is getting inserted into table_A. I can understand that I have to make some fixes in the CASE expression so that the CONVERT(...) portion becomes a proper string, but I am not able to do so.
Any clue would be appreciated.
I don't understand why you need the temp table at all. You just need to aggregate using STRING_AGG.
You also need to quote the objects and columns using QUOTENAME, and you should use sys.columns etc rather than INFORMATION_SCHEMA, which is for compatibility only.
DECLARE @tableA sysname = 't';
DECLARE @tableB sysname = 's';
DECLARE @sql nvarchar(max) = (
SELECT CONCAT(
    'INSERT INTO ',
    QUOTENAME(@tableA),
    '(',
    STRING_AGG(CAST(QUOTENAME(cA.name) AS nvarchar(max)), ', '),
    ')
SELECT ',
    STRING_AGG(
        CASE WHEN cA.user_type_id <> cB.user_type_id THEN
            CONCAT(
                'CONVERT(',
                typ.name,
                CASE
                    WHEN typ.name IN ('varchar','nvarchar','char','nchar','varbinary','binary')
                        THEN CONCAT('(', CASE WHEN cA.max_length = -1 THEN 'max' END, NULLIF(cA.max_length, -1), ')')
                    WHEN typ.name IN ('datetime2','datetimeoffset','time')
                        THEN CONCAT('(', cA.scale, ')')
                    WHEN typ.name IN ('float','real')
                        THEN CONCAT('(', cA.precision, ')')
                    WHEN typ.name IN ('decimal','numeric')
                        THEN CONCAT('(', cA.precision, ',', cA.scale, ')')
                END,
                ', ',
                CAST(QUOTENAME(cB.name) AS nvarchar(max)),
                ')'
            )
        ELSE
            CAST(QUOTENAME(cB.name) AS nvarchar(max))
        END
    , ', '),
    '
FROM ',
    QUOTENAME(@tableB)
)
FROM sys.columns cA
JOIN sys.tables tA ON tA.object_id = cA.object_id AND tA.name = @tableA
JOIN sys.types typ ON typ.user_type_id = cA.user_type_id
JOIN sys.columns cB ON cB.name = cA.name
JOIN sys.tables tB ON tB.object_id = cB.object_id AND tB.name = @tableB
);
PRINT @sql; -- your friend
EXEC sp_executesql @sql;
I would like to merge two rows of data, by keeping a row based on its ID and only updating data if it is a NULL value.
As an example I want to "merge" row 1 and 2 and delete row 2:
From:
ID date col1 col2 col3
---------------------------------------------------------------
1 31/12/2017 1 NULL 1
2 31/12/2015 3 2 NULL
3 31/12/2014 4 5 NULL
To:
ID date col1 col2 col3
---------------------------------------------------------------
1 31/12/2017 1 2 1
3 31/12/2014 4 5 NULL
In the example I want to keep row 1, and fill NULL values in row 1 by values that are in row 2. Then I will delete row 2. See below the code I have made for the date column.
UPDATE MyTable
SET date = newdata.date
FROM
(
    SELECT date
    FROM MyTable
    WHERE ID = 2
) newdata
WHERE ID = 1 AND MyTable.date IS NULL;
I would like to perform the same operation on very large tables so I'm looking for a way to apply the above operation automatically (or a better workaround?) to every column of a table for two specific rows.
To be clear, the column name (date) shouldn't be hardcoded as in the above example as I have plenty of different tables.
The table has many rows but I only want to merge two rows (this will always be two rows)
Could you help me with this?
I'm posting this as an answer now, as the comments from the OP imply this really is as simple as it first looked. Although their table has a lot of rows, they are only interested in correcting/merging the values of rows 1 and 2. As these rows are simplistic, you can simply UPDATE the value of ID 1 and then DELETE row 2.
As there are only a few columns, you could simply use literal values, since we can see that only col2 on ID 1 needs to be updated:
UPDATE YourTable
SET col2 = 2
WHERE ID = 1;
Now that ID 1 has the correct values, you can DELETE ID 2:
DELETE
FROM YourTable
WHERE ID = 2;
You could, however, do the following, if your data is (a little) oversimplified.
UPDATE YT1
SET Col1 = ISNULL(YT1.Col1,YT2.Col1),
Col2 = ISNULL(YT1.Col2,YT2.Col2),
Col3 = ISNULL(YT1.Col3,YT2.Col3),
...
FROM YourTable YT1
JOIN YourTable YT2 ON YT2.ID = 2
WHERE YT1.ID = 1;
DELETE
FROM YourTable
WHERE ID = 2;
This is based on all the comments under the OP's question, which give some more (but not enough) detail. This is a dynamic SQL solution that is scalable, as it writes out the ISNULL expressions for the OP. Of course, if this doesn't help then once again I have to suggest they update their post to actually help us help them. Anyway, this should be self explanatory:
CREATE TABLE YourTable (ID int,
[date] date,
col1 int,
col2 int,
col3 int,
col4 int,
col5 int);
GO
INSERT INTO YourTable
VALUES (1,'20171231',1,NULL,1 ,2 ,NULL),
(2,'20151231',3,2 ,NULL,NULL,4),
(3,'20141231',4,5 ,NULL,2 ,7);
SELECT *
FROM YourTable;
GO
DECLARE @SQL nvarchar(MAX);
DECLARE @TableName sysname = N'YourTable';
DECLARE @CopyToId int = 1;
DECLARE @DeleteID int = 2;
SET @SQL = N'UPDATE YT1' + NCHAR(10) +
           N'SET ' + STUFF((SELECT N',' + NCHAR(10) +
                                   N'    ' + QUOTENAME(c.[name]) + N' = ISNULL(YT1.' + QUOTENAME(c.[name]) + N',YT2.' + QUOTENAME(c.[name]) + N')'
                            FROM sys.tables t
                                 JOIN sys.columns c ON t.[object_id] = c.[object_id]
                            WHERE t.[name] = @TableName
                              AND c.name NOT IN (N'ID',N'date')
                            FOR XML PATH(N'')),1,6,N'') + NCHAR(10) +
           N'FROM ' + QUOTENAME(@TableName) + N' YT1' + NCHAR(10) +
           N'    JOIN ' + QUOTENAME(@TableName) + N' YT2 ON YT2.ID = @dDeleteID' + NCHAR(10) +
           N'WHERE YT1.ID = @dCopyToId;' + NCHAR(10) + NCHAR(10) +
           N'DELETE' + NCHAR(10) +
           N'FROM ' + QUOTENAME(@TableName) + NCHAR(10) +
           N'WHERE ID = @dDeleteID;';
PRINT @SQL; --Your best friend
EXEC sp_executesql @SQL, N'@dCopyToId int, @dDeleteID int', @dCopyToId = @CopyToId, @dDeleteID = @DeleteID;
GO
SELECT *
FROM YourTable;
GO
DROP TABLE YourTable;
Does anyone know how to check a variable against all database tables with columns storing the same type of information? I have a poorly designed database that stores ssn in over 60 tables within one database. Some of the variations of column names in the various tables include:
app_ssn
ca_ssn
cand_ssn
crl_ssn
cu_ssn
emtaddr_ssn
re_ssn
sfcart_ssn
sfordr_ssn
socsecno
ssn
Ssn
SSN
I want to create a stored procedure that will accept a value and check it against every table that has 'ssn' in the column name. Does anyone have an idea how to do this?
-- I assume that table/column names don't need to be surrounded by square braces. You may want to save matches in a table - I just select them. I also assume ssn is a char.
alter proc proc1
@search1 varchar(500)
as
begin
set nocount on
declare @strsql varchar(500)
declare @curtable sysname
declare @prevtable sysname
declare @column sysname
select top 1 @curtable = table_schema+'.'+table_name, @column = column_name
from INFORMATION_SCHEMA.COLUMNS
where CHARINDEX('ssn',column_name) > 0
order by table_schema+'.'+table_name + column_name
-- make sure that at least one column has ssn in the column name
if @curtable is not null
begin
while (1=1)
begin
set @strsql = 'select * from ' + @curtable + ' where ' + '''' + @search1 + '''' + ' = ' + @column
print @strsql
-- any matches for passed in ssn will match here...
exec (@strsql)
set @prevtable = @curtable + @column
select top 1 @curtable = table_schema+'.'+table_name, @column = column_name
from INFORMATION_SCHEMA.COLUMNS
where CHARINDEX('ssn',column_name) > 0
and table_schema+'.'+table_name + column_name > @prevtable
order by table_schema+'.'+table_name + column_name
-- when we run out of columns that contain ssn we are done...
if @@ROWCOUNT = 0
break
end
end
end
What you will need to do is some research. But here is where you can start:
SELECT tbl.NAME AS TableName
,cl.NAME AS ColumnName
,IDENTITY(INT, 1, 1) AS ID
INTO #ColumnsToLoop
FROM sys.tables tbl
JOIN sys.columns cl ON cl.object_id = tbl.object_id
This will give you the table / column relation; then you can simply build a dynamic SQL string based on each row in the query above (basically loop it) and use EXEC or sp_executesql. So basically:
DECLARE @Loop int = (SELECT MIN(ID) FROM #ColumnsToLoop), @MX int = (SELECT MAX(ID) FROM #ColumnsToLoop)
WHILE (@Loop <= @MX)
BEGIN
DECLARE @SQL nvarchar(MAX) = 'SQL String'
-- Construct the dynamic SQL string here
EXEC(@SQL);
SET @Loop += 1
END
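To make that concrete, here is a minimal, standalone sketch of what the constructed string could look like, assuming you are searching character columns for a value held in @Search (the names @Search and @pSearch are illustrative, not part of the answer above):
DECLARE @Search varchar(20) = '123456789'; -- hypothetical SSN to look for
DECLARE @Loop int = (SELECT MIN(ID) FROM #ColumnsToLoop),
        @MX int = (SELECT MAX(ID) FROM #ColumnsToLoop);
WHILE (@Loop <= @MX)
BEGIN
    DECLARE @SQL nvarchar(MAX);
    -- Build one SELECT per table/column pair captured in #ColumnsToLoop
    SELECT @SQL = N'SELECT * FROM ' + QUOTENAME(TableName) +
                  N' WHERE ' + QUOTENAME(ColumnName) + N' = @pSearch;'
    FROM #ColumnsToLoop
    WHERE ID = @Loop;
    EXEC sp_executesql @SQL, N'@pSearch varchar(20)', @pSearch = @Search;
    SET @Loop += 1;
END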
Perhaps I went a little too crazy with this one, but let me know. I thought it would be best to return the primary key of the search results along with the table name, so you could join back to your tables. I also managed to do it without a single cursor or loop.
DECLARE @SSN VARCHAR(25) = '%99%',
#SQL VARCHAR(MAX);
WITH CTE_PrimaryKeys
AS
(
SELECT TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
column_name
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE D
WHERE OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1
),
CTE_Columns
AS
(
SELECT A.*,
CONCAT(A.TABLE_CATALOG,'.',A.TABLE_SCHEMA,'.',A.TABLE_NAME) AS FullTableName,
CASE WHEN B.COLUMN_NAME IS NOT NULL THEN 1 ELSE 0 END AS IsPrimaryKey
FROM INFORMATION_SCHEMA.COLUMNS A
LEFT JOIN CTE_PrimaryKeys B
ON A.TABLE_CATALOG = B.TABLE_CATALOG
AND A.TABLE_SCHEMA = B.TABLE_SCHEMA
AND A.TABLE_NAME = B.TABLE_NAME
AND A.COLUMN_NAME = B.COLUMN_NAME
),
CTE_Select
AS
(
SELECT
'SELECT ' +
--This returns the pk_col casted as Varchar and the table name in another columns
STUFF((SELECT ',CAST(' + COLUMN_NAME + ' AS VARCHAR(MAX)) AS pk_col,''' + B.TABLE_NAME + ''' AS Table_Name'
FROM CTE_Columns B
WHERE A.Table_Name = B.TABLE_NAME
AND B.IsPrimaryKey = 1
FOR XML PATH ('')),1,1,'')
+ ' FROM ' + fullTableName +
--This is where I list the columns where LIKE desired SSN
' WHERE ' +
STUFF((SELECT COLUMN_NAME + ' LIKE ''' + @SSN + ''' OR '
FROM CTE_Columns B
WHERE A.Table_Name = B.TABLE_NAME
--This is where I filter so I only get desired columns
AND (
--Uncomment the Collate if your database is case sensitive
COLUMN_NAME /*COLLATE SQL_Latin1_General_CP1_CI_AS*/ LIKE '%ssn%'
--list your column Names that don't have ssn in them
--OR COLUMN_NAME IN ('col1','col2')
)
FOR XML PATH ('')),1,0,'') AS Selects
FROM CTE_Columns A
GROUP BY A.FullTableName,A.TABLE_NAME
)
--Unioning them all together and getting rid of last trailing "OR "
SELECT @SQL = COALESCE(@SQL,'') + SUBSTRING(selects,1,LEN(selects) - 3) + ' UNION ALL ' + CHAR(13) --new line for easier debugging
FROM CTE_Select
WHERE selects IS NOT NULL
--Look at your code
SELECT SUBSTRING(@SQL,1,LEN(@SQL) - 11)
I have a large table with 500 columns and 100M rows. Based on a small sample, I believe only about 50 of the columns contain any values, and the other 450 contain only NULL values. I want to list the columns that contain no data.
On my current hardware, it would take about 24 hours to query every column (select count(1) from tab where col_n is not null)
Is there a less expensive way to determine that a column is completely empty/NULL?
What about this:
SELECT
SUM(CASE WHEN column_1 IS NOT NULL THEN 1 ELSE 0 END) column_1_count,
SUM(CASE WHEN column_2 IS NOT NULL THEN 1 ELSE 0 END) column_2_count,
...
FROM table_name
?
You can easily generate this query using the INFORMATION_SCHEMA.COLUMNS view.
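A minimal sketch of that generation step, assuming the table is dbo.table_name (the schema, table, and variable names are illustrative):
DECLARE @sql nvarchar(max);
SELECT @sql = 'SELECT ' + STUFF((
    SELECT ', SUM(CASE WHEN ' + QUOTENAME(COLUMN_NAME) + ' IS NOT NULL THEN 1 ELSE 0 END) AS '
         + QUOTENAME(COLUMN_NAME + '_count')
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'table_name'
    FOR XML PATH('')), 1, 2, '')
  + ' FROM dbo.table_name;';
PRINT @sql; -- inspect before running
EXEC sp_executesql @sql;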
EDIT:
Another idea:
SELECT MAX(column_1), MAX(column_2),..... FROM table_name
If result contains value, column is populated. It should require one table scan.
Try this one -
DDL:
IF OBJECT_ID ('dbo.test2') IS NOT NULL
DROP TABLE dbo.test2
CREATE TABLE dbo.test2
(
ID BIGINT IDENTITY(1,1) PRIMARY KEY
, Name VARCHAR(10) NOT NULL
, IsCitizen BIT NULL
, Age INT NULL
)
INSERT INTO dbo.test2 (Name, IsCitizen, Age)
VALUES
('1', 1, NULL),
('2', 0, NULL),
('3', NULL, NULL)
Query 1:
DECLARE
      @TableName SYSNAME
    , @ObjectID INT
    , @SQL NVARCHAR(MAX)
SELECT
      @TableName = 'dbo.test2'
    , @ObjectID = OBJECT_ID(@TableName)
SELECT @SQL = 'SELECT' + CHAR(13) + STUFF((
SELECT CHAR(13) + ', [' + c.name + '] = ' +
    CASE WHEN c.is_nullable = 0
        THEN '0'
        ELSE 'CASE WHEN ' + totalrows +
            ' = SUM(CASE WHEN [' + c.name + '] IS NULL THEN 1 ELSE 0 END) THEN 1 ELSE 0 END'
    END
FROM sys.columns c WITH (NOWAIT)
CROSS JOIN (
    SELECT totalrows = CAST(MIN(p.[rows]) AS VARCHAR(50))
    FROM sys.partitions p
    WHERE p.[object_id] = @ObjectID
    AND p.index_id IN (0, 1)
) r
WHERE c.[object_id] = @ObjectID
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ') + CHAR(13) + 'FROM ' + @TableName
PRINT @SQL
EXEC sys.sp_executesql @SQL
Output 1:
SELECT
[ID] = 0
, [Name] = 0
, [IsCitizen] = CASE WHEN 3 = SUM(CASE WHEN [IsCitizen] IS NULL THEN 1 ELSE 0 END) THEN 1 ELSE 0 END
, [Age] = CASE WHEN 3 = SUM(CASE WHEN [Age] IS NULL THEN 1 ELSE 0 END) THEN 1 ELSE 0 END
FROM dbo.test2
Query 2:
DECLARE
      @TableName SYSNAME
    , @SQL NVARCHAR(MAX)
SELECT @TableName = 'dbo.test2'
SELECT @SQL = 'SELECT' + CHAR(13) + STUFF((
SELECT CHAR(13) + ', [' + c.name + '] = ' +
    CASE WHEN c.is_nullable = 0
        THEN '0'
        ELSE 'CASE WHEN ' +
            'MAX(CAST([' + c.name + '] AS CHAR(1))) IS NULL THEN 1 ELSE 0 END'
    END
FROM sys.columns c WITH (NOWAIT)
WHERE c.[object_id] = OBJECT_ID(@TableName)
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ') + CHAR(13) + 'FROM ' + @TableName
PRINT @SQL
EXEC sys.sp_executesql @SQL
Output 2:
SELECT
[ID] = 0
, [Name] = 0
, [IsCitizen] = CASE WHEN MAX(CAST([IsCitizen] AS CHAR(1))) IS NULL THEN 1 ELSE 0 END
, [Age] = CASE WHEN MAX(CAST([Age] AS CHAR(1))) IS NULL THEN 1 ELSE 0 END
FROM dbo.test2
Results:
ID Name IsCitizen Age
----------- ----------- ----------- -----------
0 0 0 1
You could check whether indexing the columns gives you some performance improvement:
CREATE UNIQUE NONCLUSTERED INDEX IndexName ON dbo.TableName(ColumnName)
WHERE ColumnName IS NOT NULL;
GO
SQL server query to get the list of columns in a table along with Data types, NOT NULL, and PRIMARY KEY constraints
Run the SQL in the best answer to the above question and generate a new query like the one below.
Select ISNULL(column1,1), ISNULL(column2,1), ISNULL(column3,1) from table
You would not need to 'count' all of the 100M records. Simply backing out of the query with a TOP 1 as soon as you hit a column with a not-null value would save a lot of time while providing the same information.
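A minimal sketch of that per-column probe, using the question's table and column names:
-- Returns a row if and only if col_n contains any value;
-- the scan can stop at the first non-NULL hit instead of reading all 100M rows.
SELECT TOP (1) col_n
FROM tab
WHERE col_n IS NOT NULL;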
500 Columns?!
Ok, the right answer to your question is: normalize your table.
Here's what is happening in the meantime:
You don't have an index on that column so SQL Server has to do a full scan of your humongous table.
SQL Server will certainly read every row fully (meaning every column, even if you're only interested in one).
And since your rows are most likely over 8 KB... http://msdn.microsoft.com/en-us/library/ms186981%28v=sql.105%29.aspx
Seriously, normalize your table and if needed split it vertically (put "theme grouped" columns into separate tables, to only read them when you need them).
EDIT: You can rewrite your query like this
select count(col_n) from tab
and if you want to get all columns at once (better):
SELECT
COUNT(column_1) column_1_count,
COUNT(column_2) column_2_count,
...
FROM table_name
If most records are not null, maybe you can mix some of the approaches suggested (for example, check only nullable fields) with this:
if exists (select * from table where field is not null)
This should speed up the search because exists stops the search as soon as the condition is met; in this example a single not-null record is enough to decide the status of the field.
If the field has an index this should be almost instant.
Normally adding top 1 to this query is not needed, because the query optimizer knows that you do not need to retrieve all the matching records.
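A rough sketch of that mix, generating one EXISTS probe per nullable column via sys.columns (dbo.tab stands in for the real table; illustrative only):
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql
    + N'IF NOT EXISTS (SELECT * FROM dbo.tab WHERE ' + QUOTENAME(name) + N' IS NOT NULL) '
    + N'PRINT ''' + name + N' is entirely NULL'';' + NCHAR(13)
FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.tab')
  AND is_nullable = 1; -- NOT NULL columns can never be empty, so skip them
EXEC sp_executesql @sql;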
You can use this stored procedure to do the trick. You need to provide the name of the table you wish to query. Note that if you pass the @exec parameter = 1, the procedure will execute the generated SELECT query.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[SP_SELECT_NON_NULL_COLUMNS] ( @tablename varchar (100) = null, @exec int = 0)
AS BEGIN
SET NOCOUNT ON
IF @tablename IS NULL
RAISERROR('CANT EXECUTE THE PROC, TABLE NAME IS MISSING',16 ,1)
ELSE
BEGIN
IF OBJECT_ID('tempdb..#table') IS NOT NULL DROP TABLE #table
DECLARE @i VARCHAR (max)=''
DECLARE @sentence VARCHAR (max)=''
DECLARE @SELECT VARCHAR (max)
DECLARE @LocalTableName VARCHAR(50) = '['+@tablename+']'
CREATE TABLE #table (ColumnName VARCHAR (max))
SELECT @i +=
' IF EXISTS ( SELECT TOP 1 '+column_name+' FROM ' +@LocalTableName+' WHERE ' +column_name+
' '+'IS NOT NULL) INSERT INTO #table VALUES ('''+column_name+''');'
FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name=@tablename
INSERT INTO #table
EXEC (@i)
SELECT @sentence = @sentence+' '+columnname+' ,' FROM #table
DROP TABLE #table
IF @exec=0
BEGIN
SELECT 'SELECT '+ LTRIM (left (@sentence,NULLIF(LEN (@sentence)-1,-1))) +
' FROM ' +@LocalTableName
END
ELSE
BEGIN
SELECT @SELECT= 'SELECT '+ LTRIM (left (@sentence,NULLIF(LEN (@sentence)-1,-1))) +
' FROM '+@LocalTableName
EXEC (@SELECT)
END
END
END
Use it like this:
EXEC [dbo].[SP_SELECT_NON_NULL_COLUMNS] 'YourTableName' , 1
Is there a way to select the row from a temp table (the table has only one row anyway) into another table that has some columns with different names? For example:
TempTable
FirstName LastName Column1 Column2
------------ ------------ ----------- -----------
Joe Smith OKC TX
OtherTable
FirstName LastName Region1 Region2 Region3
------------ ------------ ----------- ----------- ----------
NULL NULL NULL NULL NULL
I need to copy the data, in the same order as the columns from TempTable into OtherTable. TempTable will not always be the same....as in sometimes it will have 3 columns, sometimes just 2...etc. If it does not have the same number of columns as OtherTable, the the remaining "Region" columns should stay null.
The end result should be:
OtherTable
FirstName LastName Region1 Region2 Region3
------------ ------------ ----------- ----------- ----------
Joe Smith OKC TX NULL
PLUS the column names in TempTable will NEVER be the same...as in one time it will be "Column1"...the next time it could be "XXXXX1". That's why I just want to copy data only...the data will always be in the correct order...
LOL...does this even make sense? This is for SQL Server 2005
EDIT: Dynamic SQL generation added.
This code will generate INSERT statements to copy from one #temp table into another. You can tweak it to suit your purpose if you are going from #temp to regular tables.
SET NOCOUNT ON
DROP Table #TempTable1
DROP Table #TempTable2
GO
DROP Function GenerateInserts
GO
Create Function GenerateInserts
(
@SourceTable VarChar (100),
@DestinationTable VarChar (100)
)
Returns VarChar (MAX)
AS
BEGIN
DECLARE @SelectString VarChar (MAX)
DECLARE @InsertString VarChar (MAX)
DECLARE @SQLString VarChar (MAX)
DECLARE @SourceColumnCount INTEGER
DECLARE @DestinationColumnCount INTEGER
DECLARE @ColumnCount INTEGER
DECLARE @Counter INTEGER
SELECT @SourceColumnCount = COUNT (*)
FROM tempdb..syscolumns
WHERE id = object_id(@SourceTable)
SELECT @DestinationColumnCount = COUNT (*)
FROM tempdb..syscolumns
WHERE id = object_id(@DestinationTable)
SET @ColumnCount = @SourceColumnCount
IF @DestinationColumnCount < @ColumnCount
SET @ColumnCount = @DestinationColumnCount
SET @Counter = 0
SET @SelectString = ''
SET @InsertString = ''
WHILE @Counter <= @ColumnCount
BEGIN
SELECT @SelectString = @SelectString + ', ' + Name
FROM TempDB..SysColumns
WHERE Id = Object_Id (@SourceTable)
AND ColOrder = @Counter
SELECT @InsertString = @InsertString + ', ' + Name
FROM TempDB..SysColumns
WHERE Id = Object_Id (@DestinationTable)
AND ColOrder = @Counter
SET @Counter = @Counter + 1
END
SET @InsertString = 'INSERT INTO ' + @DestinationTable + ' (' + STUFF ( @InsertString, 1, 2, '') + ') '
SET @SelectString = 'SELECT ' + STUFF ( @SelectString, 1, 2, '') + ' FROM ' + @SourceTable
SET @SQLString = @InsertString + '
' + @SelectString
RETURN @SQLString
END
GO
Create Table #TempTable1
(
Col1 VarChar (10),
Col2 VarChar (10),
Col3 VarChar (10),
Col4 VarChar (10),
Col5 VarChar (10)
)
Create Table #TempTable2
(
MyCol1 VarChar (10),
MyCol2 VarChar (10),
MyCol3 VarChar (10),
MyCol4 VarChar (10),
MyCol5 VarChar (10),
MyCol6 VarChar (10)
)
SELECT dbo.GenerateInserts ('tempdb..#TempTable1', 'tempdb..#TempTable2')
OLD ANSWER
Yes you can do this, but you have to write different statements for each type of INSERT. You do have to specify column names in both places: the INSERT INTO and the SELECT.
If you have the same number of columns in your Source and Destination tables, do this
INSERT INTO Table1 (Column1, Column2, Column3)
SELECT MyColumn01, MyColumn02, MyColumn03
FROM MyTable
What this will do is map as follows:
MyTable.MyColumn01 -> Table1.Column1
MyTable.MyColumn02 -> Table1.Column2
MyTable.MyColumn03 -> Table1.Column3
If the Source has fewer columns, you can use a NULL value in place of the column name
INSERT INTO Table1 (Column1, Column2, Column3)
SELECT MyColumn01, MyColumn02, NULL AS MyColumn03
FROM MyTable
OR you can just use two column names
INSERT INTO Table1 (Column1, Column2)
SELECT MyColumn01, MyColumn02
FROM MyTable
If the destination table has fewer columns than the source, then you have to ignore columns from the source
INSERT INTO Table1 (Column1, Column2, Column3)
SELECT MyColumn01, MyColumn02, NULL AS MyColumn03 /* MyColumn04, MyColumn05 are ignored */
FROM MyTable
You could do something with dynamic SQL.
I recommend reading "The Curse and Blessings of Dynamic SQL -
Dealing with Dynamic Table and Column Names" if this is new to you.
Example follows. You could improve this, for example to make sure that source and destination columns are of compatible types, or to exclude identity or computed columns, but it should give you an idea.
DECLARE @SourceTable sysname
DECLARE @DestTable sysname
SET @SourceTable = '[dbo].[#TempTable]'
SET @DestTable = '[dbo].[OtherTable]'
DECLARE @DynSQL1 NVARCHAR(MAX)
DECLARE @DynSQL2 NVARCHAR(MAX)
SELECT
@DynSQL1 = ISNULL(@DynSQL1 + ',','') + QUOTENAME(sc1.name),
@DynSQL2 = ISNULL(@DynSQL2 + ',','') + QUOTENAME(sc2.name)
FROM tempdb..syscolumns sc1
JOIN syscolumns sc2
ON sc1.colorder = sc2.colorder /*Match up the columns by column order*/
WHERE sc1.id = OBJECT_ID('tempdb.' + @SourceTable) AND sc2.id = OBJECT_ID(@DestTable)
IF @@ROWCOUNT = 0
RETURN
SET @DynSQL1 = 'INSERT INTO ' + @DestTable + ' (' + @DynSQL2 + ')
SELECT ' + @DynSQL1 + ' FROM ' + @SourceTable + ';'
EXEC sp_executesql @DynSQL1
You can specify the columns of the target table:
INSERT INTO OtherTable (FirstName, LastName, Region1, Region2)
SELECT FirstName, LastName, Column1, Column2 FROM TempTable;
In this example, OtherTable.Region3 will end up NULL (or if it has a DEFAULT value, it'll use that).
The count of columns in the INSERT must match the count of columns in the SELECT. So you must know how many columns are in TempTable and make the list of columns for the INSERT match.
But there's no way to do it with implicit columns, if you're just throwing SELECT * around.
Re your comment: You can use the INFORMATION_SCHEMA to query the count and the names of the columns in the table.
SELECT COLUMN_NAME
FROM tempdb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_CATALOG = 'tempdb' AND TABLE_SCHEMA = 'MySchema'
AND TABLE_NAME LIKE 'MyTempTable%';
Then you would write application code to create your SQL INSERT...SELECT statement based on the results from this information_schema query.
Note: Querying temp tables through the information_schema requires special handling.
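If you would rather do that step in T-SQL than in application code, here is a rough sketch that pairs the columns by ordinal position (table names follow the question; untested, and the temp-table naming caveat above still applies). Columns in OtherTable with no counterpart, such as Region3, are simply left out of the list and stay NULL:
DECLARE @destCols nvarchar(max), @srcCols nvarchar(max), @sql nvarchar(max);
-- Pair source and destination columns that share an ordinal position;
-- the order of the pairs doesn't matter as long as they stay matched.
SELECT @destCols = COALESCE(@destCols + ', ', '') + QUOTENAME(d.COLUMN_NAME),
       @srcCols  = COALESCE(@srcCols + ', ', '') + QUOTENAME(s.COLUMN_NAME)
FROM tempdb.INFORMATION_SCHEMA.COLUMNS s
JOIN INFORMATION_SCHEMA.COLUMNS d
  ON d.ORDINAL_POSITION = s.ORDINAL_POSITION
WHERE s.TABLE_NAME LIKE '#MyTempTable%' -- temp table names are suffixed in tempdb
  AND d.TABLE_NAME = 'OtherTable';
SET @sql = 'INSERT INTO OtherTable (' + @destCols + ') SELECT ' + @srcCols + ' FROM #MyTempTable;';
EXEC sp_executesql @sql;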