I would like to merge two rows of data, by keeping a row based on its ID and only updating data if it is a NULL value.
As an example I want to "merge" row 1 and 2 and delete row 2:
From :
ID date col1 col2 col3
---------------------------------------------------------------
1 31/12/2017 1 NULL 1
2 31/12/2015 3 2 NULL
3 31/12/2014 4 5 NULL
To:
ID date col1 col2 col3
---------------------------------------------------------------
1 31/12/2017 1 2 1
3 31/12/2014 4 5 NULL
In the example I want to keep row 1, and fill NULL values in row 1 by values that are in row 2. Then I will delete row 2. See below the code I have made for the date column.
UPDATE MyTable
SET
date = newdata.date
FROM
(
SELECT
date
FROM MyTable
WHERE
ID = 2
)
newdata
WHERE
ID = 1 AND MyTable.date IS NULL ;
I would like to perform the same operation on very large tables so I'm looking for a way to apply the above operation automatically (or a better workaround?) to every column of a table for two specific rows.
To be clear, the column name (date) shouldn't be hardcoded as in the above example as I have plenty of different tables.
The table has many rows but I only want to merge two rows (this will always be two rows)
Could you help me with this ?
I'm posting this as an answer now, as the comments from the OP suggest this really is as simple as I first thought it wasn't. Although their table has a lot of rows, they are only interested in correcting/merging the values of rows 1 and 2. As these rows are simple, you can just UPDATE the value on ID 1 and then DELETE row 2.
As there are only a few columns, you could simply use literal values, since we can see visually that only col2 on ID 1 needs to be updated:
UPDATE YourTable
SET col2 = 2
WHERE ID = 1;
Now that ID 1 has the correct value, you can DELETE ID 2:
DELETE
FROM YourTable
WHERE ID = 2;
You could, however, do the following, if your data is (a little) over-simplified.
UPDATE YT1
SET Col1 = ISNULL(YT1.Col1,YT2.Col1),
Col2 = ISNULL(YT1.Col2,YT2.Col2),
Col3 = ISNULL(YT1.Col3,YT2.Col3),
...
FROM YourTable YT1
JOIN YourTable YT2 ON YT2.ID = 2
WHERE YT1.ID = 1;
DELETE
FROM YourTable
WHERE ID = 2;
This is based on all the comments under the OP's question, which give some more (but not enough) detail. This is a dynamic SQL solution that is scalable, as it writes out the ISNULL expressions for the OP. Of course, if this doesn't help then once again I have to suggest they update their post to actually help us help them. Anyway, this should be self-explanatory:
CREATE TABLE YourTable (ID int,
[date] date,
col1 int,
col2 int,
col3 int,
col4 int,
col5 int);
GO
INSERT INTO YourTable
VALUES (1,'20171231',1,NULL,1 ,2 ,NULL),
(2,'20151231',3,2 ,NULL,NULL,4),
(3,'20141231',4,5 ,NULL,2 ,7);
SELECT *
FROM YourTable;
GO
DECLARE @SQL nvarchar(MAX);
DECLARE @TableName sysname = N'YourTable';
DECLARE @CopyToId int = 1;
DECLARE @DeleteID int = 2;
SET @SQL = N'UPDATE YT1' + NCHAR(10) +
N'SET ' + STUFF((SELECT N',' + NCHAR(10) +
N'    ' + QUOTENAME(c.[name]) + N' = ISNULL(YT1.' + QUOTENAME(c.[name]) + N',YT2.' + QUOTENAME(c.[name]) + N')'
FROM sys.tables t
JOIN sys.columns c ON t.[object_id] = c.[object_id]
WHERE t.[name] = @TableName
AND c.name NOT IN (N'ID',N'date')
FOR XML PATH(N'')),1,6,N'') + NCHAR(10) +
N'FROM ' + QUOTENAME(@TableName) + N' YT1' + NCHAR(10) +
N' JOIN ' + QUOTENAME(@TableName) + N' YT2 ON YT2.ID = @dDeleteID' + NCHAR(10) +
N'WHERE YT1.ID = @dCopyToID;' + NCHAR(10) + NCHAR(10) +
N'DELETE' + NCHAR(10) +
N'FROM ' + QUOTENAME(@TableName) + NCHAR(10) +
N'WHERE ID = @dDeleteID;';
PRINT @SQL; --Your best friend
EXEC sp_executesql @SQL, N'@dCopyToID int, @dDeleteID int', @dCopyToID = @CopyToId, @dDeleteID = @DeleteID;
GO
SELECT *
FROM YourTable;
GO
DROP TABLE YourTable;
Related
Does anyone know how to check a variable against all database tables with columns storing the same type of information? I have a poorly designed database that stores SSNs in over 60 tables within one database. Some of the variations of column names in the various tables include:
app_ssn
ca_ssn
cand_ssn
crl_ssn
cu_ssn
emtaddr_ssn
re_ssn
sfcart_ssn
sfordr_ssn
socsecno
ssn
Ssn
SSN
I want to create a stored procedure that will accept a value and check it against every table that has 'ssn' in a column name. Does anyone have an idea how to do this?
-- I assume that table/column names don't need to be surrounded by square braces. You may want to save matches in a table - I just select them. I also assume ssn is a char.
alter proc proc1
@search1 varchar(500)
as
begin
set nocount on
declare @strsql varchar(500)
declare @curtable sysname
declare @prevtable sysname
declare @column sysname
select top 1 @curtable = table_schema+'.'+table_name, @column = column_name
from INFORMATION_SCHEMA.COLUMNS
where CHARINDEX('ssn',column_name) > 0
order by table_schema+'.'+table_name + column_name
-- make sure that at least one column has ssn in the column name
if @curtable is not null
begin
while (1=1)
begin
set @strsql = 'select * from ' + @curtable + ' where ' + '''' + @search1 + '''' + ' = ' + @column
print @strsql
-- any matches for passed in ssn will match here...
exec (@strsql)
set @prevtable = @curtable + @column
select top 1 @curtable = table_schema+'.'+table_name, @column = column_name
from INFORMATION_SCHEMA.COLUMNS
where CHARINDEX('ssn',column_name) > 0
and table_schema+'.'+table_name + column_name > @prevtable
order by table_schema+'.'+table_name + column_name
-- when we run out of columns that contain ssn we are done...
if @@ROWCOUNT = 0
break
end
end
end
What you will need to do is some research, but here is where you can start:
SELECT tbl.NAME AS TableName
,cl.NAME AS ColumnName
,IDENTITY(INT, 1, 1) AS ID
INTO #ColumnsToLoop
FROM sys.tables tbl
JOIN sys.columns cl ON cl.object_id = tbl.object_id
This will give you the table / column relation; then you can simply build a dynamic SQL string for each row of the query above (basically loop it) and use EXEC or sp_executesql. So, basically:
DECLARE @Loop int = (select min(ID) From #ColumnsToLoop), @MX int = (Select MAX(ID) From #ColumnsToLoop)
WHILE (@Loop <= @MX)
BEGIN
DECLARE @SQL nvarchar(MAX) = 'SQL String'
--Construct the dynamic SQL string here
EXEC(@SQL);
SET @Loop += 1
END
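One way the 'SQL String' placeholder above could be filled in — a sketch only, which assumes #ColumnsToLoop exists as built by the first query, that the SSN columns are character-typed, and uses a placeholder SSN value:

```sql
DECLARE @Loop int = (SELECT MIN(ID) FROM #ColumnsToLoop),
        @MX   int = (SELECT MAX(ID) FROM #ColumnsToLoop),
        @SQL  nvarchar(MAX);

WHILE (@Loop <= @MX)
BEGIN
    -- Look up this iteration's table/column pair and build the probe
    SELECT @SQL = N'SELECT * FROM ' + QUOTENAME(TableName) +
                  N' WHERE ' + QUOTENAME(ColumnName) + N' = @ssn;'
    FROM #ColumnsToLoop
    WHERE ID = @Loop;

    -- Parameterize the searched value rather than concatenating it in
    EXEC sp_executesql @SQL, N'@ssn varchar(11)', @ssn = '123456789';

    SET @Loop += 1;
END
```

Parameterizing the value with sp_executesql also avoids quoting problems when the searched string contains apostrophes.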
Perhaps I went a little too crazy with this one, but let me know. I thought it would be best to return the primary key of the search results along with the table name, so you could join back to your tables. I also managed to do it without a single cursor or loop.
DECLARE @SSN VARCHAR(25) = '%99%',
@SQL VARCHAR(MAX);
WITH CTE_PrimaryKeys
AS
(
SELECT TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
column_name
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE D
WHERE OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1
),
CTE_Columns
AS
(
SELECT A.*,
CONCAT(A.TABLE_CATALOG,'.',A.TABLE_SCHEMA,'.',A.TABLE_NAME) AS FullTableName,
CASE WHEN B.COLUMN_NAME IS NOT NULL THEN 1 ELSE 0 END AS IsPrimaryKey
FROM INFORMATION_SCHEMA.COLUMNS A
LEFT JOIN CTE_PrimaryKeys B
ON A.TABLE_CATALOG = B.TABLE_CATALOG
AND A.TABLE_SCHEMA = B.TABLE_SCHEMA
AND A.TABLE_NAME = B.TABLE_NAME
AND A.COLUMN_NAME = B.COLUMN_NAME
),
CTE_Select
AS
(
SELECT
'SELECT ' +
--This returns the pk_col casted as Varchar and the table name in another columns
STUFF((SELECT ',CAST(' + COLUMN_NAME + ' AS VARCHAR(MAX)) AS pk_col,''' + B.TABLE_NAME + ''' AS Table_Name'
FROM CTE_Columns B
WHERE A.Table_Name = B.TABLE_NAME
AND B.IsPrimaryKey = 1
FOR XML PATH ('')),1,1,'')
+ ' FROM ' + fullTableName +
--This is where I list the columns where LIKE desired SSN
' WHERE ' +
STUFF((SELECT COLUMN_NAME + ' LIKE ''' + @SSN + ''' OR '
FROM CTE_Columns B
WHERE A.Table_Name = B.TABLE_NAME
--This is where I filter so I only get desired columns
AND (
--Uncomment the Collate if your database is case sensitive
COLUMN_NAME /*COLLATE SQL_Latin1_General_CP1_CI_AS*/ LIKE '%ssn%'
--list your column Names that don't have ssn in them
--OR COLUMN_NAME IN ('col1','col2')
)
FOR XML PATH ('')),1,0,'') AS Selects
FROM CTE_Columns A
GROUP BY A.FullTableName,A.TABLE_NAME
)
--Unioning them all together and getting rid of last trailing "OR "
SELECT @SQL = COALESCE(@SQL,'') + SUBSTRING(selects,1,LEN(selects) - 3) + ' UNION ALL ' + CHAR(13) --new line for easier debugging
FROM CTE_Select
WHERE selects IS NOT NULL
--Look at your code
SELECT SUBSTRING(@SQL,1,LEN(@SQL) - 11)
I have a large table with 500 columns and 100M rows. Based on a small sample, I believe only about 50 of the columns contain any values, and the other 450 contain only NULL values. I want to list the columns that contain no data.
On my current hardware, it would take about 24 hours to query every column (select count(1) from tab where col_n is not null)
Is there a less expensive way to determine that a column is completely empty/NULL?
What about this:
SELECT
SUM(CASE WHEN column_1 IS NOT NULL THEN 1 ELSE 0 END) AS column_1_count,
SUM(CASE WHEN column_2 IS NOT NULL THEN 1 ELSE 0 END) AS column_2_count,
...
FROM table_name
?
You can easily generate this query using the INFORMATION_SCHEMA.COLUMNS view.
EDIT:
Another idea:
SELECT MAX(column_1), MAX(column_2),..... FROM table_name
If result contains value, column is populated. It should require one table scan.
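Such a query can be generated from INFORMATION_SCHEMA.COLUMNS. A sketch, with 'YourBigTable' as a placeholder table name; note that MAX() is not defined for every data type (e.g. bit), so some columns may need a CAST first:

```sql
DECLARE @sql nvarchar(MAX);

SELECT @sql = N'SELECT' + STUFF((
        SELECT N', MAX(' + QUOTENAME(COLUMN_NAME) + N') AS ' + QUOTENAME(COLUMN_NAME)
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = N'YourBigTable'
          AND IS_NULLABLE = 'YES'   -- NOT NULL columns can't be entirely empty
        FOR XML PATH(N'')), 1, 1, N' ')
    + N' FROM YourBigTable;';

PRINT @sql;   -- inspect before running
EXEC sp_executesql @sql;
```

Any column whose MAX comes back NULL in the result contains no data.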
Try this one -
DDL:
IF OBJECT_ID ('dbo.test2') IS NOT NULL
DROP TABLE dbo.test2
CREATE TABLE dbo.test2
(
ID BIGINT IDENTITY(1,1) PRIMARY KEY
, Name VARCHAR(10) NOT NULL
, IsCitizen BIT NULL
, Age INT NULL
)
INSERT INTO dbo.test2 (Name, IsCitizen, Age)
VALUES
('1', 1, NULL),
('2', 0, NULL),
('3', NULL, NULL)
Query 1:
DECLARE
@TableName SYSNAME
, @ObjectID INT
, @SQL NVARCHAR(MAX)
SELECT
@TableName = 'dbo.test2'
, @ObjectID = OBJECT_ID(@TableName)
SELECT @SQL = 'SELECT' + CHAR(13) + STUFF((
SELECT CHAR(13) + ', [' + c.name + '] = ' +
CASE WHEN c.is_nullable = 0
THEN '0'
ELSE 'CASE WHEN ' + totalrows +
' = SUM(CASE WHEN [' + c.name + '] IS NULL THEN 1 ELSE 0 END) THEN 1 ELSE 0 END'
END
FROM sys.columns c WITH (NOWAIT)
CROSS JOIN (
SELECT totalrows = CAST(MIN(p.[rows]) AS VARCHAR(50))
FROM sys.partitions p
WHERE p.[object_id] = @ObjectID
AND p.index_id IN (0, 1)
) r
WHERE c.[object_id] = @ObjectID
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ') + CHAR(13) + 'FROM ' + @TableName
PRINT @SQL
EXEC sys.sp_executesql @SQL
Output 1:
SELECT
[ID] = 0
, [Name] = 0
, [IsCitizen] = CASE WHEN 3 = SUM(CASE WHEN [IsCitizen] IS NULL THEN 1 ELSE 0 END) THEN 1 ELSE 0 END
, [Age] = CASE WHEN 3 = SUM(CASE WHEN [Age] IS NULL THEN 1 ELSE 0 END) THEN 1 ELSE 0 END
FROM dbo.test2
Query 2:
DECLARE
@TableName SYSNAME
, @SQL NVARCHAR(MAX)
SELECT @TableName = 'dbo.test2'
SELECT @SQL = 'SELECT' + CHAR(13) + STUFF((
SELECT CHAR(13) + ', [' + c.name + '] = ' +
CASE WHEN c.is_nullable = 0
THEN '0'
ELSE 'CASE WHEN '+
'MAX(CAST([' + c.name + '] AS CHAR(1))) IS NULL THEN 1 ELSE 0 END'
END
FROM sys.columns c WITH (NOWAIT)
WHERE c.[object_id] = OBJECT_ID(@TableName)
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ') + CHAR(13) + 'FROM ' + @TableName
PRINT @SQL
EXEC sys.sp_executesql @SQL
Output 2:
SELECT
[ID] = 0
, [Name] = 0
, [IsCitizen] = CASE WHEN MAX(CAST([IsCitizen] AS CHAR(1))) IS NULL THEN 1 ELSE 0 END
, [Age] = CASE WHEN MAX(CAST([Age] AS CHAR(1))) IS NULL THEN 1 ELSE 0 END
FROM dbo.test2
Results:
ID Name IsCitizen Age
----------- ----------- ----------- -----------
0 0 0 1
You could check whether indexing the column would give you a performance improvement:
CREATE UNIQUE NONCLUSTERED INDEX IndexName ON dbo.TableName(ColumnName)
WHERE ColumnName IS NOT NULL;
GO
SQL server query to get the list of columns in a table along with Data types, NOT NULL, and PRIMARY KEY constraints
Run the SQL in the best answer to the question above and generate a new query like the one below:
Select ISNULL(column1,1), ISNULL(column2,1), ISNULL(column3,1) from table
You would not need to 'count' all of the 100M records. If you simply back out of the query with a TOP 1 as soon as you hit a column with a not-null value, you would save a lot of time while getting the same information.
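That idea in a sketch (tab and col_n are placeholder names): EXISTS with TOP 1 stops at the first non-NULL value instead of scanning everything:

```sql
-- Stops scanning as soon as one non-NULL value is found
IF EXISTS (SELECT TOP (1) 1 FROM tab WHERE col_n IS NOT NULL)
    PRINT 'col_n contains data'
ELSE
    PRINT 'col_n is entirely NULL'
```

For a truly empty column this is still a full scan, but for populated columns it typically returns almost immediately.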
500 Columns?!
Ok, the right answer to your question is: normalize your table.
Here's what happening for the time being:
You don't have an index on that column, so SQL Server has to do a full scan of your humongous table.
SQL Server will certainly read every row in full (meaning every column, even if you're only interested in one).
And since your rows are most likely over 8 KB... http://msdn.microsoft.com/en-us/library/ms186981%28v=sql.105%29.aspx
Seriously, normalize your table and if needed split it horizontally (put "theme grouped" columns inside separate table, to only read them when you need them).
EDIT: You can rewrite your query like this
select count(col_n) from tab
and if you want to get all columns at once (better):
SELECT
COUNT(column_1) column_1_count,
COUNT(column_2) column_2_count,
...
FROM table_name
If most records are not null, maybe you can mix some of the approaches suggested (for example, checking only nullable fields) with this:
if exists (select * from table where field is not null)
This should speed up the search because EXISTS stops the search as soon as the condition is met; in this example a single not-null record is enough to decide the status of the field.
If the field has an index this should be almost instant.
Normally adding top 1 to this query is not needed because the query optimizer knows that you do not need to retrieve all the matching records.
You can use this stored procedure to do the trick. You need to provide the table name you wish to query; note that if you pass the @exec parameter = 1 to the procedure, it will execute the generated SELECT query.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[SP_SELECT_NON_NULL_COLUMNS] ( @tablename varchar (100)=null, @exec int =0)
AS BEGIN
SET NOCOUNT ON
IF @tablename IS NULL
RAISERROR('CANT EXECUTE THE PROC, TABLE NAME IS MISSING',16 ,1)
ELSE
BEGIN
IF OBJECT_ID('tempdb..#table') IS NOT NULL DROP TABLE #table
DECLARE @i VARCHAR (max)=''
DECLARE @sentence VARCHAR (max)=''
DECLARE @SELECT VARCHAR (max)
DECLARE @LocalTableName VARCHAR(50) = '['+@tablename+']'
CREATE TABLE #table (ColumnName VARCHAR (max))
SELECT @i+=
' IF EXISTS ( SELECT TOP 1 '+column_name+' FROM ' +@LocalTableName+' WHERE ' +column_name+
' '+'IS NOT NULL) INSERT INTO #table VALUES ('''+column_name+''');'
FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name=@tablename
INSERT INTO #table
EXEC (@i)
SELECT @sentence = @sentence+' '+columnname+' ,' FROM #table
DROP TABLE #table
IF @exec=0
BEGIN
SELECT 'SELECT '+ LTRIM (left (@sentence,NULLIF(LEN (@sentence)-1,-1)))+
+' FROM ' +@LocalTableName
END
ELSE
BEGIN
SELECT @SELECT= 'SELECT '+ LTRIM (left (@sentence,NULLIF(LEN (@sentence)-1,-1)))+
+' FROM '+@LocalTableName
EXEC (@SELECT)
END
END
END
Use it like this:
EXEC [dbo].[SP_SELECT_NON_NULL_COLUMNS] 'YourTableName' , 1
Does anyone have ideas on how to compare the results of 2 queries that have the same column names, but in a different order?
I know that if I had both queries returning the same columns in the same order, I could use EXCEPT, but this isn't the case.
[EDIT]
To be more specific, I need to compare the value of each row, and each column (with the same name) from 2 different queries.
Example:
result query 1:
A|B|C|D
1|4|7|11
2|5|8|21
3|**6**|9|31
result query 2:
A|B |D
1|4 |11
2|5 |21
3|**99**|31
In this case, I would like to detect that query 2, on the 3rd row in column B, has a different value.
I don't care that query 2 doesn't have column C; I just want all common columns between both queries to have the same values.
Thanks
Given these tables and data:
USE tempdb;
GO
CREATE TABLE dbo.TableA
(
A INT,
B INT,
C INT,
D INT
);
CREATE TABLE dbo.TableB
(
A INT,
D INT,
B INT
);
INSERT dbo.TableA SELECT 1,4,7,11
UNION ALL SELECT 2,5,8,21
UNION ALL SELECT 3,6,9,31;
INSERT dbo.TableB SELECT 1,11,4
UNION ALL SELECT 2,21,5
UNION ALL SELECT 3,31,99;
What you seem to be looking for is one of the following:
-- those where at least one column doesn't match:
SELECT A,B,D FROM dbo.TableA
EXCEPT
SELECT A,B,D FROM dbo.TableB;
Results (from the A side):
A B D
---- ---- ----
3 6 31
OR
-- those where all columns DO match:
SELECT A,B,D FROM dbo.TableA
INTERSECT
SELECT A,B,D FROM dbo.TableB;
Results:
A B D
---- ---- ----
1 4 11
2 5 21
If you don't know the columns or don't want to write them out manually, you can do this with dynamic SQL by just passing the two table names (with schema) into variables. Note that this doesn't trap for the errors that will occur if no columns are shared by the two tables, or if the same column names exist but are of incompatible data types. That error handling is easy to add if you want to make the solution more robust.
DECLARE
@sql NVARCHAR(MAX),
@cols NVARCHAR(MAX),
@t1 NVARCHAR(511),
@t2 NVARCHAR(511);
SELECT
@sql = N'',
@cols = N'',
@t1 = N'dbo.TableA',
@t2 = N'dbo.TableB';
SELECT @cols = @cols + ',' + a.name
FROM sys.columns AS a
INNER JOIN sys.columns AS b
ON a.name = b.name
WHERE a.[object_id] = OBJECT_ID(@t1)
AND b.[object_id] = OBJECT_ID(@t2);
SET @cols = STUFF(@cols, 1, 1, N'');
-- those where at least one column doesn't match:
SELECT @sql = N'SELECT ' + @cols + '
FROM ' + @t1 + ' EXCEPT
SELECT ' + @cols + ' FROM ' + @t2 + ';';
EXEC sp_executesql @sql;
-- those where all columns DO match:
SELECT @sql = N'SELECT ' + @cols + '
FROM ' + @t1 + ' INTERSECT
SELECT ' + @cols + ' FROM ' + @t2 + ';';
EXEC sp_executesql @sql;
Don't forget to clean up:
DROP TABLE dbo.TableA, dbo.TableB;
You can wrap your queries as subqueries and then reselect the columns in any order you want.
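For example, a sketch using derived tables to fix the column order before comparing (q1 and q2 are placeholders for your two original queries, and A, B, D are the common columns):

```sql
-- Reselect the common columns in one fixed order from each side,
-- then compare with EXCEPT as usual.
SELECT A, B, D FROM ( /* your query 1 */ SELECT * FROM q1 ) AS x
EXCEPT
SELECT A, B, D FROM ( /* your query 2 */ SELECT * FROM q2 ) AS y;
```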
You can do it in only one step:
SELECT *
FROM (
--compare query a vs query b
SELECT ad.id_addetto, 'not in b' AS y
FROM addetti ad
WHERE ad.id_addetto < 125 -- query a
EXCEPT
SELECT ad.id_addetto, 'not in b' AS y
FROM addetti ad
WHERE ad.id_addetto < 166 -- query b
UNION
--compare query b vs query a
SELECT ad.id_addetto, 'not in a' AS y
FROM addetti ad
WHERE ad.id_addetto < 166 -- query b
EXCEPT
SELECT ad.id_addetto, 'not in a' AS y
FROM addetti ad
WHERE ad.id_addetto < 125 -- query a
) xx
I'm implementing paging for an unbounded GridView and I'm having problems with the syntax of this query:
@currTable varchar(20),
@startRowIndex int,
@maximumRows int,
@totalRows int OUTPUT
AS
DECLARE @first_id int, @startRow int
IF @startRowIndex = 1
SET @startRowIndex = 1
ELSE
SET @startRowIndex = ((@startRowIndex - 1) * @maximumRows)+1
SET ROWCOUNT @startRowIndex
DECLARE @sql varchar(250);
SET @sql = 'SELECT ID, StringID_from_Master, GUID, short_Text, lang_String, date_Changed, prev_LangString, needsTranslation, displayRecord, brief_Descrip FROM ' + @currTable + ' ';
EXECUTE(@sql);
PRINT @first_id
SET ROWCOUNT @maximumRows
SELECT @sql = 'SELECT ' + CAST(@first_id as varchar(20)) + ' = ID FROM ' + QUOTENAME(@currTable) + ' ORDER BY ID ' ;
EXEC (@sql);
SET ROWCOUNT 0
-- Get the total rows
SET @sql = 'SELECT ' + CAST(@totalRows as varchar(20)) + ' = COUNT(ID) FROM ' + @currTable + ' ';
EXECUTE(@sql);
RETURN
The error is:
Conversion failed when converting the varchar value ''SELECT ' to data type int.
I also tried nvarchar and varchar:
= + CAST(@first_id as varchar(10)) +
If you're trying to implement paging, this is wrong in so many ways. First, you're using SET ROWCOUNT to limit to @startRowIndex, but then you're selecting ALL n rows (with no ORDER BY), then getting the first ID, then counting the total rows by selecting from the table? Might I suggest a better approach?
CREATE PROCEDURE dbo.PageSmarter
@Table NVARCHAR(128), -- table names should not be varchar(20)
@FirstRow INT,
@PageSize INT,
@TotalRows INT OUTPUT
AS
BEGIN
SET NOCOUNT ON; -- always, in every stored procedure
DECLARE
@first_id INT,
@startRow INT,
@sql NVARCHAR(MAX);
SET @sql = N'WITH x AS
(
SELECT
ID,
rn = ROW_NUMBER() OVER (ORDER BY ID)
FROM
' + @Table + '
)
SELECT rn, ID
INTO #x FROM x
WHERE rn BETWEEN ' + CONVERT(VARCHAR(12), @FirstRow)
+ ' AND (' + CONVERT(VARCHAR(12), @FirstRow)
+ ' + ' + CONVERT(VARCHAR(12), @PageSize) + ' - 1);
SELECT first_id = MIN(ID) FROM #x;
SELECT
ID, StringID_from_Master, GUID, short_Text, lang_String, date_Changed,
prev_LangString, needsTranslation, displayRecord, brief_Descrip
FROM ' + @Table + ' AS src
WHERE EXISTS
(
SELECT 1 FROM #x
WHERE ID = src.ID
);';
EXEC sp_executesql @sql;
SELECT @TotalRows = SUM(row_count)
FROM sys.dm_db_partition_stats
WHERE [object_id] = OBJECT_ID(@Table);
END
GO
DECLARE @tr INT;
EXEC dbo.PageSmarter 'dbo.tablename', 10, 2, @tr OUTPUT;
SELECT @tr;
I haven't tested all edge cases with this specific implementation. I will confess, there are much better ways to do this, but they usually aren't complicated with the additional requirement of dynamic table names. This suggests that there is something inherently wrong with your design if you can run the exact same queries against any number of tables and get similar results.
In any case, you can review some of the (quite lengthy) discussion about various approaches to paging over at SQL Server Central:
http://www.sqlservercentral.com/articles/T-SQL/66030/
There are 62 comments following up on the article:
http://www.sqlservercentral.com/Forums/Topic672980-329-1.aspx
I am guessing your @first_id field is an int. If so, then you need to CAST/CONVERT your @first_id value to a string/varchar:
CAST(@first_id as varchar(10))
or
Convert(varchar(10), @first_id)
MSDN documentation on CAST/Convert for SQL server
EDIT: After looking at your query again, I notice that you are setting @first_id = ID. This is incorrect syntax; the correct syntax would be:
SELECT @sql = 'SELECT ID AS ' + CAST(@first_id as varchar(10)) + ' FROM ' +
QUOTENAME(@currTable) + ' ORDER BY ID ' ;
EXEC (@sql);
It appears you're trying to create an alias for your column ID. The string you're building won't result in a valid SQL statement if it contains a number. It would come out to something like this:
SELECT 123 = ID FROM dbo.MyTable ORDER BY ID
Try this:
SELECT ID AS '123' FROM dbo.MyTable ORDER BY ID
To achieve that:
SELECT @sql = 'SELECT ID AS ''' + CAST(@first_id as varchar(10)) +
''' FROM ' + QUOTENAME(@currTable) +
' ORDER BY ID ' ;
I would do it this way:
create table #e (a int)
SET @sql = 'insert #e SELECT COUNT(ID) FROM ' + @currTable + ' ';
exec(@sql)
select @totalRows = a from #e
drop table #e
Is there a way to select the row from a temp table (the table has only one row anyway) into another table that has some columns with different names? For example:
TempTable
FirstName LastName Column1 Column2
------------ ------------ ----------- -----------
Joe Smith OKC TX
OtherTable
FirstName LastName Region1 Region2 Region3
------------ ------------ ----------- ----------- ----------
NULL NULL NULL NULL NULL
I need to copy the data, in the same order as the columns, from TempTable into OtherTable. TempTable will not always be the same... as in sometimes it will have 3 columns, sometimes just 2... etc. If it does not have the same number of columns as OtherTable, then the remaining "Region" columns should stay NULL.
The end result should be:
OtherTable
FirstName LastName Region1 Region2 Region3
------------ ------------ ----------- ----------- ----------
Joe Smith OKC TX NULL
PLUS the column names in TEMPTable will NEVER be the same...as in one time it will be "Column1"...the next time it could be "XXXXX1". That's why I just want to copy data only...the data will always be in the correct order...
LOL...does this even make sense? This is for SQL Server 2005
EDIT ........ Dynamic SQL Generation added
This code will generate an INSERT statement to insert from one #temp table into another. You can tweak it to suit your purpose if you are going from a #temp table to regular tables.
SET NOCOUNT ON
DROP Table #TempTable1
DROP Table #TempTable2
GO
DROP Function GenerateInserts
GO
Create Function GenerateInserts
(
@SourceTable VarChar (100),
@DestinationTable VarChar (100)
)
Returns VarChar (MAX)
AS
BEGIN
DECLARE @SelectString VarChar (MAX)
DECLARE @InsertString VarChar (MAX)
DECLARE @SQLString VarChar (MAX)
DECLARE @SourceColumnCount INTEGER
DECLARE @DestinationColumnCount INTEGER
DECLARE @ColumnCount INTEGER
DECLARE @Counter INTEGER
SELECT @SourceColumnCount = COUNT (*)
FROM tempdb..syscolumns
WHERE id=object_id(@SourceTable)
SELECT @DestinationColumnCount = COUNT (*)
FROM tempdb..syscolumns
WHERE id=object_id(@DestinationTable)
SET @ColumnCount = @SourceColumnCount
IF @DestinationColumnCount < @ColumnCount
SET @ColumnCount = @DestinationColumnCount
SET @Counter = 0
SET @SelectString = ' INSERT INTO ' + @DestinationTable + ' '
SET @InsertString = ' INSERT INTO ' + @DestinationTable + ' '
SET @SelectString = ''
SET @InsertString = ''
WHILE @Counter <= @ColumnCount
BEGIN
SELECT @SelectString = @SelectString + ', ' + Name
FROM TempDB..SysColumns
WHERE Id = Object_Id (@SourceTable)
AND ColOrder = @Counter
SELECT @InsertString = @InsertString + ', ' + Name
FROM TempDB..SysColumns
WHERE Id = Object_Id (@DestinationTable)
AND ColOrder = @Counter
SET @Counter = @Counter + 1
END
SET @InsertString = 'INSERT INTO ' + @DestinationTable + ' (' + STUFF ( @InsertString, 1, 2, '') + ') '
SET @SelectString = 'SELECT ' + STUFF ( @SelectString, 1, 2, '') + ' FROM ' + @SourceTable
SET @SQLString = @InsertString + '
' + @SelectString
RETURN @SQLString
END
GO
Create Table #TempTable1
(
Col1 VarChar (10),
Col2 VarChar (10),
Col3 VarChar (10),
Col4 VarChar (10),
Col5 VarChar (10)
)
Create Table #TempTable2
(
MyCol1 VarChar (10),
MyCol2 VarChar (10),
MyCol3 VarChar (10),
MyCol4 VarChar (10),
MyCol5 VarChar (10),
MyCol6 VarChar (10)
)
SELECT dbo.GenerateInserts ('tempdb..#TempTable1', 'tempdb..#TempTable2')
OLD ANSWER
Yes you can do this but you have to write different statements for each type of INSERT. You do have to specify column names in both places - the INSERT INTO and the SELECT
If you have the same number of columns in your Source and Destination tables, do this
INSERT INTO Table1 (Column1, Column2, Column3)
SELECT MyColumn01, MyColumn02, MyColumn03
FROM MyTable
What this will do is map as follows:
MyTable.MyColumn01 -> Table1.Column1
MyTable.MyColumn02 -> Table1.Column2
MyTable.MyColumn03 -> Table1.Column3
If the Source has fewer columns, you can use a NULL value in place of the column name:
INSERT INTO Table1 (Column1, Column2, Column3)
SELECT MyColumn01, MyColumn02, NULL AS MyColumn03
FROM MyTable
OR you can just use two column names
INSERT INTO Table1 (Column1, Column2)
SELECT MyColumn01, MyColumn02
FROM MyTable
If the destination table has fewer columns than the source, then you have to ignore columns from the source:
INSERT INTO Table1 (Column1, Column2, Column3)
SELECT MyColumn01, MyColumn02, NULL AS MyColumn03 /* MyColumn04, MyColumn05 are ignored */
FROM MyTable
You could do something with dynamic SQL.
I recommend reading "The Curse and Blessings of Dynamic SQL -
Dealing with Dynamic Table and Column Names" if this is new to you.
Example follows. You could improve this to be sure that source and destination columns are of compatible types or to exclude identity or computed columns for example but it should give you an idea.
DECLARE @SourceTable sysname
DECLARE @DestTable sysname
SET @SourceTable = '[dbo].[#TempTable]'
SET @DestTable = '[dbo].[OtherTable]'
DECLARE @DynSQL1 NVARCHAR(MAX)
DECLARE @DynSQL2 NVARCHAR(MAX)
SELECT
@DynSQL1 = ISNULL(@DynSQL1 + ',','') + QUOTENAME(sc1.name),
@DynSQL2 = ISNULL(@DynSQL2 + ',','') + QUOTENAME(sc2.name)
FROM tempdb..syscolumns sc1
JOIN syscolumns sc2
ON sc1.colorder = sc2.colorder /*Match up the columns by column order*/
WHERE sc1.id=OBJECT_ID('tempdb.' + @SourceTable) AND sc2.id=OBJECT_ID(@DestTable)
IF @@ROWCOUNT = 0
RETURN
SET @DynSQL1 = 'INSERT INTO ' + @DestTable + ' (' + @DynSQL2 + ')
SELECT ' + @DynSQL1 + ' FROM ' + @SourceTable + ';'
EXEC sp_executesql @DynSQL1
You can specify the columns of the target table:
INSERT INTO OtherTable (FirstName, LastName, Region1, Region2)
SELECT FirstName, LastName, Column1, Column2 FROM TempTable;
In this example, OtherTable.Region3 will end up NULL (or if it has a DEFAULT value, it'll use that).
The count of columns in the INSERT must match the count of columns in the SELECT. So you must know how many columns in TempTable and make the list of columns for the insert match.
But there's no way to do it with implicit columns, if you're just throwing SELECT * around.
Re your comment: You can use the INFORMATION_SCHEMA to query the count and the names of the columns in the table.
SELECT COLUMN_NAME
FROM tempdb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_CATALOG = 'tempdb' AND TABLE_SCHEMA = 'MySchema'
AND TABLE_NAME LIKE 'MyTempTable%';
Then you would write application code to create your SQL INSERT...SELECT statement based on the results from this information_schema query.
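That generation step can also be sketched in T-SQL itself. This is only a sketch: #MyTempTable is a placeholder temp-table name, the target column list is written out by hand, and it assumes the temp table's column count does not exceed the listed target columns:

```sql
DECLARE @srcCols nvarchar(MAX), @sql nvarchar(MAX);

-- Collect the temp table's columns in ordinal (positional) order
SELECT @srcCols = STUFF((
        SELECT N', ' + QUOTENAME(COLUMN_NAME)
        FROM tempdb.INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME LIKE N'#MyTempTable%'
        ORDER BY ORDINAL_POSITION
        FOR XML PATH(N'')), 1, 2, N'');

-- Map the temp table's columns positionally onto the target's columns;
-- a fuller version would trim the target list to match @srcCols' count
SET @sql = N'INSERT INTO OtherTable (FirstName, LastName, Region1, Region2) ' +
           N'SELECT ' + @srcCols + N' FROM #MyTempTable;';
EXEC sp_executesql @sql;
```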
Note: Querying temp tables through the information_schema requires special handling.