I'm trying to rewrite some of our stored procedures to use set-based logic and to reduce or eliminate cursors, due to performance issues. However, I can't come up with a more efficient way to do the below without resorting to a cursor.
Presently, I'm selecting an initial result set into a temporary table, which looks something like:
INSERT INTO #tmptable
SELECT stuff.id
     , stuff.datapoint
     , stuff.[database]
     , '' AS missingdata
FROM stuff
Which usually returns anywhere from 250-500 rows of information. The '' is a datapoint that lives in any one of several hundred other databases - the name of which is specified by stuff.database. Despite there being hundreds of possible options, there's usually only three or four unique databases in each result set. As a result, what I'm currently doing is:
DECLARE @dbname VARCHAR(255)
DECLARE @SQL NVARCHAR(MAX)

DECLARE a_cursor CURSOR LOCAL
FOR
SELECT DISTINCT [database]
FROM #tmptable

OPEN a_cursor
FETCH NEXT FROM a_cursor INTO @dbname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = 'UPDATE #tmptable
    SET missingdata = bin.dataaggregate
    FROM (SELECT pd.id id
               , STUFF((SELECT '','' + pdd.bin
                        FROM server.[' + @dbname + '].dbo.proddetails pdd
                        WHERE pdd.id = pd.id
                        GROUP BY pdd.id, pdd.bin
                        FOR XML PATH(''''), TYPE).value(''.'', ''VARCHAR(max)''), 1, 1, '''') dataaggregate
          FROM server.[' + @dbname + '].dbo.proddetails pd) bin
    INNER JOIN #tmptable tir ON tir.id = bin.id'

    EXEC sp_executesql @SQL

    FETCH NEXT FROM a_cursor INTO @dbname
END

CLOSE a_cursor
DEALLOCATE a_cursor
Since there are usually only a handful of databases needed in each result set, the cursor has to loop only a handful of times and the performance hit isn't awful. Still, I don't like using them and feel like there has to be a more efficient way to do this. Any ideas?
Use this:
INSERT INTO #tmptable
SELECT stuff.id
,stuff.datapoint
,stuff.[database]
,'' AS missingdata
,row_number() over (order by stuff.id) as rn
FROM STUFF
declare @i int = 1;
declare @max int = (select max(rn) from #tmptable);
declare @dbname sysname;

while @i <= @max
begin
    select @dbname = [database] from #tmptable where rn = @i;
    -- exec your dynamic sql here
    set @i += 1;
end
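For completeness: the loop can be avoided entirely by concatenating one UPDATE per distinct database into a single batch and executing it once. This is only a sketch built on the question's schema (`server.[db].dbo.proddetails`), and it relies on the common variable-concatenation trick, whose row-by-row behavior isn't formally guaranteed by SQL Server:

```sql
DECLARE @SQL NVARCHAR(MAX) = N'';

-- Append one UPDATE statement per distinct database to a single batch.
SELECT @SQL = @SQL + N'
UPDATE tir
SET missingdata = bin.dataaggregate
FROM #tmptable tir
INNER JOIN (SELECT pd.id,
                   STUFF((SELECT '','' + pdd.bin
                          FROM server.' + QUOTENAME(d.dbname) + '.dbo.proddetails pdd
                          WHERE pdd.id = pd.id
                          GROUP BY pdd.id, pdd.bin
                          FOR XML PATH(''''), TYPE).value(''.'', ''VARCHAR(MAX)''), 1, 1, '''') AS dataaggregate
            FROM server.' + QUOTENAME(d.dbname) + '.dbo.proddetails pd) bin
      ON tir.id = bin.id
WHERE tir.[database] = ' + QUOTENAME(d.dbname, '''') + ';'
FROM (SELECT DISTINCT [database] AS dbname FROM #tmptable) AS d;

-- One round trip instead of one execution per database.
EXEC sp_executesql @SQL;
```

With only three or four distinct databases per result set the gain is modest, but it removes the cursor entirely.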
I tried this, but unfortunately it returns the results as many separate result sets, one per database. I want everything returned in one single table, maybe using UNION, but I don't know how to do it.
This is my code:
EXEC sp_msforeachdb 'select ''?'' AS [DataBase], s.name, t.name AS [Tables], max(si.rows) as [Rows Line]
from [?].sys.tables t
inner join [?].sys.schemas s on t.schema_id = s.schema_id
inner join [?].sys.partitions si on t.object_id = si.object_id
where t.name like ''%ATTACH''
group by s.name, t.name'
You cannot do it in a single query.
You could query the sys.databases table to get a temporary table of all your databases, and then run a dynamic query on each database to store the results of the query in your question all in another temporary table.
Then at the end, you just select all rows from the last temporary table.
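A sketch of that approach follows. The query body mirrors the sp_msforeachdb query above; skipping offline databases is my own assumption:

```sql
-- Accumulate the per-database results into one temp table, then select once.
CREATE TABLE #results (DatabaseName sysname, SchemaName sysname, TableName sysname, RowsLine BIGINT);

DECLARE @db sysname, @sql NVARCHAR(MAX);

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'INSERT INTO #results
                 SELECT ' + QUOTENAME(@db, '''') + N', s.name, t.name, MAX(p.rows)
                 FROM ' + QUOTENAME(@db) + N'.sys.tables t
                 INNER JOIN ' + QUOTENAME(@db) + N'.sys.schemas s ON t.schema_id = s.schema_id
                 INNER JOIN ' + QUOTENAME(@db) + N'.sys.partitions p ON t.object_id = p.object_id
                 WHERE t.name LIKE ''%ATTACH''
                 GROUP BY s.name, t.name;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM db_cursor INTO @db;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT * FROM #results;  -- everything in a single result set
```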
I finally found a solution: I used a stored procedure to get the result I was looking for, so I decided to post the answer here in case it helps someone else.
DECLARE @banco_nome nvarchar(MAX), @tabela_nome nvarchar(MAX)
DECLARE @banco_cursor CURSOR
DECLARE @sqlstatement nvarchar(MAX)
DECLARE @count_sql nvarchar(MAX)
DECLARE @total int

DECLARE @RegistrosFotograficos TABLE
(
    DatabaseName nvarchar(max),
    TableName nvarchar(max),
    Total int
)

SET @banco_cursor = CURSOR FORWARD_ONLY FOR
    SELECT name FROM sys.databases

OPEN @banco_cursor
FETCH NEXT FROM @banco_cursor INTO @banco_nome

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sqlstatement = 'DECLARE tabela_cursor CURSOR FORWARD_ONLY FOR SELECT TABLE_NAME FROM ' + @banco_nome + '.INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''BASE TABLE'' AND TABLE_NAME LIKE ''%ATTACH'' ORDER BY TABLE_NAME'
    EXEC sp_executesql @sqlstatement

    OPEN tabela_cursor
    FETCH NEXT FROM tabela_cursor INTO @tabela_nome
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @count_sql = 'USE ' + @banco_nome + '; SELECT @total = COUNT(1) FROM ' + @tabela_nome;
        EXECUTE sp_executesql @count_sql, N'@total int OUTPUT', @total = @total OUTPUT

        INSERT INTO @RegistrosFotograficos (DatabaseName, TableName, Total)
        VALUES (@banco_nome, @tabela_nome, @total);

        FETCH NEXT FROM tabela_cursor INTO @tabela_nome
    END
    CLOSE tabela_cursor;
    DEALLOCATE tabela_cursor;

    FETCH NEXT FROM @banco_cursor INTO @banco_nome
END
CLOSE @banco_cursor;
DEALLOCATE @banco_cursor;

SELECT * FROM @RegistrosFotograficos
This will list all the tables in a given database and the number of rows in each. Notice the results are in a table named #results:
set nocount on

declare @curtable sysname
declare @prevtable sysname
declare @tsql varchar(500)

if object_id('tempdb..#curtables', 'U') is not null
    drop table #curtables

select name into #curtables
from sys.objects
where type = 'U'
order by 1

if object_id('tempdb..#results', 'U') is not null
    drop table #results

create table #results (name sysname, numrows int)

select top 1 @curtable = name from #curtables order by name

while (1 = 1)
begin
    set @tsql = 'select ''' + quotename(@curtable) + ''', count(*) numrows from ' + quotename(@curtable)
    print @tsql
    insert into #results
    exec (@tsql)

    set @prevtable = @curtable
    select top 1 @curtable = name
    from #curtables
    where name > @prevtable
    order by name

    if @curtable = @prevtable
        break
end
I didn't find an appropriate solution to my problem, so I'm asking here in case someone can help.
I have a stored procedure named spImportWord which downloads Word files from a file location on another server to a local folder and saves values from each Word file to a table in the database. For each file, I call a console application that saves the file to the local folder. For downloading I use a WHILE loop; before that I used a cursor, but as far as I know you should stay away from cursors.
Since I changed from the cursor to the WHILE loop, I can't even ALTER my stored procedure; it takes an eternity to finish. Is there any way to improve it? (Note: with the cursor, the SP could be altered, but execution reported that a subquery returned more than one value.)
My code so far:
ALTER PROCEDURE spImportWord
    @fileId INT
AS
DECLARE @cmd VARCHAR(200)
DECLARE @filename VARCHAR(200)
DECLARE @foldername VARCHAR(200)
DECLARE @year VARCHAR(200)
DECLARE @remotePath VARCHAR(MAX)
DECLARE @localPath VARCHAR(MAX)
DECLARE @organisationId INT
DECLARE @folderId INT
DECLARE @import TABLE(ImportId INT)
DECLARE @counter INT

SELECT @filename = Files.FileName
     , @foldername = Files.FolderName
FROM Files WHERE Files.FileId = @fileId

CREATE TABLE #organisation ([OrganisationId] INT, [FolderId] INT, [Year] VARCHAR(4) NULL)

INSERT INTO #organisation
SELECT [tabInstitution].[OrganisationId]
     , [tabInstitution].[FolderId]
     , [tabInstitution].[UploadDate] AS [Year]
FROM [tabInstitution]

SET @organisationId = 0
SET @counter = 0

WHILE (@counter <= (SELECT COUNT(*) FROM #organisation))
BEGIN
    SELECT @organisationId = MIN([#organisation].[OrganisationId])
         , @folderId = [#organisation].[FolderId]
         , @year = [#organisation].[Year]
    FROM [#organisation]
    WHERE [#organisation].[OrganisationId] > @organisationId
    GROUP BY [FolderId], [Year]

    SET @remotePath = '\\somepath.path.com\somefolder\' + CAST(@organisationId AS VARCHAR(10)) + '\' + CAST(@folderId AS VARCHAR(10)) + '\' + @filename + '.docx'
    SET @localPath = 'C:\Files\' + @year + '\' + @foldername + '\' + CAST(@organisationId AS VARCHAR(10)) + '.docx'
    SET @cmd = 'C:\App\ImportWordFiles.exe --SourcePath ' + @remotePath + ' --TargetPath ' + @localPath

    EXEC xp_cmdshell @cmd, no_output

    -- Log into database
    INSERT INTO WordImport
    OUTPUT Inserted.ImportId INTO @import
    VALUES(GETDATE())

    INSERT INTO WordImportItem
    VALUES((SELECT ImportId FROM @import), @organisationId, @folderId, @localPath)

    SET @counter = @counter + 1
END
Instead of the while loop I had before:
DECLARE MY_CURSOR CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT [#organisation].[OrganisationId]
     , [#organisation].[FolderId]
     , [#organisation].[Year]
FROM [#organisation]

OPEN MY_CURSOR
FETCH NEXT FROM MY_CURSOR INTO @organisationId, @folderId, @year
WHILE @@FETCH_STATUS = 0
BEGIN
    [...]
    FETCH NEXT FROM MY_CURSOR INTO @organisationId, @folderId, @year
END
I hope I explained my question understandably.
You should really understand the reasons why cursors are considered bad. In your case, there's no reason not to use one; in fact, your WHILE solution is much uglier anyway.
As for your "subquery problem", (SELECT ImportId FROM @import) is to blame: you keep adding rows to @import, and the subquery returns all the ImportIds, not just the latest one. That fails as soon as @import has more than a single row. I assume you're just trying to get the last inserted ID, and there's no point in maintaining a whole table of import IDs; just delete the contents of @import after you read it.
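A minimal sketch of that fix, keeping the question's table names (WordImport, WordImportItem) and assuming @import is the table variable from the procedure:

```sql
DECLARE @importId INT;

INSERT INTO WordImport
OUTPUT Inserted.ImportId INTO @import
VALUES (GETDATE());

SELECT @importId = ImportId FROM @import;  -- exactly one row at this point
DELETE FROM @import;                       -- so the next iteration starts clean

INSERT INTO WordImportItem
VALUES (@importId, @organisationId, @folderId, @localPath);
```

Reading the ID into a scalar variable also makes the second INSERT's intent explicit instead of hiding it in a subquery.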
My users are trying to find records in my SQL db by providing simple text strings like this:
SCRAP 000000152 TMB-0000000025
These values can be in any order and any may be excluded. For example, they may enter:
SCRAP
TMB-0000000025 SCRAP
000000152 SCRAP
SCRAP 000000152
TMB-0000000025 000000152
All should work and include the same record as the original search, but they may also contain additional records because fewer columns are used in the match.
Here is a sample table to use for the results:
DECLARE @search1 varchar(50) = 'SCRAP 000000152 TMB-0000000025'
DECLARE @search2 varchar(50) = 'SCRAP'
DECLARE @search3 varchar(50) = 'TMB-0000000025 SCRAP'
DECLARE @search4 varchar(50) = '000000152 SCRAP'
DECLARE @search5 varchar(50) = 'SCRAP 000000152'
DECLARE @search6 varchar(50) = 'TMB-0000000025 000000152'

DECLARE @table TABLE (WC varchar(20), WO varchar(20), PN varchar(20))

INSERT INTO @table
SELECT 'SCRAP','000000152','TMB-0000000025' UNION
SELECT 'SCRAP','000012312','121-0000121515' UNION
SELECT 'SM01','000000152','121-0000155' UNION
SELECT 'TH01','000123151','TMB-0000000025'

SELECT * FROM @table
One additional wrinkle, the user does not have to enter 000000152, they can enter 152 and it should find the same results.
I can use patindex, but it requires the users to enter the search terms in a specific order, or for me to have an exponentially larger string to compare as I try to put them in all possible arrangements.
What is the best way to do this in SQL? Or, is this outside the capabilities of SQL? It is quite possible that the table will have well over 10,000 records (for some instances even over 100,000), so the query has to be efficient.
Agree with @MitchWheat (as usual). This database is not designed for queries like that, and no kind of "basic query" will help. The best way would be to build a list of strings appearing in any column of the database, mapped back to the source column and row, and to search that lookup table for your strings. This is pretty much what Lucene and other full-text search libraries do for you. SQL Server has a native full-text search implementation, but if the pros say to go with a third-party implementation, it's worth a look.
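Short of full-text search, one possible set-based approach is to split the search string into tokens and require every token to match at least one column, with a right-anchored wildcard on WO so that '152' still finds '000000152'. This sketch assumes SQL Server 2016+ for STRING_SPLIT and reuses the sample data from the question:

```sql
DECLARE @t TABLE (WC varchar(20), WO varchar(20), PN varchar(20));
INSERT INTO @t VALUES
    ('SCRAP','000000152','TMB-0000000025'),
    ('SCRAP','000012312','121-0000121515'),
    ('SM01','000000152','121-0000155'),
    ('TH01','000123151','TMB-0000000025');

DECLARE @search varchar(50) = 'SCRAP 152 TMB-0000000025';

-- A row qualifies when no token fails to match any of its columns.
SELECT t.*
FROM @t t
WHERE NOT EXISTS (
    SELECT 1
    FROM STRING_SPLIT(@search, ' ') s
    WHERE s.value <> ''
      AND t.WC <> s.value
      AND t.PN <> s.value
      AND t.WO NOT LIKE '%' + s.value  -- right-anchored so 152 matches 000000152
);
```

Token order no longer matters, and omitted tokens simply impose no constraint. On 100,000+ rows the leading-wildcard LIKE cannot use an index, which is where the full-text suggestion above comes back in.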
You can try this SP:
USE master
GO

CREATE PROCEDURE sp_FindStringInTable @stringToFind VARCHAR(100), @schema sysname, @table sysname
AS

DECLARE @sqlCommand VARCHAR(8000)
DECLARE @where VARCHAR(8000)
DECLARE @columnName sysname
DECLARE @cursor VARCHAR(8000)

BEGIN TRY
    SET @sqlCommand = 'SELECT * FROM [' + @schema + '].[' + @table + '] WHERE'
    SET @where = ''

    SET @cursor = 'DECLARE col_cursor CURSOR FOR SELECT COLUMN_NAME
        FROM ' + DB_NAME() + '.INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = ''' + @schema + '''
        AND TABLE_NAME = ''' + @table + '''
        AND DATA_TYPE IN (''char'',''nchar'',''ntext'',''nvarchar'',''text'',''varchar'')'

    EXEC (@cursor)

    OPEN col_cursor
    FETCH NEXT FROM col_cursor INTO @columnName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @where <> ''
            SET @where = @where + ' OR'
        SET @where = @where + ' [' + @columnName + '] LIKE ''' + @stringToFind + ''''
        FETCH NEXT FROM col_cursor INTO @columnName
    END
    CLOSE col_cursor
    DEALLOCATE col_cursor

    SET @sqlCommand = @sqlCommand + @where
    --PRINT @sqlCommand
    EXEC (@sqlCommand)
END TRY
BEGIN CATCH
    PRINT 'There was an error. Check to make sure the object exists.'
    IF CURSOR_STATUS('global', 'col_cursor') <> -3
    BEGIN
        CLOSE col_cursor
        DEALLOCATE col_cursor
    END
END CATCH
This will have results as follow:
USE AdventureWorks
GO
EXEC sp_FindStringInTable 'Irv%', 'Person', 'Address'
USE AdventureWorks
GO
EXEC sp_FindStringInTable '%land%', 'Person', 'Address'
That's all there is to it. Once this has been created, you can use it against any table in any database on your server.
I am using SQL Server 2008 R2 on dev, and SQL Azure for test and live.
I wish to write a little procedure to reset the identity seeds since SQL Azure does not support DBCC.
I have some workaround code which works, but I do not want to write it out for each table, so was trying to write a routine that iterates through the DB tables.
Tables:
SELECT * FROM information_schema.tables
Code:
delete from TABLE_NAME where Id>150000
GO
SET IDENTITY_INSERT [TABLE_NAME] ON
GO
INSERT INTO [TABLE_NAME](Id) VALUES(150000)
GO
delete from TABLE_NAME where Id=150000
GO
SET IDENTITY_INSERT [TABLE_NAME] OFF
GO
I guess I need to wrap this in a loop. Sorry my T-SQL is not that strong, hence the request for help.
Also it would be helpful to omit all tables whose TABLE_NAME starts with aspnet_ and to use only TABLE_TYPE = 'BASE TABLE'.
Any help hugely appreciated.
Unless somebody else knows a trick that I don't, you're probably stuck using dynamic SQL and iterating through a list of table names using either a cursor or a temporary table. The cursor approach would look something like this:
declare @TableName nvarchar(257);
declare @sql nvarchar(max);

declare TableCursor cursor read_only for
    select
        TABLE_SCHEMA + '.' + TABLE_NAME
    from
        INFORMATION_SCHEMA.TABLES
    where
        TABLE_NAME not like 'aspnet\_%' escape '\' and
        TABLE_TYPE = 'BASE TABLE';

open TableCursor;
fetch next from TableCursor into @TableName;
while @@fetch_status = 0
begin
    set @sql = 'select top 1 * from ' + @TableName;
    exec sp_executesql @sql;
    fetch next from TableCursor into @TableName;
end
close TableCursor;
deallocate TableCursor;
You can read more about cursors here. Alternatively, you could do it with an in-memory table like this:
declare @Tables table (RowId int identity(1, 1), TableName nvarchar(257));
declare @TableName nvarchar(257);
declare @Index int;
declare @TableCount int;
declare @sql nvarchar(max);

insert into @Tables (TableName)
select
    TABLE_SCHEMA + '.' + TABLE_NAME
from
    INFORMATION_SCHEMA.TABLES
where
    TABLE_NAME not like 'aspnet\_%' escape '\' and
    TABLE_TYPE = 'BASE TABLE';

set @TableCount = @@rowcount;
set @Index = 1;

while @Index <= @TableCount
begin
    select @TableName = TableName from @Tables where RowId = @Index;
    set @sql = 'select top 1 * from ' + @TableName;
    exec sp_executesql @sql;
    set @Index = @Index + 1;
end
In the interest of brevity, my examples use a much simpler SQL statement than yours—I'm just selecting one record from each table—but this ought to be enough to illustrate how you can get this done.
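To tie this back to the identity-reseed requirement: plugging the reseed statements from the question into the cursor version might look like the sketch below. Note the bare (Id) insert only works if every other column in the table is nullable or has a default, so treat this as a starting point rather than a finished routine:

```sql
declare @TableName nvarchar(257);
declare @sql nvarchar(max);

declare TableCursor cursor read_only for
    select TABLE_SCHEMA + '.' + TABLE_NAME
    from INFORMATION_SCHEMA.TABLES
    where TABLE_NAME not like 'aspnet\_%' escape '\'
      and TABLE_TYPE = 'BASE TABLE';

open TableCursor;
fetch next from TableCursor into @TableName;
while @@fetch_status = 0
begin
    -- The 150000 boundary and the Id column come from the question's workaround.
    set @sql =
        'delete from ' + @TableName + ' where Id > 150000;' +
        'set identity_insert ' + @TableName + ' on;' +
        'insert into ' + @TableName + ' (Id) values (150000);' +
        'delete from ' + @TableName + ' where Id = 150000;' +
        'set identity_insert ' + @TableName + ' off;';
    exec sp_executesql @sql;
    fetch next from TableCursor into @TableName;
end
close TableCursor;
deallocate TableCursor;
```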
source: MS SQL Server 2005
destination: MS SQL Server 2012
I need to move a development database from one server to another, but unfortunately the destination database already has some tables and stored procedures, and worse, those tables may have columns whose names, types, or descriptions are inconsistent with the source.
What should I do to achieve something like the following, as easily or as smartly as possible:
append new columns to tables that already exist (and also copy each column's default value and description from the source)
change column types to be consistent with the source
avoid overwriting the contents of SPs that already exist in the destination (I will review them manually later)
So far I can gather some statistics with the following scripts:
select name from sys.tables order by name (export the output from each server to left.txt and right.txt and compare them)
select * from sys.all_objects where type = 'p' and is_ms_shipped = 0 order by name (also compare between servers)
get all column names on one line per table (and compare them),
e.g.
-- something like SELECT * FROM INFORMATION_SCHEMA.COLUMNS, but producing only ONE line per table
declare @temp_table_list table(
    id int identity not null,
    name varchar(100) not null
)
insert @temp_table_list(name) select name from sys.tables

declare @id int
declare @name varchar(100)
declare @result nvarchar(max)
set @result = N''

while 1 = 1
begin
    select @id = min(id)
    from @temp_table_list
    where id > isnull(@id, 0)

    if @id is null break

    select @name = name
    from @temp_table_list
    where id = @id

    declare @tbName nvarchar(max)
    declare @sql nvarchar(max)
    declare @col nvarchar(max)
    set @tbName = @name

    declare T_cursor cursor for
        select c.name
        from sys.columns c
        inner join sys.tables t on c.object_id = t.object_id
        where t.name = @tbName

    open T_cursor
    fetch next from T_cursor into @col
    set @sql = N'select '
    while @@FETCH_STATUS = 0
    begin
        set @sql = @sql + @col + ','
        fetch next from T_cursor into @col
    end
    set @sql = substring(@sql, 0, len(@sql)) + ' from ' + @tbName
    close T_cursor
    deallocate T_cursor

    set @result = @result + @sql + char(13) + char(10)
end

select @result
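For the column comparison specifically, a set-based diff may be simpler than building one line per table. A sketch, assuming both databases are reachable from one instance (SourceDb and DestDb are placeholder names; a 2005-to-2012 comparison would need a linked server or exported snapshots):

```sql
-- Columns present in SourceDb but missing, or typed differently, in DestDb.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM SourceDb.INFORMATION_SCHEMA.COLUMNS
EXCEPT
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM DestDb.INFORMATION_SCHEMA.COLUMNS;
```

Run it in both directions to see what each side is missing; EXCEPT also catches type and length mismatches, not just absent columns.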
This isn't free software, but it does have a trial period:
http://www.red-gate.com/products/sql-development/sql-compare/
As it sounds like your requirement is a one-off sync, that should do what you want.