Finding Updated Columns from within the Stored Procedures - sql-server

I am able to find the updated columns from within the table's trigger. However, the trigger is kind of big and I want to reduce its size as much as possible. So now I want to create a generic stored procedure and find the updated columns from within that stored procedure.
Here is the SQL query that finds the updated columns:
SELECT @idTable = T.id
FROM sysobjects P JOIN sysobjects T ON P.parent_obj = T.id
WHERE P.id = @@PROCID
-- Get COLUMNS_UPDATED if update
DECLARE @Columns_Updated VARCHAR(50)
SELECT @Columns_Updated = ISNULL(@Columns_Updated + ', ', '') + name
FROM syscolumns
WHERE id = @idTable
AND CONVERT(VARBINARY, REVERSE(COLUMNS_UPDATED())) & POWER(CONVERT(BIGINT, 2), colorder - 1) > 0
Could someone help me out as to what I am supposed to do to achieve my goal?

If you want to create an sp that will execute whenever you want and see what was updated database-wide since the last run of this sp, then I don't think it can be done: COLUMNS_UPDATED() is only valid inside the body of a DML trigger, so the query above cannot run in a standalone procedure. I would advise either using the built-in SQL Server 2008 audit functionality or using triggers, as Yuriy Galanter already pointed out.
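A middle ground that does work is to keep each trigger thin and pass the trigger-only values into a shared procedure as ordinary parameters. A minimal sketch (untested); the procedure name and parameter names are hypothetical:

```sql
-- Hypothetical shared procedure: each trigger captures COLUMNS_UPDATED()
-- and its own table's object id, then passes both in as parameters.
CREATE PROCEDURE dbo.GetUpdatedColumns
    @TableId INT,                        -- object id of the trigger's table
    @ColsUpdated VARBINARY(64),          -- COLUMNS_UPDATED() captured in the trigger
    @Columns_Updated VARCHAR(500) OUTPUT -- comma-separated list of column names
AS
BEGIN
    -- Same bitmask logic as in the question, but over the passed-in value
    SELECT @Columns_Updated = ISNULL(@Columns_Updated + ', ', '') + name
    FROM syscolumns
    WHERE id = @TableId
      AND CONVERT(VARBINARY, REVERSE(@ColsUpdated))
          & POWER(CONVERT(BIGINT, 2), colorder - 1) > 0;
END
```

Inside each trigger, capture `COLUMNS_UPDATED()` into a `VARBINARY(64)` variable, look up the table's object id (as the question's `sysobjects` query does), and call the procedure with an OUTPUT variable to receive the list.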


Executing dynamic SQL with return value as a column value for each rows

I have a rather simple query that I started to modify in order to remove a temp table, as we have concurrency issues across many different systems and clients.
Right now the simple solution was to break the query up into multiple separate queries to replicate what SQL was doing before.
I am trying to figure out a way to return the result of a dynamic SQL query as a column value. The new query is quite simple: it looks in the system objects for all tables with a specific name format. What I am missing is that, for each record, I need to output the result of a dynamic query against each of those tables.
The query :
SELECT [name] as 'TableName'
FROM SYSOBJECTS WHERE xtype = 'U'
AND (CHARINDEX('_PCT', [name]) <> 0
OR CHARINDEX('_WHT', [name]) <> 0)
All these tables have a common column called Result, which is a float. What I am trying to do is return the count for this column under a WHERE clause that is generic and will work with all the tables.
A desired query (I know it's not valid) would be:
SELECT [name] as 'TableName',
sp_executesql 'SELECT COUNT(*) FROM ' + [name] + ' WHERE Result > 0 OR (Result < 139 AND CurrentIndex < 15)' as 'ResultValue'
FROM SYSOBJECTS WHERE xtype = 'U'
AND (CHARINDEX('_PCT', [name]) <> 0
OR CHARINDEX('_WHT', [name]) <> 0)
Before, it used to be easy. We had a temp table with 2 columns and filled in the table names first. Then we iterated over the temp table, executed the dynamic SQL, returned the value in an OUTPUT variable, updated the record in the temp table, and finally returned the table.
I have tried a scalar function, but it doesn't support dynamic SQL, so that doesn't work. I would rather not create the ~13,000 different queries for the ~13,000 tables.
I have tried using a reference table and a trigger to update the status, but it slows the system down far too much. The average table inserts and deletes 28 million records. The original temp table query only took 5-6 minutes to execute due to very good indexing, and now we are reaching 25-30 minutes.
Is there any other solution available than querying the table list and then having the client query each table one by one to know its status?
We are using SQL Server 2017, in case some new features are applicable.
You can use this script for your purpose (tested on SQL Server 2016).
Updated: it should work now, as the results are returned as a single set.
EXEC sp_msforeachtable
@precommand = 'CREATE TABLE ##Statistics
(TableName varchar(128) NOT NULL,
NumOfRows int)',
@command1 = 'INSERT INTO ##Statistics (TableName, NumOfRows)
SELECT ''?'' Table_Name, COUNT(*) Row_Count FROM ? WHERE Result > 0 OR (Result < 139 AND CurrentIndex < 15)',
@postcommand = 'SELECT TableName, NumOfRows FROM ##Statistics;
DROP TABLE ##Statistics',
@whereand = ' And Object_id In (Select Object_id From sys.objects
Where name like ''%_PCT%'' OR name like ''%_WHT%'')'
For more details on sp_msforeachtable, please visit this link.
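Since you are on SQL Server 2017, an alternative to the undocumented `sp_msforeachtable` is to build a single `UNION ALL` statement with `STRING_AGG` and execute it once. A sketch under the question's assumptions (every matching table has `Result` and `CurrentIndex` columns); untested:

```sql
-- Build one query covering all _PCT/_WHT tables, then run it in one batch.
DECLARE @sql NVARCHAR(MAX);

SELECT @sql = STRING_AGG(CONVERT(NVARCHAR(MAX),
        'SELECT ' + QUOTENAME(t.name, '''') + ' AS TableName,
                COUNT(*) AS ResultValue
         FROM ' + QUOTENAME(t.name) + '
         WHERE Result > 0 OR (Result < 139 AND CurrentIndex < 15)'),
        ' UNION ALL ')
FROM sys.tables t
WHERE t.name LIKE '%\_PCT%' ESCAPE '\'
   OR t.name LIKE '%\_WHT%' ESCAPE '\';

EXEC sp_executesql @sql;
```

The CONVERT to NVARCHAR(MAX) avoids STRING_AGG's 8,000-byte truncation. With ~13,000 tables the concatenated batch gets very large, so you may need to chunk the aggregation into a few hundred tables per batch.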

Dynamic table in SQL Server

I have a really weird and complex requirement that I need help with. I have a table, let's say Tasks, that contains all the tasks for a user/system. I need to filter the tasks per user and show them in the UI. But here is the scene: the Tasks table contains a column base_table that stores the table name (a real SQL Server table) on which it is based. It also stores the base table id, which navigates to a particular record in the base table. Now I need to add some filter on the base table, and if it is satisfied, the task gets retrieved.
I did try to put together a procedure that runs a SELECT query against the base table and also checks the conditions.
CREATE PROCEDURE gautam_dtTable_test
(@TableName AS nvarchar(max))
AS
BEGIN TRY
declare @sql nvarchar(max)
declare @ret tinyint
set @ret = 0
set @sql = 'select @var = 1 where exists (select top 1 Id from ' + @TableName + ' where some_condition)';
Exec sp_executesql @sql, N'@var tinyint out', @ret out
return @ret
END TRY
BEGIN CATCH
return 0
END CATCH
I have used the procedure to take a table name as input, check some conditions, and return a flag (1/0). I also want to use TRY...CATCH so that if there is any error, it returns false.
That's why I have used a procedure, not a function. But it seems we can't use this procedure inside a SQL statement. Overall, what I have in mind is:
Select *
from tasks
where some_conditions
and procedure/function_to_check(tasks.base_table)
Key issues with my approach:
The base_table name could be invalid, and so could some of its columns, so I would love to use a TRY...CATCH.
I need to embed it as a sub-query to avoid parallel operations, but that seems tough when the procedure/function contains EXEC and sp_executesql.
Any kind of help is appreciated. Thanks in advance!
The question as stated is a bit unclear, so I am going to make some assumptions here. It looks like you are trying to achieve the following:
First, it seems you want to return only tasks in your Tasks table where the base_table column value references a valid SQL Server table.
Secondly, if I understand the post correctly, based on the WHERE clause condition passed on the Tasks table, you are trying to determine whether the same columns exist in your base table.
The first part is certainly doable. However, the second part is not, since it would require the query to somehow parse itself to determine which columns are being filtered on.
The following query shows how you can retrieve only the tasks for which there is a valid corresponding table:
SELECT *
FROM [dbo].[tasks] ts
CROSS APPLY (
SELECT [name]
FROM sys.objects WHERE object_id = OBJECT_ID('[dbo].' + QUOTENAME(ts.base_table)) AND type in (N'U')
) tb
If the field(s) you are trying to filter on is known up front (i.e. you are not trying to parse based of the tasks table) then you can modify the above query to pass the desired columns you want to check as follow:
DECLARE @columnNameToCheck NVARCHAR(50) = 'col2'
SELECT ts.*
FROM [dbo].[tasks] ts
CROSS APPLY (
SELECT [name]
FROM sys.objects WHERE object_id = OBJECT_ID('[dbo].' + QUOTENAME(ts.base_table)) AND type in (N'U')
) tb
CROSS APPLY (
SELECT [name]
FROM sys.columns WHERE object_id = OBJECT_ID('[dbo].' + QUOTENAME(ts.base_table)) AND [name] = @columnNameToCheck
) tc

Keeping the design of 2 tables in sync

The problem:
I have 2 tables in a database:
TableA TableB
X, Y, Z X, Y, Z
If I add column W to TableA, I want it to be copied automatically to TableB with the same name and data type (without writing it explicitly).
Constraints on deployment:
The tables are updated using update scripts (so must be able to be called / executed in tsql).
There are multiple tables that could be updated (however I can hand code a mapping if needed)
Example update script:
IF NOT EXISTS (SELECT 1 FROM SYS.COLUMNS C INNER JOIN SYS.TABLES T ON C.OBJECT_ID = T.OBJECT_ID
WHERE C.NAME = 'W' AND T.NAME = 'TableA')
BEGIN
ALTER TABLE TableA ADD W [Int] NULL
END
GO
At the end of this script I want to add my ‘SyncMyTables’ SQL
So far from my research I have found 3 possible ways to tackle this:
Call a function at the end of the script which syncs the table designs
Some form of table trigger (but I don’t think triggers are that clever)
Some inline sql that builds up an update string and then runs it against the database.
Option 1 seems the most sensible to me.
What help I need:
Some guidance on how best to tackle this.
An example to point me in the right direction.
Cheers
Please note: I don't want to keep the content of the columns in sync, I need to keep the table DESIGN in sync.
The column duplication can be achieved with a DDL trigger:
CREATE TRIGGER DDL_TableA_TableB
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
DECLARE @CommandText nvarchar(1000)
SELECT @CommandText = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(1000)');
IF LEFT(@CommandText, 26) = 'ALTER TABLE dbo.TableA ADD'
BEGIN
SET @CommandText = REPLACE(@CommandText, 'TableA', 'TableB')
EXEC sp_executesql @CommandText
END
END
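A quick way to sanity-check the trigger (a sketch; assumes dbo.TableA and dbo.TableB already exist):

```sql
-- The new column should appear on TableB automatically, because the DDL
-- trigger replays the captured statement with TableA replaced by TableB.
ALTER TABLE dbo.TableA ADD W int NULL;

SELECT name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.TableB') AND name = 'W';
```

Be aware the prefix match on the command text is brittle: it fails if the statement uses brackets (`[dbo].[TableA]`), different casing, or extra whitespace, and the blanket REPLACE would also rewrite any other occurrence of the string 'TableA' elsewhere in the statement.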

Need help on SQL Server 2008 programming

I am very new to SQL Server 2008 programming and am trying to create a procedure.
The requirement is: 'The procedure returns data based on the input parameter, OR, if no input data is given, it should do a default select and return all qualifying data.'
I tried something like this:
CREATE PROCEDURE [dbo].[Proc_sampletestproc]
(@testid int = NULL) -- default NULL so the parameter is optional
AS
BEGIN
SET NOCOUNT OFF; -- I worked with Oracle PL/SQL, so: are declarations like this mandatory here?
Here I need to check whether the input testid has a value or not. If it does, we have one case with a select; otherwise we do the default select.
Also, I am using a direct SELECT with no JOINs on the tables. How would I JOIN the tables, including OUTER JOINs, since a testid may or may not have an insurance? The syntax is quite different in SQL Server.
SELECT
T.TESTID, T.NAME, TI.INSURENAME
FROM
testinsured ti, test t, testinsuredHistory tih
WHERE
t.testid = ti.testid -----This entry may be there or not IN THE testinsured TABLE
AND tih.testinsuredid = ti.testinsuredid --A testid might have 2 Insurers whose history is stored here.
AND TIH.STARTDATE IS NOT NULL
AND TIH.ENDDATE IS NOT NULL --TO CHECK ACTIVE DATES FOR COVERAGE
Also, I want to do a GROUP BY on testid so that the name appears once, but the InsuredPlanname appears as many times as each testid has insurers.
if (#testid is not null)
begin
/* ... */
end
else
begin
SELECT T.TESTID,T.NAME,TI.INSURENAME
FROM test t
left outer join testinsured ti on t.testid = ti.testid -----This entry may be there or not IN THE testinsured TABLE
left outer join testinsuredHistory tih on ti.testinsuredid = tih.testinsuredid --A testid might have 2 Insurers whose history is stored here.
where TIH.STARTDATE IS NOT null AND TIH.ENDDATE IS NOT NULL
/* group by ... */
end
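The two branches can usually be collapsed into a single query using an optional-parameter pattern. A sketch (untested), using the table and column names from the question:

```sql
CREATE PROCEDURE [dbo].[Proc_sampletestproc]
    (@testid int = NULL)  -- NULL means "return all qualifying rows"
AS
BEGIN
    SET NOCOUNT ON;

    SELECT t.testid, t.name, ti.INSURENAME
    FROM test t
    LEFT OUTER JOIN testinsured ti
        ON t.testid = ti.testid              -- entry may or may not exist
    LEFT OUTER JOIN testinsuredHistory tih
        ON ti.testinsuredid = tih.testinsuredid
    WHERE tih.STARTDATE IS NOT NULL
      AND tih.ENDDATE IS NOT NULL            -- active coverage dates
      AND (@testid IS NULL OR t.testid = @testid)  -- optional filter
END
```

Note that filtering on `tih.STARTDATE`/`tih.ENDDATE` in the WHERE clause effectively turns the outer joins back into inner joins; move those predicates into the ON clauses if you still want tests without insurance to appear.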

Dealing with large amounts of data, and a query with 12 inner joins in SQL Server 2008

There is an old SSIS package that pulls a lot of data from Oracle to our SQL Server database every day. The data is inserted into a non-normalized database, and I'm working on a stored procedure to select that data and insert it into a normalized database. The Oracle databases were overly normalized, so the query I wrote ended up having 12 inner joins to get all the columns I need. Another problem is that I'm dealing with large amounts of data: one table I'm selecting from has over 12 million records. Here is my query:
Declare @MewLive Table
(
UPC_NUMBER VARCHAR(50),
ITEM_NUMBER VARCHAR(50),
STYLE_CODE VARCHAR(20),
COLOR VARCHAR(8),
SIZE VARCHAR(8),
UPC_TYPE INT,
LONG_DESC VARCHAR(120),
LOCATION_CODE VARCHAR(20),
TOTAL_ON_HAND_RETAIL NUMERIC(14,0),
VENDOR_CODE VARCHAR(20),
CURRENT_RETAIL NUMERIC(14,2)
)
INSERT INTO @MewLive(UPC_NUMBER,ITEM_NUMBER,STYLE_CODE,COLOR,[SIZE],UPC_TYPE,LONG_DESC,LOCATION_CODE,TOTAL_ON_HAND_RETAIL,VENDOR_CODE,CURRENT_RETAIL)
SELECT U.UPC_NUMBER, REPLACE(ST.STYLE_CODE, '.', '')
+ '-' + SC.SHORT_DESC + '-' + REPLACE(SM.PRIM_SIZE_LABEL, '.', '') AS ItemNumber,
REPLACE(ST.STYLE_CODE, '.', '') AS Style_Code, SC.SHORT_DESC AS Color,
REPLACE(SM.PRIM_SIZE_LABEL, '.', '') AS Size, U.UPC_TYPE, ST.LONG_DESC, L.LOCATION_CODE,
IB.TOTAL_ON_HAND_RETAIL, V.VENDOR_CODE, SD.CURRENT_RETAIL
FROM MewLive.dbo.STYLE AS ST INNER JOIN
MewLive.dbo.SKU AS SK ON ST.STYLE_ID = SK.STYLE_ID INNER JOIN
MewLive.dbo.UPC AS U ON SK.SKU_ID = U.SKU_ID INNER JOIN
MewLive.dbo.IB_INVENTORY_TOTAL AS IB ON SK.SKU_ID = IB.SKU_ID INNER JOIN
MewLive.dbo.LOCATION AS L ON IB.LOCATION_ID = L.LOCATION_ID INNER JOIN
MewLive.dbo.STYLE_COLOR AS SC ON ST.STYLE_ID = SC.STYLE_ID INNER JOIN
MewLive.dbo.COLOR AS C ON SC.COLOR_ID = C.COLOR_ID INNER JOIN
MewLive.dbo.STYLE_SIZE AS SS ON ST.STYLE_ID = SS.STYLE_ID INNER JOIN
MewLive.dbo.SIZE_MASTER AS SM ON SS.SIZE_MASTER_ID = SM.SIZE_MASTER_ID INNER JOIN
MewLive.dbo.STYLE_VENDOR AS SV ON ST.STYLE_ID = SV.STYLE_ID INNER JOIN
MewLive.dbo.VENDOR AS V ON SV.VENDOR_ID = V.VENDOR_ID INNER JOIN
MewLive.dbo.STYLE_DETAIL AS SD ON ST.STYLE_ID = SD.STYLE_ID
WHERE (U.UPC_TYPE = 1) AND (ST.ACTIVE_FLAG = 1)
That query pretty much crashes our server. I tried to fix the problem by breaking the query up into smaller queries, but the temp table variable I use causes the tempdb database to fill the hard drive. I figure this is because the server runs out of memory and crashes. Is there any way to solve this problem?
Have you tried using a real table instead of a temporary one? You can use SELECT INTO to create a real table to store the results instead of a temporary one.
The syntax would be:
SELECT
U.UPC_NUMBER,
REPLACE(ST.STYLE_CODE, '.', ''),
...
INTO
MEWLIVE
FROM
MewLive.dbo.STYLE AS ST INNER JOIN
...
The command will create the table and may help with the memory issues you are seeing.
Additionally, try looking at the execution plan in Query Analyzer, or use the Index Tuning Wizard to suggest indexes that may help speed up the query.
Try running the query from the Oracle server rather than from the SQL server. As it stands, there's most likely going to be a lot of communication over the wire as the query tries to process.
By pre-processing the joins (maybe with a view), you'll only be sending over the results.
Regarding the over-normalization: have you tested whether or not it's an issue in terms of speed? I find it hard to believe that it could be too normalized.
Proper indexing will definitely help, IF the number of rows in this query is not over "zillions" of rows.
Try the following:
The join on dbo.COLOR is excessive if there is a foreign key dbo.STYLE_COLOR(COLOR_ID) => dbo.COLOR(COLOR_ID).
Proper indexes (possibly excessive; they should be reviewed):
USE MewLive
CREATE INDEX ix1 ON dbo.STYLE (STYLE_ID)
INCLUDE (STYLE_CODE, LONG_DESC)
WHERE ACTIVE_FLAG = 1
GO
CREATE INDEX ix2 ON dbo.UPC (SKU_ID)
INCLUDE(UPC_NUMBER)
WHERE UPC_TYPE = 1
GO
CREATE INDEX ix3 ON dbo.SKU(STYLE_ID)
INCLUDE(SKU_ID)
GO
CREATE INDEX ix3_alternative ON dbo.SKU(SKU_ID)
INCLUDE(STYLE_ID)
GO
CREATE INDEX ix4 ON dbo.IB_INVENTORY_TOTAL(SKU_ID, LOCATION_ID)
INCLUDE(TOTAL_ON_HAND_RETAIL)
GO
CREATE INDEX ix5 ON dbo.LOCATION(LOCATION_ID)
INCLUDE(LOCATION_CODE)
GO
CREATE INDEX ix6 ON dbo.STYLE_COLOR(STYLE_ID)
INCLUDE(SHORT_DESC,COLOR_ID)
GO
CREATE INDEX ix7 ON dbo.COLOR(COLOR_ID)
GO
CREATE INDEX ixB ON dbo.STYLE_SIZE(STYLE_ID)
INCLUDE(SIZE_MASTER_ID)
GO
CREATE INDEX ix8 ON dbo.SIZE_MASTER(SIZE_MASTER_ID)
INCLUDE(PRIM_SIZE_LABEL)
GO
CREATE INDEX ix9 ON dbo.STYLE_VENDOR(STYLE_ID)
INCLUDE(VENDOR_ID)
GO
CREATE INDEX ixA ON dbo.VENDOR(VENDOR_ID)
INCLUDE(VENDOR_CODE)
GO
CREATE INDEX ixC ON dbo.STYLE_DETAIL(STYLE_ID)
INCLUDE(CURRENT_RETAIL)
In the SELECT list, replace U.UPC_TYPE with 1 AS UPC_TYPE (the WHERE clause already fixes it to 1).
Can you segregate the imports - batch them by SKU/location/vendor/whatever and run multiple queries to get the data over? Is there a particular reason it all needs to go across in one hit, apart from the ease of writing the query?
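Batching is straightforward here because the query already filters on ST columns: process STYLE_ID in ranges into a permanent staging table so each insert transaction stays small. A sketch (untested); `dbo.MewLiveStaging` is a hypothetical permanent table with the same columns as the table variable in the question, and the dbo.COLOR join is dropped since nothing is selected from it:

```sql
-- Load the normalized data in STYLE_ID ranges instead of one huge insert.
DECLARE @BatchStart INT = 0, @BatchSize INT = 10000, @MaxStyleId INT;
SELECT @MaxStyleId = MAX(STYLE_ID) FROM MewLive.dbo.STYLE;

WHILE @BatchStart <= @MaxStyleId
BEGIN
    INSERT INTO dbo.MewLiveStaging
        (UPC_NUMBER, ITEM_NUMBER, STYLE_CODE, COLOR, [SIZE], UPC_TYPE,
         LONG_DESC, LOCATION_CODE, TOTAL_ON_HAND_RETAIL, VENDOR_CODE, CURRENT_RETAIL)
    SELECT U.UPC_NUMBER,
           REPLACE(ST.STYLE_CODE, '.', '') + '-' + SC.SHORT_DESC + '-'
               + REPLACE(SM.PRIM_SIZE_LABEL, '.', ''),
           REPLACE(ST.STYLE_CODE, '.', ''), SC.SHORT_DESC,
           REPLACE(SM.PRIM_SIZE_LABEL, '.', ''), U.UPC_TYPE, ST.LONG_DESC,
           L.LOCATION_CODE, IB.TOTAL_ON_HAND_RETAIL, V.VENDOR_CODE, SD.CURRENT_RETAIL
    FROM MewLive.dbo.STYLE AS ST
    INNER JOIN MewLive.dbo.SKU AS SK ON ST.STYLE_ID = SK.STYLE_ID
    INNER JOIN MewLive.dbo.UPC AS U ON SK.SKU_ID = U.SKU_ID
    INNER JOIN MewLive.dbo.IB_INVENTORY_TOTAL AS IB ON SK.SKU_ID = IB.SKU_ID
    INNER JOIN MewLive.dbo.LOCATION AS L ON IB.LOCATION_ID = L.LOCATION_ID
    INNER JOIN MewLive.dbo.STYLE_COLOR AS SC ON ST.STYLE_ID = SC.STYLE_ID
    INNER JOIN MewLive.dbo.STYLE_SIZE AS SS ON ST.STYLE_ID = SS.STYLE_ID
    INNER JOIN MewLive.dbo.SIZE_MASTER AS SM ON SS.SIZE_MASTER_ID = SM.SIZE_MASTER_ID
    INNER JOIN MewLive.dbo.STYLE_VENDOR AS SV ON ST.STYLE_ID = SV.STYLE_ID
    INNER JOIN MewLive.dbo.VENDOR AS V ON SV.VENDOR_ID = V.VENDOR_ID
    INNER JOIN MewLive.dbo.STYLE_DETAIL AS SD ON ST.STYLE_ID = SD.STYLE_ID
    WHERE U.UPC_TYPE = 1 AND ST.ACTIVE_FLAG = 1
      AND ST.STYLE_ID >= @BatchStart
      AND ST.STYLE_ID <  @BatchStart + @BatchSize;  -- only this batch's range

    SET @BatchStart += @BatchSize;
END
```

Each loop iteration is its own transaction, so tempdb and the log no longer have to absorb the whole 12-million-row result at once.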
