SQL query to find all the updated columns of a table - sql-server

I need a dynamic SQL query that can get the column values of a table on the basis of any case/condition. I want to do this whenever a record is updated, and I need the updated value of each column; for that I am using the Inserted and Deleted tables of SQL Server.
I made one query that works fine for a single column, but I need a generic query that works for all of the columns.
SELECT i.name
FROM inserted i
INNER JOIN deleted d ON i.id = d.id
WHERE d.name <> i.name
With the help of the above query we can get the "Name" column value if it has been updated. But it is specific to one column; I want the same thing for all the columns of a table, with no need to name any column. It should be a generic/dynamic query.
I am trying to achieve that by adding one more inner join with a PIVOT of "INFORMATION_SCHEMA.COLUMNS" for the columns, but I am not sure whether that is even possible.
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '%table1%'

You can't get that kind of information using just a query. It sounds like you need to be running an update trigger. In there you can code the criteria, get your columns, etc. From your question, it sounds like only one column can be updated, you are just not sure which one it will be. If multiple columns are being updated, that complicates things, but not that much.
There are several ways that you can get the data you need. Off the top, I'd suggest some simple looping once you get the column names.
What has worked for me in similar scenarios is to do a simple query against INFORMATION_SCHEMA.COLUMNS and put the results in a table variable. From there, iterate through the table variable to get one column name at a time and run a simple dynamic query to see if the value has changed.
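Here's a rough sketch of that approach as an UPDATE trigger, assuming a hypothetical table dbo.table1 with an INT key column named id. The inserted and deleted rows are copied into temp tables first so the dynamic query can see them:
CREATE TRIGGER trg_table1_update ON dbo.table1
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- copy the pseudo-tables so the dynamic SQL can reference them
    SELECT * INTO #i FROM inserted;
    SELECT * INTO #d FROM deleted;

    DECLARE @cols TABLE (rn INT IDENTITY(1,1), col SYSNAME);
    INSERT INTO @cols (col)
    SELECT COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'table1';

    DECLARE @rn INT, @col SYSNAME, @sql NVARCHAR(MAX), @changed INT;
    SET @rn = 1;

    WHILE @rn <= (SELECT MAX(rn) FROM @cols)
    BEGIN
        SELECT @col = col FROM @cols WHERE rn = @rn;

        -- count rows where this column differs between the old and new images
        SET @sql = N'SELECT @changed = COUNT(*)
                     FROM #i i INNER JOIN #d d ON i.id = d.id
                     WHERE ISNULL(CAST(i.' + QUOTENAME(@col) + N' AS NVARCHAR(MAX)), N'''') <>
                           ISNULL(CAST(d.' + QUOTENAME(@col) + N' AS NVARCHAR(MAX)), N'''')';
        EXEC sp_executesql @sql, N'@changed INT OUTPUT', @changed = @changed OUTPUT;

        IF @changed > 0
            PRINT @col;   -- this column was updated; log it, collect it, etc.

        SET @rn = @rn + 1;
    END
END
The NVARCHAR cast is only there to get a uniform comparison across data types; replace the PRINT with whatever logging or collection you actually need.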
Hope this helps.
:{)

Related

Number of rows updated in an Oracle table

I have a table called t1 which has already been updated by a file. I have table t2 which was created as a backup of table t1 before the modifications. Now I want to know how many records got updated in table t1. Is there any way I can join with the backup table and find out how many records were altered? Or how do I use sql%rowcount on an already updated table? Or how should I proceed with ALL_TAB_MODIFICATIONS?
You can join the tables on their primary key (cos you didn't update that, hopefully!) and then compare every column. You'll have to check for nulls too, and it'll make for quite a lot of typing. You could use all_tab_cols and a bit of SQL to create your query, though (write an SQL that creates SQL as its output).
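A rough sketch of that generate-the-SQL idea, with hypothetical names (original table T1, backup T1_BACKUP, primary key column ID); DECODE treats two nulls as equal, which handles the null checks for you:
select 'or decode(t1.' || column_name || ', bk.' || column_name || ', 0, 1) = 1'
from   all_tab_cols
where  table_name = 'T1'
and    column_name <> 'ID';
Paste the generated lines into a query that joins T1 to T1_BACKUP on the primary key (aliased t1 and bk) and you have a null-safe "did any column change" predicate.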
Actually, thinking about it, you might be able to get away with less typing by natural-joining the tables together to get the set of rows that didn't change, and removing that set from the original full set:
select * from original
Minus
select original.* from original natural inner join backup
I've never done it, but the theory is that a natural join joins on all equally named columns, so every column of each table will feature in the join condition. It's an inner join, so only rows whose values have not changed will be represented. Any rows where a column has gone from null to a value, or from a value to null, will also disappear. This is hence the set of rows that have not changed. If all you're after is a count, take the count of the original table less the count of this join result. If you want to know which rows changed, do the result-set minus.
Ideally you shouldn't do this; instead, at the point the update is run, capture the number of rows it affected. However, this technique can be used long after the update was performed (but before some other update is run).
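For the count-only case, a minimal sketch under the same assumptions (shared primary key, identical column names):
select (select count(*) from original)
     - (select count(*) from original natural inner join backup) as rows_changed
from dual;
Remember the null caveat above: any row with a null in either table will be counted as changed.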

Can I use a cursor in the select part of a SQL query?

I would like to select columns from two tables, add an extra calculated column, and put all of this into a new table. My question is: can I use a cursor to loop through a table, calculate a value, and then assign that value to the new column in the select part, like below?
SELECT a.cola,
a.colB,
b.colC,
b.colD,
(CURSOR TO LOOP THROUGH a table and then calculate some value) as new column
INTO NEWTABLE
FROM a
INNER JOIN b
ON a.id=b.id
WHERE etc
I just need to know whether this is possible.
You can use a correlated subquery (it does mean you will have to ensure only one value per record can be returned, though), but it is far better to do this through joins, if possible, for performance reasons. You should never consider looping as a way to get data if a set-based alternative works.
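A minimal sketch of the correlated-subquery form, assuming a hypothetical table c keyed by the same id with an amount column to aggregate (the aggregate guarantees a single value per row):
SELECT a.cola,
       a.colB,
       b.colC,
       b.colD,
       (SELECT SUM(c.amount) FROM c WHERE c.id = a.id) AS newColumn
INTO NEWTABLE
FROM a
INNER JOIN b
        ON a.id = b.id;
If the calculation can be expressed as a join to a grouped derived table instead, that will usually perform better than the per-row subquery.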

Why is this procedure not working?

This is my first question here. I am very new to SQL Server and T-SQL.
I would like to create a table with a column that uses data from another table. I thought I could use a SELECT for it, but that is not allowed.
How to do it?
It is very simple to create a view this way, but I would like to have a table, not a view.
It should look like
Column A, ColumnB,
Column C=select count(*) from [another table] where....
Could you please advise?
SELECT [COLUMN A], [COLUMN B], COUNT(*) AS [COLUMN C]
INTO [destination table]
FROM [another table]
WHERE ...
GROUP BY [COLUMN A], [COLUMN B]
You should use an alias
You create a table using the CREATE TABLE syntax, because you will need to define the field names and sizes. Look the syntax up in Books Online. Do not ever use SELECT INTO unless you are creating a staging table for one-time use or a temp table; it is not a good choice for creating a new permanent table. Plus, you don't say where any of the other columns come from except the one column, so it may be impossible to properly set up the correct field sizes from the initial insert. Further, frankly, you should take the time to think about what columns you need and what data types they should be; it is irresponsible to avoid doing this for a table that will be used permanently.
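A minimal sketch, with hypothetical column names, types, and sizes (pick ones that fit your real data):
CREATE TABLE dbo.table1
(
    colA NVARCHAR(50) NOT NULL,
    colB INT NOT NULL,
    colC INT NOT NULL   -- will hold the count computed at insert time
);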
To populate it you use an INSERT statement with a SELECT instead of the VALUES clause. If only column C comes from another table, then it might be something like:
Insert table1 (colA, Colb, colC)
select 'test', 10, count(*)
from tableb
where ...
If you have to get the data from multiple tables, then you may need a join.
If you need to maintain the computed column as the values change in TableB, then you may need to write triggers on TableB, or better (easier to develop and maintain, and less likely to be buggy or create a data integrity problem), use a view for this instead of a separate table.
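A minimal sketch of the view alternative, assuming hypothetical tables dbo.Table1 and dbo.TableB that share an id column:
CREATE VIEW dbo.Table1WithCount
AS
SELECT t1.colA,
       t1.colB,
       (SELECT COUNT(*) FROM dbo.TableB b WHERE b.id = t1.id) AS colC
FROM dbo.Table1 AS t1;
The count is then computed on read, so there is nothing to keep in sync.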

How can I use a table.column value for a join using dynamic sql?

I'm creating a data validation procedure for a database, using various models, etc. in the database.
I've created a temp table with a model, a sequence, and 3 columns.
In each of these columns I have the qualified column name (table.column) to use in my query, or a null value. Here's the temp_table structure:
create table #temp_table(model nvarchar(50), seq nvarchar(50), col nvarchar(100), col2 nvarchar(100) , col3 nvarchar(100))
In my dynamic SQL I have a join something like this (extremely simplified):
select *
from
original_table
inner join
...
#temp_table
on
original_table.models = #temp_table.models
inner join
set_model
on
original_table.models = set_model.models
and #temp_table.col = set_model.val
and #temp_table.col2 = set_model.val2
and #temp_table.col3 = set_model.val3
What I'm working on has many more tables (hence the ... in the middle of the query), so, we'll just assume that all the tables are present and all the columns are valid.
Because #temp_table.col stores a value, when it is joined to set_model.val the comparison will look something like 'Buildings.year_id' = 2014.
Is there a way to force my dynamic query to use the value in #temp_table.col as part of the join condition?
For example:
If in the query above #temp_table.col = 'Buildings.year_id'
how do I make the join evaluate Buildings.year_id = set_model.val
rather than 'Buildings.year_id' = 2014?
I was trying to create a query which had a different query plan based upon the row queried.
I found a workaround (creating a cursor and looping through n different tables and appending each dynamic query with a ' union '), but I came back and thought about the problem I ran into for a little while.
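For what it's worth, a rough sketch of that workaround with placeholder names. It assumes, for brevity, that each value in #temp_table.col references a column reachable from the generated FROM clause (in the real query the extra tables from the '...' would be joined in as well); each row contributes its own SELECT and the pieces are glued together with ' union ' before a single EXEC:
DECLARE @sql NVARCHAR(MAX), @model NVARCHAR(50), @col NVARCHAR(100);
SET @sql = N'';

DECLARE model_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT model, col FROM #temp_table WHERE col IS NOT NULL;

OPEN model_cur;
FETCH NEXT FROM model_cur INTO @model, @col;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- @col holds a qualified name such as Buildings.year_id, so it is spliced
    -- into the text of the join rather than compared as a literal value
    SET @sql = @sql
             + CASE WHEN @sql = N'' THEN N'' ELSE N' union ' END
             + N'select original_table.* from original_table
                 inner join set_model on original_table.models = set_model.models
                 and ' + @col + N' = set_model.val
                 where original_table.models = N''' + REPLACE(@model, N'''', N'''''') + N'''';

    FETCH NEXT FROM model_cur INTO @model, @col;
END

CLOSE model_cur;
DEALLOCATE model_cur;

EXEC sp_executesql @sql;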
I realized that I was trying to dynamically create a query based upon data from the query I was using. As a result, no effective query plan could be created, as it would need to run a unique query based upon each row.
If the query had been able to work, it would've been extremely inefficient (as each row would make its own 'select' statement).
Regardless, the question itself was based on bad/incomplete logic.

Create SQL trigger query to dump all column changes into single variable

For some background... I have a collection of tables, and I would like a trigger on each table for INSERT, UPDATE, DELETE. SQL Server version is SQL 2005.
I have an audit table Audit that contains a column called Detail. My end goal is to create a trigger that will get the list of columns of its table, generate a dynamic select query from either Inserted, Updated, or Deleted, do some string concatenation, and dump that value into the Detail column of Audit.
This is the process I was thinking:
Get the column names for the table from sys.columns
Generate a dynamic SELECT query based on those column names
Select from Inserted
For each row in the results, concatenate the column values into a single variable
Insert the variable's data into the Detail column
So, the questions:
Is this the best way to accomplish what I'm looking to do? And the somewhat more important question, how do I write this query?
You could use FOR XML for this, and just store the results as an XML document.
SELECT *
FROM Inserted
FOR XML RAW
will give you attribute-centric XML, and
SELECT *
FROM Inserted
FOR XML PATH('row')
will give you element-centric xml. Much easier than trying to identify the columns and concatenate them.
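A minimal sketch of wiring that into a trigger, assuming a hypothetical table dbo.table1 and an Audit table with TableName and Detail (NVARCHAR(MAX)) columns:
CREATE TRIGGER trg_table1_audit ON dbo.table1
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @detail NVARCHAR(MAX);

    -- capture the new row images as one XML fragment
    -- (for deletes, inserted is empty; select from deleted as well if you need the old values)
    SET @detail = (SELECT * FROM inserted FOR XML RAW);

    INSERT INTO dbo.Audit (TableName, Detail)
    VALUES ('table1', @detail);
END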
