How to update two tables in a single query in MS SQL - sql-server

Is it possible to update two tables with a single query,
so that I do not have to execute two queries and track whether both succeeded?

You can't do it in a single query, but you can do it as a transaction, where all statements within the transaction either succeed or fail together.
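A minimal sketch of that, assuming two hypothetical tables Orders and OrderAudit; SET XACT_ABORT ON makes any run-time error roll back the whole transaction:

SET XACT_ABORT ON;  -- any run-time error rolls back the entire transaction

BEGIN TRANSACTION;

    UPDATE Orders          -- hypothetical table
    SET Status = 'Shipped'
    WHERE OrderId = 42;

    UPDATE OrderAudit      -- hypothetical table
    SET ShippedDate = GETDATE()
    WHERE OrderId = 42;

COMMIT TRANSACTION;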

You can write a stored procedure that updates the two tables and returns whatever you need in order to determine success. This stored procedure can then be called with a single command. However, it will still contain two queries.
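A sketch of what such a procedure might look like (the table, column, and procedure names here are made up): it returns 0 on success and re-raises any error to the caller:

CREATE PROCEDURE dbo.UpdateBothTables
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE Orders      -- hypothetical table
        SET Status = 'Shipped'
        WHERE OrderId = @OrderId;

        UPDATE OrderAudit  -- hypothetical table
        SET ShippedDate = GETDATE()
        WHERE OrderId = @OrderId;

        COMMIT TRANSACTION;
        RETURN 0;          -- success
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;             -- surface the error to the caller
    END CATCH
END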

No, that is not possible, AFAIK.
EDIT: What is your reason for wanting to do this in a single query?

You could use transactions; however, you are still required to update the tables separately and check the results before committing or rolling back.

Of course you can, by using triggers.
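For instance, a minimal sketch of an AFTER UPDATE trigger, using hypothetical Orders and OrderSummary tables, that propagates a change on one table to the other:

CREATE TRIGGER trg_Orders_Update
ON Orders                      -- hypothetical table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- propagate the change to the second table
    UPDATE s
    SET s.Status = i.Status
    FROM OrderSummary AS s     -- hypothetical table
    JOIN inserted AS i ON i.OrderId = s.OrderId;
END

Keep in mind that the second update is then hidden inside the trigger, which can make the behavior harder to trace.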

Related

Using the same table multiple times in an SP: performance

I have a very long stored procedure that performs poorly. The SP requires me to validate data with multiple SELECT statements against the same table.
Is it a good idea to dump the data from the physical table into a temp table first, or is it OK to reference the table multiple times across several SELECT statements within the same SP?
From your description, you would like to improve performance. Could you please show us the script of your SP and its execution plan, so that we have the right direction and can run some tests?
There are some simple yet useful tips and optimizations to improve stored procedure performance.
Use SET NOCOUNT ON.
Use fully qualified procedure names.
Use sp_executesql instead of EXECUTE for dynamic queries.
Use IF EXISTS (SELECT 1 ...) rather than IF EXISTS (SELECT * ...).
Avoid naming user stored procedures with the sp_ prefix.
Use set-based queries wherever possible.
Keep transactions short and crisp.
For more details, you can refer to: https://www.sqlservergeeks.com/improve-stored-procedure-performance-in-sql-server/ (two of these tips are sketched just below).
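As a quick illustration of two of those tips, here is a sketch (the procedure, table, and parameter names are made up) combining SET NOCOUNT ON with a parameterized sp_executesql call:

CREATE PROCEDURE dbo.GetCustomerByCity   -- hypothetical procedure
    @City NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;   -- suppress "rows affected" messages

    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT CustomerId, Name FROM dbo.Customers WHERE City = @City;';

    -- parameterized, so the plan can be reused and injection is avoided
    EXEC sp_executesql @sql, N'@City NVARCHAR(50)', @City = @City;
END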
If you think this does not satisfy your requirement, please share more information with us.
Best Regards,
Rachel
Is it a good idea to dump the data from the physical table into a temp table first, or is it OK to reference the table multiple times across several SELECT statements within the same SP?
If it is a local temp table, each session using the stored procedure creates a separate temp table for itself; this reduces the load on the original table, but increases memory and tempdb usage.
If it is a global temp table, only one can exist for all sessions, so you would need to create it manually before anyone uses it and drop it once it is no longer needed.
Personally, I would use indexed views: https://learn.microsoft.com/en-us/sql/relational-databases/views/create-indexed-views?view=sql-server-2017
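A minimal indexed-view sketch against a hypothetical dbo.Sales table (with Amount assumed NOT NULL); SCHEMABINDING plus the unique clustered index are what make the view materialized:

CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING              -- required for an indexed view
AS
SELECT ProductId,
       COUNT_BIG(*) AS SaleCount,   -- COUNT_BIG is required with GROUP BY
       SUM(Amount)  AS TotalAmount  -- Amount must be NOT NULL
FROM dbo.Sales                  -- hypothetical table
GROUP BY ProductId;
GO

-- materializes the view; queries can now read the precomputed aggregates
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
ON dbo.vSalesByProduct (ProductId);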
It's hard to answer without the details. However, with such a large SP and such a small table, it is likely that a particular SELECT or join is slow, rather than the repeated access to the table itself (SQL Server is perfectly happy to cache parts of tables or indexes in memory).
If possible, can you get the execution plan of each part of the SP, log some timings, or run each part with statistics on?
That will tell you which part is slow, and we can help you fix it.
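For example, statistics can be switched on around each part of the SP:

SET STATISTICS TIME ON;   -- CPU and elapsed time per statement
SET STATISTICS IO ON;     -- logical/physical reads per table

-- run the suspect SELECT here, then check the Messages tab

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;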

How to get a list of updated/inserted rows in a SQL Server database after multiple stored procedures have executed?

Consider a Java application reading and modifying data in a SQL Server database using only stored procedures.
I am interested in knowing exactly which rows were inserted or updated after some code has executed.
The code being executed can trigger multiple stored procedures, and in the general case these procedures work with different tables.
My current solution is to debug the low-level Java code executed before any of the stored procedures is called and inspect the parameters passed, in order to manually reconstruct the impact.
This seems inefficient and unreliable.
Is there a better approach?
To know exactly which rows were inserted or updated after the code executes, you can implement triggers for the UPDATE, DELETE and INSERT operations on the tables involved. These triggers should be almost identical for every table, changing just the name and the table they are attached to.
For this suggestion, the tables should have audit columns: at a minimum, one for the datetime when the row was inserted and one for the datetime when it was updated. You can look for more audit ideas if you want (and need) them, such as a column recording which user triggered the insert/update, how many times the row has been altered, and so on.
You should work out a different approach depending on how much data you expect these triggers to generate.
I'm assuming you know how to do this with best practices (for example, you can [and should, IMHO] generate these triggers dynamically to ease maintenance).
Finally, you will be able to build a query from the sys tables (which contain information about tables and rows) that returns only the rows involved, ordered by these new columns (just an idea that I hope fits your particular case).
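A sketch of one such trigger, assuming a hypothetical dbo.Orders table to which audit columns UpdatedAt and UpdatedBy have already been added:

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders                -- hypothetical table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp the audit columns on every updated row
    UPDATE o
    SET o.UpdatedAt = SYSDATETIME(),
        o.UpdatedBy = SUSER_SNAME()
    FROM dbo.Orders AS o
    JOIN inserted AS i ON i.OrderId = o.OrderId;
END

(The trigger updating its own table does not re-fire itself unless the RECURSIVE_TRIGGERS database option is on, which it is not by default.)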

Stored procedure to update different columns

I have an API that I'm trying to read that gives me just the updated fields. I'm trying to take that and update my tables using a stored procedure. So far, the only way I have figured out how to do this is with dynamic SQL, but I would prefer not to do that if there is an alternative.
If it were just a couple of columns, I'd write a proc for each, but we are talking about 100 fields, and any of them could be updated together. One ticket might need just a timestamp updated, while the next might need a timestamp plus who modified it, and the one after that just a note.
Everything I've read and been taught says that dynamic SQL is bad, and while I'll write it if I have to, I'd prefer a plain proc.
You can perhaps do something like this:
UPDATE o
SET o.OldRecords = n.NewRecords
FROM OldTable AS o
JOIN NewTable AS n
    ON o.PrimaryKey = n.PrimaryKey
WHERE o.OldRecords <> n.NewRecords;  -- only touch rows whose values actually differ
The best way to solve your problem is using MERGE:
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table.
As you can see, your update could be more complex, but more efficient as well. Using MERGE requires some proficiency, but once you start using it, you'll use it with pleasure again and again.
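A minimal MERGE sketch, reusing the OldTable/NewTable names from the earlier answer as stand-ins for your real target and source:

MERGE dbo.OldTable AS target            -- hypothetical target
USING dbo.NewTable AS source            -- hypothetical source
    ON target.PrimaryKey = source.PrimaryKey
WHEN MATCHED THEN
    UPDATE SET target.OldRecords = source.NewRecords
WHEN NOT MATCHED BY TARGET THEN
    INSERT (PrimaryKey, OldRecords)
    VALUES (source.PrimaryKey, source.NewRecords);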
I am not sure how your business logic determines which columns are updated at what time. If separate business functions each update a different but consistent set of columns, you will probably want individual UPDATE statements per function. This ensures each process updates only the columns it needs to.
On the other hand, if your API is such that you really don't know ahead of time what needs to be updated, then building a dynamic SQL query is a good idea.
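If you do go the dynamic route, here is a sketch of the safe shape (the table and column names are made up): the identifier is quoted, and the values are passed as parameters rather than concatenated:

DECLARE @ColumnName SYSNAME       = N'Note';             -- hypothetical column
DECLARE @Value      NVARCHAR(MAX) = N'Updated via API';
DECLARE @TicketId   INT           = 42;
DECLARE @sql        NVARCHAR(MAX);

-- QUOTENAME guards the identifier; ideally also validate @ColumnName
-- against sys.columns before building the statement
SET @sql = N'UPDATE dbo.Tickets SET ' + QUOTENAME(@ColumnName) +
           N' = @Value WHERE TicketId = @TicketId;';

EXEC sp_executesql @sql,
     N'@Value NVARCHAR(MAX), @TicketId INT',
     @Value = @Value, @TicketId = @TicketId;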
Another option is to build a save proc that sets every user-configurable field. As long as the calling process has all of that data, it can call the save procedure and pass every updateable column. There is no harm in an UPDATE MyTable SET MyCol = @MyCol with the same value on each side.
Note that even if all of the values are the same, any rowversion (or timestamp) columns will still be updated, if present.
With our software, the tables that users can edit have a widely varying range of columns. We chose to create a single save procedure per table, with every updateable column as a parameter. The calling processes (our web servers) keep all the required columns in memory and pass all of them on every call. This performs fine for our purposes.
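A sketch of that pattern with just three columns (a real procedure would list all of them; the names here are made up):

CREATE PROCEDURE dbo.SaveTicket    -- hypothetical procedure
    @TicketId   INT,
    @Note       NVARCHAR(MAX),
    @ModifiedBy NVARCHAR(50),
    @ModifiedAt DATETIME2
AS
BEGIN
    SET NOCOUNT ON;
    -- every updateable column is set on every call; unchanged
    -- values are simply rewritten with the same data
    UPDATE dbo.Tickets
    SET Note       = @Note,
        ModifiedBy = @ModifiedBy,
        ModifiedAt = @ModifiedAt
    WHERE TicketId = @TicketId;
END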

Fire triggers on SELECT

I'm new to triggers, and I need to fire a trigger when selecting values from a database table in SQL Server. I have tried firing triggers on insert, update and delete. Is there any way to fire a trigger on SELECT?
There are only two ways I know of to do this, and neither is a trigger.
You can use a stored procedure to run the query and log the query, along with any other information you'd like to know, to a table.
You can use the audit feature of SQL Server.
I've never used the latter, so I can't speak to its ease of use.
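A sketch of the stored-procedure approach (the table and procedure names are made up): log who ran the query, then return the data:

CREATE PROCEDURE dbo.GetCustomer      -- hypothetical procedure
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- record the read before returning the data
    INSERT INTO dbo.SelectLog (CustomerId, ReadBy, ReadAt)
    VALUES (@CustomerId, SUSER_SNAME(), SYSDATETIME());

    SELECT CustomerId, Name, City
    FROM dbo.Customers
    WHERE CustomerId = @CustomerId;
END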
No, there is no provision for a trigger on a SELECT operation. As suggested in the earlier answer, write a stored procedure that takes the parameters fetched by the SELECT query, and call this procedure after the desired SELECT query.
SpectralGhost's answer assumes you are trying to do something like a security audit of who or what has looked at which data.
But it strikes me that if you are new enough to SQL not to know that a SELECT trigger is conceptually daft, you may be trying to do something else. In that case you're really talking about locking rather than auditing: once one process has read a particular record, you want to prevent other processes from accessing it (or possibly some related records in a different table) until the transaction is either committed or rolled back. If so, triggers are definitely not your solution (they rarely are). See Books Online on transaction control and locking.

MS SQL Server Trigger after another trigger

I would like to create a trigger in MS SQL Server that is called after another trigger finishes. The trigger that is called first is an INSERT trigger. Is this possible?
Yes, it is.
But the tricky part is controlling the order in which they execute. It is strongly advised to use multiple triggers of the same type only if they are fully independent and can therefore execute in any order. In your case, it is better to use multiple stored procedure calls inside a single trigger. You'll thank yourself later.
For more information on the subject, see previous question in SO:
SQL Server triggers - order of execution
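If you do end up with multiple triggers of the same type on one table, sp_settriggerorder lets you pin which fires first and which fires last (the order of any others is undefined); a sketch with hypothetical trigger names:

-- trgFirst runs before any other INSERT trigger on its table
EXEC sp_settriggerorder
     @triggername = N'dbo.trgFirst',
     @order       = N'First',
     @stmttype    = N'INSERT';

-- trgLast runs after all other INSERT triggers
EXEC sp_settriggerorder
     @triggername = N'dbo.trgLast',
     @order       = N'Last',
     @stmttype    = N'INSERT';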
Yes, it is possible. Triggers can be nested up to 32 levels.
If you need deeper recursion, a recursive CTE allows up to 100 levels by default (via the MAXRECURSION hint), although that is a separate mechanism from trigger nesting.
