I have a stored procedure that does some inserts into a table. If I execute that same stored procedure repeatedly, does each execution reflect its inserts in the table by the time it ends, or could an insert be applied after the stored procedure execution finishes and overlap with the execution of the second instance of that stored procedure?
I hope I was clear; if not, please correct me.
Thanks
Whatever work is done in a Stored Procedure, unless explicitly rolled back or automatically rolled back due to an error, will be there when the Stored Procedure exits. Once a Stored Procedure exits, there is no more work that it could be doing.
This means that within a single session, any number of executions of a stored procedure are handled serially -- one after the other, no overlap.
However, across multiple sessions / connections, the work being done in a Stored Procedure certainly can overlap if that same code (Stored Procedure or even ad hoc SQL) is run at the same time across other sessions / connections.
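As a sketch of the single-session case (the table and procedure names here are made up for illustration), two calls in the same batch always run back to back:

```sql
-- Hypothetical example: within one session, the second EXEC cannot start
-- until the first has completed, so its inserts are already in the table.
CREATE TABLE dbo.AuditLog (ID int IDENTITY, Note varchar(50));
GO
CREATE PROCEDURE dbo.LogNote @Note varchar(50)
AS
BEGIN
    INSERT INTO dbo.AuditLog (Note) VALUES (@Note);
END;
GO
EXEC dbo.LogNote 'first';   -- completes fully...
EXEC dbo.LogNote 'second';  -- ...before this one begins
```

Overlap only becomes possible when a second session runs the same procedure concurrently.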
Related
When I execute a stored procedure with some inputs, it takes more than 30 seconds to run, but when I execute the query directly with the same inputs, it takes only 3 seconds.
I guessed it was due to a poor execution plan sitting in the cache, so first I dropped and recreated the stored procedure, but that did not work. Then I added the WITH RECOMPILE option to the stored procedure and tried again, but that did not work either.
Then I tried assigning the input parameters to local variables and used those local variables inside the stored procedure (instead of the actual parameters), and after that the execution time of the stored procedure dropped to 3 seconds. (If it's due to parameter sniffing, then the WITH RECOMPILE option should also resolve it, right?)
And even when I restored the same backup on another server and executed the same stored procedure with the same inputs, it took only 3 seconds.
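The local-variable workaround described above follows a well-known pattern (the procedure, parameter, and table names below are purely illustrative):

```sql
-- Copying parameters into local variables defeats parameter sniffing:
-- the optimizer cannot "sniff" a local variable's value at compile time,
-- so it builds the plan from average density statistics instead of the
-- specific value passed in on first execution.
CREATE PROCEDURE dbo.GetOrders @CustomerID int
AS
BEGIN
    DECLARE @LocalCustomerID int = @CustomerID;

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;  -- filter uses the local copy
END;
```

By contrast, WITH RECOMPILE produces a fresh plan sniffed from the actual values on every call, which is why the two techniques can behave differently.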
So can someone please clarify the logic behind it?
I have a system set up with a batch execution of a stored procedure 10 times.
exec procedure 1
exec procedure 2
exec procedure 3
exec procedure 4
...
exec procedure 10
The procedure is designed to accumulate a running total based on the ID of the record in a target table.
When I run that batch I will get results that are out of sync. The running total should just be the running total of the previous row, plus the value of the current row.
When I run the same batch with a GO statement in between each, it takes much longer to run, but executes correctly.
Is there any kind of hint (like "MAXDOP 1") that can be done in this situation to force the procedures to execute and complete in order without going out of sync?
I should add that the stored procedure being called itself calls several other procedures, in case that has any bearing on a solution.
I did a bit more testing on this, and it looks like my initial thoughts were incorrect. I did several tests with batches using GO statements, and even then, only a few records in the batch would have their running balances updated, but the remaining would stay out of sync. It looks like when I did my initial tests, the first 10 records updated properly, but I didn't notice anything else in that section since the rest of the values were already correct until a later section of the data set.
This looks like it is an issue internal to the procedure, not with repeated execution of the procedure. The weird part is that we never experienced this issue on a single-core system, which still leaves me thinking this is a parallelism issue, but most likely one internal to the procedure.
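If it does turn out to be a parallelism problem inside the procedure, one hedge is to compute the running total with a window function, which is deterministic under any degree of parallelism, unlike row-by-row "quirky update" approaches. A sketch, assuming a hypothetical dbo.Ledger(ID, Amount, RunningTotal) table and SQL Server 2012 or later:

```sql
-- SUM() OVER with ORDER BY yields a well-defined running total regardless
-- of parallelism, so rows can never go "out of sync".
UPDATE L
SET RunningTotal = T.RunningTotal
FROM dbo.Ledger AS L
JOIN (
    SELECT ID,
           SUM(Amount) OVER (ORDER BY ID
                             ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM dbo.Ledger
) AS T ON T.ID = L.ID;
```

Adding OPTION (MAXDOP 1) to the offending statement is another option, but that only suppresses parallelism rather than removing the ordering assumption.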
Sorry for wasting your time.
I'm playing with the idea of rerouting every end-user stored procedure call of my database through a logging stored procedure. Essentially it will wrap the stored procedure call in some simple logging logic: who made the call, how long it took, etc.
Can this potentially create a bottleneck? I'm concerned that when the amount of total stored procedure calls grows this could become a serious problem.
Routing everything through a single point of entry is not optimal. Even if there are no performance issues, it is still something of a maintenance problem, as you will need to expose the full range of input parameters that the real procs accept in the controller proc. Adding procs to this controller over time will require a bit of testing each time to make sure that you mapped the parameters correctly. Removing procs over time might leave unused input parameters. This method also requires that the app code pass in not only the parameters for the intended proc but also its name (or ID?), and this is another potential source of bugs, even if a minor one.
It would be better to have a general logging proc that gets called as the first thing of each of those procs. That is a standard template that can be added to any new proc quite easily. This leaves a clean API to the app code such that the app code is likewise maintainable.
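A minimal sketch of that template, assuming a hypothetical dbo.ProcLog table and a dbo.LogProcCall logging proc (all names here are made up):

```sql
-- Hypothetical logging infrastructure.
CREATE TABLE dbo.ProcLog (
    LogID    int IDENTITY PRIMARY KEY,
    ProcName sysname,
    CalledBy sysname,
    CalledAt datetime2 DEFAULT SYSDATETIME()
);
GO
CREATE PROCEDURE dbo.LogProcCall @ProcName sysname
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ProcLog (ProcName, CalledBy)
    VALUES (@ProcName, ORIGINAL_LOGIN());
END;
GO
-- Template: every real proc calls the logger as its first statement.
CREATE PROCEDURE dbo.GetCustomer @CustomerID int
AS
BEGIN
    EXEC dbo.LogProcCall N'dbo.GetCustomer';
    -- ...real work here...
END;
```

Because each proc calls the logger directly, the app-facing API stays unchanged and no parameter mapping is needed.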
SQL can run the same stored procedure concurrently, as long as it doesn't cause blocking or deadlocks on the resources it is using. For example:
CREATE PROCEDURE ##test
AS
BEGIN
SELECT 1
WAITFOR DELAY '00:00:10'
SELECT 2
END
Now execute this stored procedure quickly in two different query windows to see it running at the same time:
--Query window 1
EXEC ##test
--Query window 2
EXEC ##test
So you can see there won't be a queue of calls waiting to execute the stored procedure. The only problem you may run into is that if you are logging the proc details to a certain table then, depending on the isolation level, you could get blocking as the logging proc locks pages in the table while recording the data. I don't believe this would be a problem unless you are running the logging stored procedure extremely heavily, but you'd want to run some tests to be sure.
I have two stored procedures, where the second stored procedure is an improvement of the first one.
I'm trying to measure exactly how much of an improvement it is.
1/ Measuring clock time doesn't seem to be an option, as I get different execution times. Even worse, sometimes (rarely, but it happens) the execution time of the second stored procedure is longer than the execution time of the first procedure (I guess due to the server workload at that moment).
2/ Include client statistics also provides different results.
3/ DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE help produce a cold-cache baseline, but it's the same story...
4/ SET STATISTICS IO ON could be an option, but how could I get an overall score as I have many tables involved in my stored procedures?
5/ Include actual execution plan could be an option also. I get an estimated subtree cost of 0.3253 for the first stored procedure and 0.3079 for the second one. Can I say the second stored procedure is about 6% faster (= 0.3253/0.3079)?
6/ Using the "Reads" field from SQL Server Profiler? Even in this case I get different results when I execute those stored procedures many times.
So how can I say the second stored procedure is x% faster than the first, regardless of the execution conditions (the workload of the server, the server on which these stored procedures are executed, etc.)?
If that is not possible, how can I prove the second stored procedure is better optimized?
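One hedge, rather than relying on wall-clock time, is to compare average logical reads and worker time as aggregated by the server itself. A sketch using sys.dm_exec_procedure_stats (run each procedure many times first so the averages stabilize; dbo.Proc_v1 and dbo.Proc_v2 are placeholder names):

```sql
-- Average cost per execution for the two procs, as recorded by SQL Server.
-- Logical reads are largely independent of concurrent workload, which makes
-- them a fairer comparison metric than elapsed time.
SELECT OBJECT_NAME(ps.object_id)                    AS ProcName,
       ps.execution_count,
       ps.total_logical_reads / ps.execution_count  AS AvgLogicalReads,
       ps.total_worker_time   / ps.execution_count  AS AvgWorkerTimeUs
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id IN (OBJECT_ID('dbo.Proc_v1'), OBJECT_ID('dbo.Proc_v2'));
```

Note that these stats are reset when the plan leaves the cache, so collect them in one sitting.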
What is the difference between a procedure and a stored procedure in SQL Server?
There is no difference. There is no concept of "unstored" procedures in SQL Server.
CREATE PROCEDURE
Will create a stored procedure
select * from sys.procedures
will show you the stored procedures.
This is as opposed to sending ad hoc SQL statements or prepared SQL statements.
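For contrast, here is the same lookup sent as an ad hoc statement versus called as a stored procedure (the names are illustrative):

```sql
-- Ad hoc: the full statement text travels from the client on every call.
SELECT Name FROM dbo.Customers WHERE CustomerID = 42;

-- Stored procedure: only the call travels; the body is stored in the
-- database and its compiled plan can be reused across calls.
EXEC dbo.GetCustomerName @CustomerID = 42;
```

Both forms can get plan caching in modern SQL Server, but only the second is a "stored" procedure in the sys.procedures sense.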
A procedure, in the general sense, is a specified series of actions or operations which have to be executed in the same manner in order to always obtain the same result under the same circumstances.
A stored procedure is a subroutine available to applications accessing a relational database system. Stored procedures (sometimes called a proc, sproc, StoPro, or SP) are actually stored in the database data dictionary.
In an ad hoc batch of statements you have to start the transaction manually, handle the rollback manually, and so on.
In a stored procedure, transaction control and error handling can be encapsulated once, and you can use atomic transactions to keep your information consistent.
Also, a stored procedure can execute a bit faster than the equivalent ad hoc SQL because its execution plan is cached and reused by the database engine.
If it's an actual procedure, in the database, it's a stored procedure -- regardless of whether people pronounce the "stored" part.
Stored procedures stand in opposition to the client issuing the SQL statements of the procedure one by one; that is what an un-"stored" procedure would be.