Implementing periodic timestamp writes into an MS SQL database - sql-server

Task:
Write timestamps into an MS SQL database table every second.
Solutions:
An external application that writes timestamps on a schedule (SQL Agent, for example).
A stored procedure that writes timestamps in an infinite loop.
Questions:
Which of the solutions is best?
Are there any drawbacks to running an infinite loop in a stored procedure?
How do I start the stored procedure after a server reboot?
Any other solutions?

Either, but I'd tend toward the stored procedure route with WAITFOR DELAY (a sketch follows below)
Not really
sp_procoption and startup stored procedures
Without involving an external client or system, I can't think of any
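A minimal sketch of that route, with illustrative table and procedure names (note that sp_procoption only works on procedures created in the master database):
-- Illustrative schema: a table to receive the once-per-second timestamps.
CREATE TABLE dbo.Heartbeat (ts DATETIME NOT NULL);
GO
-- The procedure loops forever, writing a row and then sleeping one second.
CREATE PROCEDURE dbo.WriteHeartbeat
AS
BEGIN
    SET NOCOUNT ON;
    WHILE 1 = 1
    BEGIN
        INSERT INTO dbo.Heartbeat (ts) VALUES (GETDATE());
        WAITFOR DELAY '00:00:01';
    END
END
GO
-- Have SQL Server launch it automatically at startup
-- (requires the procedure to live in master):
EXEC sp_procoption @ProcName = 'WriteHeartbeat',
                   @OptionName = 'startup',
                   @OptionValue = 'on';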

1. Both have advantages and disadvantages. Evaluate and choose based on your environment.
Procedure Advantages:
- No overhead of SQL Agent processing once per second. (In fact, I don't think you can get SQL Agent to consistently launch the same job once per second.)
- I don't think a procedure in WAITFOR mode is consuming resources, but you'd want to check.
Procedure Disadvantages:
- If the procedure fails (gets stopped somehow), it won't restart on its own (unless you have a SQL Agent job running to check whether the procedure has stopped, in which case you might as well go with the procedure).
- It might be easier to stop or break than you'd think (concurrency/deadlocks, detached DBs, stopping it manually during maintenance and then forgetting to restart it).
Job Advantages:
- If a job run fails (perhaps the db is unavailable), the next run will still get launched.
Job Disadvantages:
- It seems very kludgy to have SQL Agent run every second. Measure the server overhead this requires if you do it.
- If SQL Agent fails or is stopped, the job won't run.
A suggestion: must it be once per second? Could it be once per 5, 10, 15, or 30?
2. There shouldn't be any, barring what's mentioned above. Make darn sure you can't hit locking, blocking, or deadlock situations!
3. As @gbn says, sp_procoption.
4. Nothing that doesn't involve cumbersome tricks based on pessimistic locking techniques, or byzantine logic based on the timestamp (not datetime) datatype. The best fix would seem to be the long-term one of combining those two databases into one, but that's not a short-term option.
Out of sheer paranoia, I'd combine the two like so (a sketch follows below):
A job set to run every 2, 3, or 5 minutes.
The job calls a procedure that updates your timestamp, then WAITFORs a few seconds, and repeats.
The procedure does not stop, so the job keeps running; while the job is running, it will not be started again (because it's still executing).
If the procedure somehow dies, the job will launch it again the next time it's scheduled to run.
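A rough sketch of that combination, assuming the Heartbeat table above (the SQL Agent job, scheduled every few minutes, simply does EXEC dbo.HeartbeatLoop):
-- The procedure never returns, so the job that calls it stays "executing"
-- and SQL Agent will not start a second instance. If the procedure dies,
-- the job completes and the next scheduled run relaunches the loop.
CREATE PROCEDURE dbo.HeartbeatLoop
AS
BEGIN
    SET NOCOUNT ON;
    WHILE 1 = 1
    BEGIN
        UPDATE dbo.Heartbeat SET ts = GETDATE();  -- or INSERT, as required
        WAITFOR DELAY '00:00:01';
    END
END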

Try using SQL Service Broker to do this asynchronously; its queue system means you won't miss any data even if the SQL Server service has to be restarted. I've used this once, some time back, for this kind of polling scenario.
http://msdn.microsoft.com/en-us/library/ms345108(SQL.90).aspx#sqlsvcbr_topic2
This might also help:
http://msdn.microsoft.com/en-us/library/bb839488.aspx
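For orientation, here is a very rough sketch of the Service Broker plumbing involved; all object names are assumptions, and the linked articles cover the full pattern (activation procedures, conversation reuse, and so on):
-- Messages are persisted in the queue, so they survive a service restart.
CREATE MESSAGE TYPE TimestampMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT TimestampContract (TimestampMsg SENT BY INITIATOR);
CREATE QUEUE dbo.TimestampQueue;
CREATE SERVICE TimestampService ON QUEUE dbo.TimestampQueue (TimestampContract);
GO
-- Enqueue one timestamp message:
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE TimestampService
    TO SERVICE 'TimestampService'
    ON CONTRACT TimestampContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE TimestampMsg (N'<ts>2009-06-01T12:00:00</ts>');
    -- literal payload for illustration; build it from GETDATE() in practice
GO
-- Dequeue (for example from an activation procedure):
WAITFOR (
    RECEIVE TOP (1) message_type_name, message_body
    FROM dbo.TimestampQueue
), TIMEOUT 5000;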

Related

SQL Server SPIDs go into a sleeping state and never recover

I have a long-running stored procedure that is executed from IIS. On average this stored procedure takes between two and five minutes to complete, because it is searching through a large dataset (although it has taken around 20 minutes in some cases).
Most of the time the stored procedure works fine, but every now and then the SPIDs go into a sleeping state and never recover. The only solution I have found is to restart SQL Server and re-run the stored procedure.
There are no table inserts in the proc (only table variable inserts), and the other statements are selects on a large table.
I'm stuck for where to start debugging this issue. Any hints on what it might be, or suggestions on tools that would help me find the issue, would be most helpful.
EDIT: More info added:
The actual issue is that the proc doesn't return the result set. My first thought was to look at the SPIDs; they were sleeping, but the CPU time was still increasing.
It's a .NET app: .NET Core 3.1 with ASP.NET Core and a Blazor UI. The library used for the db connection is System.Data.SqlClient, which I believe uses its own custom driver. Calling code below:
The stored procedure doesn't return multiple result sets; however, different instances of the proc obviously run at the same time.
There are no limits on connection pooling in IIS.
@RichardWatts when you say "re-run the stored procedure", do you mean that the same stored proc with the same parameters and data works once you restart SQL Server?
If so, look at the locks (sp_lock) on your tables; probably another process is locking some data and not releasing it properly, especially if you have transactions accessing the same tables.
What is the isolation level on your connection? If you can, try changing it to READ UNCOMMITTED to see if that solves your problem.
As an alternative, you can also add a WITH (NOLOCK) or WITH (READUNCOMMITTED) hint to your SQL command.
Know that even a query with READ UNCOMMITTED or NOLOCK will be held up if there are modifications to the structure of your table (an index rebuild, for example), which will in turn block its execution.
Nevertheless, be cautious: this solution depends on your environment. Especially if your tables get lots of updates, deletes, inserts, and so on, this kind of isolation can lead to dirty reads, and it doesn't address the root cause of your problem, which I would bet is an uncommitted transaction (there is a good article that explains it).
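For reference, the two forms being suggested look like this (illustrative table and columns; weigh the dirty-read risk first):
-- Session-wide: every statement on this connection may read uncommitted data.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = 42;

-- Per-table hint, without changing the session isolation level:
SELECT OrderId, Total FROM dbo.Orders WITH (NOLOCK) WHERE CustomerId = 42;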
Also run DBCC CHECKTABLE, just to be sure on that side.

SQL query notification service issue

We are getting timeout issues on our databases, so I turned on SQL Server Profiler and saw SQLQueryNotificationService running every second with a long duration. I checked Service Broker, and there are a bunch of SQLQueryNotificationService queues created. I don't think we created any of these queues; there are also a bunch of stored procedures with names like SqlQueryNotificationStoredProcedure-15c5b84b-42b0-4cfb-9707-9b1697f44127. Could you please let me know how to drop them? If I drop them, is there any impact on the database? I appreciate any suggestions.
Do you have an ASP.NET web site running, or another application that creates SQL Server cache dependencies? These are SQL Server Service Broker queues; each executes a WAITFOR ... statement that waits for around one minute (60000 ms) and then executes again the next second. This shouldn't normally cause problems; it shouldn't block or delay your "normal" queries or stored procedures.
However, I saw it cause issues for me once: one of the stored procedures, when executed from the same web application that established the cache dependency, timed out (or rather came back in 120 seconds, which was not acceptable). Exactly the same stored procedure, executed under the same account with the same parameters but from Management Studio, ran fine without any issues. This was on SQL Server 2005 SP4.
SQL Profiler showed that in the middle of my stored procedure's execution (and always after the same INSERT INTO ... statement), its execution was interrupted, and instead of its statements there was that WAITFOR ... from the query notification, completing in one minute; then another WAITFOR ... started and again completed in 59 seconds. Only after that did the Profiler show my stored procedure completing, with a duration of 119000 - almost exactly two minutes.
It was as if the query notifications were joining the transaction within my stored procedure.
What helped: recompiling the offending stored procedure. I simply changed its script - an ALTER statement with some minor syntax changes. No problems after that.
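If editing the script is inconvenient, marking the procedure for recompilation should achieve the same thing (the procedure name here is a placeholder):
-- Drops the cached plan; the procedure is recompiled on its next execution.
EXEC sp_recompile 'dbo.OffendingProcedure';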

Is it possible to set a timeout for a SQL query on Microsoft SQL Server?

I've got a scenario where sometimes a user selects the right parameters and runs a query which takes several minutes or more to execute. I cannot prevent him from selecting such a combination of parameters (it's quite legal), so I'd like to set a timeout on the query.
Note that I really want to stop the query execution itself and roll back any transactions, because otherwise it hogs most of the server's resources. Add an impatient user who restarts the application and tries the combination again, and you've got a recipe for disaster (read: a SQL Server DoS).
Can this be done and how?
As far as I know, apart from setting the command or connection timeouts in the client, there is no way to change timeouts on a query-by-query basis on the server.
You can indeed change the default of 600 seconds using sp_configure, but that setting is server-scoped.
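Presumably that is the 'remote query timeout' option, whose default is 600 seconds; note that per the documentation it applies only to remote operations, not to local queries:
EXEC sp_configure 'remote query timeout', 120;  -- seconds; 0 means no timeout
RECONFIGURE;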
Humm!
Did you try LOCK_TIMEOUT? Note down what it was originally before running the query, set it for your query, and after running your query set it back to the original value:
SET LOCK_TIMEOUT 1800;  -- the value is in milliseconds
SELECT @@LOCK_TIMEOUT AS [Lock Timeout];
I might suggest two things.
1)
If your query takes a lot of time because it uses several tables that might involve locks, a quick solution is to run your queries with the NOLOCK hint.
Simply write Select * from YourTable WITH (NOLOCK) in all your table references, and that will prevent your query from blocking on concurrent transactions.
2) If you want to be sure that all of your queries run in (let's say) less than 5 seconds, you could add what @talha proposed, which worked sweetly for me.
Just add at the top of your execution
SET LOCK_TIMEOUT 5000; --5 seconds.
That will cause your query to fail if it waits more than 5 seconds on a lock; you should then catch the exception and roll back if needed.
Hope it helps.
In Management Studio you can set the timeout in seconds:
menu Tools => Options, set the field, and then OK.
It sounds more like an architectural issue, and any timeout/disconnect you can do would be more or less a band-aid. This has to be solved on the SQL Server side, by way of a read-only replica, transaction log shipping (to give you a read-only server to connect to), replication, and such. Basically, you give yourself a DMZ SQL server that heavy reads can go to without killing anything. This is very common. A well-designed SQL system won't be taken down by a DoS - that'd be like a car that dies if you step on the gas.
That said, if you are at liberty to change the code, you could guesstimate whether the query is too heavy and either reject it or return only X rows from your stored procedure. If you are tied to some reporting tool and can't control the SELECT it generates, you could point it at a view and put the safety valve in the view.
Also, if up-to-the-minute freshness isn't critical and you can compromise on that (monthly sales data, say), then having a job compile a physical table out of the complex joins might do the trick; that way every query would be sub-second.
It entirely depends on what you are doing, but there is always a solution. Sometimes it takes extra coding to optimize it, sometimes it takes extra money to get you the secondary read-only DB, and sometimes it needs time and attention on index tuning.
So it entirely depends, but I'd start with "what can I compromise? what can I change?" and go from there.
You can set the execution time-out in seconds.
If you have just one query, I don't know how to set a timeout at the T-SQL level.
However, if you have a few queries (say, collecting data into temporary tables) inside a stored procedure, you can control the execution time with GETDATE(), DATEDIFF(), and a few INT variables storing the execution time of each part (a sketch follows below).
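A sketch of that pattern; the 30-second budget and the placeholder work are assumptions:
DECLARE @start DATETIME;
SET @start = GETDATE();

-- ... first expensive step, e.g. filling a temporary table ...
WAITFOR DELAY '00:00:02';  -- placeholder for real work

IF DATEDIFF(SECOND, @start, GETDATE()) > 30
BEGIN
    RAISERROR ('Time budget exceeded; skipping remaining steps.', 16, 1);
    RETURN;
END

-- ... second expensive step ...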
You can specify the connection timeout within the SQL connection string, when you connect to the database, like so:
"Data Source=localhost;Initial Catalog=database;Connect Timeout=15"
On the server level, use SQL Server Management Studio to view the server properties, and on the Connections page you can specify the default query timeout.
I'm not quite sure that queries keep running after the client connection has closed. Queries shouldn't take that long either; MSSQL can handle large databases, and I've worked with GBs of data on it before. Run a performance profile on the queries; perhaps some well-placed indexes could speed them up, or rewriting the query could too.
Update:
According to this list, SQL timeouts happen while waiting for an attention acknowledgement from the server:
Suppose you execute a command, then the command times out. When this happens the SqlClient driver sends a special 8 byte packet to the server called an attention packet. This tells the server to stop executing the current command. When we send the attention packet, we have to wait for the attention acknowledgement from the server and this can in theory take a long time and time out. You can also send this packet by calling SqlCommand.Cancel on an asynchronous SqlCommand object. This one is a special case where we use a 5 second timeout. In most cases you will never hit this one, the server is usually very responsive to attention packets because these are handled very low in the network layer.
So it seems that after the client connection times out, a signal is sent to the server to cancel the running query too.

Usage history of Stored Procedures in SQL Server 2008

I work with legacy systems that have tens of thousands of lines of stored procedure code, where many of the stored procedures are obsolete and no longer used. There doesn't seem to be a way to check execution history, so my question is whether it might be a good idea to start each stored procedure by inserting a row into a table that keeps records of executions.
It could be very simple, like:
insert into executionHistory (name, date)
select 'spName', getdate()
-- then the rest of the procedure
I imagine this could be very useful for cleaning up old unused code, and might also be handy when deciding where to optimize. I mean, it's better to shave 10 seconds off the execution time of a procedure that is executed 50 times a day than to save 10 minutes of execution time on a procedure that is only used once a year.
There is a tracing option (SQL Profiler) in SQL Server. You could take a trace of a day's SQL activity and see which sprocs are executed there.
This will give you a good idea of where to focus your optimisations.
Because you're using SQL Server 2008, I wouldn't do what rwmnau suggests, since it would mean modifying all your stored procedures.
SQL Server 2008 introduces a feature called Extended Events, and SQL Server Auditing is built on them. Extended Events are a high-performance tracing system.
By using SQL Server Auditing you can trace your system without the overhead of a SQL trace (a sketch follows below).
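A rough sketch of such a session on SQL Server 2008; the session name, file paths, and choice of event are assumptions to verify against your instance:
-- Records every completed module execution (stored procedures, triggers, functions).
CREATE EVENT SESSION ProcUsage ON SERVER
ADD EVENT sqlserver.module_end
    (ACTION (sqlserver.database_id, sqlserver.session_id))
ADD TARGET package0.asynchronous_file_target
    (SET filename = N'C:\Temp\ProcUsage.xel',
         metadatafile = N'C:\Temp\ProcUsage.xem');
GO
ALTER EVENT SESSION ProcUsage ON SERVER STATE = START;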
I think your idea is simple enough and would accomplish your goal. Though it would involve modifying every SP, it's the route I would choose, because then you can ensure that you're getting an accurate recording of all activity on the database.
Another poster suggested you run a trace - while this works for short periods, it's only going to catch the times you're watching. You'd have to make sure you trace across all the important, high-traffic periods, like month-end financial closing, and even then you're missing the times you don't think are a big deal, so you're being subjective.

SQL procedure running time widely divergent

I have an application that runs a huge stored procedure on SQL Server 2000. Usually it takes about 1 minute to complete, but occasionally it takes MUCH longer.
Just now I ran it three times in a row on my test system. It took 1:12, 1:23, and 55:25.
What would cause that behavior? There are other things going on in the database, so I wonder if it has something to do with locks. How can I catch this in the act?
Create a trace and examine it in Profiler. That should at least point toward where the problem lies - in your procedure or elsewhere.
It's probably parameter sniffing: based on the input, SQL Server chose a different query plan (the usual workarounds are sketched below).
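On SQL Server 2000 the usual workarounds are WITH RECOMPILE or copying parameters into local variables; a sketch, with placeholder names:
-- 1) Recompile on every call, trading CPU for a plan that fits the inputs:
CREATE PROCEDURE dbo.BigReport @CustomerId INT
WITH RECOMPILE
AS
    SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
GO
-- 2) Copy the parameter into a local variable so the optimizer uses
--    average-density estimates instead of the sniffed value:
CREATE PROCEDURE dbo.BigReport2 @CustomerId INT
AS
    DECLARE @c INT
    SET @c = @CustomerId
    SELECT * FROM dbo.Orders WHERE CustomerId = @c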
Another possibility is that a separate query was running at the same time and locked everything up.
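To catch the blocking case in the act on SQL Server 2000, sysprocesses shows who is blocked and by whom:
-- A non-zero 'blocked' value is the SPID holding the lock this session waits on.
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0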
