SQL Server 2008 trigger to modify a file

I need some help with SQL Server 2008 and triggers.
The context: a machine generates data (a number: an integer) that I need to inject into an XML file. This data changes a few times a day, but I need it in real time.
Problem: the data is not available directly from the machine, no way... but the machine feeds a SQL Server 2008 database.
So I think the best approach is to use a SQL Server trigger. Am I wrong?
Here's the code I'm using:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[Test]
ON [dbo].[Table_machine]
AFTER UPDATE
AS
IF UPDATE(Valeur)
BEGIN
    -- ********************************** (the XML file update would go here)
END
This trigger fires when 'Valeur' is updated, but I don't know how to modify my XML file from it.

I would strongly recommend AGAINST putting logic that does a lot of conversion and writes out to the file system into a trigger - it will slow down the normal operation of the database quite significantly, and triggers and T-SQL are severely limited in what they can do with the file system.
My approach would be:
create a separate application/tool, e.g. in C# or whatever other language you know, and handle the logic of creating that XML from the database and storing it into a file there (a query sketch follows below this list)
have that tool scheduled by e.g. Windows Scheduler or some other mechanism to be run on a regular basis - whether that's every 10 minutes, or every hour is up to you to decide
The main benefits are:
you don't severely slow down your database operation
you have more programming power at your disposal to write that logic
you can schedule it to run as frequently or as infrequently as needed (e.g. every 5 minutes during working weekdays from 6am to 10pm - and only once an hour outside these hours - or whatever you choose to do)
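As a rough sketch of the query such a tool could run to shape the value as XML (the element names are purely illustrative, and it assumes Table_machine effectively holds one current row for the value - otherwise add an ORDER BY on a timestamp column):

SELECT TOP (1) Valeur AS [valeur]
FROM dbo.Table_machine
FOR XML PATH('machine'), ROOT('data');

The tool would then write that fragment into the target XML file, and the whole thing can be scheduled externally as described above.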

Thanks for the reply!
It's a good idea, and I will try this solution to compare it with my new one.
I've tried calling EXEC xp_cmdshell 'D:\MyApp.exe' from the trigger.
I created the MyApp.exe application, which makes all the modifications in my XML file.
It works perfectly, except for passing the modified value from the SQL database to the application. I don't know how to get that value...
It should be something like EXEC xp_cmdshell 'D:\MyApp.exe Valeur', but I don't know how to grab 'Valeur'.
With this solution the database should NOT be slowed down too much, right? Because the trigger only launches a shell command?
The single value I need from the database could change 20 times per minute or only twice a day, but I need it in 'real time', so a scheduler would not be responsive enough...
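For reference, a minimal sketch of how the trigger could read the new value from the inserted pseudo-table and pass it on the command line (this assumes Valeur is an integer column, that updates touch a single row at a time, and that xp_cmdshell is enabled on the server):

ALTER TRIGGER [dbo].[Test]
ON [dbo].[Table_machine]
AFTER UPDATE
AS
BEGIN
    IF UPDATE(Valeur)
    BEGIN
        DECLARE @valeur int, @cmd varchar(500);

        -- inserted holds the new row values; TOP (1) assumes single-row updates,
        -- a multi-row UPDATE would need set-based handling or a cursor here
        SELECT TOP (1) @valeur = Valeur FROM inserted;

        SET @cmd = 'D:\MyApp.exe ' + CAST(@valeur AS varchar(20));

        -- xp_cmdshell runs synchronously, so the UPDATE statement waits
        -- until MyApp.exe has finished before it completes
        EXEC master..xp_cmdshell @cmd, no_output;
    END
END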

Related

SQL Server SPIDS go into a sleeping state and never recover

I have a long-running stored procedure that is executed from IIS. On average this stored procedure takes between two and five minutes to complete because it is searching through a large dataset (although it has taken around 20 minutes in some cases).
Most of the time the stored procedure works fine, but every now and then the SPIDs go into a sleeping state and never recover. The only solution I have found is to restart SQL Server and re-run the stored procedure.
There are no table inserts in the proc (only table variable inserts), and the other statements are selects on a large table.
I'm stuck for where to start debugging this issue. Any hints on what it might be, or suggestions on tools that would help me find the issue, would be most helpful.
EDIT: More info added:
The actual issue is that the proc doesn't return the result set. My first thought was to look at the SPIDs; they were sleeping, but the CPU time was still increasing.
It's a .NET app - .NET Core 3.1 with ASP.NET Core and a Blazor UI. The library used for the DB connection is System.Data.SqlClient; I believe System.Data.SqlClient uses its own custom driver. Calling code below:
The stored procedure doesn't return multiple result sets; however, different instances of the proc obviously run at the same time.
No limits to connection pooling in IIS
@RichardWatts, when you say "re-run the stored procedure", do you mean that the same stored proc with the same parameters and data works once you restart SQL Server?
If so, look at the locks (sp_lock) on your tables; probably another process is locking some data and not releasing it properly, especially if you have transactions accessing the same tables.
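For example, a quick way to see who is blocking whom while the procedure hangs (a sketch using the standard DMVs available since SQL Server 2005):

-- Sessions that are currently blocked, and the session blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;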
What is the isolation level on your connection? If you can, try changing it to READ UNCOMMITTED to see if that solves your problem.
As an alternative, you can also add a WITH (NOLOCK) or WITH (READUNCOMMITTED) hint to your SQL statements.
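For example (a sketch; the procedure, table and column names are hypothetical):

-- Option 1: change the isolation level for the whole session before calling the proc
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
EXEC dbo.MyLongRunningSearchProc;              -- hypothetical procedure name

-- Option 2: hint individual tables inside the procedure
SELECT d.Id, d.SomeColumn                      -- hypothetical columns
FROM dbo.LargeDataSet AS d WITH (NOLOCK)       -- or WITH (READUNCOMMITTED)
WHERE d.SomeColumn = 42;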
Note that even with READ UNCOMMITTED or NOLOCK, the query can still be blocked by modifications to the structure of your tables or by an index rebuild, for example - or it will in turn block their execution.
Nevertheless, be cautious: this solution depends on your environment. Especially if your tables get lots of updates, deletes, inserts, etc., this isolation level can lead to dirty reads, and it doesn't address the root cause of your problem, which I would bet is an uncommitted transaction (there is a good article that explains this).
Also run a DBCC CHECKTABLE, just to be sure on that side.

Will recreating indexes improve performance?

I have a few tables (base tables) which get inserts and updates twice a week. I created indexes on these tables a long time ago.
I'm applying logic on top of these tables in a stored procedure (without any parameters) and creating a final output table.
I'm scheduling this stored procedure twice a week using a SQL Server Agent job.
It is running slowly now (50 minutes), whereas if I run the stored procedure manually it runs faster (15-18 minutes).
Do I have to drop the indexes whenever inserts or updates happen in the base tables, and recreate them afterwards?
If so, do I have to do it every week?
What is the effect on the performance of SQL Server Agent jobs?
Indexes do require maintenance, but how often depends entirely on how much data is changed and how those changes are ordered. You can google any number of scripts to check your index fragmentation and defragment the indexes; a minimal check is sketched below. Usually, even for larger databases, weekly or nightly maintenance is more than enough.
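For example (a sketch; the page-count filter and the thresholds are just common rules of thumb, and the table name in the rebuild example is hypothetical):

-- Check fragmentation of the indexes in the current database (SQL Server 2005+)
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 100               -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Typical guidance: ALTER INDEX ... REORGANIZE between roughly 10% and 30% fragmentation,
-- ALTER INDEX ... REBUILD above that, e.g.:
-- ALTER INDEX ALL ON dbo.MyBaseTable REBUILD;   -- hypothetical table name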
Anyway, the fact that the execution time differs depending on how you run it, points to two possible causes:
Parametrization, or the SET properties used by the connection.
If your procedure uses parameters but you run the script manually, supplying the parameter values yourself, then SQL Server knows exactly which values you're using and can optimize the query execution to use the right indexes on the spot. If your Agent job calls the procedure with the same parameters, the process is different: SQL Server may not know which values are actually being used, so it may have to use covering indexes or, worse yet, full table scans (reading all the data in the whole table, rendering the indexes useless) to make sure it will find all the relevant data for the query. Google SQL Server parametrization (and parameter sniffing) and you can find out more.
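As an illustration only (your procedure reportedly has no parameters today, so this mainly matters if you add them later; the procedure, table and column names are hypothetical), one common way to take parameter-dependent plan reuse out of the equation is a per-execution recompile:

-- OPTION (RECOMPILE) makes SQL Server build a plan for the actual parameter value
-- on every execution, removing plan reuse as a variable when comparing runs
CREATE PROCEDURE dbo.GetOrdersByRegion        -- hypothetical name
    @Region nvarchar(50)
AS
BEGIN
    SELECT o.OrderID, o.OrderDate             -- hypothetical table/columns
    FROM dbo.Orders AS o
    WHERE o.Region = @Region
    OPTION (RECOMPILE);
END
GO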
The SET properties, on the other hand, control specific session options that are applied automatically when you connect directly to the database via Management Studio. When you use an Agent job, that may not be the case. This can also result in a different plan, which can take far more time.
Both of these cases depend on your database settings and the way your procedure works, so we have to guess here.
But typically, you need to set the following properties at the beginning of the script in the Agent job, to match the session properties used in your regular Management Studio session:
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
All of the terms here can be googled. I suggest you do so - those articles can explain these things far better than I have time for here, especially given that - no disrespect intended - you're relatively new to SQL Server, so explaining them with suitable terminology here is difficult. :)

.NET's ExecuteNonQuery versus a direct MS SQL query

I have a SQL script that refreshes the dependent views of a table once the table has been modified, e.g. by adding new fields. The script is run through ExecuteNonQuery; see the example below.
' sqlConnection and sqlTransaction are an already-open SqlConnection and its SqlTransaction
Using refreshCommand As New SqlClient.SqlCommand("EXEC RefreshDependentViews 'Customer','admin',0", sqlConnection, sqlTransaction)
    refreshCommand.ExecuteNonQuery()
End Using
The above code when executed will take 4-5 seconds, but when I copy the script only and run it through MS SQL directly, it only takes 2-3 seconds.
My question is: why do they take different amounts of time?
Please note that SQL Server is on my PC itself, and so is the code.
Thanks
SqlClient and SSMS have different connection-level options (SET options) by default, which can sometimes be a factor. I also wonder what the isolation level is for the two, which could be compounded if you are using TransactionScope etc. in your code. There could also simply be different system load at the time. Basically, it's hard to say just from this - but there are indeed some things that can impact it.
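One way to check the first point (a sketch, using a DMV available since SQL Server 2005) is to compare the effective SET options of the SSMS session and the application's session while both are connected:

-- ARITHABORT in particular often differs between SSMS and SqlClient,
-- which can lead to different cached plans for the same statement
SELECT session_id,
       program_name,
       ansi_nulls,
       quoted_identifier,
       arithabort,
       transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;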

Getting stored procedure usage data on SQL Server 2000

What is the best way to get stored procedure usage data for a specific database out of SQL Server 2000?
The data I need is:
Total of all stored procedure calls over X time
Total of each specific stored procedure call over X time.
Total time spent processing all stored procedures over X time.
Total time spent processing specific stored procedures over X time.
My first hunch was to set up SQL Profiler with a bunch of filters to gather this data. What I don't like about this solution is that the data will have to be written to a file or table somewhere, and I will have to do the number crunching to figure out the results I need. I would also like to get these results over the course of many days as I apply changes, to see how the changes are impacting the database.
I do not have direct access to the server to run SQL Profiler so I would need to create the trace template file and submit it to my DBA and have them run it over X time and get back to me with the results.
Are there any better solutions to get the data I need? I would like to get even more data if possible but the above data is sufficient for my current needs and I don't have a lot of time to spend on this.
Edit: Maybe there are some recommended tools out there that can work on the trace file that Profiler creates to give me the stats I want?
Two options I see:
Re-script and recompile your sprocs to call a logging sproc. That sproc would be called by all the sprocs for which you want perf tracking, and would write to a table with the sproc name, the current datetime, and anything else you'd like (a sketch follows after these two options).
Pro: easily reversible, as you'd have a copy of your sprocs in a script that you could easily back out. Easily queryable!
Con: performance hit on each run of the sprocs that you are trying to gauge.
Recompile your data access layer with code that writes to a log text file at the start and end of each sproc call. Are you inheriting your DAL from a single class where you can insert this logging code in one place?
Pro: no DB messiness, and you can swap the assembly in and out whenever you want to stop this perf measurement. Could even be toggled on/off in app.config.
Con: disk I/O.
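A minimal sketch of the logging approach from option 1 (the table, procedure and column names are illustrative; plain SQL Server 2000 T-SQL):

-- Logging table and helper procedure called at the top of each instrumented sproc
CREATE TABLE dbo.SprocLog (
    SprocName sysname  NOT NULL,
    CalledAt  datetime NOT NULL DEFAULT (GETDATE())
)
GO

CREATE PROCEDURE dbo.LogSprocCall
    @SprocName sysname
AS
    INSERT INTO dbo.SprocLog (SprocName)
    VALUES (@SprocName)
GO

-- At the top of each sproc you want to track:
-- EXEC dbo.LogSprocCall @SprocName = 'dbo.MyTrackedProc'    -- hypothetical proc name

Counting calls over a period is then a simple GROUP BY on SprocName.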
Perhaps creating a SQL Server Trace outside of SQL Profiler might help.
http://support.microsoft.com/kb/283790
This approach involves creating a script with all your tracing options; the output is written to a trace file. The results could then be loaded into a log table for the number crunching (see the sketch below).
Monitoring the traces: http://support.microsoft.com/kb/283786/EN-US/
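For example, once the trace file exists, it can be loaded into a table for querying (a sketch; the file path and table name are hypothetical, and ::fn_trace_gettable is the SQL Server 2000 syntax):

-- Load the finished trace file into a table, keeping only completed procedure/batch events
SELECT TextData, Duration, StartTime, EndTime
INTO dbo.ProcTraceResults
FROM ::fn_trace_gettable('C:\Traces\ProcUsage.trc', DEFAULT)
WHERE EventClass IN (10, 12);   -- 10 = RPC:Completed, 12 = SQL:BatchCompleted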

Usage history of Stored Procedures in SQL Server 2008

I work with legacy systems that have tens of thousands of lines of stored procedure code, and many of the stored procedures are obsolete and not used anymore. There doesn't seem to be a way to check the execution history, so my question is: might it be a good idea to start each stored procedure by inserting a row into a table that keeps a record of executions?
It could be something very simple, like:
INSERT INTO executionHistory (name, date)
SELECT 'spName', GETDATE()
-- then the rest of the procedure
I imagine this could be very useful for cleaning up old unused code, and it might also be handy when trying to decide where to optimize. I mean, it's better to shave 10 seconds off the execution time of a procedure that is executed 50 times a day than to save 10 minutes of execution time on a procedure that is only used once a year.
There is a tracing option (SQL Profiler) in SQL Server. You could take a trace of a day's SQL activity and see which sprocs are executed there.
This will give you a good idea of where to focus your optimisations.
Because you're using SQL Server 2008, I wouldn't do what rwmnau suggests, because it would mean you have to modify all your stored procedures.
SQL Server 2008 introduces a feature called Extended Events, and SQL Server Audit built on top of it. Extended Events are a high-performance tracing system.
By using SQL Server Audit you can trace your system without the overhead of SQL Trace.
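A sketch of what an Extended Events session for this could look like (assuming the sqlserver.module_end event and the package0.asynchronous_file_target are available on your 2008 build - check sys.dm_xe_objects - and using hypothetical file paths):

-- Capture stored procedure completions to a file with minimal overhead
CREATE EVENT SESSION ProcUsage ON SERVER
ADD EVENT sqlserver.module_end
ADD TARGET package0.asynchronous_file_target (
    SET filename     = N'C:\Traces\ProcUsage.xel',
        metadatafile = N'C:\Traces\ProcUsage.xem')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION ProcUsage ON SERVER STATE = START;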
I think your idea is simple enough and would accomplish your goal. Though it would involve modifying every SP, it's the route I would choose. Then you can ensure that you're getting an accurate recording of all activity on the database.
Another poster suggested you do a trace - while this works for short periods, it's only going to catch the time you're watching. You'd have to make sure you trace across any important, high-traffic periods, like month-end financial closing, and even then you're missing other times you don't think are that big a deal, so you're being subjective.
