I have a SQL script that refreshes the dependent views of a table once the table has been modified, for example by adding new fields. The script is run through ExecuteNonQuery; see the example below.
Using refreshCommand As New SqlClient.SqlCommand("EXEC RefreshDependentViews 'Customer','admin',0", connection, transaction) ' connection/transaction: the open SqlConnection and active SqlTransaction
refreshCommand.ExecuteNonQuery()
End Using
The code above takes 4-5 seconds to execute, but when I copy just the script and run it in MS SQL directly, it takes only 2-3 seconds.
My question is: why do they take different amounts of time?
Note that both the SQL Server instance and the code are on my PC.
Thanks
SqlClient and SSMS have different default connection-level options (SET options), which can sometimes be a factor. I also wonder what the isolation level is in the two cases, which could be compounded if you are using TransactionScope etc. in your code. There could also simply be different system load at the time. Basically, it's hard to say from just that, but there are indeed several things that can affect this.
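One way to check is to compare the SET options of the two sessions directly; a small diagnostic sketch (ARITHABORT in particular often differs between SqlClient and SSMS defaults):

```sql
-- Run this from both the application connection and the SSMS window and
-- compare the results (transaction_isolation_level: 2 = read committed).
SELECT session_id,
       ansi_nulls,
       arithabort,
       quoted_identifier,
       transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```

If the options differ, the two sessions can end up with different cached plans for the same procedure.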
We had a performance issue with one of the queries in our application that was taking 20 seconds to run. Using Azure Data Studio we identified the long-running SQL and eventually traced it back to the Entity Framework query that produced it.
I had the idea of adding a logging function to our code that is called before any data access is done (insert, select, delete, update, etc.) in the Entity Framework code.
All the function would do is execute a "Select user_functionname_now" SQL statement.
Then in the Azure Data Studio profiler we would see that the user ran the load-invoice function and that it took 2717 milliseconds.
Granted, if you have 100 users doing things in the app the logs might get a bit mixed up, but it would go a long way toward figuring out where in the code a long-running query is coming from.
I was also thinking that we could add a fixed column to each query run, so the calling function would show up next to each query.
But the issue with adding a column is that you return extra data every time a query runs, which means more traffic between the SQL server and the application, and that is certainly not a good thing.
So my question is: is adding a "Select XYZ" before every CRUD call a bad idea? If we add this logging call to some or all of the code that executes our queries, will it cause a performance issue or slowdown that I haven't thought of?
I don't think adding any extra "select ..." is reasonable in your case.
SET CONTEXT_INFO or sp_set_session_context would probably serve you better.
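A minimal sketch of both approaches (the key and value names are just for illustration):

```sql
-- SQL Server 2016+: store a name/value pair on the session
EXEC sp_set_session_context @key = N'app_function', @value = N'LoadInvoice';
SELECT SESSION_CONTEXT(N'app_function');   -- readable from any later statement

-- Older versions: CONTEXT_INFO holds up to 128 bytes of binary per session
SET CONTEXT_INFO 0x4C6F6164496E766F696365; -- 'LoadInvoice' as ASCII bytes
SELECT CONTEXT_INFO();
```

Either way the marker travels with the session instead of being an extra round trip before every query.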
This is the scenario that EF Query Tags are for.
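For context, EF Core's .TagWith(...) prepends the tag to the generated SQL as a comment, so it shows up directly in profiler traces and the query store. Roughly like this (table and column names invented for illustration):

```sql
-- Load invoice page
SELECT [i].[Id], [i].[Total]
FROM [Invoices] AS [i];
```

No extra statements and no extra result columns are needed.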
I'm pretty new to Azure and cloud computing in general and would like your help figuring out an issue.
The issue was first seen when a web page timed out because the SQL timeout (set to 30 seconds) was hit.
The first thing I did was connect to the production database using SQL Server Management Studio 2014 (connected to the Azure prod DB) and run the stored procedure used by the slow page, but it returned almost instantly. This confused me about what could be causing the issue.
By accident I also tried running the same query in the Azure SQL query editor and was shocked that it took 29 seconds.
My main question is why there is a difference between running the query in the Azure SQL query editor vs. Management Studio. It is the exact same database.
DTU usage is at 98% and I'm thinking there is a performance issue with the stored proc, but first I want to know why the query editor runs the SP slower than Management Studio.
The current Azure DB has 50 DTUs.
Two guesses (posting query plans will help get you an answer for situations like this):
SQL Server has various session-level settings. For example, there is one to determine if you should use ansi_nulls behavior (vs. the prior setting from very old versions of SQL Server). There are others for how identifiers are quoted and similar. Due to legacy reasons, some of the drivers have different default settings. These different settings can impact which query plans get chosen, in the limit. While they won't always impact performance, there is a chance that you get a scan instead of a seek on some query of interest to you.
The other main possible path for explaining this kind of issue is that you have a parameter sniffing difference. SQL's optimizer will peek into the parameter values used to pick a better plan (hoping that the value will represent the average use case for future parameter values). Oracle calls this bind peeking - SQL calls it parameter sniffing. Here's the post I did on this some time ago that goes through some examples:
https://blogs.msdn.microsoft.com/queryoptteam/2006/03/31/i-smell-a-parameter/
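If sniffing does turn out to be the cause, the usual first aid is a RECOMPILE or OPTIMIZE FOR hint on the suspect statement. A hypothetical sketch (the table, columns and parameter are made up):

```sql
SELECT o.CustomerId, SUM(o.Total) AS Total
FROM dbo.Orders AS o
WHERE o.OrderDate >= @from
GROUP BY o.CustomerId
OPTION (RECOMPILE);   -- or: OPTION (OPTIMIZE FOR (@from UNKNOWN));
```

RECOMPILE trades compile CPU for a plan tailored to each execution; OPTIMIZE FOR UNKNOWN gives a generic plan based on average density instead of the sniffed value.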
I recommend you do your experiments and then look at the query store to see if there are different queries or different plans being picked. You can learn about the query store and the SSMS UI here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
For this specific case, please note that the query store exposes those different session-level settings using "context settings". Each unique combination of context settings will show up as a different context settings id, and this will inform how query texts are interpreted. In query store parlance, the same query text can be interpreted different ways under different context settings, so two different context settings for the same query text would imply two semantically different queries.
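A query along these lines (a sketch against the query store catalog views) shows whether the same text is being tracked under more than one context settings id:

```sql
SELECT q.query_id,
       q.context_settings_id,
       qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
ORDER BY qt.query_sql_text, q.context_settings_id;
```

Two rows with the same text but different context_settings_id values would point at the session-option explanation above.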
Hope that helps - best of luck on your perf problem
I need help please about SQL Server 2008 and triggers.
The context: a machine generates data (a number: an integer) that I need to inject into an XML file. The data changes a few times a day, but I need it in real time.
Problem: the data is not available directly from the machine, no way... but the machine feeds a SQL Server 2008 database.
So I think the better way is to use a SQL Server trigger. Am I wrong?
Here's the code I'm using:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[Test]
ON [dbo].[Table_machine]
AFTER UPDATE
AS
IF UPDATE(Valeur)
BEGIN
**********************************
END
The trigger fires when 'Valeur' is updated, but I don't know how to modify my XML file from it.
I would strongly recommend AGAINST putting logic that does a lot of conversion and writes out to the file system into a trigger - it will slow down the normal operation of the database quite significantly. And triggers and T-SQL are severely limited in what they can do with the file system.
My approach would be:
create a separate application/tool, e.g. in C# or whatever other language you know, and handle the logic of creating that XML from the database and storing it into a file in there
have that tool scheduled by e.g. Windows Scheduler or some other mechanism to be run on a regular basis - whether that's every 10 minutes, or every hour is up to you to decide
The main benefits are:
you don't severely slow down your database operation
you have more programming power at your disposal to write that logic
you can schedule it to run as frequently or as infrequently as needed (e.g. every 5 minutes during working weekdays from 6am to 10pm - and only once an hour outside these hours - or whatever you choose to do)
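As a sketch, the tool's query can even have SQL Server produce the XML fragment itself (the element names here are just an assumption):

```sql
SELECT Valeur
FROM dbo.Table_machine
FOR XML PATH('Machine'), ROOT('Data');
```

The external application then only has to write the returned text to the file.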
Thanks for the reply!
It's a good idea and I will try this solution to compare it with my new one.
I've tried a trigger that runs EXEC xp_cmdshell 'D:\MyApp.exe'
I created the MyApp.exe application, which does all the modifications to my XML file.
It works perfectly, except for getting the modified value out of the SQL database. I don't know how to get that value...
It should be something like EXEC xp_cmdshell 'D:\MyApp.exe Valeur', but I don't know how to pass 'Valeur'.
With this solution the database should NOT be slowed down much, no? Because the trigger only launches a shell command?
The one value I need from the database may change 20 times per minute or only twice a day, but I need it in 'real time', so a scheduler would not be accurate enough...
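What I'm attempting would look roughly like this (a sketch assuming a single-row update; note the trigger blocks until MyApp.exe exits, so the update waits for the shell command):

```sql
ALTER TRIGGER [dbo].[Test]
ON [dbo].[Table_machine]
AFTER UPDATE
AS
IF UPDATE(Valeur)
BEGIN
    DECLARE @v nvarchar(50), @cmd nvarchar(200);
    -- "inserted" is the pseudo-table holding the post-update row(s)
    SELECT TOP (1) @v = CONVERT(nvarchar(50), Valeur) FROM inserted;
    SET @cmd = N'D:\MyApp.exe ' + @v;
    EXEC master..xp_cmdshell @cmd, no_output;
END
```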
I have a few tables (base tables) that get inserted into and updated twice a week. I created indexes on these tables long ago.
I apply logic on top of these tables in a stored procedure (without any parameters) and create a final output table.
I schedule this stored procedure twice a week using a SQL Server Agent job.
It now runs slowly (50 minutes), whereas if I run the stored procedure manually it runs faster (15-18 minutes).
Do I have to drop the indexes whenever an insert or update happens in the base tables, and recreate them afterwards?
If so, do I have to do it every week?
What is the effect on the performance of SQL Server Agent jobs?
Indexes do require maintenance, but how often depends entirely on how much data changes and how those changes are ordered. You can google any number of scripts to check your index fragmentation and defragment the indexes. Even for larger databases, weekly or nightly maintenance is usually more than enough.
Anyway, the fact that the execution time differs depending on how you run it points to two possible causes:
Parametrization, or the SET properties used by the connection.
If your procedure uses parameters but you run the script manually with the parameter values written in as literals, SQL Server knows exactly which values you're using and can optimize the query execution on the spot to use the correct indexes. If your agent job calls the procedure with parameters, the process is different: SQL Server may not know which values will be used, so it may have to fall back on less specific plans, or worse yet full table scans (reading all the data in the whole table, rendering indexes useless), to make sure it finds all the relevant data for the query. Google "SQL Server parametrization" to find out more.
The SET properties, on the other hand, control session-specific settings that are applied automatically when you connect directly to the database via Management Studio. When you use an agent job, that may not be the case. This too can result in a different plan, which can take far more time.
Both these cases, depend on your database settings and the way your procedure works. So we have to guess here.
But typically, you need to set the following properties in the beginning of a script in an agent job to match the session properties used in your regular Management Studio session:
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
All of the terms here can be googled, and I suggest you do so. Those articles can explain these things far better than I have time for here, especially given that - no disrespect intended - you're relatively new to SQL Server, so explaining them with suitable terminology here is difficult. :)
I'm trying to trace SQL commands. I read this post: How can I monitor the SQL commands send over my ADO connection?
It works for SELECT but not for DELETE/INSERT/UPDATE...
Configuration: a TADOConnection (MS SQL Server), a TADOTable, a TDatasource, a TDBGrid with a TDBNavigator.
So I can trace the SELECT that occurs when the table is opened, but nothing shows up when I use the DBNavigator to UPDATE, INSERT, or DELETE records.
When I use a TADOCommand to delete a record, tracing works too. It seems to fail only when I use the DBNavigator, so that may be a clue, but I haven't found anything about it.
Thanks in advance
Hopefully someone will be able to point you in the direction of a pre-existing library that does your logging for you. In particular, if FireDAC is an option, you might take a look at what it says in here:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_%28FireDAC%29
Of course, converting your app from ADO to FireDAC may not be an option for you, but depending on how great your need is, you could conceivably extract the Sql-Server-specific method of event alerting FireDAC uses into an ADO application. I looked into this briefly a while ago and it looked fairly straightforward.
Prior to FireDAC, I implemented a server-side solution that caught Inserts, Updates and Deletes. I had to do this about 10 years ago (for Sql Server 2000) and it was quite a performance to set up.
In outline it worked like this:
Sql Server supports what MS used to call "extended stored procedures" which are implemented in custom DLLs (MS may refer to them by a different name these days or have even stopped supporting them). There are Delphi libraries around that provide a wrapper to enable these to be written in Delphi. Of course, these days, if your Sql Server is 64-bit, you need to generate a 64-bit DLL.
You write Extended Stored Procedures to log the changes any way you want, then write custom triggers in the database for Inserts, Updates and Deletes, that feed the data of the rows involved to your XSPs.
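A plain-T-SQL variant of the same idea, for comparison: instead of calling out to an XSP, have the triggers write the affected rows to an audit table (the table and column names here are hypothetical):

```sql
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.AuditLog (TableName, Action, LoggedAt)
    SELECT 'Orders',
           CASE WHEN EXISTS (SELECT * FROM inserted)
                 AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
                WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
                ELSE 'DELETE'
           END,
           SYSUTCDATETIME();
END
```

This keeps everything inside the database, at the cost of the client having to poll the audit table rather than being notified.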
As luck would have it, my need for this fell away just as I was completing the project, before I got to stress-testing and performance-profiling it but it did work.
Of course, not every environment will allow you to install software and trigger code on the Sql Server.
For interest, you might also take a look at https://msdn.microsoft.com/en-us/library/ms162565.aspx, which provides an SMO object for tracing Sql Server activity, though it seems to be 32-bit only at the moment.
For amusement, I might have a go at implementing an event-handler for the recordset object that underlies a TAdoTable/TAdoQuery, which should be able to catch the changes you're after, but don't hold your breath ...
And, of course, if you're only interested in client-side logging, one way to do it is to write handlers for your dataset's AfterEdit, AfterInsert and AfterDelete events. Those wouldn't guarantee that the changes are ever actually applied at the server, of course, but could provide an accurate record of the user's activity, if that's sufficient for your needs.