Better understanding of SQL Server transactions - sql-server

I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware.
Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.

In addition to SQL Server Profiler, there are a number of performance counters you can look at, both for a real-time view and for historic trends:
Batch Requests/sec: Effectively measures the number of actual calls made to the SQL Server
Transactions/sec: Number of transactions in each database.
Connection resets/sec: number of new connections started from the connection pool by your site.
There are many more performance counters you can monitor, especially if you want to measure performance, but going through them all is beyond the scope here. A good starting point is Monitoring Resource Usage.
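If you'd rather check these counters from a query window than from Performance Monitor, a minimal sketch (assuming VIEW SERVER STATE permission) is to read them from the sys.dm_os_performance_counters DMV:

    -- Values for the "/sec" counters are cumulative since startup; sample twice
    -- and divide the difference by the elapsed seconds to get a per-second rate.
    SELECT RTRIM([object_name]) AS [object_name],
           RTRIM(counter_name)  AS counter_name,
           cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name LIKE 'Batch Requests/sec%'
       OR counter_name LIKE 'Transactions/sec%'
       OR counter_name LIKE 'Connection Reset%';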

You can use the SQL Profiler tool that comes with SQL Server Management Studio.
Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze later. For example, you can monitor a production environment to see which stored procedures are affecting performance by executing too slowly.
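If you prefer not to attach the Profiler GUI to a busy server, you can also read trace files in T-SQL. A minimal sketch below reads the default trace; note it only captures a limited set of events (mostly DDL and audit activity), so for statement-level detail you would script out your own trace from Profiler first:

    DECLARE @path NVARCHAR(260);
    SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

    -- Read the trace file server-side instead of attaching the Profiler GUI.
    SELECT TOP (100) TextData, ApplicationName, LoginName, StartTime, EventClass
    FROM sys.fn_trace_gettable(@path, DEFAULT)
    ORDER BY StartTime DESC;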

As mentioned, SQL Profiler is useful at the SQL Server level. It is not included with the Express editions of SQL Server, however.
At the .NET level, LINQ to SQL and the Entity Framework both support logging. See Logging every data change with Entity Framework, http://msdn.microsoft.com/en-us/magazine/gg490349.aspx, http://peterkellner.net/2008/12/04/linq-debug-output-vs2008/.
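If you don't want to touch the application code at all, a rough server-side alternative (assuming VIEW SERVER STATE permission) is to look at how often each cached statement has been executed; a query fired 50+ times per request tends to stand out immediately:

    SELECT TOP (20)
           qs.execution_count,
           qs.total_worker_time,
           SUBSTRING(st.text,
                     qs.statement_start_offset / 2 + 1,
                     (CASE qs.statement_end_offset
                          WHEN -1 THEN DATALENGTH(st.text)
                          ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.execution_count DESC;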

Related

SQL Server 2008 Database Secondary

We have an older vendor-supplied application that is earmarked for platform upgrades in 2019 but is currently running SQL Server 2008 (SP4). It's about 1.2TB of data. Our internal IT unit has reached the point where we want to create a readable secondary for some reports, but mostly ad-hoc reporting. Usage is about 1500 active sessions and about 25,000 Be/S peak.
Now onto the actual question. The options I foresee are transactional replication, mirroring, and log shipping with a read-only standby. One of the developers also suggested Service Broker with CDC ... any landmines or curveballs with CDC and SB?
Service Broker is a very powerful tool for creating and managing queues. CDC reads the log asynchronously to pick up changes to designated tables. They don't interact with each other and are designed to have low impact on an active database. They both work very well even in high-volume situations. Like many features in SQL Server they can be used with a minimal learning curve, but if you want to really take advantage of these tools some study is required.
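For reference, enabling CDC is only a couple of procedure calls. Note that on SQL Server 2008 CDC requires Enterprise Edition, and the database and table names below are hypothetical placeholders:

    USE YourVendorDb;   -- hypothetical database name
    GO
    EXEC sys.sp_cdc_enable_db;
    GO
    EXEC sys.sp_cdc_enable_table
         @source_schema        = N'dbo',
         @source_name          = N'Orders',   -- hypothetical table
         @role_name            = NULL,        -- no gating role
         @supports_net_changes = 1;           -- requires a primary key or unique index
    GO
    -- Consumers then read changes through the generated
    -- cdc.fn_cdc_get_all_changes_* / _net_changes_* functions.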

Azure Database Query Optimizer using View Indices

SQL Server Enterprise Edition's query optimizer will use indexes on a view to improve the performance of a query, where applicable, even if the view is not explicitly referenced in the query. Question: does Azure Database do the same thing? I know SQL Server Express does not do this, for example. I want to ensure I can still get the performance I need from the query optimizer when doing a sort on a joined table with a few million users (works great on Enterprise Edition but takes several seconds on Express - bottleneck at the sort).
Sometime last year (2012) Microsoft announced that the engine was the same between SQL Server and SQL Azure (now called Windows Azure SQL Database :/), so you will likely get the same behavior. The same performance may be another question. Windows Azure SQL Database also keeps replicas in place in the event of hardware failure. You get the benefit of the secondary coming online in a fashion that is seamless to you, but this does have a bit of a performance cost. Also, SQL running in Windows Azure is running in a shared environment. It is pretty well documented that the performance is not the same as a local dedicated multi-processor machine with fast storage; it is a bit of an unfair comparison, multi-tenant and multi-instance vs. dedicated. For many applications this is fast enough, but not all.
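For context, a minimal sketch of the indexed-view pattern in question (table and column names are hypothetical). On Enterprise Edition the optimizer may match the view index automatically; on other editions, and as a safety net anywhere, you can reference the view directly with the NOEXPAND hint:

    CREATE VIEW dbo.vUserOrderCounts
    WITH SCHEMABINDING
    AS
    SELECT u.UserId, COUNT_BIG(*) AS OrderCount   -- COUNT_BIG(*) is required with GROUP BY
    FROM dbo.Users AS u
    JOIN dbo.Orders AS o ON o.UserId = u.UserId
    GROUP BY u.UserId;
    GO
    -- The first index on an indexed view must be unique and clustered.
    CREATE UNIQUE CLUSTERED INDEX IX_vUserOrderCounts
        ON dbo.vUserOrderCounts (UserId);
    GO
    SELECT UserId, OrderCount
    FROM dbo.vUserOrderCounts WITH (NOEXPAND)   -- force the view index on non-Enterprise editions
    ORDER BY UserId;                            -- sort satisfied by the clustered view index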

Application Hangs on SQL Server - restart required every time

We have an application which has a SQL Server 2000 Database attached to it. After every couple of days the application hangs, and we have to restart SQL Server service and then it works fine. SQL Server logs show nothing about the problem. Can anyone tell me how to identify this issue? Is it an application problem or a SQL Server problem?
Thanks.
Is it an application problem or a SQL Server problem?
Is it possible to connect to MS SQL Server using Query Analyzer or another instance of your application?
General tips:
Use Activity Monitor to find information about concurrent processes, locks and resource utilization.
Use SQL Server Profiler to trace server and database activity, and to capture and save data to a table or file for later analysis.
You can use Dynamic Management Views (in Management Studio, under <Database name>\Views\System Views) to get more detailed information about SQL Server internals; a blocking-check sketch follows this list.
If you have problems with performance (not your case), you can use Performance Monitor and Data Collector Sets to gather performance information.
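A minimal sketch of that blocking check using DMVs (these exist on SQL Server 2005 and later; on SQL Server 2000, which this question is about, the closest equivalents are sp_who2 and the sysprocesses table):

    -- Sessions that are currently blocked, who is blocking them, and on what.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS current_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;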
It's hard to predict the issue; I would suggest you check your application first. Check what operations you are performing against the database and whether you are handling connection pooling properly - unused open connections can create issues.
Check whether you can get any logs from your application. Without any log information we can hardly suggest anything.
The application may be hanging due to deadlocks. Check which stored procedures run at that time using Profiler, and check how the tables are being manipulated (consider NOLOCK hints where dirty reads are acceptable). Also check the buffer size, and consider segregating the database into two or three modules.
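If deadlocks are suspected, one low-impact sketch is to have SQL Server write deadlock details to its error log (trace flag 1204 works on SQL Server 2000; 1222 gives richer output on 2005 and later):

    DBCC TRACEON (1204, -1);    -- -1 enables the flag globally
    -- ... reproduce the hang, then review the SQL Server error log for the
    -- deadlock chain, and switch the flag back off:
    DBCC TRACEOFF (1204, -1);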

Are there any performance benefits of using SQL Server 2008 over SQL Server 2005?

Are there any performance benefits of using SQL Server 2008 over SQL Server 2005?
Moving a single database from SQL Server 2005 to 2008 will not make a noticeable difference by itself. However, there are new tools and options available in SQL Server 2008 that you MIGHT be able to leverage to provide better performance later on in your application.
One item that comes to mind is filtered indexes, which allow you to create an index on a subset of the data.
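A minimal sketch of such a filtered index (table, columns, and filter are hypothetical placeholders):

    -- Indexes only the small 'Open' subset, so it is cheaper to maintain and
    -- smaller to scan than an index over the whole table.
    CREATE NONCLUSTERED INDEX IX_Orders_Open
        ON dbo.Orders (CustomerId, OrderDate)
        WHERE Status = 'Open';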
There may be new features in the engine which execute queries in different ways. This includes changes to the optimiser.
Therefore, the only way you can POSSIBLY tell is to gather detailed performance data from your application on MSSQL2005, and then repeat the experiment on the same (production-quality) hardware with SQL2008.
You will also need to make sure your application still works correctly; such a migration can't be done lightly, as any change could introduce bugs.
Also, the new version of the database could have performance regressions - which you need to be very careful about.
So in summary:
Benchmark YOUR application on SQL2005
Benchmark it on SQL2008
Use the same production-grade test hardware in your lab both times
Don't run VMs (unless that's what you do in production)
Don't change other parameters
This may not be easy if your application is big / complicated.
Yes. You can compress data in SQL 2008, which can have a drastic impact on backup and data transfer times.
Actually, SQL 2008 has built-in compression that you can enable out of the box, which could definitely improve performance, but it may depend on what is being returned. I would try this option and benchmark to see if you feel it's a worthwhile change.
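A minimal sketch of both kinds of compression mentioned here; note that row/page data compression and backup compression are Enterprise-only features in SQL Server 2008, and the table, database, and path names are hypothetical placeholders:

    -- Estimate the space savings before committing to a rebuild.
    EXEC sp_estimate_data_compression_savings
         @schema_name      = N'dbo',
         @object_name      = N'Orders',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = N'PAGE';

    -- Rebuild the table with page compression.
    ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Compressed backups are smaller and usually faster to write and copy.
    BACKUP DATABASE YourDb TO DISK = N'D:\backup\YourDb.bak' WITH COMPRESSION;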

Statistical calculations in SQL Server

Does anyone know of any packages or source code that does simple statistical analysis, e.g., confidence intervals or ANOVA, inside a SQL Server stored procedure?
The reason you probably don't want to do that is because these calculations are CPU-intensive. SQL Server is usually licensed by the CPU socket (roughly $5k/cpu for Standard, $20k/cpu for Enterprise) so DBAs are very sensitive to any applications that want to burn a lot of CPU power on the SQL Server itself. If you started doing statistics calculations and suddenly the server needs another CPU, that's an expensive licensing proposition.
Instead, it makes sense to do these statistical calculations on a separate application server. Query the data over the wire to your app server, do the number-crunching there, and then send the results back via an update statement or stored proc. Yes, it's more work, but as your application grows, you won't be facing an expensive licensing bill.
In more recent versions of SQL Server you can use .NET objects natively (SQLCLR), so any .NET package will do. Other than that there's always external proc calls...
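The T-SQL side of that CLR route is just registration; a minimal sketch, where the assembly path, class, and method names are hypothetical placeholders:

    EXEC sp_configure 'clr enabled', 1;
    RECONFIGURE;
    GO
    CREATE ASSEMBLY StatsLib                    -- hypothetical assembly name
    FROM 'C:\libs\StatsLib.dll'                 -- hypothetical path
    WITH PERMISSION_SET = SAFE;
    GO
    -- Expose one of the assembly's static methods as a scalar T-SQL function.
    CREATE FUNCTION dbo.StandardError (@stdev FLOAT, @n INT)
    RETURNS FLOAT
    AS EXTERNAL NAME StatsLib.[StatsLib.Basic].StandardError;
    GO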
Unless you have to do it within the stored proc I'd retrieve the data and do it outside SQL Server. That way you can choose from any of the open source or commercial stats routines and it would probably be faster too.
I don't know if a commercial package like this exists. There could be multiple reasons for this, some of which have been outlined above.
If what you are trying to accomplish is to avoid building statistical functions that process your data stored in SQL Server, you might want to try integrating a statistical package with your database server by importing data from it. For example, R supports this, and CRAN hosts packages for database connectivity.
Once you have accomplished that and you still feel that you'd like to make statistical analysis run inside your SQL Server, the next steps would be to call your stats package from a stored procedure using a command line interface. Your best option here is probably xp_cmdshell, though it requires careful configuration in order not to compromise your SQL Server security.
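A minimal sketch of that xp_cmdshell route (the script path is a hypothetical placeholder; xp_cmdshell is disabled by default, and enabling it widens your attack surface, so restrict who can call it):

    -- xp_cmdshell is an advanced option and is off by default.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;
    GO
    -- Shell out to an external stats runtime (R here) and capture its output.
    EXEC master.dbo.xp_cmdshell 'Rscript "C:\stats\anova_job.R"';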
