Performance when the back-end and front-end databases are not the same - sql-server

Hi experts, I want to know in which scenario I get better performance (time to run queries):
If the back-end is MS SQL Server and the front-end is MS Access
If both the back-end and front-end are in MS Access
If both the back-end and front-end are in MS SQL Server
and why?

As long as the MS SQL Server instance is running on a database server rather than a stand-alone PC, it should provide superior query performance compared to Access. Access is limited by the processing power of the machine from which you are accessing it: being a file-based system, it has no processing engine of its own and everything must be run locally.
In terms of the front-end, it really depends on what other tasks you are looking to carry out. Using an Access front-end over an MS SQL back-end is perfectly acceptable, as you can still push query processing to the server side and thus avoid degrading performance.

SQL Server is always faster than Access (it has a better query optimizer too, among other things), so just axe Access completely...

Related

Separating database, services and IIS to different servers

We're building a web application using ASP.NET MVC with a SQL Server database.
This application is supposed to serve hundreds or even thousands of users.
In order for it to be scalable, we want to move the database to a different server from the one where IIS is running.
What's the best way to do this? A reference from the IIS server to the database server using a connection string? Won't it slow down performance?
We also want to move some of the services of this application to a different (third) server.
Would replicating the database to this third services server be a good idea? That way we'd reduce the load on the database machine from our first question. How is it done?
What's the best way to do this? A reference from the IIS server to the database server using a connection string? Won't it slow down performance?
There is no best way because there is only one way to start with. Put the SQL Server on another machine. Finished.
You will have to adjust the connection string. This will introduce latency, and the bandwidth is lower than in-memory access, but in practice those things are irrelevant - unless you do a SELECT * FROM a table and filter in memory, which hopefully no one does.
The point is: you get more processing power, you get scalability, and the small theoretical impact is negligible in comparison. If you manage to load up even a 1-gigabit network link with data, you have either some seriously bad design or some really awkward programming. The higher speed of a non-trivial and hopefully built-for-purpose database server will far more than outweigh any small loss of performance.
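To make that concrete, here is a minimal sketch of what the web tier's data access might look like once the database sits on its own machine; the server name, database, table, and column names are placeholders, not from the original question. Only the Data Source in the connection string changes, and the WHERE clause keeps the filtering on the server.

```csharp
using System;
using System.Data.SqlClient;

class RemoteQueryExample
{
    static void Main()
    {
        // Hypothetical connection string: only the Data Source changes when the
        // database moves to its own machine ("DBSRV01" is a placeholder).
        const string connectionString =
            "Data Source=DBSRV01;Initial Catalog=ShopDb;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @customerId",
            connection))
        {
            // Filter on the server, not in memory, so only the needed rows cross the network.
            command.Parameters.AddWithValue("@customerId", 42);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["OrderId"], reader["Total"]);
                }
            }
        }
    }
}
```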
Would replicating the database to this third services server be a good idea?
How long is a piece of string? It depends - on the services. In general - no. In very few edge cases - yes. How to decide? Experience - and common sense, based on the specific requirements.
It is advised to have SQL Server by itself on a server, and not to run any other applications on that server/machine (even avoid having multiple instances on one machine if possible). SQL Server by its very nature makes use of all the available resources to give the best performance, and it assumes it is the only application on that machine.
So by hosting SQL Server and IIS on the same machine you will start a fight between the two applications for resources.
And obviously, if you have SQL Server on a separate server you will need to tweak your connection string.

SQL Server High Availability on premise - cloud

I would like to know the best way to make a copy of an on-premises SQL Server 2008 (not R2) database in SQL Azure and keep the copies synchronized.
Think of the SQL Azure copy as a failover kind of structure...
Notes:
The database runs fine in SQL Azure
I have already figured out how to get the rest of the app running on Azure
Please consider suggestions of the type "Upgrade to SQL Server 2012 because of X" if the gains (reliability, efficiency, time to replicate, etc.) are worth it
I'm looking for instant replication (as fast as possible)
Yes, it will have to sync back eventually. If the on-premises deployment crashes and the cloud copy gets activated and changed, syncing back will be necessary, but I think it does not need to be automatic... if it is, better!
The database consists of 900+ tables (legacy system)
http://www.windowsazure.com/en-us/manage/services/sql-databases/getting-started-w-sql-data-sync/
http://msdn.microsoft.com/en-us/library/hh456371.aspx
I think the best bet is to use SQL Data Sync; it should give you bidirectional sync, and we currently use it to sync data around the world across datacenters plus one local on-premises database. It will only give you a 5-minute sync interval, but this will probably do; otherwise the next best option is to use SQL Server VMs and do it the old-fashioned way. We have found SQL Azure Data Sync to be reasonably reliable and have been running it for a good six months, syncing across 4 databases in four data centres in Azure.
Some problems with it, though:
It uses triggers.
It will obviously add load and connections to your current SQL database.
The new control panel in Azure is a nightmare for it, so I would use the old panel for the moment.
It was in preview last time I looked, so it might not be 100% suitable for you.
I would imagine there are some better third-party solutions out there, but off the shelf and in Azure, SQL Data Sync is well worth a look for the situation you are describing.

Better understanding of my SQL Server transactions

I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware.
Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.
In addition to SQL Server Profiler, there are a number of performance counters you can look at to see both a real-time evaluation and a historic trend:
Batch Requests/sec: Effectively measures the number of actual calls made to the SQL Server
Transactions/sec: Number of transactions in each database.
Connection resets/sec: number of new connections started from the connection pool by your site.
There are many more performance counters you can monitor, especially if you want to measure performance, but going through them is beyond the scope here. A good starting point is Monitoring Resource Usage.
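As a small illustration, here is a rough sketch of reading one of those counters from .NET; it assumes a default SQL Server instance (a named instance exposes the category as "MSSQL$InstanceName:SQL Statistics" instead).

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class BatchRequestsSample
{
    static void Main()
    {
        // Category name for a default SQL Server instance; adjust for a named instance.
        using (var batchRequests = new PerformanceCounter(
            "SQLServer:SQL Statistics", "Batch Requests/sec"))
        {
            batchRequests.NextValue();   // the first sample only primes the counter
            Thread.Sleep(1000);          // sample over one second
            Console.WriteLine("Batch Requests/sec: " + batchRequests.NextValue());
        }
    }
}
```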
You can use the SQL Profiler tool that comes with SQL Server Management Studio.
Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze later. For example, you can monitor a production environment to see which stored procedures are affecting performance by executing too slowly.
As mentioned, SQL Profiler is useful at the SQL Server level. It is not available in SSMS Express, however.
At the .NET level, LINQ to SQL and the Entity Framework both support logging. See Logging every data change with Entity Framework, http://msdn.microsoft.com/en-us/magazine/gg490349.aspx, http://peterkellner.net/2008/12/04/linq-debug-output-vs2008/.
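A minimal sketch of the LINQ to SQL route; the DataContext type and table used here are illustrative, not from the question:

```csharp
using System;
using System.Linq;

class LinqToSqlLoggingExample
{
    static void Run()
    {
        // NorthwindDataContext is a placeholder for your generated LINQ to SQL context.
        using (var db = new NorthwindDataContext())
        {
            // Every SQL statement LINQ to SQL generates is written to this TextWriter,
            // so you can see exactly how many round trips a given code path makes.
            db.Log = Console.Out;

            var recent = db.Orders
                           .Where(o => o.OrderDate >= DateTime.Today.AddDays(-7))
                           .ToList();
            Console.WriteLine("Loaded {0} orders", recent.Count);
        }
    }
}
```

For Entity Framework (the versions current at the time), ObjectQuery.ToTraceString() can similarly show you the SQL a given query will generate.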

How can I simulate network latency on my developer machine?

I am upsizing an MS Access 2003 app to a SQL Server backend. On my dev machine, SQL Server is local, so the performance is quite good. I want to test the performance with a remote SQL Server so I can account for the effects of network latency when I am redesigning the app. I am expecting that some of the queries that seem fast now will run quite slowly once deployed to production.
How can I slow down (or simulate the speed of a remote) SQL Server without using a virtual machine, or relocating SQL to another computer? Is there some kind of proxy or Windows utility that would do this for me?
I have not used it myself, but here's another SO question:
Network tools that simulate slow network connection
In one of the comments SQL Server has been mentioned explicitly.
You may be operating under a misconception. MS-Access supports so-called "heterogeneous joins" (i.e. tables from a variety of back-ends may be included in the same query, e.g. combining data from Oracle and SQLServer and Access and an Excel spreadsheet). To support this feature, Access applies the WHERE clause filter at the client except in situations where there's a "pass-through" query against an intelligent back-end. In SQL Server, the filtering occurs in the engine running on the server, so SQL Server typically sends much smaller datasets to the client.
The answer to your question also depends on what you mean by "remote". If you pit Access and SQL Server against each other on the same network, SQL Server running on the server will consume only a small fraction of the bandwidth that Access does, if the Access MDB file resides on a file server. (Of course if the MDB resides on the local PC, no network bandwidth is consumed.) If you're comparing Access on a LAN versus SQL Server over broadband via the cloud, then you're comparing a nominal 100 mbit/sec pipe against DSL or cable bandwidth, i.e. against perhaps 20 mbit/sec nominal for high-speed cable, a fifth of the bandwidth at best, probably much less.
So you have to be more specific about what you're trying to compare.
Are you comparing Access clients on the local PC consuming an Access MDB residing on the file server against some other kind of client consuming data from a SQL Server residing on another server on the same network? Are you going to continue to use Access as the client? Will your queries be pass-through?
There is a software application for Windows that does that (simulates low bandwidth, latency, and losses if necessary). It is not free, though. The trial version has a 30-second emulation limit. Here is the home page of that product: http://softperfect.com/products/connectionemulator/
@RedFilter: You should indicate which version of Access you are using. This document from 2006 shows that the story of what Access brings down to the client across the wire is more complicated than whether the query contains "Access-specific keywords".
http://msdn.microsoft.com/en-us/library/bb188204(SQL.90).aspx
But Access may be getting more and more sophisticated about using server resources with each newer version.
I'll stand by my simple advice: if you want to minimize bandwidth consumption, while still using Access as the GUI, pass-through queries do best, because then it is you, not Access, who will control the amount of data that comes down the wire.
I still think your initial question/approach is misguided: if your Access MDB file was located on the LAN in the first place (was it?) you don't need to simulate the effects of network latency. You need to sniff the SQL statements Access generates, rather than introducing some arbitrary and constant "network latency" factor. To compare an Access GUI using an MDB located on a LAN server against an upsized Access GUI going against a SQL Server back-end, you need to assess what data Access brings down across the wire to the client from the back-end server. Even "upsized" Access can be a hog at the bandwidth trough unless you use pass-through queries. But a properly written client for a SQL-Server back-end will always be far more parsimonious with network bandwidth than Access going against an MDB located on a LAN server, ceteris paribus.

Pattern for very slow DB Server

I am building an ASP.NET MVC site where I have a fast dedicated server for the web app, but the database is stored on a very busy MS SQL Server used by many other applications.
Even though the web server is very fast, the application response time is slow, mainly because of the slow response from the DB server.
I cannot change the DB server, as all data entered in the web application needs to arrive there in the end (for backup reasons).
The database is used only by the web app, and I would like to find a cache mechanism where all the data is cached on the web server and the updates are sent to the DB asynchronously.
It is not important for me to have an immediate correspondence between read DB data and inserted data: think of reading questions on StackOverflow, where newly inserted questions do not necessarily need to show up immediately after insertion.
I thought of building an in-between WCF service that would exchange and sync the data between the slow DB server and a local one (maybe an SQLite or a SQL Express one).
What would be the best pattern for this problem?
What is your bottleneck? Reading data or Writing data?
If you are concerned about reading data, using a memory-based caching mechanism like memcached would be a performance booster, as most of the mainstream and biggest web sites do. Scaling Facebook/hi5 with memcached is a good read. Also, implementing application-side page caches would drop the number of queries made by the application, giving lower DB load and better response time. But this will not have much effect on the database server's load, as your database has some other heavy users.
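For the application-side page cache idea in ASP.NET MVC, here is a minimal sketch; the controller, action, and cache duration are just an example, not from the question:

```csharp
using System.Web.Mvc;

public class QuestionsController : Controller
{
    // Cache the rendered page on the web server for 60 seconds, so repeated
    // requests in that window never touch the busy database at all.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        var questions = LoadQuestionsFromDatabase(); // hypothetical data access call
        return View(questions);
    }

    private object LoadQuestionsFromDatabase()
    {
        // Placeholder for the real repository/LINQ query against the slow server.
        return new object();
    }
}
```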
If writing data is the bottleneck, implementing some kind of asynchronous middleware storage service seems like a necessity. Putting a lightweight database such as MySQL or PostgreSQL (maybe not that lightweight ;) ) on the front-end server and using your real database as a slave replication server for your site would be a good choice for you.
I would do what you are already considering. Use another database for the application and only use the current one for backup purposes.
I had this problem once, and we decided to go for a combination of data warehousing (i.e. pulling data from the database every once in a while and storing this in a separate read-only database) and message queuing via a Windows service (for the updates.)
This worked surprisingly well, because MSMQ ensured reliable message delivery (updates weren't lost) and the data warehousing made sure that data was available in a local database.
It still will depend on a few factors though. If you have tons of data to transfer to your web application it might take some time to rebuild the warehouse and you might need to consider data replication or transaction log shipping. Also, changes are not visible until the warehouse is rebuilt and the messages are processed.
On the other hand, this solution is scalable and can be relatively easy to implement. (You can use Integration Services to pull the data into the warehouse, for example, and use a business logic layer for processing changes.)
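A rough sketch of the MSMQ half of such a setup, assuming a hypothetical private queue on the web server that the Windows service reads from and applies to the slow database at its own pace:

```csharp
using System.Messaging;

public class UpdateQueue
{
    // Hypothetical private queue on the web server; the queue path is a placeholder.
    private const string QueuePath = @".\Private$\PendingUpdates";

    public void Enqueue(string updatePayload)
    {
        if (!MessageQueue.Exists(QueuePath))
        {
            MessageQueue.Create(QueuePath, transactional: true);
        }

        using (var queue = new MessageQueue(QueuePath))
        using (var transaction = new MessageQueueTransaction())
        {
            transaction.Begin();
            // MSMQ persists the message, so the update survives even if the
            // database server is temporarily unreachable.
            queue.Send(updatePayload, transaction);
            transaction.Commit();
        }
    }
}
```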
There are many replication techniques that should give you proper results. By installing a SQL Server instance on the 'web' side of your configuration, you'll have the choice between:
Making snapshot replications from the web side (publisher) to the database-server side (subscriber). You'll need a paid version of SQL Server on the web server. I have never worked on this kind of configuration, but it might use a lot of the web server's resources at scheduled synchronization times.
Making merge (or transactional, if required) replication between the database-server side (publisher) and the web side (subscriber). You can then use the free version of MS SQL Server and schedule the synchronization process to run according to your tolerance for potential loss of data if the web server goes down.
I wonder if you could improve it by adding an MDF file on your web side instead of dealing with the server at another IP...
Just add a SQL Server 2008 Express Edition file and try it; as long as you don't pass 4 GB of data you will be OK. Of course there are more restrictions, but just for the speed of it, why not try?
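If you do try that, the change on the application side is mostly the connection string; a hedged sketch follows (the file name and instance are placeholders, and the 4 GB limit of 2008 Express still applies):

```csharp
using System.Data.SqlClient;

class LocalExpressExample
{
    static void Main()
    {
        // Hypothetical connection string attaching a local Express MDF file on the web server.
        const string localConnectionString =
            @"Data Source=.\SQLEXPRESS;" +
            @"AttachDbFilename=|DataDirectory|\LocalCache.mdf;" +
            "Integrated Security=True;User Instance=True";

        using (var connection = new SqlConnection(localConnectionString))
        {
            connection.Open();
            // From here the application reads and writes locally, while a separate
            // process forwards changes to the central (slow) SQL Server.
        }
    }
}
```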
You should also consider the network switches involved. If the DB server is talking to a number of web servers then it may be constrained by the network connection speed. If they are only connected via a 100 Mbit network switch then you may want to look at upgrading that too.
The WCF service would be a very poor engineering solution to this problem - why make your own when you can use the standard SQL Server connectivity mechanisms to ensure data is transferred correctly? Log shipping will send the data across at selected intervals.
This way, you get the fast local sql server, and the data is preserved correctly in the slow backup server.
You should investigate the slow sql server though, the performance problem could be nothing to do with its load, and more to do with the queries and indexes you're asking it to work with.
