Why should I upgrade from SQL express to standard SQL server? - sql-server

One of our customers has been on SQL Express 2012 R2 for the last couple of years, and their database has grown to 1 GB. At any given time they have around 15 or more workstations connected to this database. They have a dedicated 2008 server for this database. Sometimes I can see issues with slow response, but most of the time it is just fine. I cannot tell whether suggesting Standard edition would improve performance, or whether it would be a waste of money. Can anybody suggest the parameters to check before I make this decision?
In Task Manager there are two sqlservr.exe processes; both of them are using 0% CPU, but one of the processes is using 2.2 GB of memory and the other is using 68 MB.
Am I already pushing the envelope too far?
Please advise.

This cannot be answered without knowing how the system is developed. The vast majority of slowness issues I have run across in many years of database work have to do with inefficient code or missing indexes. Getting a higher edition of the database won't fix either of those two issues.
Your problem could also be caused by physical equipment that is reaching its limit, or by network issues.
You are not close to the data storage capacity of SQL Server Express (10 GB per database), so I would investigate other things first, as SQL Server Standard edition is quite a bit more expensive.
Your best bet would be to get a good book on performance tuning and read it. There are hundreds of things that can cause slowness.
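If you want concrete evidence before recommending an upgrade, the built-in dynamic management views are a reasonable place to start. A minimal sketch - the TOP counts and the ordering are just illustrative choices, and these DMVs exist in Express as well:

-- Indexes the optimizer wishes it had, ranked by a rough impact estimate.
SELECT TOP (10)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;

-- The most expensive cached queries, by total elapsed time (microseconds).
SELECT TOP (10)
       qs.total_elapsed_time,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;

If neither of these shows anything alarming, the bottleneck is more likely hardware or network than the edition of SQL Server.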

Related

DBF/Foxpro - super slow queries through the network

I have an app that conducts queries on multiple DBF files over a local area network.
However, the queries are extremely slow (up to 5 minutes on some files). They work quite fast locally, but since this app is going to be distributed to several customers, we must account for those who have their DBFs on a remote machine.
What can I do to speed up these queries? I've already turned off oplocks, I repacked the DBFs, there's no antivirus running, and the issue persists.
You don't say what version of FoxPro you're using, or what client and server operating systems are involved, or what sort of network connection is involved.
On an up-to-date Server 2008 R2 or later, with up-to-date Windows 7 SP1 or later clients, you do not need to (and indeed should not) mess with OpLocks. In fact, you can't turn them off anyway without forcing the server back to SMB1, and you really don't want to do that.
The absolute first thing you should do, before messing with anything infrastructural, is make sure all your queries are Rushmore optimised, because, as with any database, you need to take advantage of indexes. Have you done that?

A scenario to show better performance of having more RAM for SQL server 2008

I've asked for more RAM for our SQL Server (currently we have a server with 4 GB of RAM), but our administrator told me he would accept the request only if I can show him the better performance that more available memory would bring, because he has checked the server logs and SQL Server is using only 2.5 GB.
Can someone tell me how I can prove to him the effect of more available memory (for example, on the performance of a query)?
Leaving aside the fact that you don't appear to have memory issues...
Some basic checks to run:
Check the Page Life Expectancy counter: this is how long (in seconds) a page will stay in memory
Target Server Memory is how much RAM SQL Server wants to use (a query for both is sketched below)
Note on PLE:
"300 seconds" is quoted but our busy server has a PLE of 80k+. Which is a week. When our databases are 15 x RAM. With peaks of 3k new rows per second and lot of read aggregations.
Edit, Oct 2011
I found this article on PLE by Jonathan Kehayias: http://www.sqlskills.com/blogs/jonathan/post/Finding-what-queries-in-the-plan-cache-use-a-specific-index.aspx
The comments feature many of the usual SQL Server suspects.

How do you stress load dev database (server) locally?

Wow, this title immediately gave me "The question you're asking appears subjective and is likely to be closed."
Anyway, after some searching and reading, I decided to ask it.
This comes from my question, What are the first issues to check while optimizing an existing database?, which boiled down to the need to stress load a local SQL Server dev database received as a .bak backup file.
Did I understand correctly from paxdiablo's answer to the question "DB (SQL) automated stress/load tools?" that there are no general-purpose stress/load testing SQL tools that are independent of the RDBMS?
What are the stress/load testing tools for SQL Server?
What are you doing for cheap and dirty stress loading of a local dev SQL Server database?
Update: I am interested in stress loading SQL Server 2000, 2005, and 2008 databases (having no clue about 2000).
OK, let's put aside the final/real testing (that belongs to QA specialists, DBAs and sysadmins) and confine the question to the context of stress loading to find obvious (outrageous) flaws in design and performance bottlenecks.
As one word of warning: it is easy to stress test a local db to see whether the db design is good / bad / missing (bad indices etc.)
Trying to get realistic performance metrics out of that is futile - even with plenty of memory (unlikely - most workstations are crappy memory-wise compared to real db servers) your disc subsystem will SUCK (in 100 meter high letters) compared to what a real db server can throw at it. That is because normal local dev databases only have one or maybe two discs, while db servers often utilize a LOT more and a LOT faster discs. As such, what may be SLOOOOW on your workstation may be a seconds-long operation on the server.
But again, things like bad index usage are visible.
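For the "cheap and dirty" part of the question, a minimal sketch is just a brute-force T-SQL loop run from several connections at once (for instance several sqlcmd/osql windows). The table and column names below are hypothetical; point it at a scratch table in the restored copy of the database:

SET NOCOUNT ON;
DECLARE @i int;
SET @i = 0;
WHILE @i < 100000
BEGIN
    -- Hammer the table with writes.
    INSERT INTO dbo.StressTarget (SomeKey, Payload, CreatedAt)
    VALUES (@i % 5000, REPLICATE('x', 200), GETDATE());

    -- Mix in reads; this will table-scan if SomeKey is not indexed.
    IF @i % 100 = 0
        SELECT COUNT(*) FROM dbo.StressTarget WHERE SomeKey = @i % 5000;

    SET @i = @i + 1;
END;

It proves nothing about production throughput, but missing indexes, blocking and outrageous plans show up very quickly.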
You are correct.
There are no general-purpose stress/load testing SQL tools that are independent of the RDBMS.
And how could there be? You can benchmark the throughput of the hardware sub-systems in isolation (such as SAN and network), but the performance of your database depends very much on the access patterns of your application(s), the type of RDBMS, and the hardware.
Your best bet is to load test your application connected to your database on a representative hardware platform. There are several tools that can do this, including the Ultimate edition of Microsoft Visual Studio 2010.

Practical limit for the number of databases in SQL Server?

In one of the Stack Overflow podcasts (#18, I think) Jeff and Joel were talking about multi- vs single-tenant databases. Joel mentioned that "FogBugz on Demand" used a database-per-customer architecture, and I was wondering if there is a point beyond which you'll need to have multiple database servers to distribute the load?
Technically the limit on databases per instance in SQL Server is 32,767, but I doubt that you could use a SQL Server instance that has more than 2,000 databases; at that point the server would probably become unresponsive.
You may be able to have close to 30,000 databases if they are all auto-closed and not being used. You can find more information about capacity limits here:
Maximum Capacity Specifications for SQL Server
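A quick sanity check on an existing instance is simply to count its databases and see which ones have AUTO_CLOSE enabled; a minimal sketch:

SELECT COUNT(*) AS database_count
FROM sys.databases;

SELECT name, is_auto_close_on, state_desc
FROM sys.databases
ORDER BY name;

Auto-closed, rarely-used databases are what make very high database counts feasible at all, at the cost of a reopen delay on first use.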
Joel has talked about this at another place (sorry, no reference handy) and said that before switching to MS SQL 2005 the management console (and the backend) had problems attaching more than 1000 or 2000 databases. It seems that 2005 and probably 2008 again improved on these numbers.
As always, performance questions depend on your actual hardware and workload, and can only be definitively answered by local benchmarking and system monitoring.
I'd think mostly it depends on the memory limitations of the machine. SQL Server likes to keep as much cached in memory as possible, and as you add databases you reduce the amount of memory available.
In addition, you might want to consider the number of connections to a SQL Server. After 500-1,000, it gets very cloggy and slow, so that is a limitation as well.
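If you want to see where an instance currently stands, the connection and session DMVs give a quick count; a minimal sketch:

SELECT COUNT(*) AS connection_count
FROM sys.dm_exec_connections;

-- Sessions per login, to see who is holding them open.
SELECT login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY login_name
ORDER BY session_count DESC;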
I think it is more a question of the load on the databases. As was said above, if there is no load then 32,767. With a high load it comes down, eventually to one database or even fewer than one (that is, a single busy database may itself need more than one server).

Does SQL Server 2005 scale to a large number of databases?

If I add 300-400 databases to a single SQL Server instance, will I encounter scaling issues introduced by the large number of databases?
This is one of those questions best answered by: Why are you trying to do this in the first place? What is the concurrency against those databases? Are you generating databases when you could have normalized tables to do the same functionality?
That said, yes, MSSQL 2005 will handle that many databases per installation. It will more or less be what you are doing with those databases that will seriously impede your performance (incoming connections, CPU usage, etc.).
According to Joel Spolsky in SO podcast #11, you will in any version prior to 2005; however, this is supposedly fixed in SQL Server 2005.
You can see the transcript from the podcast here.
I have never tried this in 2005. But a company I used to work for tried this on 7.0 and it failed miserably. With 2000 things got a lot better but querying across databases was still painfully slow and took too many system resources. I can only imagine things improved again in 2005.
Are you querying across the databases or just hosting them on the same server? If you are querying across the databases, I think you need to take another look at your data architecture and find other ways to separate the data. If it's just a hosting issue, you can always try it out and move databases off to other servers as capacity is reached.
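"Querying across the databases" here means joins that use three-part names to span databases on the same instance; a hypothetical example (the database and table names are made up):

SELECT c.CustomerName, SUM(o.Amount) AS total_amount
FROM CustomerDb001.dbo.Customers AS c
JOIN BillingDb.dbo.Orders AS o
  ON o.CustomerId = c.CustomerId
GROUP BY c.CustomerName;

Each such query couples the databases together, which is exactly what makes a database-per-customer model awkward once you need to report across tenants.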
Sorry, I don't have a definite answer here.
