SQL Server 2005 efficiency savings? - sql-server

Are there good efficiency savings using SQL Server 2005 over SQL Server 2000?
Or does it just have more services etc.?
Has anyone seen their system work any quicker after making the upgrade?

The surrounding tools such as Analysis Services were substantially rewritten and can get you a variety of wins depending on your requirements. However, I don't see a lot of really fundamental changes from 2000 to 2005 in the core database engine.
There are some improvements that may get you better performance in certain situations. SQL2005 has much better support for 64-bit architectures and better table partitioning than SQL2000 (you can partition a table as opposed to making partitioned views). 64-bit support is the most likely to give you a performance win on a large system as it allows you to set up much larger caches.
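For illustration, here is a rough sketch of 2005-style table partitioning (the function, scheme, table and column names are all invented, and everything maps to PRIMARY just to keep it short); in SQL2000 the equivalent would have been a partitioned view over separate tables:

    -- Partition an orders table by date range (SQL 2005+).
    CREATE PARTITION FUNCTION pfOrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('20050101', '20060101', '20070101');

    CREATE PARTITION SCHEME psOrderDate
        AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Orders
    (
        OrderID   int      NOT NULL,
        OrderDate datetime NOT NULL
    ) ON psOrderDate (OrderDate);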
Apart from those features I don't believe that there is really a large difference. There are probably minor performance tweaks.
The main reason to move from SQL2000 to SQL2005 will be when SQL2000 goes out of support. If you have a running application on SQL2000 there are not a lot of compelling reasons to switch to 2005 while 2000 is still supported by Microsoft.
Data Warehouse systems will get quite a few wins from moving to SQL2005. SSIS, SSAS2005 and SSRS2005 are much better than their SQL2000 counterparts.

2005 provides MVCC - row-level versioning, essentially - so as a developer there are some efficiencies: less locking to worry about.
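To be clear, the row versioning is opt-in rather than the default; a minimal sketch, with the database and table names as placeholders:

    -- Switch the database into row-versioning modes (2005+); changing
    -- READ_COMMITTED_SNAPSHOT typically needs no other active connections.
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- A session can then use snapshot isolation explicitly:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;
        SELECT * FROM dbo.Orders;   -- readers no longer block writers
    COMMIT;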

I haven't migrated a system from 2000 to 2005 - I've either started with one or the other - so I don't have a comparison of my own. But there is a reasonable chance you will see a perf difference; if not by taking advantage of some of the new features like snapshot isolation, then at least by virtue of the fact that SQL2005's licensing model allows you to go multi-core at no additional licensing cost, and by the fact that SQL2005 has improved memory management.

Things will absolutely run faster with 2005. There were several improvements made to the query optimizer. And now you can create covering indexes with INCLUDEd columns, so the included columns exist only at the leaf level and don't have to be part of the key sort. That alone is an enormous improvement and reason enough to upgrade.
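For example (a sketch with invented table and column names):

    -- 2005-style covering index: CustomerID is the key, the INCLUDEd columns
    -- are stored only at the leaf level and are not part of the key sort.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue);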

SQL 2005 does a better job of working with caching. You used to have to poll SQL 2000 periodically to check for updates to a whole table. Now you can subscribe to a notification when something changes. It also works for queries, tables, and a few other elements.
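This is the query notifications feature (consumed from .NET via SqlDependency). On the database side you roughly need Service Broker enabled and permission to subscribe; a sketch with placeholder names:

    -- Service Broker must be on for the database (may need exclusive access
    -- to flip the setting), and the app login needs the subscribe permission.
    ALTER DATABASE MyDb SET ENABLE_BROKER;

    GRANT SUBSCRIBE QUERY NOTIFICATIONS TO AppUser;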

I would say yes for all of the reasons listed by others, but even if your SQL skills are not that strong and your queries are not that great they will probably run faster on 2005. We moved from 2000 to 2005 and we had some complex queries that we could not get properly optimized in 2000. When we moved to 2005 it ate the queries up! Clearly the optimizer was making much better decisions out of the box.
I would strongly recommend moving to 2005 unless you have no issues with 2000.

Related

SQL Server and compatibility mode - performance

We need to move an enterprise ERP database during an upgrade (from 2005 to 2008). I have done some reading on the merits of running compatibility mode, and I know there are some differences in the SQL estimator running native vs. compatibility mode, but I was wondering whether any of you have seen performance improvements from running a SQL database in compatibility mode on a newer server. That is, are there any papers or actual experience suggesting I will get better performance running SQL2008 vs. SQL2014 with compat mode on the database? Do I actually benefit from the new server? We are licensed either way and the ERP is only "guaranteed" on 2008.
I hope to get some feedback from anyone who has run into this problem before. (Compatibility mode has been around for a long time, so I am sure someone has.) Considering that our databases are ~400 GB, clustered and on a SAN, a genuinely real-world test is difficult to do. Even more so because the SAN will "prioritise" things, which just makes the test harder. We all know that SQL 2014 performs better than 2012, but with poor data that may not be the case - hence the general request.
I have never run into any issues with compatibility mode for any version of SQL Server. I also haven't really noticed any performance benefits or drawbacks doing so, but I admit that I haven't done any real timing tests. Usually when I've had to do that, I've upgraded the hardware on the box, so a true comparison is difficult.
Having said that, are you sure that's the way you want to go?
Why not just run a test environment with the database migrated to 2008 and no compatibility? If everything works for you in the test environment, then upgrade directly.
Most SQL Server upgrades are pretty painless, unless you're trying to skip a version or two, which you aren't. Even in failover clusters they aren't that hard as long as you follow the step-by-step procedure from Microsoft.
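Either way, the compatibility level is just a per-database setting on 2008, so it's cheap to flip back and forth in that test environment (the database name below is a placeholder):

    -- Check the current levels, then switch the database between
    -- 2005 behaviour (90) and native 2008 behaviour (100).
    SELECT name, compatibility_level FROM sys.databases;

    ALTER DATABASE MyERPDatabase SET COMPATIBILITY_LEVEL = 90;
    ALTER DATABASE MyERPDatabase SET COMPATIBILITY_LEVEL = 100;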

Third Party Tools for Monitoring SQL Server Performance

I'm in a situation where I came into a new job and I have to support several legacy systems. The original developer is no longer around. These legacy systems are really hammering away at our SQL Server and killing performance. I know that there are a lot of things that can be done in the code, but rewriting code is really my last resort.
What I'm looking for is some sort of tool that will monitor the queries coming into the server and give recommendations on indexing solutions. I know I can use the SQL Server Profiler but I'm looking for something a little more user friendly and something that can help me make the indexing decisions.
I know I didn't explain it very well, but I'm sure this is a common request. I'd like to make informed decisions on what to index and avoid "shooting from the hip" and indexing everything in sight. Thanks for any recommendations!
You don't need a third party tool for this.
Assuming SQL Server 2005+, as long as you can use SQL Profiler (actually SQL Trace - don't use the Profiler GUI for this, to keep tracing overhead as low as possible) to collect a representative workload, you can use the Database Engine Tuning Advisor to automate analysis of the workload and make indexing recommendations.
You can also use the Missing Index DMVs for a quick overview of areas to investigate but the DTA will do more holistic analysis and take into account possible adverse effects of indexes on data modification statements.
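Something along these lines gives a quick first pass over the missing-index DMVs (a sketch; treat the output as candidates to investigate rather than indexes to create blindly):

    SELECT TOP 20
        d.statement        AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_total_user_cost * s.avg_user_impact * s.user_seeks AS rough_impact
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g      ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
    ORDER BY rough_impact DESC;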
+1 for Martin's answer, but since you asked about 3rd party tools, I'll mention one of my favorites (and no, I don't work for the company). Ignite for SQL Server does an excellent job of analyzing server activity in terms of wait time analysis. It won't make recommendations for you, but it will quickly identify the worst performing queries where you need to focus your effort.
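The same wait-time data those tools build on is also exposed directly in the DMVs if you want a free starting point; a rough sketch (you would normally filter out more of the benign system waits):

    SELECT TOP 10
        wait_type,
        wait_time_ms,
        signal_wait_time_ms,
        waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
    ORDER BY wait_time_ms DESC;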
SQL Server 2005+ has a lot of DMVs (Dynamic Management Views) that you can query to get server info, as well as the Profiler / SQL Trace tool.
We administer several large database servers.
Idera is a good tool to manage multiple database servers easily.
I think you'd make a much better DBA if you learn more about the inbuilt functionality of SQL Server.
Have a browse of
http://msdn.microsoft.com/en-us/library/ms188754.aspx
to find out more about DMVs and functions.
Another common issue with performance could be your indexes.
There's a great tutorial that combines the DMVs with improving indexes here:
http://searchsqlserver.techtarget.com/tip/Using-dynamic-management-views-to-improve-SQL-Server-index-effectiveness
Idera is really worth checking out though as a good starting point. Combined with DMVs & SQL Trace there shouldn't be much you won't be able to fix.
Idera just takes most of the leg work out of doing things.
http://www.idera.com/Content/Home.aspx
Idera: SQL Diagnostic Manager

What database to use for big data storage and manipulation?

I have to decide which database server to use for my next project, but the simple decision to use MySQL, like in almost all the projects I have done, is harder now, because I expect a very large number of records.
The database will store a user list, some other irrelevant tables, and finally some user-collected data. Let's say I have 6000 users responding to a quiz about each other. Simple math shows that if each one completes the quiz about everyone else (and in my project it is 99% sure that will happen), I'll end up with 35.99 million records (they exclude themselves, so in this particular situation the operation is 6000 * 5999). Unfortunately 6000 may well be a small number, the real one growing day by day.
What to choose? MySQL, and maybe, if things go well and the project grows, expand it into a cluster? PostgreSQL? MSSQL? Oracle?
I've read about all of them, and each one has its pros and cons, but I still don't know what to choose. The advantage of MySQL and PostgreSQL is of course the starting price of $0, which is pretty nice for a typical self-funded startup.
Any opinions, pieces of advice? If you encountered this situation in your experience as developers, I'd love to hear from you.
These days, being free isn't something that differentiates databases any more. Both Oracle and SQL Server have free versions, but the limitations are on resources - a 4 GB database, RAM, and single-CPU utilization. Millions of records are not a concern - what matters more is which datatypes you're using.
I saw the OP's comment about not liking MS software - that's your prerogative, but the free versions of both Oracle and SQL Server do benefit from a seamless transition to the upscale versions of the respective database.
Personally, my choice would be either Oracle or SQL Server because of, IMHO, real feature considerations like hierarchical query support, subquery factoring/CTEs, packages (long before I get concerned with functions/procedures), full-text searching, XML support, etc.
MySQL will handle 35 million records no problem. Worry about scalability when you get there. You can easily add RAID hard disks backing your database tables, and if you really start getting big you can get a Compellent SAN that will scream... Don't worry about the DB engine as much as the underlying hardware. MySQL rocks for us with millions of records.
I've had no problems handling tables as large as 36,000,000 rows on MySQL and Oracle.
Just be sure that you index the proper columns, run EXPLAINs for your queries, and maintain proper design principles.
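A MySQL-flavoured sketch of that workflow, with invented table and column names:

    -- Index the column you filter on, then check the plan with EXPLAIN
    -- to confirm the index is actually being used.
    CREATE INDEX idx_answers_user_id ON quiz_answers (user_id);

    EXPLAIN
    SELECT target_user_id, answer
    FROM quiz_answers
    WHERE user_id = 42;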
Most of the truly large scale web properties use a distributed key-value store. That said, 35 million is large, but not that large. With most modern databases, your main two scaling worries should be throughput and what happens when no single box can contain your entire database anymore. And both of these problems can be solved to some degree for any database you choose to use. (Caching, replication, sharding, etc.)
Use MySQL until you can't anymore. At that point, you ought to be rolling in dough anyways and you now have a very desirable problem.
Use MySQL as it's free and you have experience with it.
Besides, in my opinion it matters more how you design the tables than which database you use.
35 million records can be easily handled by MS SQL Server (assuming proper database design, indices, etc.). You can start with the free SQL Server Express edition and later, if you need, you can upgrade to the full version which supports clustering, etc.
SQL Server Express does have some limitations - single CPU, 1 GB memory, max 4 GB database size and a few other things. I'm not sure how quickly these limitations will become a problem but you can always move to the full version when you run into them.
MySQL(i) & PostgreSQL
- $0 cost
- large community
- many tutorials
- well documented
MSSQL
- You can get "money" from MS if you promote that you are using MSSQL (secret information from some companies I worked for)
- MS tools work very well
- Complete tool set, from the C# IDE through the .NET libraries to Windows Server 2003
Oracle
- Professional and commercial provider
- Used by many large companies (I have also heard that Blizzard (World of Warcraft) uses Oracle)
- expensive
The final decision depends on the specific requirements of your project.
Make yourself a quick list of the things that ARE IMPORTANT for your project (e.g. fast queries) and see which database's pros best match your requirements.
Everything is about design. SQL databases are a bit like cars: you just have to know which component goes where.
Make a clear design and you won't struggle with any of them.
Maybe you can test Firebird.
Blog post about a big Firebird database here.
The MySQL licence is here (not always free).
PostgreSQL and Firebird are free.
First of all, don't think about performance. Premature optimization being the root of all evil and all that. You can always throw more hardware and/or tuning at it later.
All of the mentioned databases should perform nicely if tuned and maintained correctly. I'd focus on manageability and familiarity. IMHO open source databases excel at manageability (perhaps not the best GUIs, but the CLI has been my home for a long, long time).
And if the database becomes the bottleneck, why limit yourself to those choices? How about a key-value distributed database? Or perhaps serialize data directly to disk? Storing data outside of a RDBMS, while often frowned upon, might be the correct path. Or simply use the common route of denormalization.
Always remember not to optimize prematurely.
As far as opinions go (since you specifically asked for it) I favor open source databases, specifically PostgreSQL. It's rock solid, fast and very well-featured. And even with (relatively) large datasets it has performed superbly on mediocre hardware (some tuning involved, of course, but you can't skip that step no matter which db you end up choosing).

Can SQL server 2008 handle 300 transactions a second?

In my current project the DB is SQL 2005 and the load is around 35 transactions/second. The client is expecting more business and is planning for 300 transactions/second. Currently, even with good infrastructure, the DB is having performance issues. A typical transaction will have at least one update/insert and a couple of selects.
Have you guys worked on any systems that handled more than 300 txn/s running on SQL 2005 or 2008? If so, what kind of infrastructure did you use and how complex were the transactions? Please share your experience. Someone has already suggested using Teradata and I want to know if this is really needed or not. Not my job exactly, but I'm curious about how much SQL Server can handle.
It's impossible to tell without performance testing - it depends too much on your environment (the data in your tables, your hardware, the queries being run).
According to tpc.org it's possible for SQL Server 2005 to get 1,379 transactions per second. Here is a link to a system that's done it. (There are SQL Server based systems on that site that have far more transactions... the one I linked was just the first one I looked at.)
Of course, as Kragen mentioned, whether you can achieve these results is impossible for anyone here to say.
Infrastructure needs for high-performance SQL Servers may be very different from your current structure.
But if you are currently having issues, it is very possible that the main part of your problem is bad database design and bad query design. There are many ways to write poorly performing queries. In a high-transaction system, you can't afford any of them. No SELECT *, no cursors, no correlated subqueries, no badly performing functions, no WHERE clauses that aren't sargable, and on and on.
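To illustrate the sargability point (table and column names invented): the first query wraps the column in a function, so the optimizer cannot seek on an index over OrderDate, while the rewrite can.

    -- Not sargable: the function has to be evaluated for every row.
    SELECT OrderID
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2008;

    -- Sargable rewrite: the column is left bare, so an index on OrderDate
    -- can be used for a range seek.
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate >= '20080101' AND OrderDate < '20090101';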
The very first thing I'd suggest is to get yourself several books on SQL Server performance tuning and read them. Then you will know where your system problems are likely to be and how to actually determine that.
An interesting article:
http://sqlblog.com/blogs/paul_nielsen/archive/2007/12/12/10-lessons-from-35k-tps.aspx

Are there any performance benefits of using SQL Server 2008 over SQL Server 2005?

Are there any performance benefits of using SQL Server 2008 over SQL Server 2005?
Moving a single database from SQL Server 2005 to 2008 will not really show a noticeable difference by itself. However, there are new tools and options available in SQL Server 2008 that you MIGHT be able to leverage to provide better performance later on in your application.
One item that comes to mind is filtered indexes, which allow you to create an index on a subset of the rows in a table.
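A quick sketch, with invented names, of what that looks like:

    -- Index only the small, frequently queried slice of the table (SQL 2008+).
    CREATE NONCLUSTERED INDEX IX_Orders_Open
        ON dbo.Orders (CustomerID, OrderDate)
        WHERE Status = 'Open';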
There may be new features in the engine which execute queries in different ways. This includes changes to the optimiser.
Therefore, the only way you can POSSIBLY tell is to gather detailed performance data from your application on MSSQL2005, and then repeat the experiment on the same (production-quality) hardware with SQL2008.
You will need to make sure your application works correctly - such a migration can't be done lightly, as any change could introduce bugs.
Also, the new version of the database could have performance regressions - which you need to be very careful about.
So in summary:
Benchmark YOUR application on SQL2005
Benchmark it on SQL2008
Use the same production-grade test hardware in your lab both times
Don't run VMs (unless that's what you do in production)
Don't change other parameters
This may not be easy if your application is big / complicated.
Yes. You can compress data in SQL 2008, which can have a drastic impact on backup and data transfer times.
Actually SQL2008 has built-in compression that you can enable out of the box, which could definitely improve performance, but it may depend on what is being returned. I would try this option and benchmark to see if you feel it's a worthy change.
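For reference, both kinds of compression are just a setting away in 2008 (data compression is Enterprise edition only; the names and path below are placeholders):

    -- Row/page compression on an existing table:
    ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Compressed backup:
    BACKUP DATABASE MyDb
        TO DISK = 'D:\Backups\MyDb.bak'
        WITH COMPRESSION;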
