In the Oracle world, it's been gospel to make your database block size an even multiple of the file system's block size. I assume this is still true, but I'm not averse to being told why technology has made this irrelevant.
But I've been told some SQL Server DBAs are going to upgrade the OS of a SQL Server 2000 installation to 64-bit to get 64 KB pages in the FS.
Does SQL Server 2000 support changing the page size?
From what I've read it's fixed at 8k. Is that right?
If it is fixed at 8k, would there be any advantage to making the FS 64k?
I'm getting this information from a reliable source, but it is nonetheless second-hand.
EDIT: Thanks to SAMBO, I've read the links and found the recommendation that the "NTFS Allocation Unit Size" be set to 64 KB.
I assume that term = block size...
So the apparent conflict between 8 KB DB pages and 64 KB FS blocks is in fact the recommended setup from MS.
Make sure you read Microsoft's Predeployment I/O Best Practices.
It recommends using 64K allocation units for NTFS volumes.
Also, read SQL Server 2000 IO basics
Finally have a look at this post.
SQL Server's page size is in fact 8 KB, and this is not configurable. The advantage of having a larger allocation unit at the OS level is that you may get slightly better performance when SQL Server is fetching pages into its cache.
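If you want to see those 8 KB pages at work, here is a minimal sketch that counts how many pages each database currently holds in the buffer cache. Note this assumes SQL Server 2005 or later (the DMV does not exist on SQL Server 2000).

-- How many 8 KB pages each database has in the buffer pool right now
-- (requires SQL Server 2005+ and VIEW SERVER STATE permission).
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS cached_pages,
       COUNT(*) * 8 / 1024  AS cached_mb      -- 8 KB per page
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;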
In my experience, I doubt mucking around with these values will give you any noticeable performance improvement; at best you would get a minuscule one.
You're better off spending your effort on things like isolating tempdb, ensuring RAID 1/0 arrays are used, having your transaction log live on a different array from the data file, and optimizing queries.
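As a rough starting point for that file-placement advice, this sketch lists which physical path each data and log file lives on. It uses master..sysaltfiles, which goes back to SQL Server 2000 (newer versions expose the same information through sys.master_files).

-- Which drive/array does each database file actually live on?
SELECT DB_NAME(dbid) AS database_name,
       name          AS logical_name,
       filename      AS physical_path
FROM master.dbo.sysaltfiles
ORDER BY database_name, logical_name;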
The overall performance of the file-system can make a noticeable difference.
For example, I heard when Windows Server 2003 came out that SQL Server 2000 performance on that platform was improved significantly.
So it doesn't surprise me. I don't think the block-size multiple itself is that big a deal.
Related
We are running Dynamics GP 2010 on two load-balanced Citrix servers. For the past three weeks we have had severe performance hits when users run Fixed Assets reporting.
The database is large in size, but when I run the reports locally on the SQL server, they run great. The SQL server seems to be performing adequately even when users are seeing slow performance.
Any ideas?
Just because your DB seems unstressed does not mean that it is fine; it could contain other bottlenecks. Typically, if a DB server is not maxing out its CPUs occasionally, it means there is a much bigger problem.
The standard process for troubleshooting performance problems in a data-driven app goes like this:
Tune DB indexes. If you haven't tried it yet, the Tuning Wizard in SSMS is a great starting point (a query for spotting the most expensive statements is also sketched after this list).
Check resource utilization: CPU and RAM. If your CPU is maxed out, consider adding/upgrading CPUs, optimizing code, or splitting your tiers. If your RAM is maxed out, consider adding RAM or splitting your tiers.
Check HDD usage: if your queue length goes above 1 very often (more than once per 10 seconds), upgrade disk bandwidth or scale out your disks (RAID, multiple MDF/LDF files, DB partitioning).
Check network bandwidth
Check for problems on your app (Dynamics) server
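For the index/query tuning step, a sketch like the one below pulls the statements that have burned the most CPU per execution since the plan cache was last cleared. It assumes SQL Server 2005 or later, where these DMVs exist; the TOP 10 cut-off is arbitrary.

-- Most expensive statements by average CPU time (SQL Server 2005+).
SELECT TOP 10
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_microseconds DESC;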
Shared report dictionaries are the bane of reporting in GP. They do tend to slow things down, and modifying reports becomes impossible because somebody always has the dictionary open.
Use local report dictionaries and have a system to keep them synced with a "master" reports.dic.
Here's what we're trying to do.
We will have a 200GB+ SQL Server database that needs to load into memory. Microsoft best practice is to have enough physical memory available on the server and then load the entire database into it. That means we would need 256 GB of memory on each of our SQL Servers. This would result in fast access to the in-memory database, but at the high cost of all that memory. BTW, we're running SQL Server 2008 on Windows Server 2008.
Currently, our server is set up with only 12 GB of memory. Just under 3 GB is allocated to the OS, and the remaining 9 GB is used for SQL Server. Is it possible to increase the page file to 256 GB and put it on an SSD drive? What we want to do then is load the database into the page file located on the SSD. We're hoping the performance will be similar to loading the entire database into memory, since it'll be on an SSD.
Will this work? Is there another alternative we're overlooking? We want to keep the costs down as much as we can, without sacrificing the performance of our environment. Any advice would be appreciated.
Thanks.
If you want the database to be stored in memory, you need to buy more memory. In spite of what the other answer suggests, memory is the absolute best and cheapest way to make a database perform better - SQL Server is designed to use memory well.
While SQL Server will take advantage of the page file when it has to, and while having the page file on an SSD will be slightly faster than on an old-fashioned mechanical disk, it's still I/O and swapping and there is a lot of overhead around that, regardless of the disk type underneath. This may turn out to be a little bit better, in general, than having the same page file on a spinny disk (or no page file at all), but I don't think that it's going to be anywhere near the impact of having real memory, or that it's going to come anywhere close to your expectations of "fast access."
If you can't buy more memory then you can start with this page file on an SSD, but I'm confident you will need to additionally focus on other tuning opportunities - largely making sure you have indexes that support the type of queries you run, avoiding full table scans as much as possible. For full table aggregates you can consider indexed views (see here and here); for subsets you can consider filtered indexes.
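To make that last paragraph concrete, here is a minimal sketch of both ideas. The table, view, and index names are hypothetical, and the filtered index requires SQL Server 2008 (the version mentioned in the question).

-- Hypothetical example: pre-aggregate order totals so full-table aggregates
-- don't have to scan the base table at query time.
CREATE VIEW dbo.vCustomerOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerId,
       COUNT_BIG(*)  AS OrderCount,  -- COUNT_BIG(*) is required for an indexed view with GROUP BY
       SUM(TotalDue) AS TotalSales   -- TotalDue assumed NOT NULL (required for SUM in an indexed view)
FROM dbo.Orders
GROUP BY CustomerId;
GO

-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
    ON dbo.vCustomerOrderTotals (CustomerId);
GO

-- Filtered index: index only the "hot" subset you actually query (SQL Server 2008+).
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (CustomerId, OrderDate)
    WHERE Status = 'Open';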
And just to be sure: you are storing the actual data on an SSD drive, right? If not, then I would argue that you should use the SSD for the data and/or log, not for the page file. The page file isn't going to offer you much benefit if it is constantly swapping data in and out by exchanging with a spinny disk.
Need more clarity on the question.
Are you in control of the database or is this a COTS solution that limits your ability to optimize?
Are you clustering? Is that why adding 200+ GB of RAM is an issue (now more than 400 GB, 200 per node)?
Are you on bare metal or virtualized? Is this why RAM may be an issue?
So far it would seem the "experts" have made some assumptions that may not be fair to your circumstance.
Please update your question... :)
I've read quite a bit about SQL Servers using SSDs performing much better than traditional hard drives. In load tests with my app in a test environment, though, I'm able to keep my test DB server (SQL 2005) pegged between 75% and 100% CPU usage without much of a strain on disk access (as far as I can tell). My data set is still pretty small; database backups are under 100 MB. The test server I'm using is not new, but is also no slouch.
So, my questions:
Is the CPU the bottleneck (as opposed to the storage) because the dataset is small and therefore fits in memory?
Will this change once the dataset grows so paging is necessary?
Approximately how big (as a percentage of system memory) does the dataset have to get before SQL Server starts paging? Or does that depend on a lot of other factors?
As the app and its dataset grows, are there other bottlenecks that will tend to crop up besides CPU, storage, and lack of proper indexes?
Yes
Yes
If you have SQL Server configured to use as much memory as it can get, probably when it exceeds the maximum system memory. But what causes paging is very setup-dependent (the query being executed is the most common cause); the setting that controls how much memory SQL Server tries to use is sketched after these answers.
I/O between the requesting machine and the server is the only one I can think of, and that only matters if you are retrieving large result sets. I also would not class a lack of indexes as a bottleneck; rather, indexes enable better performance with regard to searching.
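For reference, this is the setting the third answer refers to; the 4096 MB figure is only a placeholder, not a recommendation.

-- Cap (or inspect) how much memory the buffer pool will try to use.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';        -- show the current value
EXEC sp_configure 'max server memory (MB)', 4096;  -- placeholder value; size for your own server
RECONFIGURE;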
As long as the CPU is the bottleneck on your dedicated SQL Server machine, you don't have to worry about disk speed (assuming nothing's wrong with the machine). SQL Server WILL use heavy memory caching. SQL Server has built-in strategies to perform best under a given load and the available resources. Just don't worry about it!
We are having some issues with our production SQL Server.
Server: Dual Quad Core Xeon
8 GB RAM
Single RAID 10 Array
Windows 2003 Server 64-bit
SQL Server 2005 Standard 64-Bit
There is about 250MB of free RAM on the machine right now. SQL Server has around 6GB of RAM, and our monitoring software says that only half of the SQL Server allocated RAM is actually being used.
Our main database is approximately 20GB, with about 12GB being used with any frequency. Our tempdb is at 700MB. Both are located on the same physical disk array.
Additionally, using Filemon, I was able to see that the tempdb file had hundreds or thousands of writes of length 65,536 bytes (64 KB). Disk queue length was over 100 nearly 80% of the time.
So, here are my questions:
What would cause all those writes on the tempdb? I'm not sure if we have always had that much activity, but it seems excessive and these problems are recent.
Should I just add more memory to the server?
On high load servers, should tempdb and db files be located on separate arrays?
A high disk queue length does not necessarily mean you have an I/O bottleneck; if you have a SAN or NAS, you may want to look at other, additional counters. Check out the SQL Server Urban Legends discussion for more details.
1: The following operations heavily utilize tempdb
Repeated create and drop of temporary tables (local or global)
Table variables that use tempdb for storage purposes
Work tables associated with CURSORS
Work tables associated with an ORDER BY clause
Work tables associated with a GROUP BY clause
Work files associated with HASH PLANS
These SQL Server 2005 features also use tempdb heavily:
Row-level versioning (snapshot isolation)
Online index rebuilding
As mentioned in other SO answers, read this article on best practices for increasing tempdb performance; a quick way to see which of the categories above is consuming tempdb space is sketched below.
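A hedged sketch of that check, assuming SQL Server 2005 or later: the DMV below breaks tempdb space down into roughly the same categories as the lists above.

-- What is consuming tempdb space right now? (SQL Server 2005+)
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,     -- temp tables, table variables
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, -- sort/hash work tables, cursors
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,    -- row versioning / snapshot isolation
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;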
2: Looking at the amount of free RAM on the server (i.e. the WMI counter Memory -> Available MBytes) doesn't help, as SQL Server will cache data pages in RAM, so any DB server that has been running long enough will have little free RAM.
The counters that are more meaningful in telling you whether adding RAM to the server will help are the following (both can be read with the query sketched after this list):
SQL Server Instance:Buffer Manager->Page Life Expectancy (in seconds)
A value below 300-400 seconds means that pages are not staying in memory very long and data is continually being read in from disk. Servers with a low page life expectancy will benefit from additional RAM.
and
SQL Server Instance:Buffer Manager->Buffer Cache hit Ratio
This tells you the percentage of page requests that were served from RAM without incurring a read from disk; a cache hit ratio lower than 85% means the server will benefit from additional RAM.
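A minimal sketch of reading both counters without opening Perfmon, assuming SQL Server 2005 or later where the DMV exists (on SQL Server 2000, use Performance Monitor itself).

-- Buffer Manager counters straight from SQL Server (2005+).
-- Note: 'Buffer cache hit ratio' must be divided by its 'base' counter to get a percentage.
SELECT RTRIM([object_name]) AS [object_name],
       RTRIM(counter_name)  AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy',
                       'Buffer cache hit ratio',
                       'Buffer cache hit ratio base');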
3: Yes, you can't go wrong here. Having tempdb on a separate set of disks is recommended. Look at this KB article under the heading "Moving the tempdb database" for how to do this.
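A minimal sketch of the steps that KB article describes: the G:\ paths are placeholders for whatever separate array you use, the logical names tempdev/templog are the defaults (check yours with sp_helpfile), and the change only takes effect after the instance restarts.

-- Move tempdb's data and log files to a separate array (placeholder paths).
USE master;
GO
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'G:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'G:\tempdb\templog.ldf');
GO
-- Restart the SQL Server service for the move to take effect.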
Yes, the recommendation on high load servers is to put TempDB on a separate set of drives from the user databases:
SQL Server 2005 Books Online: Optimizing tempdb Performance
Not directly an answer to your question, but this might be a good tip: restarting your SQL Server instance will clear tempdb, which can be a good starting point when investigating what is hitting tempdb.
Excellent question, +1
tempdb is used far more heavily in SQL 2005+.
At least: snapshot isolation levels, online index rebuilds, and reading INSERTED/DELETED in triggers (which used to read the log file!).
This is in addition to the usual ORDER BY clauses, temp tables, etc.
You'd probably be better splitting your log and data files (also for recoverability).
More memory is always good, but see the 64-bit-specific notes from Grumpy Old DBA below.
Finally, and probably most important, you can have contention on space allocation in tempdb:
Explanations from Linchi Shea and SQL Server storage team
Late edit:
Paul Randall added an entry "Comprehensive tempdb blog post series" which offers good links
Writes to the tempdb can be anything. Internal hash tables, temp tables, table variable, stored procedure calls, etc.
If you only have 250 MB of free RAM, then yes, more RAM would be good.
It is always recommended that you split tempdb and user databases to different disks.
Writes to tempdb will typically be 64 KB in size, as that's the size of a database extent (eight 8 KB pages).
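If you want to check the write sizes without Filemon, a sketch like this (SQL Server 2005+) shows the average bytes per write against each tempdb file, along with the accumulated write stalls.

-- Average write size and write stalls for tempdb files (SQL Server 2005+).
SELECT file_id,
       num_of_writes,
       num_of_bytes_written / NULLIF(num_of_writes, 0) AS avg_bytes_per_write,
       io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL);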
I want to put my SQL Server database files on an Intel SS4000-E storage device. It's a NAS device. Would it be possible to use it as storage for SQL Server 2000? If not, what is the best solution?
I strongly recommend against it.
Put your data files locally on the server itself, with RAID mirrored drives. The reasons are twofold:
SQL Server will run much faster for all but the smallest workloads
SQL Server will be much less prone to corruption in case the link to the NAS gets broken.
Use the NAS to store backups of your SQL Server, not to host your datafiles. I don't know what your database size will be, or what your usage pattern will be, so I can't tell you what you MUST have. At a minimum for a database that's going to take any significant load in a production environment, I would recommend two logical drives (one for data, one for your transaction log), each consisting of a RAID 1 array of the fastest drives you can stomach to buy. If that's overkill, put your database on just two physical drives (one for the transaction log, and one for data). If even THAT is over budget, put your data on a single drive and back up often. But if you choose the single-drive or NAS solution, IMO you are putting your faith in the Power of Prayer (which may not be a bad thing, it just isn't that effective when designing databases).
Note that a NAS is not the same thing as a SAN (on which people typically DO put database files). A NAS typically is much slower and has much less bandwidth than a SAN connection, which is designed for very high reliability, high speed, advanced management, and low latency. A NAS is geared more toward reducing your cost of network storage.
My gut reaction - I think you're mad risking your data on a NAS. SQL's expectation is continuous, low-latency, uninterrupted access to your storage subsystem. The NAS is almost certainly none of those things - use local or SAN storage (in order of performance, simplicity, and therefore preference) and leave the NAS for offline file storage/backups.
The following KB lists some of the constraints and issues you'd encounter trying to use a NAS with SQL - while the KB covers SQL 7 through 2005, a lot of the information still applies to SQL 2008 too.
http://support.microsoft.com/kb/304261
Local is almost always faster than networked storage.
Your performance for sql will depend on how your objects, files, and filegroups are defined, and how consumers use the data.
Well "best" means different things to different people, but I think "best" performance would be a TMS RAMSAN or a RAID of SSDs... etc
Best capacity would be achieved with a RAID of large HDDs...
Best reliability/data safety would be achieved with mirroring across many drives and regular backups (preferably off-site)...
Best availability... I don't know... maybe clone the system and have a hot backup ready to go at all times.
Best security would require encryption, but mainly limiting physical access to the machine (and its backups) is enough unless it's internet-connected.
As the other answers point out, there will be a performance penalty here.
It is also worth mentioning that these devices sometimes implement a RAM cache to improve I/O performance. If that is the case and you do trial this config, the NAS should be on the same power protection/UPS as the server hardware; otherwise, in the case of a power outage, the NAS may 'lose' the part of the file held in its cache. Ouch!
It can work, but a dedicated fiber-attached SAN will be better.
Local will usually be faster but it has limited size and won't scale easily.
I'm not familiar with the hardware but we initially deployed a warehouse on a shared NAS. Here's what we found.
We were regularly competing for resources on the head unit -- there was only so much bandwidth that it could handle. Massive warehouse queries and data loads were severely impacted.
We needed 1.5 TB for our warehouse (data/indexes/logs), and we put each of these onto a separate set of LUNs (like you might do with attached storage). Data was spanning just 10 disks. We ran into all sorts of I/O bottlenecks with this. The better solution was to create one big partition across lots of small disks and store data, indexes, and logs all in the same place. This sped things up considerably.
If you're dealing with a moderately used OLTP system, you might be fine but a NAS can be troublesome.