Shared volume for data (multiple MDF) and another shared volume for logs (multiple LDF) on SAN

I have 3 instances of SQL Server 2008, each on a different machine, with multiple databases on each instance. I have 2 separate LUNs on my SAN for MDF and LDF files. The NDX and TempDB files run on the local drive of each machine. Is it O.K. for the 3 instances to share the same volume for the data files and another volume for the log files?
I don't have thin provisioning on the SAN, so I would like to avoid wasting disk space by creating multiple volumes. I was advised to create a volume (drive letter) for each instance, if not for each database. I am aware that I should at least split my log and data files. No instance would share the actual database files, just the space on the drive.
Any help is appreciated.

Of course the answer is: "It depends". I can try to give you some hints on what it depends however.
A SQL Server Instance "assumes" that it has exclusive access to its resources. So it will fill all available RAM per default, it will use all CPUs and it will try to saturate the I/O channels to get maximum performance. That's the reason for the general advice to keep your instances from concurrently accessing the same disks.
Another thing is that SQL Server "knows" that sequential I/O access gives you much higher throughput than random I/O, so there are a lot of mechanisms at work (like logfile organization, read-ahead, lazy writer and others) to avoid random I/O as much as possible.
Now, if three instances of SQL Server do sequential I/O requests on a single volume at the same time, then from the perspective of the volume you are getting random I/O requests again, which hurts your performance.
That being said, it is only a problem if your I/O subsystem is a significant bottleneck. If your logfile volume is fast enough that the intermingled sequential writes from the instances don't create a problem, then go ahead. If you have enough RAM on the instances that data reads can be satisfied from the buffer cache most of the time, you don't need much read performance on your I/O subsystem.
What you should avoid in any case is repeated growth steps on either log or data files. If several files on one filesystem are growing, you will get fragmentation, and fragmentation can turn a sequential read or write request, even from a single source, back into random I/O.
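One way to put the growth-step advice into practice is to pre-size the files and use fixed growth increments, so the files grow rarely and in large chunks. A minimal T-SQL sketch; the database name and logical file names are placeholders, and the sizes are only illustrative:

```sql
-- Pre-size data and log files and set fixed growth increments to limit
-- filesystem fragmentation. "MyAppDb" and the logical names are hypothetical.
ALTER DATABASE MyAppDb
    MODIFY FILE (NAME = MyAppDb_Data, SIZE = 50GB, FILEGROWTH = 1GB);
ALTER DATABASE MyAppDb
    MODIFY FILE (NAME = MyAppDb_Log, SIZE = 10GB, FILEGROWTH = 512MB);
```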
The whole picture changes again if you use SSDs as disks. These have totally different requirements and behaviour, but since you didn't say anything about SSD I will assume that you use a "conventional" disk-based array or RAID configuration.
Short summary: You might get away with it, if the circumstances are right, but it is hard to assess without knowing a lot more about your systems, from both the SAN and SQL perspective.

Related

Multiple File groups on a Virtual machine for SQL Server

In many SQL Server articles it is mentioned that the best practice is to use multiple filegroups across physical disks to avoid disk contention and disk spindle issues. So my questions are:
1. Does the same theory of having multiple filegroups hold true for a virtual machine?
2. Should I still create my tempdb on a different disk, and should I also create multiple tempdb files to avoid large read/write operations on the same tempdb file, in a virtual machine setup for my production environment?
Your recommendation and reasoning would be helpful to decide the best practice.
Thanks.
Yes, it still applies to virtual servers. Part of the contention problem is accessing the Global Allocation Map (GAM) or Shared Global Allocation Map (SGAM), which exists for each database file and can only be accessed by one process or thread at a time. This is the "latch wait" problem.
If your second disk is actually on different spindles, then yes. If the database files would be on different logical disks but identical spindles, then it's not really important.
The MS recommendation is that you should create one tempdb data file for each logical processor on your server, up to 8. You should test to see if you still find latch contention on tempdb before adding more than 8 data files.
You do not need to (and generally should not) create multiple tempdb log files because those are used sequentially. You're always writing to the next page in the sequence, so there's no way to split up disk contention.
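As a sketch of the one-file-per-logical-processor recommendation, extra tempdb data files can be added like this. The drive letter, file names, and sizes are placeholders; keep all tempdb data files the same size so SQL Server's proportional-fill algorithm spreads allocations evenly across them:

```sql
-- Add extra tempdb data files (hypothetical paths and sizes).
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf',
              SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdb3.ndf',
              SIZE = 4GB, FILEGROWTH = 512MB);
```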
The question needs a bit more information about your environment. If the drives for the VM are hosted on a SAN somewhere and the drives presented to the VM are all spread across the same physical disks on the SAN then you're not going to avoid contention. If, however, the drives are not on the same physical drives then you may see an improvement. Your SAN team will have to advise you on that.
That being said, I still prefer having the log files split from the data files and tempdb on its own drive. The reason is that if a query doesn't go as planned, it can fill the log file drive, which may take that database offline, while other databases may still be able to keep running (assuming they have enough empty space in their log files).
Again with tempDB, if that does get filled then the transaction will error out, and everything else should keep running without intervention.
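To keep an eye on how close each log file is to filling its drive, one quick check that has been around since SQL Server 2000 is:

```sql
-- Reports log size and percent-used for every database on the instance.
DBCC SQLPERF (LOGSPACE);
```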

Effects of data directory being on same partition as windows in SQL Server

If the database directory is on the same partition as windows can it cause my operating system be less responsive? Will the performance degradation of the OS be extensive or barely noticeable?
The big issue would be to make sure to keep the data files (which are dominated by random I/O) on a separate disk (and preferably different controller, too) from the log files (which have only sequential writes, for the most part). Mixing those random I/O with the constant sequential writing can be quite bad for performance.
And yes - I don't have any numbers to back it up, but I would believe having the OS and SQL Server data files on the same disk would cause performance to suffer - you will have two distinct sets of operations (OS and SQL Server) accessing the same disk in two separate, random I/O kind of patterns - certainly not ideal!
A commonly seen best practice for low-end SQL Server systems would be three disks:
one for the OS and tools
one for your data
one for your logs

Recommended placement of tempdb and log for SQL Server OLTP database(s)

Suppose the following configuration:
Drive D ... Data, Drive E .... TempDB, Drive F ... Log.
and suppose all drives are on separate spindles with respective drive controllers.
Concerning performance; is the above configuration optimal, decent, or not advisable?
With budgetary constraints in mind, can any of these DBs share the same drive without significant performance degradation?
Which of these drives needs to be the fastest?
This is difficult to answer without a full analysis of your system. For example, to do this properly, we should have an idea what kind of IOPS your system will generate, in order to plan for slightly more capacity than peak load.
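One way to get a rough idea of the IOPS and I/O stalls your current system generates is to sample the virtual file stats DMV (available in SQL Server 2005 and later). This is only a sketch, not a full baselining methodology; the counters are cumulative since instance startup, so you would normally take two samples and diff them:

```sql
-- Cumulative reads/writes and stall times per database file since startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id;
```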
I always love RAID10 across the board, separate arrays for everything, and in many instances splitting into different file groups as performance needs dictate.
However, in a budget-constrained environment, here is a decent, basic configuration, for someone who wants to approximate the ideal:
4 separate arrays:
System databases: RAID 5 (not the operating system array, either!)
Data: RAID 5
Logs: RAID 10
Tempdb: RAID 1 or 10, the latter for high IOPS scenario
(Optional) - RAID 5 to dump backups to (copy from here to tape)
This setup provides decent performance and a better chance of recoverability. For example, if your data array fails, you can still run the server and issue BACKUP LOG to do a point-in-time recovery of the failed databases, since you can still access the system databases and your transaction logs despite the data-array failure.
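The recovery path described above hinges on a tail-log backup. A hedged sketch, with hypothetical database and path names:

```sql
-- After the data array fails, back up the tail of the log; NO_TRUNCATE
-- allows the backup even though the data files are inaccessible.
BACKUP LOG MyAppDb
    TO DISK = 'G:\Backups\MyAppDb_tail.trn'
    WITH NO_TRUNCATE;
-- Then restore the last full backup and the log-backup chain, finishing
-- with this tail-log backup (optionally WITH STOPAT for point-in-time
-- recovery).
```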

What's the best storage for SQL Server 2000?

I want to host my SQL Server database files on an Intel SS4000-E storage device. It's a NAS. Could it work as storage for SQL Server 2000? If not, what is the best solution?
I strongly recommend against it.
Put your data files locally on the server itself, with RAID mirrored drives. The reasons are twofold:
SQL Server will run much faster for all but the smallest workloads
SQL Server will be much less prone to corruption in case the link to the NAS gets broken.
Use the NAS to store backups of your SQL Server, not to host your datafiles. I don't know what your database size will be, or what your usage pattern will be, so I can't tell you what you MUST have. At a minimum for a database that's going to take any significant load in a production environment, I would recommend two logical drives (one for data, one for your transaction log), each consisting of a RAID 1 array of the fastest drives you can stomach to buy. If that's overkill, put your database on just two physical drives, (one for the transaction log, and one for data). If even THAT is over budget, put your data on a single drive, back up often. But if you choose the single-drive or NAS solution, IMO you are putting your faith in the Power of Prayer (which may not be a bad thing, it just isn't that effective when designing databases).
Note that a NAS is not the same thing as a SAN (on which people typically DO put database files). A NAS typically is much slower and has much less bandwidth than a SAN connection, which is designed for very high reliability, high speed, advanced management, and low latency. A NAS is geared more toward reducing your cost of network storage.
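Using the NAS as a backup target rather than for live data files can be as simple as pointing BACKUP DATABASE at a UNC path. The server and share names below are hypothetical, and the SQL Server service account needs write permission on the share:

```sql
-- Full backup written to a network share (hypothetical server/share).
BACKUP DATABASE MyAppDb
    TO DISK = '\\nas01\sqlbackups\MyAppDb_full.bak'
    WITH INIT, STATS = 10;
```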
My gut reaction: I think you're mad risking your data on a NAS. SQL Server expects continuous, low-latency, uninterrupted access to its storage subsystem, and a NAS is almost certainly none of those things. Use local or SAN storage (in order of performance and simplicity, and therefore preference) and leave the NAS for offline file storage/backups.
The following KB lists some of the constraints and issues you'd encounter trying to use a NAS with SQL - while the KB covers SQL 7 through 2005, a lot of the information still applies to SQL 2008 too.
http://support.microsoft.com/kb/304261
Local storage is almost always faster than networked storage.
Your performance for sql will depend on how your objects, files, and filegroups are defined, and how consumers use the data.
Well "best" means different things to different people, but I think "best" performance would be a TMS RAMSAN or a RAID of SSDs... etc
Best capacity would be achieved with a RAID of large HDDs...
Best reliability/data safety would be achieved with mirroring across many drives, plus regular backups (offsite preferably)...
Best availability... I don't know... maybe clone the system and have a hot backup ready to go at all times.
Best security would require encryption, but mainly limiting physical access to the machine (and its backups) is enough, unless it's internet-connected.
As the other answers point out, there will be a performance penalty here.
It is also worth mentioning that these devices sometimes implement a RAM cache to improve I/O performance. If that is the case and you do trial this configuration, the NAS should be on the same power protection/UPS as the server hardware; otherwise, in case of a power outage, the NAS may 'lose' the part of the file held in cache. Ouch!
It can work but a dedicated fiber attached SAN will be better.
Local will usually be faster but it has limited size and won't scale easily.
I'm not familiar with the hardware but we initially deployed a warehouse on a shared NAS. Here's what we found.
We were regularly competing for resources on the head unit -- there was only so much bandwidth that it could handle. Massive warehouse queries and data loads were severely impacted.
We needed 1.5 TB for our warehouse (data/indexes/logs), so we put each of these resources onto a separate set of LUNs (as you might do with attached storage). The data was spanning just 10 disks, and we ran into all sorts of I/O bottlenecks with this. The better solution was to create one big partition across lots of small disks and store data, indexes, and logs all in the same place. This sped things up considerably.
If you're dealing with a moderately used OLTP system, you might be fine but a NAS can be troublesome.

Which type of external drives are good for SQL backup files?

As a part of database maintenance we are thinking of taking daily backups onto an external/firewire drives. Are there any specific recommended drives for the frequent read/write operations from sql server 2000 to take backups?
Whatever you do, just don't use USB 1.1.
The simple fact is that hard drives fail over time. The two best solutions I can recommend unfortunately don't involve hard drives.
Using a tape backup is slower, granted, but you get the flexibility of offsite backups. It is easy to put a tape in the boot of a car, and rotating the tapes means that you have pretty recent protection against any unforeseen situations.
Another option is an online backup solution where the backups are encrypted and copied offsite. My recommendation is definitely to have at least some sort of offsite backup outside the building where you keep the SQL Servers. After all, it is "disaster" recovery.
Pretty much any external drive can be used here, provided it has the space to hold your backups and enough performance to get the backups there. The specifics depend on your exact requirements.
In my experience, FireWire tends to outperform USB for disk activity, regardless of their theoretical maximum transfer rates. And FireWire 800 will perform even better yet. I have found poor performance from FireWire and USB drives when you have multiple concurrent reads/writes going on, but with backups, it's generally more large sequential reads and writes.
Another option that is a little more complex to set up and manage, but can provide greater flexibility and performance, is external SATA (eSATA). You can even get hot-swappable external SATA enclosures for greater convenience and ease of taking your backups offsite.
However, another related option that I've had excellent success with is to set up a separate server to act as your backup server. You can use whatever disk options you choose (FireWire, SATA, eSATA, SCSI, FibreChannel, iSCSI, etc.) and share out that disk storage as a network share (I use NFS and Samba on a Linux box, but for a Windows-oriented network, a Windows share will work fine). You can then access the shares across the network and back up multiple machines to it. Also, separating the backup server from your production machines will give you greater flexibility if you need to take it offline for maintenance, adding/removing storage, etc.
Drobo!
A USB hard-drive RAID array that uses normal, off-the-shelf hard drives. Four bays; when you need more space, buy another hard drive. Out of bays? Buy bigger drives and replace the smallest one in the array.
http://www.drobo.com/
Depending on the size of the databases, the speed of the drive can be a real factor. I would look into something like a Drobo, but with an eSATA or SAS interface. There is nothing more entertaining than watching a terabyte go through USB 2.0. Also, you might consider something like HyperBac or Red Gate SQL Backup to compress the backups and make them easier to fit on the drive.
For the most part, external drives aren't a good option - unless your database is really small.
Other than some of the options others have listed, you can also use UNC/Network shares as a great 'off-box' option.
Check out the following video for some other options:
SQL Server Backup Options (Free Video)
And the videos on configuring backups on the site will show you how to specify a network path for backup purposes.