How to detach a data disk in an Azure SQL VM

For temporary testing purposes I created a SQL VM in Azure, and the Azure wizard assigned me an OS disk of 127 GB and a data disk of 1 TB. But the cost of the data disk is a bit expensive for me, so I changed the server's default data and log paths to the OS (C:) drive, backed up the databases to the C: drive, and then detached the data (F:) disk.
The problem is that SQL Server fails to start without the data disk. What should I do if I want to run SQL Server without the data (F:) disk?

The C: drive is dedicated to the OS. While putting data and log files on the same drive as the OS may work for test/dev workloads, it is not recommended to put database files and the OS on the same drive for production workloads. Depending on your VM type, the OS drive may use Standard Disks (HDD) or Premium Disks. There is an IOPS limit for each of these disk types. For instance, Standard Disks can support up to 500 IOPS, and it is important to reserve these IOPS for the OS so that OS operations do not have to compete for IOPS with other applications. Starving OS operations can result in a VM restart.
Please send an email to AzureDisksPM@microsoft.com if you have additional questions.
Thanks,
Aung

As far as I recall, you don't really need the data disk to run SQL Server on an Azure VM. By default it is used to host the database files, but you can move those to the C: drive and repoint SQL Server to them. There are several ways to do that; consult the official docs:
https://learn.microsoft.com/en-us/sql/relational-databases/databases/move-system-databases?view=sql-server-2017
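As a minimal sketch of the repointing step for a user database, assuming a hypothetical database `MyUserDb` whose files currently live on the F: drive (check the actual logical and physical file names in `sys.master_files` on your instance first):

```sql
-- Tell SQL Server where the files will live after the move
ALTER DATABASE MyUserDb
    MODIFY FILE (NAME = MyUserDb_Data, FILENAME = 'C:\SQLData\MyUserDb.mdf');
ALTER DATABASE MyUserDb
    MODIFY FILE (NAME = MyUserDb_Log,  FILENAME = 'C:\SQLData\MyUserDb_log.ldf');

-- Take the database offline, copy the files from F:\ to C:\SQLData,
-- then bring it back online so SQL Server opens the files at the new path
ALTER DATABASE MyUserDb SET OFFLINE;
-- (copy the .mdf and .ldf files to C:\SQLData here)
ALTER DATABASE MyUserDb SET ONLINE;
```

The system databases (especially master) need the separate procedure described in the linked docs, so do those before detaching the disk.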

Related

Best practice in choosing the drives for data and log files while installing sql server in prod environment?

I need to install SQL Server for a production environment. There are only two drives in the system: one with 120 GB and another with 50 GB. How should I choose the drives to hold the user database data and log files and the tempdb files?
Your question is too broad to have a simple answer.
Take these points into consideration:
What is the size of the user database?
What is the expected growth of the user database?
Do you have a lot of queries with #-tables? (tempdb strain)
What is the expected transaction count per second/minute?
Use SQLIO to measure the speed of your drives.
Do you have load tests ready? (Run them, look at Resource Monitor, and check the disk queues.)
Which recovery model are you using? (This drives the growth of your log files.)
Which backup strategy are you planning?
It is equally possible that:
you don't have to worry, with all DBs in the default location, or
you need faster hardware.

Processing data faster using ssis and an ssd drive

We have a new local server that processes data using SQL Server 2008 and SSIS. On this server I have dedicated drives for different things: the C: drive is for the OS and software, the D: drive is for DB storage and SSIS, and the E: drive is an SSD to which we restore each database used by the SSIS process.
Our idea was that, since we process a lot of data and the SSD is only 500 GB (because of the cost), we would keep everything on a regular drive and transfer the databases in use to the SSD to make the process run faster.
When I run the SSIS process without the SSD it takes about 8 hours, and when I run it after restoring the databases to the SSD it takes about the same amount of time (in fact, if I include the restoring of the databases, the process takes longer).
Right now I cannot move the OS and software to the SSD to test whether that would help the process.
Is there a way to utilize the SSD to process the data and speed up the process?
If you want to speed up a given process, you need to find the bottleneck.
Generally speaking (since you give no details of the SSIS process), at each point in the operation one of the system's components (CPU, RAM, I/O, network) is operating at maximum speed. You need to find the component that contributes the most to your run time and then either speed it up by replacing it with a faster component, or reduce the load on it by redesigning the process.
Since you have already ruled out the I/O to the user database(s), you need to look elsewhere. For a general ad-hoc look, use the system's Resource Monitor (available through Task Manager). For a deeper look, there are lots of performance counters available via perfmon.exe, for the OS (CPU, I/O), SSIS, and SQL Server.
If you have reason to believe that DB I/O is your bottleneck, try moving tempdb to the SSD (if you generally have a lot of load on tempdb, that is a good idea anyway). Instructions here.
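A sketch of the tempdb move, assuming the SSD is the E: drive with a hypothetical `E:\TempDB` folder (the logical names `tempdev` and `templog` are the SQL Server defaults; verify yours with `sys.master_files`):

```sql
-- Repoint tempdb's files to the SSD. tempdb is recreated at startup,
-- so the new location takes effect after a service restart; the old
-- files on the previous drive can then be deleted.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDB\templog.ldf');
```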
Unless you give us some more details about the SSIS process in question, that's all I can say for now.

Creating SQL Server database with data and log file on synology NAS Server

I need to create a new database for an application. I planned to put the data files on a NAS server (Synology 812). I have tried to create the database using different paths for two days, but nothing worked. At the bottom you can see an example path, N'\\10.1.1.5\fileserver\...
I also tried N'\\10.1.1.5\volume1\fileserver\payroll.ldf', because the properties dialog in the Synology admin interface shows this path for the fileserver shared directory.
fileserver is a shared folder. I can reach that folder from File Explorer:
\\10.1.1.5\fileserver\
And I can create new files or folders inside it using Windows Explorer. But unluckily the CREATE statement does not work:
CREATE DATABASE Payroll
ON
( NAME = Payroll_dat,
FILENAME = N'\\10.1.1.5\fileserver\payrolldat.mdf',
SIZE = 20MB,
MAXSIZE = 70MB,
FILEGROWTH = 5MB )
LOG ON
( NAME = 'Payroll_log',
FILENAME = N'\\10.1.1.5\fileserver\payroll.ldf',
SIZE = 10MB,
MAXSIZE = 40MB,
FILEGROWTH = 5MB )
GO
I will be very happy if someone has a solution for my problem.
Thank you for your time.
Ferda
SQL Server doesn't support UNC paths by default. See the KB at http://support.microsoft.com/kb/304261 - Description of support for network database files in SQL Server.
Extracts:
Microsoft generally recommends that you use a Storage Area Network
(SAN) or locally attached disk for the storage of your Microsoft SQL
Server database files because this configuration optimizes SQL Server
performance and reliability. By default, use of network database files
(stored on a networked server or Network Attached Storage [NAS]) is
not enabled for SQL Server.
It can be enabled, but you must ensure your hardware meets some strict conditions:
However, you can configure SQL Server to store a database on a
networked server or NAS storage server. Servers used for this purpose
must meet SQL Server requirements for data write ordering and
write-through guarantees, which are detailed in the "More Information"
section.
[...]
Any failure by any software or hardware component to honor this protocol can result
in a partial or total data loss or corruption in the event of a system failure.
[...]
Microsoft does not support SQL Server networked database files on NAS or networked
storage servers that do not meet these write-through and write-order requirements.
Performance can also be heavily compromised:
In its simplest form, an NAS solution uses a standard network
redirector software stack, standard network interface card (NIC), and
standard Ethernet components. The drawback of this configuration is
that all file I/O is processed through the network stack and is
subject to the bandwidth limitations of the network itself. This can
create performance and data reliability problems, especially in
programs that require extremely high levels of file I/O, such as SQL
Server. In some NAS configurations tested by Microsoft, the I/O
throughput was approximately one-third (1/3) that of a direct attached
storage solution on the same server. In this same configuration, the
CPU cost to complete an I/O through the NAS device was approximately
twice that of a local I/O.
So in summary, if you can't guarantee that your hardware supports these requirements, you're playing with fire. It might work for a small test environment, but I would not host a live database like this, lest data get corrupted or performance severely suffer.
To enable it, use trace flag 1807 as described in the KB.
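For illustration only (and with all the caveats above), enabling the trace flag for the running instance before the `CREATE DATABASE` would look roughly like this; `-1` applies it globally rather than to the current session:

```sql
-- Unsupported pre-2008 R2 workaround: allow networked database files
DBCC TRACEON (1807, -1);

-- Then the CREATE DATABASE with the UNC paths from the question can run.
-- To persist the flag across restarts, add -T1807 as a SQL Server
-- startup parameter instead of running DBCC TRACEON each time.
```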
You need SQL Server 2008 R2 or later. Starting with 2008 R2, UNC names are supported, although not encouraged (for all the reasons Chris mentions). An SMB 3.0 capable environment goes a long way toward solving most UNC storage problems. Prior to 2008 R2 the trace flag 1807 would work, but it was not a supported deployment (CSS could refuse to help you if you asked for help on any issue). See SQL Server Can Run Databases from Network Shares & NAS.
I think the best thing you can do is configure an iSCSI volume on your NAS device, creating a RAID 5 array if you have three or more available disks. After that, connect your NAS and your server on an independent VLAN, attached to a separate network device or even a separate physical network, to avoid impact on your LAN from storage traffic. This way you will be using your NAS as a SAN.

Which single file embedded DB for a network project?

I am looking for an embedded database for a VB 2010 application working over the network. The database file is on a shared network folder on a NAS server (NTFS). For this reason I cannot use any server database like MySQL, SQL Server, etc.
There are nearly 20 PCs accessing the shared folder over the network.
Each PC can open up to 3 connections to the database, so we could have up to 60 connections. Mostly they just read the database; a write happens every 5-6 minutes and rarely at the same time, but it can happen.
In the past I successfully used Access + Jet for such applications and never had problems, although with fewer network users.
I would still use Access + Jet (so I would not need to convert the whole database and code), but I would like to use something newer.
I have seen that SQLite is not quite right for a network/shared environment.
SQL Server Compact is also not right for a shared folder.
VistaDB is too expensive.
Firebird could be an option, but I have no experience with it: it would be used in a production system and I do not know if I can trust it.
Any suggestions? Or shall I stay with Access?
Thanks for replying.
Go with Firebird. It is stable, lightweight, free, and very fast both as a network and as an embedded database. I am using it everywhere.
However, the database cannot reside on a shared network folder. It must reside on a hard drive that is physically connected to the host machine.
VistaDB is good as an embedded database, but has awful performance as a network database because it is not true client-server.

Which type of external drives are good for SQL backup files?

As part of database maintenance we are thinking of taking daily backups onto external/FireWire drives. Are there any specific recommended drives for the frequent read/write operations involved in taking backups from SQL Server 2000?
Whatever you do, just don't use USB 1.1.
The simple fact is that hard drives fail over a period of time. The two best solutions I can recommend unfortunately do not involve hard drives.
One is tape backup; granted, it is slower, but you get the flexibility of offsite backups. It is easy to put a tape in the boot of a car, and rotating the tapes means you have fairly recent protection against any unforeseen situations.
Another option is an online backup solution where the backups are encrypted and copied offsite. My recommendation is definitely to have at least some sort of offsite backup external to the building where you keep the SQL servers. After all, it is "disaster" recovery.
Pretty much any external drive can be used here, provided it has the space to hold your backups and enough performance to get the backups there. The specifics depend on your exact requirements.
In my experience, FireWire tends to outperform USB for disk activity, regardless of their theoretical maximum transfer rates. And FireWire 800 will perform even better yet. I have found poor performance from FireWire and USB drives when you have multiple concurrent reads/writes going on, but with backups, it's generally more large sequential reads and writes.
Another option that is a little more complex to set up and manage, but can provide greater flexibility and performance, is external SATA (eSATA). You can even get hot-swappable eSATA enclosures for greater convenience and ease of taking your backups offsite.
However, another related option that I've had excellent success with is to set up a separate server to act as your backup server. You can use whatever disk options you choose (FireWire, SATA, eSATA, SCSI, Fibre Channel, iSCSI, etc.) and share out that disk storage as a network share (I use NFS and Samba on a Linux box, but for a Windows-oriented network, a Windows share will work fine). You can then access the shares across the network and back up multiple machines to them. Also, separating the backup server from your production machines gives you greater flexibility if you need to take it offline for maintenance, adding/removing storage, etc.
Drobo!
A USB hard-drive RAID array that uses normal, off-the-shelf hard drives. Four bays; when you need more space, buy another hard drive. Out of bays? Buy bigger hard drives and replace the smallest one in the array.
http://www.drobo.com/
Depending on the size of the databases, the speed of the drive can be a real factor. I would look into something like a Drobo, but with an eSATA or SAS interface. There is nothing more entertaining than watching a terabyte go through USB 2.0. Also, you might consider something like HyperBac or Red Gate SQL Backup to compress the backup and make it easier to fit on the drive.
For the most part, external drives aren't a good option - unless your database is really small.
Other than some of the options others have listed, you can also use UNC/Network shares as a great 'off-box' option.
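A minimal sketch of backing up straight to a network share, assuming a hypothetical `\\backupsrv\sqlbackups` share that the SQL Server service account can write to (the syntax below also works on SQL Server 2000):

```sql
-- Full backup over UNC; INIT overwrites any existing file of that name,
-- STATS reports progress every 10 percent
BACKUP DATABASE MyDb
TO DISK = N'\\backupsrv\sqlbackups\MyDb_full.bak'
WITH INIT, STATS = 10;
```

The usual caveat: the service account (not your login) needs write permission on the share, and a backup that lands only on the same box as the data is not really a backup.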
Check out the following video for some other options:
SQL Server Backup Options (Free Video)
And the videos on configuring backups on the site will show you how to specify a network path for backup purposes.
