I am trying to set up a development environment on an Azure VM. I need Visual Studio 2013 and SQL Server 2012. The problem is that after installing SQL Server, the VM's performance drops drastically. After booting the VM and opening Visual Studio for the first time, I have to wait approx. 10 minutes. After that I can close VS and reopen it in a matter of seconds.
I've tried some of the things described here: https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx (a separate disk for SQL, disabled geo-replication for the storage account, enabled locked pages, enabled instant file initialization), but none of it seems to help.
Oh, and I'm using the G1 size (SSD disk).
Any suggestions on how to improve the performance?
UPDATE: Further testing has shown that SQL Server is responsible for the poor performance. Any clues on how to make it faster?
In fact, Azure users who need storage performance comparable to an on-premises box with SSDs have to use a Premium Storage account and configure Storage Spaces (the software storage pooling/striping feature in Windows Server 2012) inside the virtual machine.
http://blogs.technet.com/b/canitpro/archive/2015/02/26/azure-disk-iops-and-virtual-machines-in-iaas-part-ii.aspx
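For reference, the Storage Spaces part might look roughly like the sketch below: pool the attached data disks, stripe a simple virtual disk across them, and format it for the SQL data files. The pool and disk names, column count and 64 KB allocation unit size are illustrative placeholders, and in practice you would run these cmdlets directly in an elevated PowerShell session on the VM; the small C# wrapper just scripts them.

    using System;
    using System.Diagnostics;
    using System.IO;

    class StripeDataDisks
    {
        static void Main()
        {
            // Illustrative Storage Spaces setup: pool all poolable data disks attached to the VM,
            // create a simple (striped) virtual disk across them, and format it with a 64 KB
            // allocation unit size for the SQL data files. Names and sizes are placeholders.
            string script = string.Join(Environment.NewLine, new[]
            {
                "$disks = Get-PhysicalDisk -CanPool $true",
                "New-StoragePool -FriendlyName 'SqlPool' -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $disks",
                "New-VirtualDisk -StoragePoolFriendlyName 'SqlPool' -FriendlyName 'SqlData' " +
                    "-ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count -Interleave 65536",
                "Get-VirtualDisk -FriendlyName 'SqlData' | Get-Disk | " +
                    "Initialize-Disk -PartitionStyle GPT -PassThru | " +
                    "New-Partition -AssignDriveLetter -UseMaximumSize | " +
                    "Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false"
            });

            string scriptPath = Path.Combine(Path.GetTempPath(), "stripe-sql-disks.ps1");
            File.WriteAllText(scriptPath, script);

            var startInfo = new ProcessStartInfo("powershell.exe",
                $"-NoProfile -ExecutionPolicy Bypass -File \"{scriptPath}\"")
            {
                UseShellExecute = false
            };

            using (var process = Process.Start(startInfo))
            {
                process.WaitForExit();
                Console.WriteLine($"Storage Spaces setup exited with code {process.ExitCode}");
            }
        }
    }

Striping across several data disks is what gets you the combined IOPS; a single disk is capped on its own, which is the point of the article above.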
At work we load data into a SQL Server 2012 database and create .bak files that are exported. Yes, that is correct: due to compatibility issues we need to use SQL Server 2012.
This process, which runs for probably 3-4 hours per day, currently runs on an on-premises machine, but we want to move it to Azure.
However, SQL databases in Azure are v2017+, but I have read that it's possible to run SQL Server 2012 in a Docker container. Before I invest a lot of time into this idea, has anyone tried to host an old SQL Server version in a Docker container in Azure?
As said, use a VM. Microsoft maintains VM images all the way back to SQL Server 2008, plus you can integrate backup and automatic updates for both the OS and SQL. The images are listed here, and you can pay as you go or bring your own license:
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview
In the pay-as-you-go model you can shut down (deallocate) the VM itself, not just the OS, and you won't get charged for the VM or the SQL license. You will still be charged for storage. See here:
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/pricing-guidance#pay-per-usage
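A minimal sketch of automating that deallocation with the Azure.ResourceManager.Compute SDK is below (the subscription ID, resource group and VM name are placeholders; the portal, CLI or PowerShell achieve the same thing). The key point is that only a deallocated VM stops the compute and SQL charges; shutting down the OS from inside the VM leaves it allocated and billed.

    using Azure;
    using Azure.Identity;
    using Azure.ResourceManager;
    using Azure.ResourceManager.Compute;

    class DeallocateSqlVm
    {
        static void Main()
        {
            // Placeholders - substitute your own subscription, resource group and VM name.
            const string subscriptionId = "00000000-0000-0000-0000-000000000000";
            const string resourceGroup  = "my-rg";
            const string vmName         = "my-sql-vm";

            var armClient = new ArmClient(new DefaultAzureCredential());
            var vmId = VirtualMachineResource.CreateResourceIdentifier(subscriptionId, resourceGroup, vmName);
            var vm = armClient.GetVirtualMachineResource(vmId);

            // Deallocate releases the compute resources, so VM and SQL license charges stop.
            // A plain OS shutdown leaves the VM in the stopped (still allocated) state and keeps billing.
            vm.Deallocate(WaitUntil.Completed);
        }
    }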
Since I want to evaluate TFS 2018 as a solution for our 25-person dev team, I installed an evaluation alongside SQL Server 2017 on a Windows Server 2012 R2 VM (running on a physical Windows Server 2012 R2 host).
The problem (TL;DR): the only time TFS performs fast is when connecting remotely to the server and logging in with just one account. As soon as a second user logs into the TFS web interface, the response time increases to 2-3 minutes for everything. Any domain network login also causes the response time to drop.
VM Specs:
Windows Server 2012 R2
TFS 2018 app & data tier
SQL Server 2017 (only for TFS)
IIS (only for TFS)
The VM is assigned 4 threads (2 cores)
2-32 GB dynamic memory (25% buffer), usually using about 8 GB
Connection via SSL
Physical Specs:
Windows Server 2012 R2
Roles: Domain Controller, Active Directory, DNS, WSUS, CertAuth, Hyper-V (only running the TFS Machine above), Repository Server
Xeon E5-2620 @ 2.1 GHz
Physical RAID 5 with 3 disks (7200rpm).
4x16 GB RAM (own channel each)
Details:
OK, I know the physical server already carries quite a load, but we have just a small network of about 30 workstations, and I have noticed no significant performance issues since the Hyper-V / TFS installation.
I know that also putting the data tier (SQL Server) in the VM is not recommended, since the overhead of the host system will most likely slow down any I/O. But I don't have the feeling it is a hardware problem, since there are no repositories in it yet and it responds fast if just one user accesses it. I'll probably get an appropriate physical server for the data tier once it is used in a production environment, but for now I just want to evaluate the capabilities of TFS and whether it fits our workflow needs.
We authenticate via Active Directory, and SSL is properly certified from the physical host.
Ping goes through in < 1 ms, so there are no network load issues.
Resource Monitor (on both the VM and the physical host) shows barely any difference from idle during requests.
Installing TFS with SQL Server Express on a local machine works just fine, as you would expect.
I feel like I have read every tutorial and guide about TFS, installing SQL Server on Hyper-V servers, troubleshooting performance issues with TFS, and so on.
I have been sitting on this problem for weeks now and can't find the cause.
Does anyone have an idea what could cause this issue, or what I could look into?
In the C# app that creates a database, I have implemented executing sqlcmd.exe to run a script (provided by the hosting & ops DBAs) that copies the newly created database to a second server in an AlwaysOn Availability Group. Unfortunately we have no AlwaysOn Availability support in our development or integration environments, so I have only been able to test executing sqlcmd to run the script and handling the script failure. Is it at all possible for me to simulate the AlwaysOn Availability Group environment on my developer workstation if I create a second SQL Server instance? I am running SQL 2014 Developer Edition at the moment but, should it be required, will be able to upgrade to SQL 2016 Developer Edition.
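For reference, the invocation looks roughly like the sketch below (the server name, script path and scripting variable are placeholders; the real script comes from the DBAs):

    using System;
    using System.Diagnostics;

    class RunCopyDatabaseScript
    {
        static int Main()
        {
            // Placeholders - the actual script and server names come from hosting & ops.
            const string server     = @"PRIMARY-SQL\INSTANCE";
            const string scriptPath = @"C:\scripts\copy-database-to-secondary.sql";
            const string dbName     = "NewlyCreatedDb";

            var startInfo = new ProcessStartInfo("sqlcmd.exe",
                // -E uses integrated security, -b makes sqlcmd exit with a non-zero code on errors,
                // -i points at the DBA-provided script, -v passes the database name as a scripting variable.
                $"-S {server} -E -b -i \"{scriptPath}\" -v DatabaseName=\"{dbName}\"")
            {
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(startInfo))
            {
                string output = process.StandardOutput.ReadToEnd();
                process.WaitForExit();
                Console.WriteLine(output);

                if (process.ExitCode != 0)
                {
                    // This is the failure path I can already exercise without an availability group.
                    Console.Error.WriteLine($"Copy script failed with exit code {process.ExitCode}");
                }
                return process.ExitCode;
            }
        }
    }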
If this is not possible, we will be forced to deploy without full end-to-end testing and have the first end-to-end tests happen in the production environment.
I have fairly good developer-level SQL skills; in other words, I'm very comfortable with stored procedures and such, but I have very little knowledge of the newer and more advanced features for actually administering SQL Server.
You can, but you will have to create virtual machines (VMs) using either VMware software or Microsoft technology (Hyper-V). There are other VM products, but these two are used for the majority of virtualizations of the SQL Server/Microsoft stack. If you use Microsoft technology, you can usually download the software free for up to 90 days without activating it. I would do it on an SSD so it finishes in a timely fashion, and you probably need 12-16 GB of memory on the host (developer) machine.
There are detailed online instructions on creating VMs for SQL Server on clusters. The best ones have screenshots.
The SQL Server is running on a well-configured server. The server configuration is given below.
OS - Windows Server 2012 R2
RAM - DDR3 24 GB ECC
RAID 10
The NAV server is also installed on the same machine. Almost 112 concurrent end users access the Navision database through different client systems.
I have noticed that at a particular time (5 PM/6 PM), SQL Server as well as the NAV server consume the whole (20 GB+) RAM of the server and make it unstable.
How can I solve this issue? Thanks in advance!
If it's always happening at a particular time, you have to figure out what is hammering it.
A SQL job? Something in NAV? A Windows task? Antivirus? A user running something that affects it (copying thousands of files to a network share, for example)?
But yes - adding more RAM is always good.
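If you want to see what is actually running when the spike hits, a query over the DMVs is a quick first step. Below is a rough sketch (the connection string is a placeholder, and you can just as easily run the same SELECT in Management Studio):

    using System;
    using System.Data.SqlClient;

    class ActiveRequests
    {
        static void Main()
        {
            // Placeholder connection string - point it at the NAV/SQL box.
            const string connectionString = "Server=localhost;Database=master;Integrated Security=true";

            // List currently executing requests with their statement text, CPU time and waits,
            // to see what is hammering the server around 5-6 PM.
            const string query = @"
                SELECT r.session_id, r.status, r.cpu_time, r.total_elapsed_time,
                       r.wait_type, r.blocking_session_id, t.text AS sql_text
                FROM sys.dm_exec_requests AS r
                CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
                WHERE r.session_id <> @@SPID
                ORDER BY r.cpu_time DESC;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var sessionId = reader["session_id"];
                        var status    = reader["status"];
                        var cpuTime   = reader["cpu_time"];
                        var waitType  = reader["wait_type"];
                        var blockedBy = reader["blocking_session_id"];
                        var sqlText   = reader["sql_text"];

                        Console.WriteLine($"spid {sessionId} ({status}): cpu={cpuTime} ms, wait={waitType}, blocked by={blockedBy}");
                        Console.WriteLine(sqlText);
                    }
                }
            }
        }
    }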
I've read that you can pay a per-hour fee to license SQL Server on a Windows Azure VM if you want to run a dedicated instance (as opposed to using Azure SQL). However, when I go to create a VM running SQL Server, only the evaluation edition is available in the image gallery. I don't see any options in the VM creation process to add the additional per-hour license for SQL Server. Where does this come into play?
This is a result of Virtual Machines on Windows Azure still being in preview.
As far as I know, whilst in preview only the evaluation copy of SQL Server is formally available, at no extra charge. I suspect this will change when IaaS goes GA.
You can read about it here