Central SQL Server development database server

We are in the process of rethinking our development environment. At the moment, we all have Elitebook notebooks which are not as fast as we'd like. We're thinking of virtualizing our development environment to a central VM server.
Our developers work in Visual Studio and use SQL Server as their database. We also have a few SharePoint developers who need a 64-bit Win2k8 machine for SharePoint 2010; these are already virtual machines with their own local SQL Server installations.
Every developer's machine or VM has SQL Server installed. This consumes resources on every box and is challenging when a team works together on a project. We're therefore looking into centralizing those resources onto a single DB server. That box would have to run multiple SQL Server instances (each SharePoint developer needs a separate one to begin with), and we also need older SQL Server 2005 and SQL Server 2000 installations for backwards compatibility. Besides the SQL Server box, the plan includes a VM session for each developer with the development tools installed, so a developer could just RDP into the development environment, have his own image, and use the centralized DB server. Test servers will also be virtualized in the same environment.
I'm looking for some tips and best practices on this matter. For instance:
What's the number of SQL Server instances a normal box can take? And if we scale up virtualized cores/memory, is that enough to add new instances? I don't expect heavy usage in dev.
What's the downside of centralizing the SQL Server instances as opposed to keeping a local instance on every development box?
How should this be integrated in a DTAP strategy?

Just some thoughts:
The number of instances is hardware-dependent; I'm not sure there's a mathematical formula to calculate how many instances you can run on a VM box, but it sounds like you're going to need a beast of a machine to run multiple instances per developer PLUS development tools (I can barely get SSMS and Visual Studio to play nice on my laptop). Better hardware = more cost.
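There's no formula I know of, but one practical lever when stacking instances is capping each instance's memory so they don't starve each other. A minimal sketch, run against each instance; the instance names and the 2 GB figure are placeholders to tune to your hardware:

```sql
-- Run against each instance (e.g. SERVER\DEV1, SERVER\DEV2).
-- Caps the buffer pool so co-hosted instances don't fight over RAM.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;  -- placeholder cap
RECONFIGURE;
```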
Developing on a terminal system may have some advantages, but I can think of a few disadvantages as well:
Network latency: if your developers ever work remotely, they'll need a fast internet connection in order to do anything. If the dev environments stay on their laptops, they can work disconnected.
Instance interdependence: although SQL Server instances are completely separate, sometimes you have to reboot the server, and you'll need to coordinate that with all of your developers (may not be a big deal if there's only a handful of you).
Redundancy/maintenance: if one developer's machine is down, you lose a person-day of productivity; if the server goes down, your company is paying for a holiday for everyone :)
Have you priced out what it would cost to upgrade your individual workstations, so you can compare that against the cost of building out a centralized infrastructure?
Again, these are just some challenges that I think you should consider; there are probably ways to offset them, but they may factor into your decision.

I would centralize the database servers as best you can. Having all the devs work off centralized database instances should make migrating changes between environments MUCH easier. That alone is worth the effort.
For the SharePoint development environment, I'd highly recommend investing in a few books to make sure you go down the right path. You should be able to have all the devs working off the same development instances for that too. Here is a good book on the subject; I'm sure there are others: SharePoint 2010 Development
As far as making the developer machines VMs too, I would seek the input of the development staff first and foremost, and seriously consider Stuart's pros & cons.

Related

SQL Server development environment for a small team - best practice needed

We are a small team of 5 working on the same database. It's a reporting solution, so there are about 5 more third-party databases which are the source of the data.
It is very important to have the latest data for development, so those linked databases are sometimes backed up and restored onto each dev's local SQL Server (and they're pretty big). Then there is always the problem of the devs' databases being out of sync with each other. When there were 2 of us, there was no problem, but as more people were added to the team it became a pain to arrange.
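For reference, the restore step is just a RESTORE ... WITH MOVE per source database; a minimal sketch (database, logical file, and path names are placeholders):

```sql
-- List the logical file names inside the backup first.
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\SourceDb.bak';

-- Restore the third-party source DB onto the dev server,
-- relocating the files to this box's data directory.
RESTORE DATABASE SourceDb
FROM DISK = N'D:\Backups\SourceDb.bak'
WITH MOVE N'SourceDb_Data' TO N'D:\Data\SourceDb.mdf',
     MOVE N'SourceDb_Log'  TO N'D:\Data\SourceDb_log.ldf',
     REPLACE;
```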
So, I was thinking about building a dev SQL Server. It will be more powerful than our laptops (faster queries), and latency should not be an issue; we always have internet when working, so that is not a problem either. I only wonder if it's OK to work together out of the same database. It sounds like a good plan, but I'm not sure if there will be any "gotchas" compared to having databases locally. We would be able to keep data fresh for everyone easily, and I think it should save time and make it easier to build/rebuild dev machines if needed. The database where we code could even be 1 per person, but still on the same box; this way we can always compare them "right on the box" if needed.
Any advice on this setup? Pros or cons?
A new option is to use isolated containers on a shared server, with clones of the production database environment. Full disclosure: I work for Windocks, where this is the primary use of SQL Server containers. The most recent release includes built-in SQL Server DB cloning. Benefits of this approach include speed (environments delivered in <1 minute), plus license and labor savings from using a shared server rather than a score of VMs.

How hard is it to migrate from IaaS to PaaS on Azure?

So I'm thinking of dipping a toe in the Azure pool.
Our web app suite will soon be a pure ASP.NET + SQL Server affair.
For various reasons it will be simpler to initially create a SQL VM and run everything from there.
How hard will it be to ...
...migrate SQL off the VM and into either "Cloud Services" or "Data Management"?
...migrate the suite of WebApps off the VM and into "Websites"?
It is my understanding that having achieved this migration, OS-level updates will no longer be my concern, as they will be handled by the service. Thus at this point I'll be able to throw the original VM away :)
This isn't exactly answering your questions, but it might help educate you on more questions to ask and give you a boost out of the gate. These were all lessons learned before, during, or after our migration of our systems to Azure. Now that we're up there, we have a ~50GB database with ~6 services running across ~30 instances. As long as our database backup behaves, the total effort to upgrade all of this is less than an hour (and it could be much less if we didn't have safeguards meant to force us to be aware of what's going on during the migration process - we don't want deploying to be too easy, just to protect us from ourselves).
Preparing to migrate your system to Azure:
If you're planning to go to Azure, you first need to make sure your architectures and technologies are compatible. This isn't to say you have to code everything specific to Azure. This means some of the following things:
You should realize that "high availability" does not mean "error-free". In fact, high-availability environments usually have more errors that you have to handle and manage. For example, if you have a request going over the network to a server that just had a motherboard fry and was taken offline, that network request will be unsuccessful. That's not typically a problem you code for in "standard" server apps. To take it even further, what if that failed network call was made on a database connection that then gets put back into a connection pool? That connection is now poisoned and will be broken the next time somebody pulls it out, regardless of the future state of the server that went poof! There are just some extra things to worry about here, because you're no longer depending on just 1 network with 4 servers on it but on hundreds of networks with thousands of servers on them. That 0.05% error scenario will happen MUCH more often than you've ever experienced in the past, and you really have to be aware of this!
You should use dependency injection to easily change things around. Proper separation of concerns will make changes that seem very difficult become very easy in Azure.
You should use "high-availability" architectures. For example, a web application that would break when run in a web farm would also break in Azure, but a web application designed to work in a web farm will be very easy to run in Azure.
You should have automated deployments and configuration transforms for all of your applications. Anything else is just unsustainable unless it's nothing more than one little web site or something like that.
Depending on your needs, you can do it in phases. If database latency isn't a big deal, perhaps a hybrid approach (over VPN from Azure to your data center) is acceptable to get your apps into Azure first while you migrate your database later. Or perhaps the opposite. What we did was keep the primary apps and database in our data center but put secondary apps up in Azure first. Then we moved some primary apps (which took a performance hit for a month until the database followed), and finally our database and critical apps. That final migration sure made for a very long weekend and not much sleep, but it is SOO much nicer now that we're done!
Migrating your applications to Azure:
This ultimately depends very heavily on what your application is or does, and every scenario has different steps/issues/benefits. I'm not going to cover this deeply other than to say, "Use Google, it's your friend!". Beyond that, for us, getting our applications up into Azure was the largest payoff when compared to our data. The ROI on our app migration was less than a month between hosting costs, licensing, and management effort. Instead of taking a couple of days to set up a server, I can now take a day to set up an entirely new, duplicated environment of all of our SaaS applications/databases/etc. and have them running on ~25 different cloud instances!
Instead of trying to tell you how to migrate these, let me give you a few words of caution so you know sooner rather than later:
If you have app problems in Windows 2012, humor me and try it in Windows 2008 R2. There are a couple bugs in some of the 2012 images that they've prepared. It's incredibly trivial to switch back and forth!
Go make your logging 1000x better than it is now. If you don't do that now, you'll regret it.
Don't depend 100% on the easy-to-implement "Azure Logging". It works well enough but it more-or-less requires your applications to start successfully and is absolutely useless in debugging startup problems. If you don't have an alternative, then you will waste many, many hours just debugging stupid little problems when your app starts up. By the time you're done with it, you could have easily added 5 other logging frameworks and had an amazingly awesome logging system in place plus a running app instead of nothing but a running app to show for the same amount of time. Really, do this! (Profiling is a good idea as well, although Mini-Profiler has load-balancing issues if you have multiple instances.)
If you add new endpoints to your deployment (ports, etc), you cannot simply "Upgrade" an existing deployment. You must delete it (the deployment, not the service) and install from scratch. You can delete the Staging one, deploy to Staging, then swap.
If you have WCF apps, pretend you don't know about Windows Activation Services. They're disabled in Azure by default. You can either hack them to turn them on (startup scripts) or create your own self-hosting application. We self-host so we can more easily tweak service configurations once we're deployed (it's not easy to edit web.config files in a way that sticks in Azure). Web services work in "IIS" in Azure but TCP, named pipes, etc. do not.
Go learn about and add the Transient Errors Application Block (or something equivalent) to anything you communicate with. If you don't do it now, you'll regret it.
Go make your logging better! Really, really, REALLY do this!
Migrating your SQL Server Database to Azure:
Getting your database up into Azure is a bit of a painful process. There isn't a quick and easy way to just get it up there and make it work. Some people have to make some major changes, while others just have to tweak a few ignorable things here and there. However, no matter how large or small your database is, you really REALLY must devote a lot of time to testing it. Test your migration process. Test your scripts to prepare your database. Test the performance and stability of the database up in the cloud. Test your backup procedures. Test your upgrade procedures. Test your backup restoration procedures. Test ALL of this, because I guarantee you that you will find some surprises!
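To make the backup-testing point concrete on the on-premise/VM side, a minimal sketch of sanity-checking that a backup file is actually restorable (database name and paths are placeholders):

```sql
-- COPY_ONLY so the regular backup chain is left untouched.
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb_test.bak'
WITH COPY_ONLY, CHECKSUM, INIT;

-- Verifies the backup is complete and readable without restoring it.
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\MyAppDb_test.bak'
WITH CHECKSUM;
```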
Schema:
Go learn about all of the limitations of SQL Azure - no heaps, etc. Learn them before you start! Go learn them now! They're all mostly-to-very reasonable. (There's a sketch after this list for finding offending heaps and oversized indexes up front.)
Be aware of the 2GB T-Log limitation! This means some very large indexes can never be rebuilt! (that said, our 30GB table isn't yet hitting this)
To deploy your schema, go into SSMS for your local db and use the "Tasks -> Extract Data-tier Application..." feature (it's in different areas of the menu in different versions of SSMS). Take this file and go into SSMS for your Azure database and use the "Deploy Data-tier Application" feature. (This will help you catch some of the Azure limitations you aren't honoring if this process fails.) This is, by far, the easiest way to get an empty version of your database up into Azure.
Use a tool like Redgate SQL Compare to verify your work (you'll have to tweak a couple options, like WITH NOCHECK to get a clean comparison).
You'll have to clean up users, schemas, broken sprocs, etc. before you succeed at this. (This is a good thing!)
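To make the heap and T-log points above concrete, here's a hedged sketch of two read-only checks you could run against the source database before starting (system views only, nothing is changed):

```sql
-- 1) Find heaps: SQL Azure requires a clustered index on every table.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id
WHERE i.index_id = 0;  -- index_id 0 means the table is a heap

-- 2) Rough index sizes, to spot rebuilds that could blow the 2GB T-log cap.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024 AS size_mb
FROM sys.dm_db_partition_stats ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
GROUP BY ps.object_id, i.name
ORDER BY size_mb DESC;
```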
Data:
Go learn about all of the limitations of SQL Azure. Learn them before you start! Go learn them now! They're all mostly-to-very reasonable.
Go download the Azure Database Migration Wizard from Codeplex (or wherever the latest version is). It's not the most amazing software (kinda unstable) but even if it crashes once or twice on you, it'll still save you a LOT of time!
I strongly recommend RedGate's SQL Data Compare. The previously mentioned migration wizard will help you identify problems (it's on you to fix those) and will get you ~98% migrated, but you'll want to come back and clean up after it. It has some bugs that miss nullable BIT fields (and upper-ASCII characters) and some other things that a tool like SQL Data Compare can easily identify and fix. It can also give you peace of mind that you can depend on your database.
If your database is large, consider spinning up a temporary VM in Azure (they have them with SQL Server pre-installed, available in ~20 minutes) to do your migrations from. If you do this, it's best to upload a compressed database backup to Blob storage (Cerebrata's storage tool is great for this), then grab it and restore it to SQL Server in that VM. Then stage your migrations all from there!
Test, test, TEST!!!
Be careful running SQL Server on a VM; it's not a high-availability solution. Azure VMs are prone to restarting from time to time, so unless you have multiple VMs running SQL Server in an availability group, or some sort of mirroring and load balancing set up, you won't have a high-availability solution. I too originally favoured the IaaS-to-PaaS route; in the end it seemed to be a false economy, as migrating IaaS to PaaS is about as much work as migrating on-premise to PaaS. In the end I decided to take the time to optimise my application for PaaS, i.e. moving durable storage to blobs, implementing transient error handling and retry logic, etc.
What you're proposing is certainly possible but having a multi VM arrangement to deliver high availability SQL takes a bit of work and is expensive! Have a read of the following guide, it was really helpful to me when I started the migration process:
Top 7 Concerns of Migrating a .NET Application to Azure
Just yesterday Microsoft announced general availability of IaaS solutions, not just PaaS, on their Azure platform.
http://weblogs.asp.net/scottgu/archive/2013/04/16/windows-azure-general-availability-of-infrastructure-as-a-service-iaas.aspx
About migration: it really depends. We work with a distribution mechanism, TFS + Octopus, so deployment is very easy, and it works on IaaS or SQL Azure; it doesn't really matter.
There are also other things to take into consideration when moving to PaaS. Your code will probably need refactoring if it isn't service-oriented, or your application may have a very high hosting cost on Azure.

Is it 'ok' to develop with a DEV database residing on the same SQL server as the live production app?

Sometimes we have upwards of 4-6 people either RDPed in looking at data in SQL Management Studio, or hitting the server with LINQPad, Toad, etc. from various locations while developing, mostly in ASP.NET and Flex with WebOrb. Is this bad? Bad in the sense that we are trying to keep our live production app stable and as lag-free as possible for global users?
I don't think I'd do that. If it was just me, then sure :) But with a bunch of people, god only knows what queries they might run. We always use a test server for such things.
Best regards,
don
Best practice would be separate servers. Next best, separate instances on the same server. Next best, separate databases on an instance.
However, I wouldn't be letting any developers RDP into a production SQL Server (or production anything), regardless of choice of segregation mechanism. Use a separate terminal server with tools and everything there.
You can have dev and prod DBs on the same instance. Just make sure the permissions are set up so that developers cannot touch the prod DB. The negative is that a long-running query in dev will impact prod.
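A minimal sketch of that permission setup (login and database names are placeholders): give the developers' login a user mapping in the dev database only, so prod isn't even reachable:

```sql
-- One shared dev login; per-developer logins work the same way.
CREATE LOGIN DevTeam WITH PASSWORD = 'ChangeMe!123';  -- placeholder credential

-- Map the login into the dev database only and give it full rights there.
USE DevDb;
CREATE USER DevTeam FOR LOGIN DevTeam;
EXEC sp_addrolemember 'db_owner', 'DevTeam';

-- Deliberately create NO user for DevTeam in ProdDb: with no user mapping
-- (and the guest user disabled, which is the default), prod is off limits.
```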
In SQL Server 2005, a better solution is to have a dev "instance" and a prod "instance".
Then if someone misbehaves on the dev instance, you can just bring down that instance.
In SQL Server 2008 you can set up CPU usage plans (Resource Governor), which can help throttle how many resources can be used. You should investigate that.
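Assuming the 2008 feature meant here is Resource Governor (Enterprise edition), the setup looks roughly like this, run in master; the pool/group names, the DevTeam login, and the 20% cap are placeholders:

```sql
-- Pool and workload group that cap dev queries at ~20% CPU.
CREATE RESOURCE POOL DevPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP DevGroup USING DevPool;
GO

-- Classifier: route the DevTeam login into the capped group.
CREATE FUNCTION dbo.rg_classifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    -- NULL (any other login) falls through to the default group.
    RETURN CASE WHEN SUSER_SNAME() = N'DevTeam' THEN N'DevGroup' END;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```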
It depends on a lot of variables, but it's generally better to have them on different servers. It really depends on how you use SQL Server: if you just have databases and don't use a lot of the management features (like nightly processes that alter data, and other jobs), you might be OK. You are still running a real risk of code bleeding over from the dev database to the production one, though. It's safer to have them separated out, especially for the small amount of money needed to create a dev instance of SQL Server.
I find this a poor practice for several reasons:
First, suppose one of your devs messes up and does something that ends up taking all of the processing power of your server. Oops, prod is down for no good reason.
Second, devs could easily change the wrong database. Oops, prod is down for no good reason. At least you can avoid this by not giving any production rights to devs (which you should be doing anyway as a best practice).
Third, if the database is on the same server it has to have a different name, which can make moving things to prod difficult and error-prone. I think it also means you'll be less likely to deploy correctly through source-controlled scripts. If you choose to copy objects from one database to the other, you can have issues with that as well: if there is data in the object already, you may accidentally wipe it out (hope you have a backup), or you may move the new table structure but miss things like the PKs, FKs, default values, triggers, constraints, and indexes. Or the wizard might take much longer to do the move because, in the background, it creates and populates a new table, then drops the old one and renames the new one, rather than using ALTER TABLE. Oops, prod is down or seriously slowed for no good reason.
I tend to agree with the "separate servers" folks, although at my company we actually do most of our day-to-day development work on our local machines, so we have SQL Server installed locally. This can be a pain, of course, if you're developing reporting or something else that needs production data. In that scenario, developers here usually get a subset of production data exported to work with.
For acceptance testing vs. deployment though, we do use separate instances.
Developers probably shouldn't have production access UNLESS they're also the ones who do application deployments (as can be the case with small teams like the one I'm in). If you do end up using separate DBs on the same server, I would at least lock down RDP access and grant access to each development DB on an individual basis. That's how it works here -- I don't have admin rights to any of our servers at this time, and can only admin databases for applications that belong specifically to my team.
It depends how much you value your live service. I know I wouldn't trust me and my fat hands running SQL on the same hardware as a live application.
Even if the application is not business-critical and the app is not data-bound, you can set up a development environment on an unused desktop machine, so why not do that instead of taking the risk?
The setup I use is typically a DEV database on a local instance of SQL Server (Developer Edition for me, but Express would probably also work) and a QA database on a test instance of SQL Server. In our environment, the latter is located on a virtual instance of W2K3 - soon to be W2K8. Production databases live either on dedicated instances of SQL Server or on one of various clustered instances. We don't mix PROD/QA/DEV at all. I use RedGate SQL Compare to synchronize schemas between the various systems, including different developer instances of the database.
It will be 'OK' as long as the team doesn't have any administrator privileges on the server (either SQL or Windows), and their user logins grant access to potentially destroy only the development database and its associated files, with access to the production databases denied.
For other application-testing reasons, we created a copy of our production server (which is a virtual server) on a separate domain. This allowed the Windows server name, SQL Server name, and database name to be exactly the same (lots of settings in 3rd-party apps require this level of configuration to get different processes to work). Now we can rebuild a test environment by creating an exact virtual image of our production server.
I was sceptical about running SQL Server on a virtual machine, but it has given our small company a lot of flexibility. We like to think our databases are critical, but they're for internal use, and some downtime would just have workers shift their lunch hour.

Setting up a diverse database testing environment

I'm writing some code that needs to work against an array of different database products.
MySql
Sql Server 2000 to 2008
PostgreSQL
Oracle 9i & 10g
Jet 4.0 (MS Access)
MSDE
Sybase Adaptive Server Anywhere
Sybase Sql Anywhere
Progress OpenEdge
We have a set of generic integration tests that we need to run against each database product, as part of a continuous integration build.
I'm sure other people have had to set-up similar test environments and I would like to tap into some of that wisdom - what strategies do you end up using, what worked well or did not work well?
Some thoughts:
Separate virtual machines for each of the products, each allocated a small amount of memory (easier to manage for certain scenarios, or where we have slightly different setups for individual products).
A couple of virtual machines, or even a single virtual machine, for all the products (i.e. perhaps an Ubuntu box for PostgreSQL & MySQL, and a Windows 2008 Server machine for the remaining products). I like one or two VMs because this is a more portable environment for running the tests, i.e. when on the road / off-site, as my laptop would probably crawl to a halt running 8 or 10 small VMs.
And finally, how have you tackled the prohibitive cost of some of these commercial products, i.e. Oracle or Progress OpenEdge? Are previous versions still available, i.e. are there free "single-developer" editions, or cheaper routes to purchasing these products?
We also use VMware for our servers, one VMware host per database, but you should be OK putting several databases on one VMware host.
You shouldn't have much of a problem with expensive software licenses. Oracle, for example, allows you to have a development license for all of their products, so as long as you are not running a production DB on your laptop's VM, you should be OK. Of course, IANAL.
For Oracle, get the Enterprise Edition if that's what your customers will be using (most likely).
Be sure to take advantage of VMware snapshots. You can get the databases configured perfectly for running your tests, run your tests, and then revert the databases back to the pre-test state, ready for the next run.
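For the SQL Server VM specifically, database snapshots (Enterprise/Developer editions) give a similar reset without reverting the whole VM; a hedged sketch, assuming the logical data file is named TestDb and all names/paths are placeholders:

```sql
-- Snapshot the pristine test database before a run.
CREATE DATABASE TestDb_Pristine
ON (NAME = TestDb, FILENAME = N'D:\Snapshots\TestDb.ss')
AS SNAPSHOT OF TestDb;

-- ... run the tests ...

-- Revert to the pre-test state (drop any other snapshots of TestDb first).
RESTORE DATABASE TestDb
FROM DATABASE_SNAPSHOT = 'TestDb_Pristine';
```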
What we do here is (mostly) a VM for each required configuration; we also target different operating systems, so we have something like 22 configurations under test. These VMs are hosted on an ESX server and scripted to start and stop on demand so that they are not all running at once. It was a lot of work to set up.
In terms of the cost of the database servers, the kind of software we are building requires top-end versions, so the testing costs have been factored into the overall dev costs (we just had to "suck it up").
Oracle does have a free 'Express Edition'. It doesn't have all of Oracle's functionality, but if you are also running against Jet, that shouldn't be an issue.
Architecturally, you may be better off creating an integration-testing farm of servers powerful enough to run the variously configured VMs. It could be fronted with a queue process listening for integration-testing requests (which could be triggered by svn commits or other source-control check-in triggers).
For the most accurate range of testing, you probably want to have a separate VM for each configuration. This will reduce the risk of conflicting configs and also increase the flexibility of running a custom set of VMs (e.g. Oracle + MySQL + PostgreSQL and not the others, etc.). Depending on your build process, it may also allow you to run 10-minute builds.
A benefit of running an integration test farm is that if you're on the road using your laptop, you can trigger off the integration build after code check-in, run all the tests and have it notify you of the results. You can also archive each request and the results to help diagnose failing builds.
Oracle license excerpt:
We grant you a nonexclusive, nontransferable limited license to use the programs only for the purpose of developing, testing, prototyping and demonstrating your application, and not for any other purpose.
So, you can download full Oracle versions 9, 10, and 11 and use them free of charge, as stated in the license.
Check the downloads section at www.oracle.com.
It sounds like you have a good idea of what to do already. I'd suggest the following VM layouts:
A Windows Server 2008 box hosting the following:
MSDE
Jet 4.0 (MS Access)
Sybase Adaptive Server Anywhere
Sybase Sql Anywhere
Progress OpenEdge
On an Ubuntu or Red Hat box:
MySql
PostgreSQL
Oracle 9i
Oracle 10g (Use the free express edition)
I don't see any need to test against the full-blown SQL Server and Oracle editions. The free editions are stripped down, so there is very little risk of that code breaking when you run it against the full version of those servers. If you had tested against the full server and then went down to the free stuff, then yes, some things might break; but by testing with the least common denominator, you should ensure good quality.

SQL Express for production?

Is using SQL Express in a production environment a reasonable choice?
I looked at Microsoft's comparison chart:
https://www.microsoft.com/en-us/sql-server/sql-server-2019-comparison
I would be using SQL Express with a small to mid-sized web site. I don't believe I would exceed the 4GB database size limit. Is SQL Express typically supported in shared hosting environments? Is there something I'm missing that would make SQL Express an unreasonable choice?
I know many people using SQL Express in production and it works well; the biggest limiting factor is the absence of SQL Agent for automated backups. To automate backups you have to either take a VM image (if on a VPS), use Windows Task Scheduler, or use some other technology; one scheduler-based approach is sketched below.
The only other major limiting factor is the RAM limitation, but for a small site I have not really noticed that being much of an actual issue.
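A minimal sketch of the scheduler route (paths and names are placeholders): save this as a .sql file and have Windows Task Scheduler run it nightly through sqlcmd:

```sql
-- backup_nightly.sql - scheduled via Task Scheduler, e.g.:
--   sqlcmd -S .\SQLEXPRESS -E -i backup_nightly.sql
DECLARE @file NVARCHAR(260);
SET @file = N'D:\Backups\MyDb_'
          + CONVERT(NVARCHAR(8), GETDATE(), 112)  -- yyyymmdd suffix
          + N'.bak';

BACKUP DATABASE MyDb
TO DISK = @file
WITH INIT, CHECKSUM;
```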
Many cheap hosting companies use SQL Express, and I know from personal experience that SQL Express is a viable solution for most things.
"Most things" includes a large project in a production environment.
One thing to ask might be - if your needs outweigh the capabilities of SQL Server Express, will you be able to afford a commercial version?
I think one of the ideas of SQL Server Express is that you can host your site on it, and once you outgrow it (need more than 4GB, etc.) you will buy a commercial version, especially now that you're locked into using it. But if your site outgrows it before the incoming revenue can pay for a commercial version, that could be a problem (and possibly a database design flaw, if your site is consuming more data/disk space than it needs to).
SQL Express is a great choice if you don't mind the memory and processor limitations. The database size limitation is not as much of an issue now that 2008 R2 upped the limit to 10GB. I think the other limitations are still the same, though.
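One thing worth knowing when watching that limit: it counts data files only, not the log. A quick hedged check of where an existing database stands (run in the database in question):

```sql
-- Data-file size in MB; the Express size cap excludes the log file.
SELECT name AS logical_file,
       size * 8 / 1024 AS size_mb  -- size is in 8KB pages
FROM sys.database_files
WHERE type_desc = 'ROWS';
```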
A note on the often-quoted "maximum of 5 concurrent connections": that cap actually belonged to the old MSDE workload governor. SQL Express has no hard connection limit, but under heavy concurrent load its CPU and memory caps mean performance can drop severely.
Also, you have to consider that SQL Agent is not included, so if you want to schedule backups or maintenance tasks you have to use Windows Task Scheduler.
Other than this, it's a perfect solution for small databases.
