SQL Server development environment for a small team - best practice needed

We are a small team of 5 working on the same database. It's a reporting solution, so there are about 5 more third-party databases that serve as the sources of data.
It is very important to have the latest data for development, so from time to time those linked databases are backed up and restored on each dev's local SQL Server (and they are pretty big). Then there is the constant problem of the devs' databases being out of sync with each other. When there were two of us there was no problem, but as more people were added to the team it became a pain to arrange.
So, I was thinking about building a dev SQL Server. It would be more powerful than our laptops (faster queries), and latency should not be an issue; we always have internet when working, so that is not a problem either. I only wonder if it's OK to work together out of the same database. It sounds like a good plan, but I'm not sure if there are any "gotchas" compared to having databases locally. We would be able to keep data fresh for everyone easily, and I think it should save time and make it easier to build/rebuild dev machines if needed. I even think the database where we code could be one per person, but still on the same box. That way we can always compare them "right on the box" if needed.
Any advice on this setup? Pros or cons?

A new option is to use isolated containers on a shared server, with clones of the production database environment. Full disclosure: I work for Windocks, where this is the primary use of SQL Server containers. The most recent release includes built-in SQL Server db cloning. Benefits of this approach include speed (environments delivered in under 1 minute) and license and labor savings from using a shared server rather than a score of VMs.
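Even without containers, the per-developer copies described in the question can be scripted with plain backup/restore. A minimal sketch, where the database name, file paths and the _alice suffix are all placeholders (the logical names in the MOVE clauses must match what RESTORE FILELISTONLY reports for the backup):

    -- Restore one private copy of the source database per developer on the shared box.
    RESTORE DATABASE SourceDb_alice
    FROM DISK = N'D:\Backups\SourceDb.bak'
    WITH MOVE N'SourceDb'     TO N'D:\Data\SourceDb_alice.mdf',
         MOVE N'SourceDb_log' TO N'D:\Data\SourceDb_alice.ldf',
         REPLACE, STATS = 10;

Run it once per developer with a different suffix, and re-run it whenever the team wants fresh data.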

Related

Migrating MS Access 2013 DB to better work from home solution

I have a number of MS Access databases that range from the simple to the fairly complex. They exist as split databases on a shared drive on an on-site LAN, and the users all have .accdr front ends to work with.
Until Covid-19 this worked quite well; now we all have to work from home. While I expected some performance issues, I did not expect performance to take quite that much of a hit, so I am looking for ways to migrate to something that will work well with everyone trying to work via a VPN.
An additional fly in the ointment is that there is no budget to work with and getting IT support is akin to summoning the great old ones (it's difficult and you are likely to die insane).
So I have begun to research some different options. MS SQL Server has come up, but I don't know very much about how to implement it, and I do not have a dedicated machine to put it on.
I have looked at SharePoint, but some of the stuff I have been reading makes it seem like this is not a great option, as some of my queries are complicated and I have some pretty large tables (45k records, 100 fields per record). In the most complex DB, I have to add several thousand records each day and run several update queries on the freshly added records.
MS Azure looks promising, but again I don't know if that will put me at odds with the malevolent IT gods.
I started looking at Office 365 PowerApps, but I don't need any mobile device support and it doesn't look like it has the oomph I need.
Google and DuckDuckGo haven't thrown up anything useful that I could find among the dross. I'm certain what I need is out there; I just can't find it. I have found that OneDrive is right out, and likewise SharePoint for anything other than the simplest DB I have built.
What I am looking for is any solutions, articles, books or even papyrus scrolls and stone tablets that might get me pointed in the right direction. Any ideas? Any other information you need?
Edit: After some looking, I have found that I may be able to get MS SQL Server on a virtual server without angering the IT demons. Azure as a solution is out unless I find a suitable sacrifice. Any good places to look for information on how to use SQL Server from a standing start?
Consider migrating the backend databases to SQL Server. There's a SQL Server Migration Assistant that will do this for you. Your frontend will contain links to the resulting SQL Server tables.
The last time I did this I got an immediate 2X performance improvement on a LAN. Over a VPN, you should expect similar, possibly better, improvements. Quite a good win for something so simple to do, without having to do a full rewrite. Don't expect miracles, however; Access is by nature a very thick client.
You don't necessarily need a full-blown SQL Server; SQL Server Express should suffice, and you can run that on any machine on your LAN. The download for SQL Server Express Edition can be found here.
You can read up on the migration process here.
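Once SSMA has finished, it's worth sanity-checking the result before repointing the front end. A minimal sketch using only standard catalog views (nothing SSMA-specific is assumed), to compare row counts against the Access originals:

    -- Row count per migrated table.
    SELECT s.name AS [schema], t.name AS [table], SUM(p.rows) AS [rows]
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    JOIN sys.partitions p ON p.object_id = t.object_id
                         AND p.index_id IN (0, 1)  -- heap or clustered index
    GROUP BY s.name, t.name
    ORDER BY s.name, t.name;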
You should consider using Remote Desktop Connection first!
As you already connect employees to the LAN by VPN, you just need to enable remote access to their machines at work! That is the simplest solution; it doesn't need a fast connection or any changes to the application. You can enable Wake-on-LAN so they can turn on the machines themselves.
Of course, you should also consider migrating to an RDBMS as Robert advised!

How hard to migrate from IaaS to PaaS on Azure

So I'm thinking of dipping a toe in the Azure pool.
Our web app suite will soon be a pure ASP.NET + SQL Server affair.
For various reasons, it will be simpler to initially create a SQL VM and run everything from there.
How hard will it be to ...
...migrate SQL off the VM and into either "Cloud Services" or "Data Management"?
...migrate the suite of WebApps off the VM and into "Websites"?
It is my understanding that having achieved this migration, OS-level updates will no longer be my concern, as they will be handled by the service. At that point I'll be able to throw the original VM away :)
This isn't exactly answering your questions, but it might help educate you on more questions to ask and give you a boost out of the gate. These were all lessons learned before, during, or after our migration of our systems to Azure. Now that we're up there, we have a ~50GB database with ~6 services running across ~30 instances. As long as our database backup behaves, the total effort of upgrading all of this is less than an hour (and could be much less if we didn't have safeguards meant to force us to be aware of what's going on during the migration process - we deliberately don't want deployment to be too easy, to protect us from ourselves).
Preparing to migrate your system to Azure:
If you're planning to go to Azure, you first need to make sure your architectures and technologies are compatible. This isn't to say you have to code everything specifically for Azure. It means some of the following things:
You should realize that "high availability" does not mean "error-free". In fact, high-availability environments usually have more errors that you have to handle and manage. For example, if you have a request going over the network to a server that just had a motherboard fry and was taken offline, that network request will fail. That's not typically a problem you code for in "standard" server apps. To take it further, what if that failed network call was for a database connection that gets put back into a connection pool? That connection is now poisoned and will be broken the next time somebody pulls it out, regardless of the future state of the server that went poof! There are extra things to worry about here because you're no longer depending on just one network with four servers on it; you're depending on hundreds of networks with thousands of servers on them. That 0.05% error scenario will happen MUCH more often than you've ever experienced in the past, and you really have to be aware of this!
You should use dependency injection to make it easy to change things around. With proper separation of concerns, changes that seem very difficult become very easy in Azure.
You should use architectures designed for high availability. For example, a web application that would break when run in a web farm would also break in Azure, but a web application designed to work in a web farm is very easy to run in Azure.
You should have automated deployments and configuration transforms for all of your applications. Anything else is just unsustainable unless it's nothing more than one little web site or something like that.
Depending on your needs, you can do it in phases. If database latency isn't a big deal, perhaps a hybrid approach (over VPN from Azure to your data center) is acceptable to get your apps into Azure first while you migrate your database later, or perhaps the opposite. What we did was keep primary apps and the database in our data center but put secondary apps up in Azure first. Then we moved some primary apps (which took a performance hit for a month), and later our database and critical apps. That final migration sure made for a very long weekend and not much sleep, but it is SOO much nicer now that we're done!
Migrating your applications to Azure:
This ultimately depends very heavily on what your application is or does, and every scenario has different steps/issues/benefits. I'm not going to cover this deeply other than to say, "Use Google, it's your friend!" Beyond that, for us, getting our applications up into Azure was the largest payoff when compared to our data. The ROI on our app migration was less than a month across hosting costs, licensing, and management effort. Instead of taking a couple of days to set up a server, I can now take a day to set up an entirely new and duplicated environment of all of our SaaS applications/databases/etc. and have them running on ~25 different cloud instances!
Instead of trying to tell you how to migrate these, let me give you a few words of caution so you know sooner rather than later:
If you have app problems on Windows 2012, humor me and try Windows 2008 R2. There are a couple of bugs in some of the 2012 images that they've prepared. It's incredibly trivial to switch back and forth!
Go make your logging 1000x better than it is now. If you don't do that now, you'll regret it.
Don't depend 100% on the easy-to-implement "Azure Logging". It works well enough, but it more or less requires your applications to start successfully, and it is absolutely useless for debugging startup problems. If you don't have an alternative, you will waste many, many hours debugging stupid little problems when your app starts up. By the time you're done with it, you could easily have added 5 other logging frameworks and had an amazingly awesome logging system in place plus a running app, instead of nothing but a running app to show for the same amount of time. Really, do this! (Profiling is a good idea as well, although Mini-Profiler has load-balancing issues if you have multiple instances.)
If you add new endpoints to your deployment (ports, etc), you cannot simply "Upgrade" an existing deployment. You must delete it (the deployment, not the service) and install from scratch. You can delete the Staging one, deploy to Staging, then swap.
If you have WCF apps, pretend you don't know about Windows Activation Services. It's disabled in Azure by default. You can either hack it on (startup scripts) or create your own self-hosting application. We self-host so we can more easily tweak service configurations once we're deployed (it's not easy to edit web.config files in a way that sticks in Azure). Web services work in "IIS" in Azure, but TCP, named pipes, etc. do not.
Go learn about and add the Transient Fault Handling Application Block (or something equivalent) to anything you communicate with. If you don't do it now, you'll regret it.
Go make your logging better! Really, really, REALLY do this!
Migrating your SQL Server Database to Azure:
Getting your database up into Azure is a bit of a painful process. There isn't a quick and easy way to just get it up there and make it work. Some people have to make major changes, while others just have to tweak a few ignorable things here and there. However, no matter how large or small your database is, you really REALLY must devote a lot of time to testing it. Test your migration process. Test your scripts to prepare your database. Test the performance and stability of the database up in the cloud. Test your backup procedures. Test your upgrade procedures. Test your backup restoration procedures. Test ALL of this, because I guarantee you will find some surprises!
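On the backup-testing point in particular, a cheap first check before a full restore drill is a verify-only pass. A minimal sketch with a placeholder path (WITH CHECKSUM only works if the backup itself was taken WITH CHECKSUM):

    -- Confirm the backup file is readable and its checksums are intact.
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\MyAppDb.bak'
    WITH CHECKSUM;

This doesn't replace actually restoring the backup somewhere and querying it, but it catches corrupt files early.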
Schema:
Go learn about all of the limitations of SQL Azure - no heaps, etc. Learn them before you start! Go learn them now! They're all mostly-to-very reasonable. (A query for spotting heaps follows this list.)
Be aware of the 2GB T-log limitation! It means some very large indexes can never be rebuilt! (That said, our 30GB table isn't hitting it yet.)
To deploy your schema, go into SSMS for your local DB and use the "Tasks -> Extract Data-tier Application..." feature (it's in different areas of the menu in different versions of SSMS). Take this file, go into SSMS for your Azure database, and use the "Deploy Data-tier Application" feature. (If this process fails, it will help you catch some of the Azure limitations you aren't honoring.) This is by far the easiest way to get an empty version of your database up into Azure.
Use a tool like Redgate SQL Compare to verify your work (you'll have to tweak a couple of options, like WITH NOCHECK, to get a clean comparison).
You'll have to clean up users, schemas, broken sprocs, etc. before you succeed at this. (This is a good thing!)
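For the heap limitation specifically, it's easy to find offending tables before the deploy fails. A minimal sketch using standard catalog views:

    -- Tables with no clustered index (heaps); SQL Azure required a clustered index on every table.
    SELECT s.name AS [schema], t.name AS [table]
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    JOIN sys.indexes i ON i.object_id = t.object_id
                      AND i.type = 0  -- type 0 = heap
    ORDER BY s.name, t.name;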
Data:
Go learn about all of the limitations of SQL Azure. Learn them before you start! Go learn them now! They're all mostly-to-very reasonable.
Go download the Azure Database Migration Wizard from CodePlex (or wherever the latest version lives now). It's not the most amazing software (kinda unstable), but even if it crashes once or twice on you, it'll still save you a LOT of time!
I strongly recommend Redgate's SQL Data Compare. The previously mentioned migration wizard will help you identify problems (fixing them is on you) and will get you ~98% migrated, but you'll want to come back and clean up after it. It has some bugs that miss nullable BIT fields (and upper-ASCII characters) and other things that a tool like SQL Data Compare can easily identify and fix. It can also give you peace of mind that you can depend on your database.
If your database is large, consider spinning up a temporary VM in Azure (images with SQL Server pre-installed are available in ~20 minutes) to do your migrations from. If you do this, it's best to upload a compressed database backup to Blob storage (Cerebrata's storage tool is great for this), then grab it and restore it to SQL Server in that VM, and stage your migrations from there (a sketch of the backup command follows this list).
Test, test, TEST!!!
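For the compressed-backup step above, a minimal sketch (database name and path are placeholders; COMPRESSION requires SQL Server 2008+ and an edition that supports it):

    -- Compressed, checksummed backup to upload to Blob storage and restore inside the Azure VM.
    BACKUP DATABASE MyAppDb
    TO DISK = N'D:\Backups\MyAppDb.bak'
    WITH COMPRESSION, CHECKSUM, INIT, STATS = 10;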
Be careful running SQL Server on a VM; it's not a high-availability solution. Azure VMs are prone to restarting from time to time, so unless you have multiple VMs running SQL Server in an availability group, or some sort of mirroring and load balancing set up, you won't have a high-availability solution. I too originally favoured the IaaS route; in the end it seemed to be a false economy, as migrating IaaS to PaaS is about as much work as migrating on-premise to PaaS. So I decided to take the time to optimise my application for PaaS, i.e. moving durable storage to blobs, implementing transient-error handling and retry logic, etc.
What you're proposing is certainly possible, but a multi-VM arrangement to deliver high-availability SQL takes a bit of work and is expensive! Have a read of the following guide; it was really helpful to me when I started the migration process:
Top 7 Concerns of Migrating a .NET Application to Azure
Just yesterday, Microsoft announced their plan to host IaaS solutions on their Azure platform as well, not only PaaS solutions.
http://weblogs.asp.net/scottgu/archive/2013/04/16/windows-azure-general-availability-of-infrastructure-as-a-service-iaas.aspx
About migration, it really depends. We work with a deployment pipeline (TFS + Octopus), so deployment is very easy, and it works against IaaS or SQL Azure; it doesn't really matter.
There are also other things to take into consideration when moving to PaaS. Your code will probably need refactoring if it isn't PaaS-oriented, or your application may have a very high hosting cost on Azure.

Advantage Database or SQL Server

I have a client that currently uses a local Advantage Database on their PC along with an application. They are thinking of upscaling their setup to have multiple applications communicating with a database server, i.e. a client-server environment.
They are now considering the best database for this approach. They are looking at the Advantage Database Server product in comparison to SQL Server Express (the application does not warrant a full SQL Server at this stage).
Obviously SQL Server is a more well-known product, probably with more support, but I was hoping you could give me some opinions and thoughts on what you think the best product would be in terms of performance, stability and support.
One thing to note, although not directly relevant, is that the application is currently written in Delphi, and there could be a move to C# to bring it up to date.
The migration from a local Advantage Database to a client/server Advantage database is a very simple process. It simply involves changing the connection properties within the program. There are no other coding changes that need to be done.
Advantage has a great support team and has been in development for over 15 years. The stability and support are at least equal to SQL Server.
Advantage also provides a .NET Data Provider which would allow for C# development.
I have developed for both SQL Server and Advantage. They each have their pros and cons (although now I favor Advantage).
Given your situation, however, this decision appears to be a no-brainer: Advantage Database Server. Why? It's already done!
My Advantage programs run, unmodified, against the same database either locally or remotely. All I change is the connection string. I'm not saying that your customer's code won't have to be changed. I am saying it is likely to be trivial. Compare that to the greater effort involved in switching to a whole new database engine.
In general I'm a SQL Server person all the way. I work with it daily and have for almost ten years, but in your situation it seems silly to consider moving to a new database when there is a clear upgrade path to do what you want using the backend you already have. It would be much less work, and far less likely to introduce new bugs, to stay within the same database family.
ADS wins hands down. It is maintenance-free. It is extremely reliable. It is extremely fast. It is extremely scalable. SQL is very well supported, and the ADS newsgroups are responsive (answers within hours, instead of the days it takes on SQL Server fora) and well-informed. I have been using ADS since 1991 and it has never gone wrong! My users are incredibly demanding, and being able to turn round solutions within hours instead of days is both a joy to me and a business incentive to the end users and clients. Deployment is gentle, fast and simple. Platform support is better than SQL Server's. 64-bit server deployment abounds and is well-grounded, transparent and reliable. 64-bit clients are coming in the next version (10). My experience with ADS is wholly positive, whereas my ventures with SQL Server have been fraught with difficulties, idiosyncrasies and workarounds!
I happen to be a support rep for Advantage, so when you say "Obviously SQL Server is a more well-known product probably with more support" I have to argue a bit.
As Chris stated, switching from Advantage Local Server to the Advantage Remote (client/server) Server is a pretty painless process - they designed it that way.
Install the Advantage Database Server on a machine where the data is located (not a requirement but it's recommended). You can get a free trial here: http://marketing.ianywhere.com/forms/ADS91-30-Day
Within the application there will be TAdsConnection component(s) - change the TAdsConnection.ConnectionType to 'REMOTE' (http://devzone.advantagedatabase.com/dz/webhelp/Advantage9.1/mergedProjects/ade/sec7/connectiontype.htm)
You can specify the path (TAdsConnection.ConnectPath) from the clients in a couple of different ways, but the recommended one is:
\\server:6262\mydata
http://devzone.advantagedatabase.com/dz/webhelp/Advantage9.1/mergedProjects/ade/sec7/connectpath_tadsconnection.htm
Note: 6262 is the port used by default (you may need to add an exception to the firewall). Also, if your application uses a data dictionary, the path would include the name of the .ADD file (e.g. \\server:6262\mydata\mydd.add)
Hope this helps!

Getting the production database to work with locally

I'm nearing the stage of saying "let's go live" on a client's system I've been working on for the past few months: basically a few autonomous services, including a public-facing website, an intranet website, an hourly exporter from a legacy OLTP DB based on modified flags/triggers, and a few other services that compose the system, each with their own specialized database, communicating with each other via messages (NServiceBus).
When I started, I tried to keep a local replication of everything, but it's proving more and more difficult, and on reflection it was probably a major friction point of the past few weeks. I like to stay regularly up to date, as the legacy database is growing and generating hundreds of events daily. Having high latency and mediocre bandwidth (between myself and the client's site; I'm in SE Asia, where bandwidth is generally crap anyway) is also an issue for RDP, SQL tools, remote connection strings, etc. Tracking down integration bugs and understanding the scenarios that come up during feedback/integration/QA is also difficult, as my data doesn't reflect the current state of the client's DB (the client's staff have been working on and evolving the data at their end), which means another break, coffee, and lengthy sync again. It would be ideal to do it all locally and then deploy at the end, but I have to deliver parts incrementally (to get the check), and some parts are even in use (although not critical), so bug fixing on in-use parts needs a quick turnaround. And with it being a small company, incremental feedback helps flush out some of the vaguer requirements along the way (curse me).
I was thinking it would be good to have a twice-daily sync between the environments (their DBs to mine). I have some design control over everything apart from the legacy SQL Server database.
What are the best options, SO users?
I was thinking of setting up a light Windows 2003 VM on my dev box, and in it installing the same setup as the client's site (but not spread across multiple servers, obviously). Then, for syncing the databases, I was thinking about SQL Server replication? Or batch scripts? Or are there better tools - ones that are fast and compress well? I don't want my changes going back to production (I have a separate CI & deployment procedure); I just want (I think I want... tell me if there's a better idea) my databases refreshed every night or twice a day (maybe while I'm at lunch, bandwidth permitting).
How does everyone approach this?
I would recommend two ways to do this:
Snapshot Replication
Backing up the transaction log and manually (or batch) applying it
Snapshot replication can be difficult to get working, but it is possible even in offline situations, such as physically carrying snapshots to another location.
The transaction log method can be used as part of your standard backup procedures, i.e. a full backup twice a week with transaction log backups more regularly.
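A minimal sketch of that approach, with the database name and file paths as placeholders:

    -- Production side: full backup twice a week, log backups more regularly.
    BACKUP DATABASE LegacyOltp TO DISK = N'\\share\sync\LegacyOltp_full.bak' WITH INIT;
    BACKUP LOG LegacyOltp TO DISK = N'\\share\sync\LegacyOltp_log_001.trn' WITH INIT;

    -- Dev side: restore the full backup, then apply each log in sequence, staying in NORECOVERY.
    RESTORE DATABASE LegacyOltp FROM DISK = N'C:\sync\LegacyOltp_full.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG LegacyOltp FROM DISK = N'C:\sync\LegacyOltp_log_001.trn' WITH NORECOVERY;

    -- To query the copy, bring it online; no further logs can be applied after this.
    RESTORE DATABASE LegacyOltp WITH RECOVERY;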
Remember that best practice is to cleanse the data before using it in a test environment. At the very least this means changing all personal data, especially email addresses, passwords and any other field that could result in some automated process making contact with the users in your database.
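A minimal sketch of that cleansing step; the table and column names here are hypothetical:

    -- Neutralise contact details after restoring production data into dev.
    -- The .invalid TLD can never receive mail, so a runaway process is harmless.
    UPDATE dbo.Users
    SET Email        = 'user' + CAST(UserId AS varchar(10)) + '@example.invalid',
        Phone        = NULL,
        PasswordHash = NULL;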

Is it 'ok' to develop with a DEV database residing on the same SQL server as the live production app?

Sometimes we have upwards of 4-6 people either RDPed in looking at data in SQL Server Management Studio, or hitting the server with LINQPad, Toad, etc. from various locations while developing, mostly in ASP.NET and Flex with WebORB. Is this bad? Bad in the sense that we are trying to keep our live production app stable and as lag-free as possible for global users?
I don't think I'd do that. If it was just me, then sure :) But with a bunch of people, God only knows what queries they might run. We always use a test server for such things.
best regards,
don
Best practice would be separate servers. Next best, separate instances on the same server. Next best, separate databases on an instance.
However, I wouldn't let any developers RDP into a production SQL Server (or production anything), regardless of the segregation mechanism you choose. Use a separate terminal server with the tools and everything else on it.
You can have dev and prod DBs on the same instance. Just make sure the permissions are set up so that developers cannot touch the prod DB. The negative is that a long-running query in dev will impact prod.
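A minimal sketch of that permission setup, assuming a hypothetical DOMAIN\DevTeam Windows group for the developers:

    -- One login for the whole dev team.
    USE master;
    CREATE LOGIN [DOMAIN\DevTeam] FROM WINDOWS;
    GO
    -- Full rights in the dev database only.
    USE DevDb;
    CREATE USER [DOMAIN\DevTeam] FOR LOGIN [DOMAIN\DevTeam];
    EXEC sp_addrolemember N'db_owner', N'DOMAIN\DevTeam';
    GO
    -- Explicitly shut out of the production database on the same instance.
    USE ProdDb;
    CREATE USER [DOMAIN\DevTeam] FOR LOGIN [DOMAIN\DevTeam];
    DENY CONNECT TO [DOMAIN\DevTeam];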
In SQL Server 2005, a better solution is to have a dev "instance" and a prod "instance".
Then if someone misbehaves on the dev instance, you can just bring down that instance.
In SQL Server 2008 you can set up CPU usage plans with Resource Governor, which can help throttle how much of the server's resources dev sessions can use. You should investigate that; a sketch follows.
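A minimal sketch of Resource Governor (Enterprise edition only); the pool, group and dev_login names are hypothetical:

    USE master;
    GO
    -- Cap the CPU available to dev sessions on the shared instance.
    CREATE RESOURCE POOL DevPool WITH (MAX_CPU_PERCENT = 20);
    CREATE WORKLOAD GROUP DevGroup USING DevPool;
    GO
    -- The classifier runs at login time and routes dev sessions into DevGroup.
    CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = N'dev_login'
            RETURN N'DevGroup';
        RETURN N'default';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;

Note that MAX_CPU_PERCENT only kicks in under CPU contention; it is a soft cap, not a hard quota.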
It depends on a lot of variables. It's generally better to have them on different servers; it really depends on how you use SQL Server. If you just have databases and don't use a lot of the management features (nightly processes that alter data, other jobs), you might be OK. You are running a real risk of code bleeding over from the dev database into the production one, though. It's safer to have them separated out, especially for the small amount of money needed to create a dev instance of SQL Server.
I find this a poor practice for several reasons:
First, suppose one of your devs messes up and does something that ends up taking all of the processing power of your server. Oops, prod is down for no good reason.
Second, devs could easily change the wrong database. Oops, prod is down for no good reason. At least you can avoid this by not giving devs any production rights (which you should be doing anyway as a best practice).
Third, if the database is on the same server it has to have a different name, which can make moving things to prod difficult and error-prone. I think it also makes it less likely that you deploy correctly through source-controlled scripts. If you choose to copy objects from one database to the other, you can have issues with that as well. If there is data in the object already, you may accidentally wipe it out (hope you have a backup), or you may move the new table structure but miss things like the PKs and FKs, default values, triggers, constraints and indexes; or the wizard might take much longer to do the move because in the background it creates and populates a new table, then drops the old one and renames the new one, rather than using ALTER TABLE. Oops, prod is down or seriously slowed for no good reason.
I tend to agree with the "separate servers" folks, although at my company we actually do most of our day-to-day development work on our local machines - so we have SQL Server installed locally. This can be a pain, of course, if you're developing reporting or anything else that needs production data. In that scenario, developers here usually get a subset of production data exported to work with.
For acceptance testing vs. deployment though, we do use separate instances.
Developers probably shouldn't have production access UNLESS they're also the ones who do application deployments (as can be the case with small teams like mine). If you do end up using separate DBs on the same server, I would at least lock down RDP access and grant access to each development DB on an individual basis. That's how it works here - I don't have admin rights to any of our servers at this time, and can only administer the databases of applications that belong specifically to my team.
It depends how much you value your live service. I know I wouldn't trust me and my fat hands running SQL on the same hardware as a live application.
Even if the application is not business-critical and the app is not data-bound, you can set up a development environment on an unused desktop machine, so why wouldn't you do that instead of taking the risk?
The setup I use is typically a DEV database on a local instance of SQL Server (Developer Edition for me, but Express would probably also work) and a QA database on a test instance of SQL Server. In our environment, this is located on a virtual instance of W2K3 - soon to be W2K8. Production databases live either on dedicated instances of SQL Server or on one of various clustered instances. We don't mix PROD/QA/DEV at all. I use Redgate SQL Compare to synchronize schemas between the various systems, including different developer instances of the database.
It will be 'OK' as long as the team doesn't have any administrator privileges over the server (either SQL or Windows), and their logins only grant access to (and thus the ability to destroy) the development database and its associated files, with access to the production databases denied.
For other application-testing reasons, we created a copy of our production server (which is a virtual server) on a separate domain. This allowed the Windows server name, SQL Server name and database name to be exactly the same (lots of settings in 3rd-party apps require this level of configuration to get different processes to work). Now we can rebuild a test environment by creating an exact virtual image of our production server.
I was sceptical about running SQL Server on a virtual machine, but it has given our small company a lot of flexibility. We like to think our databases are critical, but they are for internal use, and some downtime would just have workers shift their lunch hour.
