Will scaling an Azure DB from the Web tier to a new tier cause availability issues?

As far as I can tell, scaling an Azure DB from the retired tiers to the new tiers is simply a matter of using the scale function in the Azure portal.
What I cannot seem to find anywhere is a definitive answer as to whether any connection string changes are required (or whether there are any other issues that could cause unavailability) when scaling from the retired tiers to the new ones.
I have a production database that needs to be upgraded, and a service interruption would be very bad.

The scale operation will not change the connection string. You could face a (very small but) finite amount of downtime while the switchover happens.
Please refer to the documentation for details. Note that you will have to suspend geo-replication (if it is enabled) for the duration of the upgrade.
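Not part of the original answer, but a minimal sketch of how an application can ride out that brief switchover, assuming plain ADO.NET (newer SqlClient versions also accept ConnectRetryCount/ConnectRetryInterval keywords in the connection string):

```csharp
// Sketch: tolerate the short connection drop at the end of a scale operation.
// Timings and the attempt count are illustrative, not prescriptive.
using System;
using System.Data.SqlClient;
using System.Threading;

static class ResilientDb
{
    public static SqlConnection OpenWithRetry(string connectionString, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var conn = new SqlConnection(connectionString);
                conn.Open();
                return conn; // same server and connection string before and after the scale
            }
            catch (SqlException) when (attempt < maxAttempts)
            {
                // Likely a transient failure while the switchover completes: back off, retry.
                Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
            }
        }
    }
}
```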

Technically it will be the same server, same connection string, same everything, except for the version and features.
But I would be concerned about the following statement from the documentation:
The duration of the upgrade depends on the size, edition, and number of databases in the server. The upgrade process can run for hours to days, especially for servers with databases:
larger than 50 GB, or
at a non-premium service tier
Which is kind of concerning.
What I would do, if possible, is:
Put my service into read-only mode (put on hold any writes to the DB)
Create a new DB on the same server from the existing one with CREATE DATABASE ... AS COPY OF ... (see the sketch at the end of this answer)
When the new DB is ready, export it to a .bacpac and delete the copy once the export completes
Perform the upgrade.
In theory you could do the process without putting your system into read-only mode, but I am just taking extra precautionary measures.
And yes, you also have to be aware that you are upgrading your Azure SQL DB server, not just a single database.
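A minimal sketch of the copy step, assuming C# against the server's master database (database names are placeholders; the copy runs asynchronously, so poll sys.databases until it leaves the COPYING state before exporting the .bacpac):

```csharp
// Sketch: copy the database, then wait for the asynchronous copy to finish.
using System;
using System.Data.SqlClient;
using System.Threading;

class DbCopy
{
    static void Main()
    {
        var cs = "Server=myserver.database.windows.net;Database=master;User ID=...;Password=...";
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            new SqlCommand("CREATE DATABASE [MyDb_Copy] AS COPY OF [MyDb]", conn).ExecuteNonQuery();

            var poll = new SqlCommand(
                "SELECT state_desc FROM sys.databases WHERE name = 'MyDb_Copy'", conn);
            while ((string)poll.ExecuteScalar() == "COPYING")
                Thread.Sleep(TimeSpan.FromSeconds(30)); // copy continues in the background
        }
        // Now export MyDb_Copy to a .bacpac, delete the copy, and perform the upgrade.
    }
}
```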

Related

Database enumeration fails on my Azure hosted SQL Server

I can connect to my database server (Azure hosted) with SQL Server Management Studio; however, I am unable to enumerate its databases.
I am seeing a similar issue on the Azure portal.
The fact is, these databases are up and running. My application continues to function, and I am also able to issue queries via the SSMS query window. The main problem here is the inability to enumerate databases on the server. If you have any idea how I might solve this, thank you very much for any advice you can provide!
This was an issue with the Azure platform. Here is what we received from our hosting provider in response to our inquiry:
SQL and Open-Source Database Service Management Issues - East US - Mitigated (Tracking ID 8K76-LZ8)
Summary of impact: Between approximately 13:30 and 16:30 UTC on 19 May 2020, a subset of customers in East US may have intermittently experienced timeouts and latency issues when processing service management operations - such as create, update, delete - for SQL resources hosted in this region, including Azure SQL Database and open-source databases such as Azure Database for PostgreSQL, Azure Database for MySQL, and Azure Database for MariaDB. Some customers may have also encountered issues or experienced latency when loading database management tools or expanding database resources in SQL Server Management Studio (SSMS). Retries may have been successful.
Preliminary root cause: After an initial investigation, engineers identified that an increased volume of requests had consumed a large number of available private endpoint connections for the SQL control plane in East US, thus leading to failures for subsequent control plane operations in the region.
Mitigation: Once the preliminary root cause was identified, engineering teams terminated connections from the source of the increased load.
Next steps: We apologize for the impact to affected customers. Engineers will continue to investigate the underlying cause and take steps to prevent future occurrences, which includes:
• Further investigating and understanding the source of increased workload on the SQL platform
• Increasing the resiliency of the SQL platform to better account for this type of work volume
[End of Report]
Anyway, I am posting this so that anyone who encounters this issue in the future should know - it may not be your issue. It may be an issue with the Azure hosting platform in which case you will just need to wait until engineers resolve the issue at their end.
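One cross-check worth adding (my suggestion, not part of the report): since queries still worked, you can confirm the databases are healthy server-side by enumerating them yourself rather than relying on the portal or the SSMS object tree:

```csharp
// Sketch: list databases directly from master, bypassing the broken enumeration UI.
using System;
using System.Data.SqlClient;

class ListDbs
{
    static void Main()
    {
        var cs = "Server=myserver.database.windows.net;Database=master;User ID=...;Password=...";
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT name, state_desc FROM sys.databases", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine($"{reader["name"]}: {reader["state_desc"]}");
        }
    }
}
```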

Alarm DB Logger (Intouch) configuration with SQL Server Mirroring

I have an installation which has two SCADA (InTouch) HMIs, and I want to save the data in a SQL Server database on another computer. To be as sure as possible that I have an operating database, I'm going to set up SQL Server mirroring. So I will have two SQL Server databases with a distributor. About this I don't have any doubt. To make it easy to understand, I've made an image with the architecture of the system.
Architecture.
My doubt is how to configure the Alarm DB Logger so that it points, automatically, to the secondary database in case the principal database goes down in an unexpected failover.
PS: I don't know if it's even possible.
Configure the database for automatic failover; connections are then handled automatically when a failover occurs. Read up on mirroring endpoints.
The links below should have more than enough information.
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/role-switching-during-a-database-mirroring-session-sql-server
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server
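To make "handled automatically" concrete: when a client builds its own connection string, ADO.NET can name the mirror as a failover partner, as in this sketch (server and database names are placeholders; whether the Alarm DB Logger honors such a string depends on how it constructs its connection):

```csharp
// Sketch: a mirroring-aware connection string with a failover partner.
using System.Data.SqlClient;

class MirroredConnection
{
    static void Main()
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "PRIMARYSQL",       // principal server
            FailoverPartner = "MIRRORSQL",   // mirror server, used if the principal is down
            InitialCatalog = "AlarmDB",
            IntegratedSecurity = true
        };
        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open(); // connects to whichever partner currently holds the principal role
        }
    }
}
```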
The AlarmDBLogger reads its configuration from the registry, so you could try the following:
Stop AlarmLogger
Change ServerName in registry [HKLM].[Software].[Wonderware].[AlarmLogger].[SQLServer]
Start AlarmLogger
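A hedged sketch of those three steps (the Windows service name "AlarmLogger" and the exact registry layout are assumptions; verify both on your installation before scripting this):

```csharp
// Sketch: repoint the AlarmLogger at another SQL Server via the registry.
using System.ServiceProcess;
using Microsoft.Win32;

class RepointAlarmLogger
{
    static void Main()
    {
        using (var svc = new ServiceController("AlarmLogger")) // assumed service name
        {
            svc.Stop();
            svc.WaitForStatus(ServiceControllerStatus.Stopped);

            Registry.SetValue(
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Wonderware\AlarmLogger\SQLServer",
                "ServerName",
                "MIRRORSQL"); // the new principal

            svc.Start();
        }
    }
}
```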
But what about the two InTouch-nodes? What if one of those fails? You would have to make sure one of them logs alarms, and that they don't log duplicates!
The standard alarm controls and ActiveX components use a specific view in the alarm database. You cannot change that behaviour, but you can script a server change in InTouch or System Platform.
Keep in mind that redundancy needs to be tested, and should only be implemented if 100% uptime is necessary. In many cases you will be creating new problems to solve instead of solving an actual problem.

What's the best redundancy setup on AWS for SQL Server 2014?

We're migrating our environment over to AWS from a colo facility. As part of that we are upgrading our two SQL Server 2005 servers to 2014. The two are currently mirrored and we'd like to keep it that way or find other ways to make the servers redundant. The number of transactions/server use is light for our app - but it's in production, requires high availability, and, as a result, requires some kind of failover.
We have already set up one EC2 instance and put SQL Server 2014 on it (as opposed to using RDS, for licensing reasons) and are now exploring what to do next to achieve this.
What suggestions do people have to achieve the redundancy we need?
I've seen two options thus far from here and googling around. I list them below - we're very open to other options!
First, use the RDS mirroring service, but I can't tell if that only applies when the principal server is also on RDS - and it doesn't help with licensing either.
Second, use multiple Availability Zones. What are the pros/cons of this versus using different regions altogether (e.g., bandwidth issues)? And does multi-AZ actually give redundancy (if AWS goes down in Oregon, for example, then doesn't everything go down)?
Thanks for the help!
The Multi-AZ capability of Amazon RDS (Relational Database Service) is designed to offer high-availability for a database.
From Amazon RDS Multi-AZ Deployments:
When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Multiple Availability Zones are recommended to improve availability of systems. Each AZ is a separate physical facility such that any disaster that should befall one AZ should not impact another AZ. This is normally considered sufficient redundancy rather than having to run across multiple Regions. It also has the benefit that data can be synchronously replicated between AZs due to low-latency connections, while this might not be possible between Regions since they are located further apart.
One final benefit... The Multi-AZ capability of Amazon RDS can be activated by simply selecting "Yes" when the database is launched. Running your own database and using mirroring services requires you to do considerably more work on an on-going basis.
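For illustration only, a hedged sketch with the AWS SDK for .NET; identifiers, instance class, and edition are placeholders, and the API surface should be checked against your SDK version. The point is that Multi-AZ is a single flag at provisioning time:

```csharp
// Sketch: provision a Multi-AZ SQL Server instance on Amazon RDS.
using Amazon.RDS;
using Amazon.RDS.Model;

class ProvisionRds
{
    static void Main()
    {
        var rds = new AmazonRDSClient(); // credentials/region from the environment

        rds.CreateDBInstance(new CreateDBInstanceRequest
        {
            DBInstanceIdentifier = "prod-sql",
            Engine = "sqlserver-se",            // Standard Edition
            LicenseModel = "license-included",
            DBInstanceClass = "db.m5.large",
            AllocatedStorage = 100,             // GiB
            MasterUsername = "admin",
            MasterUserPassword = "********",
            MultiAZ = true                      // synchronous standby in another AZ
        });
    }
}
```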

Multipurposing a failover server?

I'm not a DBA so this may be a stupid question but I'll ask it anyway. We're upgrading our SQL Servers from 2000 to 2005 and we will probably use either database replication or database mirroring. Our DBA would like to "multipurpose" the standby server, meaning that he'd like to increase our capabilities and capacity by running other database applications on the standby server since "it's just going to be sitting there anyway" (his words, not mine). Is this such a good idea? Right now, our main application server uses only one instance that contains 50+ databases. As I understand it, what we're doing now and what our DBA is proposing for a failover server is a bad idea because all of these databases are sharing memory, CPUs, and working areas. If one application starts behaving badly, the other DBs could be affected.
Any thoughts?
It's really a business question that needs to be answered: is a slow app better than no app, if you can't afford the expense of extra hardware?
Standby and mirrored DBs can be used for reporting. Using one as the failover DB can work if you have enough headroom (i.e. both databases will comfortably run on the server).
Will you depend on these extra applications? Where do they run in the failover case?
You really need to understand your failure modes.
If you look at it as basic resource math, it often doesn't make sense unless the resources still running in the failure scenario can handle the entire expected load. Sometimes this is the case, but not always. To handle the actual load you may need yet another server to come in (like RAID: perhaps your load needs a minimum of 5 servers and you have a farm of 6 - then you need one standby server for every server failure you want to tolerate beyond one). Sometimes a farm can run degraded, but sometimes it will just puke and die.
And outside normal operation, you often have cascading accidents, where a legitimate incident causes a cascade of issues - e.g. your backup tape is busy restoring a server from a backup (to a test environment, even - there are no real "failures"), and now your SQL Server or Exchange server (or both) is not backed up and your log gets full.
Database mirroring would not be the way to go here, in my opinion, as it provides redundancy at the database level only. So you would need to configure database mirroring for up to 50 databases based on the information you provided. The chances are that if one DB were to fail, all 50 would probably follow, as failures typically occur at the hardware level rather than in a specific database.
It sounds to me like you should be using SQL Server Clustering technology. You could create an Active/Active cluster to support your requirements.
What is an Active/Active Cluster?
An Active/Active SQL Server cluster means that SQL Server is running on both nodes of a two-way cluster. Each copy of SQL Server acts independently, and users see two different SQL Servers. If one of the SQL Servers in the cluster should fail, then the failed instance of SQL Server will failover to the remaining server. This means that then both instances of SQL Server will be running on one physical server, instead of two.
Applying this to your scenario
You could then split the databases between two instances of SQL server, one active instance on each node. Should one node fail, the other node will pick up the slack and vice versa.
Further Reading
An introduction to SQL Server Clustering
I suspect that you will find the following MSDN thread useful reading also
"it's just going to be sitting there anyway"
It will be sitting there applying transactions...
Take note of John Sansom's recommendation. Keep in mind that an Active/Active cluster requires two SQL Server licenses, while a failover cluster/mirror only needs one.
Setting up mirroring for a large number of DBs could turn into a big pain. You also need any jobs/maintenance to move over - which can be achieved with alerts on WMI failover events. There's probably more to think about that could complicate things.
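As a hedged illustration of the WMI-alert idea: SQL Server Agent can raise an alert from the WMI provider for server events whenever a mirroring session changes state, and run a job in response. The alert and job names below are placeholders, and the instance namespace must match yours:

```csharp
// Sketch: create a SQL Agent alert on mirroring state changes; the attached
// (pre-existing) job would re-create jobs/maintenance on the new principal.
using System.Data.SqlClient;

class MirroringAlert
{
    static void Main()
    {
        const string createAlert = @"
EXEC msdb.dbo.sp_add_alert
    @name = N'Mirroring state change',
    @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
    @wmi_query = N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE',
    @job_name = N'OnMirroringFailover';"; // hypothetical responder job

        using (var conn = new SqlConnection("Server=PRIMARYSQL;Database=msdb;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand(createAlert, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery(); // filter on the documented State codes to react to failovers only
        }
    }
}
```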

Caching to a local SQL instance on a web server

I run a very high-traffic (10M impressions a day), high-revenue web site built with .NET. The core metadata is stored on a SQL server. My team and I have a unique caching strategy that involves querying the database for new metadata at regular intervals from a middle-tier server, serializing the data to files, and sending those to the web nodes. The web application uses the data in these files (some are actually serialized objects) to instantiate objects and caches those in memory to use for real-time requests.
The advantage of this model is that it:
Allows the web nodes to cache all data in memory and not incur any IO overhead querying a database.
If the database ever goes down either unexpectedly or for maintenance windows, the web servers will continue to run and generate revenue. You can even fire up a web server without having to retrieve its initial data from the DB because all the data it needs are in files on its own disks.
Allows us to be completely horizontally scalable. If throughput suffers, we can just add a web server.
The disadvantage is that this caching and persistence layer adds complexity to the code that queries the database, packages the data, and unpackages it on the web server. Any time our domain model requires us to add entities, more of this "plumbing" has to be coded. This architecture has been in place for four years and there are probably better ways to tackle this.
One strategy I have been considering is using replication to replicate our master SQL Server database to local database instances installed on each web server. The web server application would use normal SQL/ORM techniques to instantiate objects. Here, we can still sustain a master database outage, and we would not have to write specialized caching code; we could instead use NHibernate to handle the persistence.
This seems like a more elegant solution, and I would like to see what others think or whether anyone else has alternatives to suggest.
I think you're overthinking this. SQL Server already has mechanisms available to you to handle these kinds of things.
First, implement a SQL Server cluster to protect your main database. You can fail over from node to node in the cluster without losing data, and downtime is a matter of seconds, max.
Second, implement database mirroring to protect from a cluster failure. Depending on whether you use synchronous or asynchronous mirroring, your mirrored server will either be updated in realtime or a few minutes behind. If you do it in realtime, you can fail over to the mirror automatically inside your app - SQL Server 2005 & above support embedding the mirror server's name in the connection string, so you don't even have to lift a finger. The app just connects to whatever server's live.
Between these two things, you're protected from just about any main database failure short of a datacenter-wide power outage or network outage, and there's none of the complexity of the replication stuff. That covers your high availability issue, and lets you answer the scaling question separately.
My favorite starting point for scaling is using three separate connection strings in your application, choosing the right one based on the needs of each query:
Realtime - Points directly at the one master server. All writes go to this connection string, and only the most mission-critical reads go here.
Near-Realtime - Points at a load balanced pool of read-only SQL Servers that are getting updated by replication or log shipping. In your original design, these lived on the web servers, but that's dangerous practice and a maintenance nightmare. SQL Server needs a lot of memory (not to mention money for licensing) and you don't want to be tied into adding a database server for every single web server.
Delayed Reporting - In your environment right now, it's going to point to the same load-balanced pool of subscribers, but down the road you can use a technology like log shipping to have a pool of servers 8-24 hours behind. These scale out really well, but the data's far behind. It's great for reporting, search, long-term history, and other non-realtime needs.
If you design your app to use those 3 connection strings from the start, scaling is a lot easier, and doesn't involve any coding complexity - just pick the right connection string.
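A minimal sketch of that pattern, with placeholder connection strings (the realtime one can also embed the mirror as a failover partner, as mentioned above):

```csharp
// Sketch: route each query to the tier that matches its freshness needs.
enum QueryIntent { Realtime, NearRealtime, DelayedReporting }

static class ConnectionStrings
{
    public static string For(QueryIntent intent)
    {
        switch (intent)
        {
            case QueryIntent.Realtime:       // writes + mission-critical reads
                return "Server=MASTERSQL;Failover Partner=MIRRORSQL;Database=Core;Integrated Security=SSPI";
            case QueryIntent.NearRealtime:   // load-balanced read-only subscribers
                return "Server=READPOOL;Database=Core;Integrated Security=SSPI";
            default:                         // reporting/search, hours behind
                return "Server=REPORTPOOL;Database=Core;Integrated Security=SSPI";
        }
    }
}
```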
Have you considered memcached? Since it:
is in memory
can run locally
is fully horizontally scalable
prevents the need to re-cache on each web server
It may fit the bill. Check out Google for lots of details and usage stories.
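For a concrete feel, a hedged sketch using Enyim.Caching, one common .NET memcached client (the library choice is an assumption, not the answer's):

```csharp
// Sketch: store and read serialized metadata through a local memcached node.
using System.Net;
using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class MemcachedSketch
{
    static void Main()
    {
        var config = new MemcachedClientConfiguration();
        config.Servers.Add(new IPEndPoint(IPAddress.Loopback, 11211)); // local daemon

        using (var client = new MemcachedClient(config))
        {
            client.Store(StoreMode.Set, "meta:homepage", "serialized-metadata-here");
            var cached = client.Get("meta:homepage"); // null on a miss -> fall back to SQL
        }
    }
}
```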
Just some additions to what RickNZ proposed above.
Since the master data you are caching won't change frequently, and probably only during some maintenance window, here is what you should do first on the database side:
Set up snapshot replication for the master tables you want to cache. Adding new entities will be equally easy.
On all the web servers, install SQL Express and subscribe to this publication.
Since this is not frequently changing data, you can rest assured there won't be much server resource usage, apart from the network trips for master data.
All the caching that was available via the previous mechanism is still available, minus all the headache that comes when you add new entities.
Next, you can leverage the .NET mechanisms suggested above. You won't face a memcached cluster failure unless your web server itself goes down. There is a lot available in .NET which a .NET pro can point out after this stage.
It seems to me that Windows Server AppFabric (AKA "Velocity") is exactly what you are looking for. From the introductory documentation:
Windows Server AppFabric provides a distributed in-memory application cache platform for developing scalable, available, and high-performance applications. AppFabric fuses memory across multiple computers to give a single unified cache view to applications. Applications can store any serializable CLR object without worrying about where the object gets stored. Scalability can be achieved by simply adding more computers on demand. The cache also allows for copies of data to be stored across the cluster, thus protecting data against failures. It runs as a service accessed over the network. In addition, Windows Server AppFabric provides seamless integration with ASP.NET that enables ASP.NET session objects to be stored in the distributed cache without having to write to databases. This increases both the performance and scalability of ASP.NET applications.
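A small sketch of what that looks like in code, assuming the AppFabric caching client assemblies are referenced and a cache cluster is already configured:

```csharp
// Sketch: put/get a serializable object in the distributed AppFabric cache.
using Microsoft.ApplicationServer.Caching;

class AppFabricSketch
{
    static void Main()
    {
        var factory = new DataCacheFactory();        // reads cluster settings from app.config
        DataCache cache = factory.GetDefaultCache();

        cache.Put("meta:homepage", "any-serializable-object");
        var cached = cache.Get("meta:homepage");     // served from whichever host stores it
    }
}
```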
Have you considered using SqlDependency caching?
You could also write the data to the local disk at the web tier, if you're concerned about initial start-up time or DB outages. But at least with a SqlDependency, you shouldn't have to poll the DB to look for changes. It can also be made relatively transparent.
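A minimal SqlDependency sketch (table and column names are placeholders; Service Broker must be enabled on the database, and the query has to follow the notification rules: two-part table names, an explicit column list, no SELECT *):

```csharp
// Sketch: get notified when the cached metadata changes instead of polling.
using System;
using System.Data.SqlClient;

class DependencyCache
{
    static void Main()
    {
        var cs = "Server=MASTERSQL;Database=Core;Integrated Security=SSPI";
        SqlDependency.Start(cs); // opens the notification infrastructure once per app

        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT MetaKey, MetaValue FROM dbo.SiteMetadata", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) =>
            {
                // Fires once when the result set changes: invalidate the in-memory
                // cache, then re-run the query to re-subscribe.
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read()) { /* populate the in-memory cache */ }
        }
    }
}
```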
In my experience, adding a DB instance on web servers generally doesn't work out too well from a scalability or performance perspective.
If you're concerned about performance and scalability, you might consider partitioning your data tier. The specifics depend on your app, but as an example, you could move read-only data onto a couple of SQL Express servers that are populated with replication.
In case it helps, I talk about this subject at length in my book (Ultra-Fast ASP.NET).
