Database enumeration fails on my Azure hosted SQL Server - sql-server

I can connect to my database server (Azure hosted) with SQL Server Management Studio, but I am unable to enumerate its databases.
I am seeing a similar issue on the Azure portal.
The fact is, these databases are up and running. My application continues to function, and I am also able to issue queries via the SSMS query window. The main problem here is the inability to enumerate databases on the server. If you have any idea how I might solve this, I would appreciate any advice you can provide!
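As a workaround while the Object Explorer tree and the portal refuse to list databases, the databases can still be enumerated with a plain query against the logical server's master database (a minimal sketch; it only assumes the connection itself works, as described above):

```sql
-- Connect to the logical server's master database and list its databases.
-- sys.databases works even when the SSMS Object Explorer tree fails to expand.
SELECT name, state_desc, create_date
FROM sys.databases
ORDER BY name;
```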

This was an issue with the Azure platform. Here is what we received from our hosting provider in response to our inquiry:
SQL and Open-Source Database Service Management Issues - East US - Mitigated (Tracking ID 8K76-LZ8)
Summary of impact: Between approximately 13:30 and 16:30 UTC on 19 May 2020, a subset of customers in East US may have intermittently experienced timeouts and latency issues when processing service management operations - such as create, update, delete - for SQL resources hosted in this region, including Azure SQL Database and open-source databases such as Azure Database for PostgreSQL, Azure Database for MySQL, and Azure Database for MariaDB. Some customers may have also encountered issues or experienced latency when loading database management tools or expanding database resources in SQL Server Management Studio (SSMS). Retries may have been successful.
Preliminary root cause: After an initial investigation, engineers identified that an increased volume of requests had consumed a large number of available private endpoint connections for the SQL control plane in East US, thus leading to failures for subsequent control plane operations in the region.
Mitigation: Once the preliminary root cause was identified, engineering teams terminated connections from the source of the increased load.
Next steps: We apologize for the impact to affected customers. Engineers will continue to investigate the underlying cause and take steps to prevent future occurrences, which include:
• Further investigating and understanding the source of increased workload on the SQL platform
• Increasing the resiliency of the SQL platform to better account for this type of work volume
[End of Report]
Anyway, I am posting this so that anyone who encounters this issue in the future knows: it may not be a problem on your side. It may be an issue with the Azure hosting platform, in which case you will just need to wait until engineers resolve it at their end.

Related

SQL Server high availability solutions in Amazon AWS

I am in the process of migrating to Amazon AWS and need a SQL Server high availability solution. The current licence that I have is SQL Server 2016 Standard.
At this time Amazon does not support shared volumes for Windows instances, so I am not able to build a regular SQL Server failover cluster solution. That is the one where, if the entire server goes down, the standby server picks up the slack and continues writing to the same storage. My only option is Always On Basic Availability Groups. As I start to get familiar with this feature, I find it very maintenance intensive and can see it becoming a problem when dealing with thousands of databases. In my case I have about 5,000 databases, mostly small in size (600 MB or less each). My questions: is Amazon not a viable hosting environment for a full SQL Server failover solution? Is one Basic Availability Group per database a viable solution?
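For context, a Basic Availability Group in SQL Server 2016 Standard covers exactly one database and two replicas, so "one per database" really does mean one availability group per database. A rough sketch of what each one looks like is below; the server names, endpoint URLs, and database name are hypothetical, and the mirroring endpoints, logins/certificates, and initial full backups must already be in place:

```sql
-- Hypothetical Basic Availability Group for a single database (SQL Server 2016 Standard).
-- Run on the intended primary replica; SQLNODE1/SQLNODE2 and MyDb are placeholder names.
CREATE AVAILABILITY GROUP [AG_MyDb]
WITH (BASIC)
FOR DATABASE [MyDb]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode1.example.internal:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode2.example.internal:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);

-- On the secondary, the group is then joined with:
--   ALTER AVAILABILITY GROUP [AG_MyDb] JOIN;
-- Repeating this for ~5,000 databases is exactly the maintenance burden described above.
```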

What steps should be considered for CPU/memory shortage in SQL Server

We recently faced an issue on a server where 12,000 concurrent users were trying to access an application but only 120 SQL Server connections were available.
The basic issue I've found is in the architecture of the application and database deployment:
DB and app on the same server
Data and log files of all databases, whether system or user, are on the system drive, i.e. C:\
Questions:
Which perfmon metrics should I look at, or which steps should I take, to prove that the points above are the root cause?
Other than the two causes mentioned above, how do I correlate perfmon metrics/stats with a particular SQL Server query?
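Not a complete answer, but as a starting point you can confirm the file-placement issue and tie activity back to individual queries from inside SQL Server itself (a sketch; it assumes VIEW SERVER STATE permission, and the perfmon correlation is then done by matching timestamps against these snapshots):

```sql
-- 1) Prove that data/log files live on the system drive.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,
       physical_name
FROM sys.master_files
WHERE physical_name LIKE 'C:\%';

-- 2) See what is running right now, with per-request CPU and the query text,
--    so a perfmon spike can be matched to a specific query at the same time.
SELECT r.session_id,
       r.status,
       r.cpu_time,
       r.total_elapsed_time,
       r.wait_type,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY r.cpu_time DESC;
```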

Will scaling an Azure DB from Web to a new tier cause availability issues?

As far as I can tell, scaling an Azure DB from the retired tiers to the new tiers is simply a matter of using the scale function in the Azure portal.
What I cannot seem to find anywhere is a definitive answer as to whether there are any connection string changes required (or any other issues that could cause unavailability) when scaling from the retired to new tiers.
I have a production database that needs to be upgraded; service interruption would be very bad.
The scale operation will not change the connection string. You could face some (very small, but finite) amount of downtime while the switchover happens.
Please refer to the documentation for details. Note that you will have to suspend geo-replication (if already enabled) for the duration of the upgrade.
Technically it will be the same server, same connection string, same everything except version and features.
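If you would rather script the tier change than use the portal's scale blade, the same operation can be issued as T-SQL against the server's master database (a sketch only; MyDb and the Standard/S2 objective are assumptions, so substitute your own target tier):

```sql
-- Run against the master database of the Azure SQL logical server.
-- Moves the (hypothetical) database MyDb to the Standard tier, S2 objective;
-- the operation is asynchronous and the statement returns immediately.
ALTER DATABASE [MyDb]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

-- Check the current edition/objective afterwards.
SELECT DATABASEPROPERTYEX('MyDb', 'Edition')          AS edition,
       DATABASEPROPERTYEX('MyDb', 'ServiceObjective') AS service_objective;
```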
But I would be concerned about the following statement from the documentation:
The duration of the upgrade depends on the size, edition and number of databases in the server. The upgrade process can run for hours to days, especially for servers that have databases:
Larger than 50 GB, or
At a non-premium service tier
Which is kind of concerning.
What I would do, if possible, is:
Put my service into read-only mode (put any writes to the DB on hold)
Create a new DB on the same server from the existing one with the CREATE DATABASE ... AS COPY OF ... command (see the sketch below)
When the copy is ready, export the new DB to a BACPAC, and delete the copy once the export has finished
Perform the upgrade.
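For the copy step, the statement and its progress check look roughly like this (a sketch; MyDb and MyDb_PreUpgradeCopy are placeholder names, and the CREATE DATABASE must be run against the server's master database):

```sql
-- Run in the master database of the same logical server (placeholder names).
CREATE DATABASE [MyDb_PreUpgradeCopy] AS COPY OF [MyDb];

-- The copy is asynchronous; poll until it reports ONLINE before exporting it.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'MyDb_PreUpgradeCopy';

-- More detail on copy progress, if needed:
SELECT * FROM sys.dm_database_copies;
```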
In theory you could do the process without putting your system into read-only mode, but I am just taking extra precautions.
And yes, you also have to be aware that you are upgrading your Azure SQL DB server, not just a single database.

Windows Azure VM (IaaS) unexpected restarts [closed]

Closed. This question is off-topic and is not currently accepting answers. Closed 9 years ago.
I have a number of VMs on Windows Azure (IaaS) hosting a website. There are a number of load-balanced front-end VMs, all connecting to a single VM with SQL Express. It works well.
However!
I'm getting random restarts across all the VMs. As for the front-end VMs (with IIS), since they are load balanced, the site is not affected and the load balancer adjusts accordingly. But when the VM hosting the database is restarted, the site is down until the DB is up again. It takes < 3min to boot up, but that's still unacceptable if it happens frequently enough. Although the restarts are relatively rare (2 a month per VM), sometimes we get a week with 4 restarts per VM, which gets frustratingly annoying. Not all VMs restart as frequently and I cannot figure out a pattern. Restarts are also unexpected (pull-the-power-cable type of restarts, and not shutdowns). Datacenter is West Europe.
Microsoft emphasises that the SLA only covers 2 VMs in an availability set, which I can't have for the database VM (and the Enterprise SQL edition costs an arm and three legs). Also, SQL Azure isn't an option as the application is very chatty, and the SQL Azure database was being throttled during peak times (though it works super smoothly with SQL Express on a Medium VM!).
My question(s):
Is it normal to have so many restarts? Are there other people having the same problem? What is your experience with such an environment on Azure? What can I do to minimise this downtime?
Thanks all!
Is it normal to have so many restarts?
Yes, this can happen in a given month. You need to stand up SQL Server in high-availability mode to really get around it.
Yes, it does cost an arm and a leg. ;(
What is your experience with such an environment on Azure?
Some months are really good, some months are bad; it depends on your cluster and which datacenter you are in. MS has a mixed range of hardware out in their datacenters. That does not mean they are running on old laptops in some datacenters, but it does mean, in my experience, that the newer datacenters tend to have better kit in them and thus fewer restarts. E.g. we use US East.
What can I do to minimise this downtime?
High availability with a witness is the only way to give you availability in a VM, and yes, it costs an arm and a leg.
Other serious options: Cache, cache. You should use in-process cache and Azure Cache, and try to minimize your calls to the database. This might make your app less chatty and allow you to step back to SQL Azure, or at least give the failover enough room to recover.
Queues. Queues would help your application recover and give your users a "we are working on it" message.
Use SQL Azure as a failover. Sync data using SQL Azure Data Sync from on-premises (not sure this works with Express) to SQL Azure, and write your app code to pick up the connection error and fail over.
Look at using other parts of Azure for parts of your app to reduce the number of calls coming into SQL. E.g. can you move stuff to table storage?
Hope this gives you some ideas.
Windows Azure Infrastructure Services (IaaS) has only been in General Availability (GA, or production) about 3 weeks, since April 16 (see announcement here). Prior to GA, there was no SLA and you would have seen more frequent OS restarts as various patches were still being applied to the Host OS. Are you saying that this pattern has continued at the same velocity since April 16?
Now that IaaS is GA, I wouldn't expect 4 restarts in a week. That said: there are several reasons you'd see a restart:
Host hardware failure (this takes down all Guest OSs running on that host)
Host software update (and only if it requires a restart of the Host OS). Host OS reboots shouldn't be happening at the frequency you're seeing.
Guest OS issues. Here's where things depart from PaaS (web/worker role Cloud Services). In IaaS, no Guest OS maintenance is done by Azure; this is all in your hands. It's possible to get reboots if Windows Updates are installed automatically. You could also be running into an application-level issue that causes the box to become unresponsive for a long period of time, resulting in the Azure fabric controller rebooting your box because it thinks it's unhealthy. And your app could somehow be crashing the box.
If you've ruled out application error and are sure the VMs are in good health at the time they're rebooting, you may need to open a support ticket with Microsoft to help diagnose the issue further.

Overcoming the Windows Azure SQL Database 150 GB size limitation

SQL Azure has a database size limit of 150 GB. I have read through their documentation several times and also searched online, but I'm unclear about this: does using Federations allow a developer to grow beyond a 150 GB database? For example, can I have several 150 GB federation members?
If not, how can I handle a database larger than 150 GB on Windows Azure?
Basically, how do I scale out beyond 150 GB on Windows Azure?
If there's no other way, is RDS a good alternative? (Please share any other alternatives.)
Currently it is not possible to have a single database larger than 150 GB.
The only approach is either to split the data into multiple databases (one account can have up to 149 user databases plus the master DB) or to use SQL Azure Federations. Currently, if I am not mistaken, the total number of federation members supported is Int16.MaxValue - 1. Each federation member is actually a separate database, transparent to the developer, which can be up to 150 GB.
However, SQL Azure Federations have their own pros and cons, and require some data access layer refactoring. If you are interested you may check out these cool videos on SQL Azure Federations:
Building Scalable Apps with SQL Azure
Using SQL Azure Database Federations
UPDATE
I will not completely agree with #ryancrawcour. What he explains is just the tip of the iceberg; the bulk of it lies below the water. The amount of refactoring required really depends on how data is consumed by the application. I will just mention a few factors to consider (by no means the complete picture). Consider any of the following:
Data that is common to all federation members (how do you get this data?)
Stored procedures that post-process data: you have to iterate over each and every federation member and execute that stored procedure there. There is no way to execute the stored procedure once and have it process data in all the federation members.
Aggregating data that is spread across more than one federation member
Listing data from more than one federation member.
These are just a few of the operations you will need to consider, and it takes more than "just change the connection string and execute one USE FEDERATION ..." before each query. Actually, using SQL Azure Federations you don't need to change the connection string at all; it is the same SQL Azure connection string. The "USE FEDERATION ..." statement is what you have to execute before each query, but it is far from the only thing. And what if one is using Entity Framework (model first, code first, or whatever)? Things get even more complicated and need a real understanding of SQL Azure Federations.
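To make the "USE FEDERATION before each query" point concrete, a typical data-access call against a federated database looked roughly like this (a sketch; CustomerFederation, the cust_id distribution key, and the Orders table are hypothetical, and Federations have since been retired by Microsoft):

```sql
-- Same connection string as always; first route the connection to the
-- federation member that holds the atomic unit cust_id = 1234.
USE FEDERATION CustomerFederation (cust_id = 1234) WITH RESET, FILTERING = ON;
GO

-- With FILTERING = ON, queries are automatically scoped to that atomic unit.
SELECT OrderId, OrderDate, Total
FROM dbo.Orders;
GO

-- Route back to the federation root when federation-wide metadata is needed.
USE FEDERATION ROOT WITH RESET;
GO
```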
I would say that SQL Azure Federations are a different way of thinking about data, about modelling and normalizing.
UPDATE 2 - new Database sizes announced by Microsoft
As of 3 April 2014 the maximum size for a single database has been increased to 500 GB. The only available information to date is here. Be aware that the management portal still doesn't show this option (as of today: 4 April 2014, 15:00 GMT+0:00).
I was looking for these same answers a while ago. In addition to the answers Anton provided (which are very accurate), I found that you can make your WAVM with SQL Server installation redundant through load balancing and mirroring.
The advantage of WASD is that everything is automated. E.g. when your WAVM instance is taken out of the rotation of the load balancer, you'll need to bring a new one up yourself; WASD takes care of all of this.
With WASD Federations you're able to scale to 75 TB of data (if I remember correctly), while with a WAVM with SQL Server you can scale to 16 TB tops.
Also, with WASD Federations you can divide the SQL workloads more granularly.
Regards,
Patriek
There is also the new Azure feature of persistent VMs (currently in preview), which will allow you to migrate your on-premises applications to the cloud with minimal changes.
Further reading: Infrastructure as a Service Series: Running SQL Server in a Windows Azure Virtual Machine
This guide might be helpful as well.
Edit
Here is a comparison with SQL Azure
While considering your scale options, be aware that, as of April 3 2014, Microsoft announced upcoming changes to SQL Premium, including the ability to scale each SQL Database instance to 500 GB (along with geo-replication, self-service restore, and a higher uptime SLA). No date has been announced yet, but you can read about the announcement details here.
There is now a 1 terabyte tier available: see https://azure.microsoft.com/en-us/pricing/details/sql-database/ and look at the Premium level.
