Windows/SQL Azure Maintenance Windows? - sql-server

I have a couple of SQL Azure databases deployed. They all seem to work just fine at most times of the day. However, I have recently noticed a consistent set of errors between about 5 AM and 7 AM PST (GMT-8). Does anyone know if there are maintenance windows, or anything else server-side at Azure, that would consistently cause errors during this window? I have already checked my code to verify there isn't anything on the client side that would cause errors with this kind of regularity.

You shouldn't be seeing any kind of daily outage window. If you are, I would recommend you open a support ticket and drive the issue through to resolution. Please also post your findings so we can all learn from it. :)

Related

SQL Server / SSISDB Jobs suddenly taking longer to complete

Beginning on 12/29/2022, several of our SSISDB packages started taking twice as long to finish their ETL, which delays the start of our company's daily reporting.
There are no errors that give much of a clue, and there is nothing out of the ordinary in the logs. The company has been on a code deployment freeze for 3 weeks now, so I'm pretty sure it isn't a code change.
The server CPU fluctuates between 30 and 60%, so I don't think it's a server resource issue. The phenomenon affects various ETLs. I have looked into reasons why these jobs might "hang" or go "runaway", but I can find no discernible explanation.
Can you recommend steps for debugging? We're on SQL Server Management Studio 2018.
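For reference, this is roughly how I've been pulling package durations out of SSISDB to confirm the slowdown (a sketch; the cutoff date is just chosen to bracket 12/29):

    -- Run in the SSISDB database: compare execution durations around the slowdown.
    SELECT folder_name, project_name, package_name, start_time,
           DATEDIFF(second, start_time, end_time) AS duration_seconds
    FROM catalog.executions
    WHERE start_time >= '2022-12-01'
    ORDER BY start_time DESC;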

Avoiding "schema drift detected" errors in SSDT comparisons

I'm trying to update a SQL Server project in Visual Studio 2019 by using the SSDT schema comparison. My source is a running database server, the destination is the VS SQL Server project.
When the comparison is done and I click "Update", I get the message
Source schema drift detected. Press Compare to refresh the comparison
No matter how many times I refresh the comparison, I always get the same result.
I have tried various connection tweaks (read-only intent, asynchronous processing, multiple active result sets) in the hopes that I can make the comparison run faster and update the project before the drift happens, but to no avail. I have also tried reducing the types of objects included in the comparison, but have not been able to reduce it enough to prevent drift from being detected.
I think the biggest issue I have is that aside from the "schema drift detected" message, I feel like I'm shooting in the dark. By that I mean that I have no idea what is causing SSDT to detect drift, and therefore I can't work around it.
I tried running the SQL Profiler to capture what SSDT is doing so I could find where SSDT is detecting drift. However, I haven't been able to find any query that gives different results when run multiple times within a short period.
So in conclusion, my questions are:
What does SSDT look at to determine when the database schema has drifted?
How can I update my SQL Server project when it always detects schema drift?
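In case it helps diagnose, I have also tried watching for schema changes on the source while the comparison runs, along these lines (my guess at what might trigger drift detection, not SSDT's actual check):

    -- Look for source objects modified in the last 30 minutes.
    SELECT name, type_desc, create_date, modify_date
    FROM sys.objects
    WHERE modify_date >= DATEADD(minute, -30, GETDATE())
    ORDER BY modify_date DESC;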
I also struggled for months to find the cause of the same error. I was already thinking about reinstalling Windows 10 on my laptop. I won't list all the dead ends. In my final desperation, I copied the SQL Server database and the VS project to another machine, and there the comparison worked without a hitch. That raised the suspicion that the error wasn't in VS, but rather that my SQL Server was confusing VS.
I have SQL Server 2012. I put the latest update on it (SP4) and, wonder of wonders, compare and sync started working perfectly right away. Of course, now before every update I pray a little that I won't run into the "Source schema drift detected" message.
I have been unsuccessfully fighting this annoying error for MANY SSDT versions.
Searching for it you will see multiple places where it is claimed to be fixed, WHICH IS FALSE, as it is happening right now with VS 2022 SSDT.
In my case, it ONLY happens when comparing against ONE out of the 5 database servers I regularly use the tool with.
The only workaround I have found that usually works is to REBOOT the destination database server (NOT just cycle the SQL Server Service) and then run the SSDT compare QUICKLY!
As the server that this happens on is an integration server running on a VM in my local network, I can bounce the server, but in other scenarios this would be a show-stopper.
IMO the most onerous thing about this issue is that you cannot even generate the script to copy/paste into SSMS, which is how I often use the tool.
This issue has not been fixed for YEARS and is very intermittent, so I have no hope of seeing it actually fixed - I hope this workaround is helpful to someone.

Logs on Azure SQL Database

We had an issue yesterday that we are trying to figure out. Out of nowhere, everything on the database changed.
We know it was an UPDATE without a WHERE clause, but we are just a few developers, so if any of us had done it, we would know.
It happened at a strange time of day, very late at night, and only a few IP addresses are allowed into the server.
Is there any way to get the full log, with IPs, of all the transactions on Azure?
Has anyone had a similar problem? Could it be a break-in?
Are there any software protections or scripts we can add to limit this?
Is there any way to get the full log, with IPs, of all the transactions on Azure?
A few options I can think of. Even on-premises this is not possible if you don't have the right measures in place to detect it. Otherwise, contact support with a request to read the transaction log of the database (Azure support won't read the log unless you have a business justification, as it involves many teams for safety reasons).
1. You could use the activity log to get more details.
2. There is the sys.event_log (Azure SQL Database) DMV, which shows successful and failed connections. You can correlate these to users based on your office setup, but it won't show what was actually executed.
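Something along these lines against the master database of the logical server, for example (a sketch; the time window is arbitrary):

    -- Connect to the master database of the Azure SQL logical server.
    SELECT start_time, end_time, event_category, event_type,
           event_subtype_desc, severity, event_count, description
    FROM sys.event_log
    WHERE start_time >= DATEADD(day, -2, SYSUTCDATETIME())
    ORDER BY start_time DESC;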
To avoid this happening again, audit the data. Azure offers many features for knowing what's happening, for example:
1. Get started with SQL Database auditing.
2. Enable rules so you get alerted when something happens.
Enable Auditing and Threat Detection on the server if you haven't already.
For more information, please read this page.

SQL Server: the reason for process blocking in the past

We are using SQL Server 2012 Enterprise edition.
Normally we get hardly any blocked processes, but last weekend we experienced a very unusual situation. Within 2 hours we got more "blocked process" alerts than in the whole of the last year. There were a few hundred alerts in that time. Then, suddenly, without any intervention from anyone, everything went back to normal, and we haven't had any blocked processes since. I want to prevent this situation from occurring again.
I am well aware how to find what can be causing blocking at present, but I have very little idea how to find what caused the block in the past, which is currently resolved.
I checked error logs in SQL Server Management Studio, but there is nothing there under the date when blocking occurred. There is also nothing unusual in the Windows event viewer. Where else should I check?
Could you please help?
From what you describe, I'm not too sure you will actually find the cause of the past blocking if you did not actively set up tracing, i.e. have the blocked process threshold set and an alert configured to capture that trace information. The situation you describe is interesting and definitely worth monitoring.
Here is an article on blocked process threshold configuration in SQL Server and a link through to Alerts configuration.
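As a rough sketch of that setup (the 20-second threshold is just an example; the Extended Events session gives you history to look at after the fact):

    -- Emit a blocked process report for anything blocked longer than 20 seconds.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'blocked process threshold (s)', 20;
    RECONFIGURE;

    -- Capture the reports so there is something to investigate next time.
    CREATE EVENT SESSION blocked_processes ON SERVER
        ADD EVENT sqlserver.blocked_process_report
        ADD TARGET package0.event_file (SET filename = N'blocked_processes');
    ALTER EVENT SESSION blocked_processes ON SERVER STATE = START;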
Hope this helps

What is your biggest SQL Server mistake or embarrassing incident?

You know the one I am talking about.
We have all been there at some point: that awful feeling of dread and the realisation of "oh my god, did that actually just happen?"
Sure, you can laugh about it now though, right? So go on and share your SQL Server mishaps with us.
Even better if you can detail how you resolved your issue so that we can learn from our mistakes together.
So, in order to get the ball rolling, I will go first...
It was back in my early years as a junior SQL Server Guru. I was racing around Enterprise Manager, performing a few admin duties. You know how it is, checking a few logs, ensuring the backups ran ok, a little database housekeeping, pretty much going about business on autopilot and hitting the enter key on the usual prompts that pop up.
Oh wait, was that an "Are you sure you wish to delete this table?" prompt? Too late!
Just to confirm for any aspiring DBAs out there: deleting a production table is a very, very bad thing!
Needless to say a world record was promptly set for the fastest database restore to a new database, swiftly followed by a table migration, oh yeah. Everyone else was none the wiser of course but still a valuable lesson learnt. Concentrate!
I suppose everyone has missed the WHERE clause off a DELETE or UPDATE at some point...
Inserted 5 million test persons into a production database. The biggest mistake, in my opinion, was letting me have write access to the production DB in the first place. :P Bad DBA!
My biggest SQL Server mistake was assuming it was as capable as Oracle when it came to concurrency.
Let me explain.
When it comes to transaction isolation levels in SQL Server you have two choices:
1. Dirty reads: transactions can see uncommitted data (from other transactions); or
2. Selects block on uncommitted updates.
I believe these come from ANSI SQL.
(2) is the default isolation level and (imho) the lesser of two evils. But it's a huge problem for any long-running process. I had to do a batch load of data and could only do it out of hours because it killed the website while it was running (it took 10-20 minutes as it was inserting half a million records).
Oracle on the other hand has MVCC. This basically means every transaction will see a consistent view of the data. They won't see uncommitted data (unless you set the isolation level to do that). Nor do they block on uncommitted transactions (I was stunned at the idea an allegedly enterprise database would consider this acceptable on a concurrency basis).
Suffice it to say, it was a learning experience.
And ya know what? Even MySQL has MVCC.
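(Footnote: SQL Server 2005 onwards does offer row versioning if you switch it on; a minimal sketch, assuming a database named MyDb:

    -- Readers see the last committed row version instead of blocking on writers.
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Or opt in per transaction with full snapshot isolation:
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

It is not on by default, which was the root of my surprise.)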
I changed all of the prices to zero on a high-volume, production, eCommerce site. I had to take the site down and restore the DB from backup.. VERY UGLY.
Luckily, that was a LOOONG time ago.
Forgetting to highlight the WHERE clause when updating or deleting.
Scripting procs with "drop dependent objects" checked, and running that on production.
I was working on the payment system on a large online business. Multi-million Euro business.
Get a script from a colleague with a few small updates.
Run it on production.
Get an error report from the helpdesk 30 minutes later, complaining about no purchases in the last 30 minutes.
Discover that all connections are waiting on a table lock to be released
Discover that the script from my colleague started with an explicit BEGIN TRANSACTION and expected me to manually type COMMIT TRANSACTION at the end.
Explain to boss why 30 minutes of sales were lost.
Blame myself for not reading the script documentation properly.
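The habit I picked up afterwards: after running anyone else's script, check whether a transaction was left open (a small sketch):

    -- A non-zero count means something is still holding locks.
    IF @@TRANCOUNT > 0
        PRINT 'Open transaction detected - COMMIT or ROLLBACK before walking away!';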
Starting off a restore from last week onto a production instance instead of the development instance. Not a good morning.
I've seen plenty other people miss a WHERE clause.
Myself, I always type the WHERE clause first, and then go back to the start of the line and type in the rest of the query :)
Thankfully, you only ever make one cock-up like that before you realise that using transactions really is very, very trivial. I've amended thousands of records by accident before; luckily, rollback is there...
If you're querying the live environment without having thoroughly tested your scripts then I think embarrassing should really be foolhardy or perhaps unprofessional.
One of my favorites happened in an automated import when the client changed the data structure without telling us first. The Social Security number column and the amount of money we were to pay the person got switched. Luckily we found it before the system tried to pay someone his social security number. We now have checks in automated imports that look for funny data before running and stop it if the data seems odd.
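The checks are roughly of this shape (the table and column names here are made up for illustration):

    -- Halt the import if the staged data looks swapped or malformed.
    IF EXISTS (
        SELECT 1
        FROM staging_payments                  -- hypothetical staging table
        WHERE pay_amount > 100000              -- amounts that look like SSNs
           OR LEN(REPLACE(ssn, '-', '')) <> 9  -- SSNs that don't look like SSNs
    )
        RAISERROR('Suspicious data in staging - import halted.', 16, 1);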
Like zabzonk said, forgot the WHERE clause on an update or two in my day.
We had an old application that didn't handle syncing name updates with our HR database very efficiently, mainly due to the way they keyed in changes to titles. Anyway, a certain woman got married, and I had to write a database change request to update her last name. I forgot the WHERE clause, and everyone in said application was now named Allison Smith.
Columns are nullable, and parameter values fail to retrieve the correct information...
The biggest mistake was giving developers "write" access to the production DB.
Many DEV and TEST records were inserted/overwritten and backed up to production until it was wisely suggested (by me!) to allow read access only!
Sort of SQL Server related. I remember learning how important it is to always dispose of a SqlDataReader. I had a system that worked fine in development and happened to be running against the ERP database. In production, it brought down the database, because I assumed it was enough to close the SqlConnection; I had hundreds, if not thousands, of open connections.
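A query like this would have made the leak obvious from the database side (a sketch):

    -- Count connections per client machine and program.
    SELECT c.client_net_address, s.program_name, COUNT(*) AS connection_count
    FROM sys.dm_exec_connections AS c
    JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
    GROUP BY c.client_net_address, s.program_name
    ORDER BY connection_count DESC;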
At the start of my co-op term I ended up expiring access for everyone who used this particular system (which was used by a lot of applications in my Province). In my defense, I was new to SQL Server Management Studio and didn't know that you could 'open' tables and edit specific entries with a SQL statement.
I expired all the user access with a simple UPDATE statement (access to this application was granted by a user account on the SQL box as well as a specific entry in an access table), but when I went to highlight that very statement and run it, I didn't include the WHERE clause.
A common mistake, I'm told. The quick fix was to unexpire everyone's accounts (including accounts that were supposed to be expired) until the database could be restored from a backup. Now I either open tables and select specific entries with SQL, or I wrap absolutely everything inside a transaction followed by an immediate rollback.
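That wrap-and-rollback habit looks roughly like this (table and column names are illustrative):

    BEGIN TRANSACTION;

    UPDATE user_access                       -- hypothetical table
    SET expired = 1
    WHERE user_id = 42;                      -- the clause you hope you remembered

    SELECT @@ROWCOUNT AS rows_affected;      -- sanity-check before committing

    ROLLBACK TRANSACTION;  -- swap for COMMIT only once the count looks right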
Our IT Ops decided to upgrade to SQL 2005 from SQL 2000.
The next Monday, users were asking why their app didn't work. Errors like:
DTS Not found etc.
This led to a nice set of 3 Saturdays in the office rebuilding the packages in SSIS, with a good overtime package :)
Not exactly a "mistake", but back when I was first learning PHP and MySQL I would spend hours daily trying to figure out why my code was not working, not knowing that I had the wrong password/username/host/database credentials for my SQL database. You can't believe how much time I wasted on that, and to make it even worse, this was not a one-time incident. But LOL, it's all good, it builds character.
I once, and only once, typed something similar to the following:
psql> UPDATE big_table SET foo=0; WHERE bar=123
I managed to fix the error rather quickly. Since that and another error my updates always start out as:
psql> UPDATE table SET WHERE foo='bar';
Much easier to avoid errors that way.
I worked with a junior developer once who got confused and called "Drop" instead of "Delete" on a table.
It was a long night working to get the backup restored...
Edit: I should have mentioned, this was in a production environment, and the table was full of data...
This was before the days when Google could help. I didn't encounter this problem with SQL Server, but with its ugly older cousin, Sybase.
I updated a table schema in a production environment. Not understanding at the time that stored procedures that use SELECT * must be recompiled to pick up new fields, I proceeded to spend the next eight hours trying to figure out why the stored procedure that performed a key piece of work kept failing. Only after a server reboot did I clue in.
Losing thousands of dollars and hundreds of (end user) man-hours at your flagship customer's site is quite an educational experience. Highly recommended!!
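These days a reboot shouldn't be needed; marking the proc for recompilation does the trick. A sketch, assuming a proc named dbo.MyProc:

    -- Force the stored procedure to recompile on its next execution.
    EXEC sp_recompile N'dbo.MyProc';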
A good few years ago I was working on a client's site that had a nice script to clear the dev environment of all orders, carts, and customers, to ease testing. So of course I put the damn script into the production server's Query Analyzer and ran it.
It took some 5-6 minutes to run, too; I was bitching about how slow the dev server was until the number of deleted rows came up. :)
Fortunately I had just run a full backup, since I was about to do an installation.
Beyond the typical WHERE clause error: I ran a DROP on the wrong DB and thus had to run a restore. Now I triple-check my server name. Thankfully I had a good backup.
I set the maximum server memory to 0, thinking at the time that it would tell SQL Server to use all available memory (it was early). No such luck. SQL Server decided to use only 16 MB, and I had to connect in single-user mode to change the setting back.
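For anyone who repeats this, the fix from single-user mode was along these lines (2147483647 MB is the default, effectively unlimited):

    -- Restore max server memory to the default cap; the value is in MB.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 2147483647;
    RECONFIGURE;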
Hit "Restore" instead of "Backup" in Management Studio.
