I am using the SQL Azure Migration Wizard to migrate one of my databases to a different instance. It literally took more than 12 hours to do the BCP out alone. The only change I have made is to increase the packet size from 4096 to 65535 (the max). Is that wrong? And I am doing this from an AWS server which is part of the same subnet where the SQL Server RDS instance is hosted.
Analysis completed at 7/16/2016 1:53:31 AM -- UTC -> 7/16/2016 1:53:31 AM
Any issues discovered will be reported above.
Total processing time: 12 hours, 3 minutes and 14 seconds
There is a blog post from the SQL Server Customer Advisory Team (CAT) that goes into a few details about optimal settings to get data into and out of Azure SQL databases.
Best Practices for loading data to SQL Azure
When loading data to SQL Azure, it is advisable to split your data into multiple concurrent streams to achieve the best performance.
Vary the BCP batch size option to determine the best setting for your network and dataset (see the example commands after this list).
Add non-clustered indexes after loading data to SQL Azure.
If, while building large indexes, you see a throttling-related error message, retry using the online option.
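For concreteness, here is a minimal sketch of the kind of tuned BCP export/import pair this implies (server, database, login and table names are placeholders; -b is the batch size mentioned above and -a is the network packet size raised in the question):

    bcp dbo.MyTable out MyTable.dat -n -d MyDb -S source-server.database.windows.net -U myuser -P mypassword -a 16384 -b 10000
    bcp dbo.MyTable in MyTable.dat -n -d MyDb -S target-server.database.windows.net -U myuser -P mypassword -a 16384 -b 10000 -h "TABLOCK"

Running several such commands in parallel, one per table, gives the multiple concurrent streams recommended above; both -a and -b are worth benchmarking rather than assuming the maximum values are best.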
I tried to use Azure SQL Database with Tableau for visualizing data. The first issue was that Tableau takes a huge amount of time connecting to the DB on start (about 10-30 minutes). After connecting it works fast. I did some investigation and determined that the same thing happens when I try to expand the tables list of the database in SQL Server Management Studio, and when I run SELECT * FROM INFORMATION_SCHEMA.TABLES the execution time is ~10 min. The count of tables in the DB is 63. The tests were performed on a single connection to the DB, so there weren't any processes that could lock the server, and all other queries work fine.
Any ideas what exactly can affect performance so badly?
There could be many reasons for slowness:
Your Azure SQL Database could be in a lower performance tier
You might be located very far from your Azure SQL data center. Try to move the Azure SQL database to a data center closer to your location
Please refer to this link for any issues related to network latency: https://blogs.msdn.microsoft.com/azuresqldbsupport/2017/10/02/how-to-troubleshoot-slow-performance-after-moving-to-azure-due-to-network-latency/
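As a quick first check of the tier point above, a T-SQL sketch (standard system functions/views, run in the context of the Azure SQL database itself):

    SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
           DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
    -- Azure SQL Database also exposes this via a catalog view
    SELECT * FROM sys.database_service_objectives;

If this reports Basic or a low Standard objective, slow metadata queries are most likely just a DTU limit rather than anything Tableau-specific.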
I am working on a Web application based on EF with over 1 GB of seeded data. The application is hosted in Azure with a BizSpark subscription account.
I created an App Service Plan with the Web application associated with an App Service some time back. I started uploading data to SQL Server but this failed. I realized that the default size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTU and 10 GB yesterday, and uploaded the data around 5 days back.
After this, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following -
Database tables got wiped out
Database pricing tier was reset to Basic
I upgraded the database plan once again to 10 GB and 10 DTU yesterday night. I see that the change has not taken effect yet.
How long does it take to get the size fixed?
Will the tables have to be recreated?
9/11
I just tried uploading data via the bcp tool, but I got the following error:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new; yesterday, before I changed the DB size, I got the following error:
9/10
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency in the failure messages, or why the upload failed for the same data file.
Regards,
Lalit
Scaling up a database to a higher service tier, from Basic to Standard, should not take more than a few minutes. The schema and tables inside the database are left unchanged.
You may want to look into the Activity log of your Azure server to understand who initiated the scale down from Standard to Basic. Furthermore, you may want to turn on the Auditing feature to understand all the operations that are performed on your database.
On connectivity issues, you can start looking at this documentation page. It also looks like you have inserted rows several times into your database through the BCP command and this causes a space issue for the Basic tier.
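If it is easier to do from a query window than from the portal, here is a hedged sketch of verifying and re-applying the tier in T-SQL ('db' is the database name from the error above; the S0/10 GB values mirror the plan described in the question):

    -- check what the database is currently running as
    SELECT DATABASEPROPERTYEX('db', 'Edition')          AS edition,
           DATABASEPROPERTYEX('db', 'ServiceObjective') AS service_objective,
           DATABASEPROPERTYEX('db', 'MaxSizeInBytes')   AS max_size_bytes;
    -- scale it back up (connect to the logical server's master database to run this)
    ALTER DATABASE [db] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0', MAXSIZE = 10 GB);

The scale operation is asynchronous, so the properties above may report the old values for a few minutes after the statement returns.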
We recently faced an issue on a server where 12,000 concurrent users were trying to access an application but only 120 SQL Server connections were available.
The basic issue I've found is in the deployment architecture of the application and database, as below:
DB & App on Same Server
Data and log files of all databases, whether system or user, are on the system drive, i.e. C:\
Questions:
By looking at which metrics in perfmon, or by taking which steps, can I prove that the above points are the basic cause?
Other than the two causes mentioned above, how can I correlate perfmon metrics/stats with a particular SQL Server query?
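For the second question, a sketch using standard DMVs (not perfmon itself, but it ties waits and I/O directly to the statements involved):

    -- what each active request is running and what it is currently waiting on
    SELECT r.session_id, r.status, r.wait_type, r.wait_time,
           r.cpu_time, r.logical_reads, t.text AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id > 50;

    -- per-file I/O stalls, which show the cost of keeping data and log files on C:\
    SELECT DB_NAME(vfs.database_id) AS database_name, mf.physical_name,
           vfs.io_stall_read_ms, vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;

In perfmon, the counterparts to correlate against are roughly SQLServer:General Statistics (User Connections), Processor, and the LogicalDisk/PhysicalDisk Avg. Disk sec/Read and Avg. Disk sec/Write counters for the C: volume.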
What is the fastest way to backup/restore Azure SQL database?
The background: we have a database of size ~40 GB, and restoring it from a .bacpac file (~4 GB of compressed data) in the native way via the Azure SQL Database Import/Export Service takes up to 6-8 hours. Creating the .bacpac is also very slow and takes ~2 hours.
UPD.
Creating a database copy (which is, by the way, transactionally consistent) using CREATE DATABASE [DBBackup] AS COPY OF [DB] takes only 15 minutes for the 40 GB database, and the restore is a simple database rename.
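For reference, a sketch of that copy-and-rename flow ([DB] and [DBBackup] as in the statement above, [DB_old] is just a placeholder name; run against the logical server's master database):

    -- transactionally consistent copy
    CREATE DATABASE [DBBackup] AS COPY OF [DB];
    -- poll until the new database leaves the COPYING state
    SELECT name, state_desc FROM sys.databases WHERE name = 'DBBackup';
    -- the "restore" is then just a rename swap
    ALTER DATABASE [DB]       MODIFY NAME = [DB_old];
    ALTER DATABASE [DBBackup] MODIFY NAME = [DB];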
UPD. Dec 2014. Let me share our experience with the fastest DB migration scheme we ended up with.
First of all, the approach with a data-tier application (.bacpac) turned out not to be viable for us once the DB became slightly bigger, and it also will not work for you if you have at least one non-clustered index with a total size > 2 GB, unless you disable the non-clustered indexes before export - this is due to the Azure SQL transaction log limit.
We stuck with the SQL Azure Migration Wizard, which for data transfer just runs BCP for each table (the BCP parameters are configurable), and it's ~20% faster than the .bacpac approach.
Here are some pitfalls we encountered with the Migration Wizard:
We ran into encoding trouble with non-Unicode strings. Make sure that the BCP import and export run with the same code page/collation. It's the -C ... configuration switch; you can find the parameters with which BCP is called in the .config file of the MW application.
Take into account that the MW (at least the version current at the time of this writing) runs BCP with parameters that leave constraints in a non-trusted state, so do not forget to check all non-trusted constraints after the BCP import.
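A sketch of that post-import check in T-SQL (the table and constraint names in the last statement are placeholders):

    -- list foreign keys and check constraints left non-trusted after the BCP load
    SELECT name, OBJECT_NAME(parent_object_id) AS table_name
    FROM sys.foreign_keys WHERE is_not_trusted = 1
    UNION ALL
    SELECT name, OBJECT_NAME(parent_object_id)
    FROM sys.check_constraints WHERE is_not_trusted = 1;

    -- re-validate one of them (the double CHECK makes it trusted again)
    ALTER TABLE [dbo].[MyTable] WITH CHECK CHECK CONSTRAINT [FK_MyTable_Parent];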
If your database is 40 GB, it's long past time to consider having a redundant database server that's ready to go as soon as the main one fails.
You should have a second server running alongside the main DB server that has no actual routines except to sync with the main server on an hourly/daily basis (depending on how often your data changes, and how long it takes to run this process). You can also consider creating backups from this database server, instead of the main one.
If your main DB server goes down - for whatever reason - you can change the host address in your application to the backup database, and spend the 8 hours debugging your other server, instead of twiddling your thumbs waiting for the Azure Portal to do its thing while your clients complain.
Your database shouldn't be taking 6-8 hours to restore from backup though. If you are including upload/download time in that estimate, then you should consider storing your data in the Azure datacenter, as well as locally.
For more info see this article on Business Continuity on MSDN:
http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx
You'll want to specifically look at the Database Copies section, but the article is worth reading in full if your DB is so large.
Azure now supports point-in-time restore, geo-restore and geo-DR features. You can use a combination of these to get quick backup/restore. PiTR and geo-restore come at no additional cost, while you have to pay for a geo-replica.
There are multiple ways to do backup, restore and copy jobs on Azure.
Point in time restore.
The Azure service takes full backups, multiple differential backups, and t-log backups every 5 minutes.
Geo Restore
Same as point-in-time restore; the only difference is that it picks up a redundant copy from blob storage in a different region.
Geo-Replication
Similar to SQL Server Availability Groups: up to 4 async replicas with read capability. Select a region to become a hot standby.
More on Microsoft Site here. Blog here.
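If you prefer T-SQL to the portal for the geo-replication piece, it can be configured with something like the following (a sketch; [MyDb] and [partner-server] are placeholders, and the statement is run in the master database of the primary server):

    ALTER DATABASE [MyDb]
        ADD SECONDARY ON SERVER [partner-server]
        WITH (ALLOW_CONNECTIONS = ALL);  -- readable secondary in the standby region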
Azure SQL Database already has these local replicas that Liam is referring to. You can find more details on these three local replicas here http://social.technet.microsoft.com/wiki/contents/articles/1695.inside-windows-azure-sql-database.aspx#High_Availability_with_SQL_Azure
Also, SQL Database recently introduced new service tiers that include new point-in-time-restore. Full details at http://msdn.microsoft.com/en-us/library/azure/hh852669.aspx
The key is also to use the right data management strategy, one that helps meet your objective. The wrong architecture, and an approach of putting everything in the cloud, can prove disastrous... here's more to read on this - http://archdipesh.blogspot.com/2014/03/windows-azure-data-strategies-and.html
SQL Azure has a database size limit of 150 GB. I have read through their documentation several times and also searched online, but I'm still unclear about this: does using federations allow a developer to grow beyond a 150 GB database? For example, can I have several 150 GB federation members?
If not, how can I handle a database larger than 150 GB on Windows Azure?
Basically, how do I scale out beyond 150 GB on Windows Azure?
If there's no other way, is RDS a good alternative? (Please share any other alternatives.)
Currently it is not possible to have a single database larger than 150 GB.
The only approach is to either split the data into multiple databases - one account can have up to 149 user databases plus the master DB - or to use SQL Azure Federations. Currently, if I am not mistaken, the total number of federations supported is Int16.MaxValue - 1. Each federation is actually a separate database, transparent to the developer, which can be up to 150 GB.
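For a flavour of what the federations approach looks like in T-SQL (a sketch; CustomerFederation, cid and the boundary value are made-up names):

    -- in the root database: create a federation keyed on a bigint range
    CREATE FEDERATION CustomerFederation (cid BIGINT RANGE);
    -- scale out later by splitting a member in two at a chosen key value
    ALTER FEDERATION CustomerFederation SPLIT AT (cid = 1000000);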
However, SQL Azure Federations has its own pros and cons, along with some data access layer re-factoring. If you are interested you may check out these cool videos on SQL Azure Federations:
Building Scalable Apps with SQL Azure
Using SQL Azure Database Federations
UPDATE
I will not completely agree with #ryancrawcour. What he explains is just the tip of the iceberg lying below the water. The amount of re-factoring required really depends on how data is consumed by the application. I will just mention a few factors for consideration (which are by no means the complete picture). Consider any of the following:
Data that is common to all federation members (how do you get this data?)
A stored proc that post-processes data - you have to iterate over each and every federation member and execute that stored proc. There is no way to execute the stored proc once and process data across all the federation members.
Aggregating data that is spread across more than one federation member
Listing data from more than one federation member
These are just a few of the operations you will need to consider, and it is not a matter of "just change the connection string and execute one USE FEDERATION ... before each query". Actually, with SQL Azure Federations you don't need to change the connection string at all; it is the same SQL Azure connection string. The USE FEDERATION ... statement is what you have to execute before each query, but it is by no means the only thing. And what if one is using Entity Framework (model first, code first, or whatever)? Things get even more complicated and require a real understanding of SQL Azure Federations.
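To make that concrete, this is the kind of statement meant above (a sketch, reusing the made-up CustomerFederation/cid names from earlier; dbo.Orders is also a placeholder):

    -- route the current connection to the member that holds cid = 1045
    USE FEDERATION CustomerFederation (cid = 1045) WITH RESET, FILTERING = OFF;
    GO
    SELECT * FROM dbo.Orders;  -- now runs against that one federation member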
I would say that SQL Azure Federations is a different way of thinking about data, about modelling and normalizing.
UPDATE 2 - new Database sizes announced by Microsoft
As of 3 April 2014 the maximum size for a single database has been increased to 500 GB. The only available information to date is here. Be aware that the management portal still doesn't show this option (as of today, 4 April 2014, 15:00 GMT+0:00).
I was looking for these same answers a while ago. In addition to the answers Anton provided (which are very accurate), I found that you can make your WAVM with a SQL Server installation redundant through load balancing and mirroring.
The advantage of WASD is that everything is automated. E.g. when your WAVM instance is taken out of the rotation of the load balancer, you'll need to bring a new one up yourself; WASD takes care of all of this.
With WASD Federations you're able to scale to 75 TB of data (if I remember correctly), while with a WAVM running SQL Server you can scale to 16 TB tops.
Also, with WASD Federations you can divide the SQL workloads more granularly.
Regards,
Patriek
There is also the new Azure feature of persistent VMs (currently in preview) which will allow you to migrate your on-premises applications to cloud with minimal changes.
Further reading: Infrastructure as a Service Series: Running SQL Server in a Windows Azure Virtual Machine
This guide might be helpful as well.
Edit
Here is a comparison with SQL Azure
While considering your scaling options, be aware that, as of April 3, 2014, Microsoft announced upcoming changes to SQL Premium, including the ability to scale each SQL Database instance to 500 GB (along with geo-replication, self-service restore, and a higher uptime SLA). No date has been announced yet, but you can read about the announcement details here.
There is now a 1 terabyte tier available - see https://azure.microsoft.com/en-us/pricing/details/sql-database/ and look at the Premium level.