I am working on a web application based on EF (Entity Framework) with over 1 GB of seeded data. The application is hosted in Azure under a BizSpark subscription account.
Some time back I created an App Service Plan with the web application associated with an App Service. I started uploading data to SQL Server, but this failed. I realized that the default size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTUs and 10 GB yesterday; the data itself was uploaded around five days back.
After that, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following:
Database tables got wiped out
Database pricing tier was reset to Basic
I upgraded the database plan once again to 10 GB and 10 DTUs yesterday night. I see that the change has not taken effect yet.
How long does it take to get the size fixed?
Will the tables have to be recreated?
Update 9/11:
I just tried uploading data via the bcp tool, but I got the following error:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new; yesterday (9/10), before I changed the DB size, I got the following error:
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency in the failure messages, or even why the upload failed at all for the same data file.
Regards,
Lalit
Scaling a database up from Basic to Standard should not take more than a few minutes. The schemas and tables inside the database are left unchanged.
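For reference, the scale-up can also be issued in T-SQL. This is a minimal sketch, assuming a database named 'db' and the Standard S0 objective (10 DTUs) with a 10 GB cap you mentioned; adjust the names and sizes to your case:

```sql
-- Scale the Azure SQL database to Standard S0 with a 10 GB size cap.
-- The statement returns immediately; the tier change completes asynchronously.
ALTER DATABASE [db]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0', MAXSIZE = 10 GB);
```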
You may want to look into the Activity Log of your Azure SQL server to understand who initiated the scale-down from Standard to Basic. Furthermore, you may want to turn on the Auditing feature to see all the operations that are performed on your database.
For the connectivity issues, you can start with this documentation page. It also looks like you have inserted rows into your database several times through the bcp command, which causes a space issue on the Basic tier.
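To see how close the database is to its size cap before and after a load, a hedged sketch (run inside the database itself; MaxSizeInBytes is reported by Azure SQL Database):

```sql
-- Space currently reserved by all tables and indexes, in MB,
-- compared with the database's configured maximum size.
SELECT SUM(reserved_page_count) * 8.0 / 1024.0 AS reserved_mb,
       CONVERT(bigint, DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes')) / 1024 / 1024 AS max_size_mb
FROM sys.dm_db_partition_stats;
```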
Related
I have been trying to do a PITR (point-in-time restore) of a 2 GB S0 Azure SQL database. It has been running for over 24 hours. The restore progress has been saying 50% complete for 18 hours without any errors. Should I upgrade the server DTUs and size, or the actual service tier?
According to this post, on SQL Database the "horsepower" is measured in Database Throughput Units, or just "DTUs". This unit is an integer and may vary from 5 to 1750. Every database edition offers one or more "Service Objectives", which are directly related to the number of DTUs and the price to be paid.
In the following image, you can find the list of "Service Objectives" (S0, P3, Basic, P11, S3, etc…) per SQL Database Edition and its respective prices. Notice that Microsoft is always updating its offer, so those prices and Service Objectives per Edition may be outdated when you read this post:
A more conservative and responsible way to choose the number of DTUs is based on real data about your database activity: the DTU Calculator (http://dtucalculator.azurewebsites.net/), an online service that advises on the most appropriate Service Objective for a database. You just need to download a PowerShell script, available on the DTU Calculator website, and run it on the server where your database is located. While it runs, the script measures and records the following data in a CSV file:
Processor – % Processor Time
Logical Disk – Disk Reads/sec
Logical Disk – Disk Writes/sec
Database – Log Bytes Flushed/sec
Once the collection is done, you just need to upload the file generated by the script and interpret the results. Here is a sample of one of the charts generated by the DTU Calculator, indicating that 89.83% of the database load would run well with the Service Objective S3, of the "Standard" SQL Database edition.
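If the database is already running in Azure SQL Database, a complementary check (a hedged sketch, not part of the DTU Calculator itself) is to look at recent resource consumption relative to the current service objective:

```sql
-- Resource use over roughly the last hour, as a percentage of the current
-- service objective's limits (one row per 15-second window).
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If these percentages sit near 100% for long stretches, the database is throttled at its current objective.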
Here is a decision tree that will help you to reach the optimal point for your database.
So I think you can increase the DTU appropriately to speed up the process. :)
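To confirm what the database is actually running on before and after the change, a quick check (a hedged sketch; 'db' is a placeholder name):

```sql
-- Current edition and service objective as reported by the database itself.
SELECT DATABASEPROPERTYEX('db', 'Edition')          AS edition,
       DATABASEPROPERTYEX('db', 'ServiceObjective') AS service_objective;
```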
If you are on an S0 you are using Azure SQL Database, not a Managed Instance.
2 GB is quite small; the point-in-time restore should have completed in an hour or so.
Contact Microsoft Support.
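Before opening a ticket, you can check what progress the service itself reports for the restore; a minimal sketch, run against the master database of the logical server:

```sql
-- Long-running management operations (restores, scale changes, etc.)
-- and their reported percent complete. Run in the master database.
SELECT major_resource_id,   -- database name
       operation,
       state_desc,
       percent_complete,
       start_time,
       last_modify_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;
```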
We recently faced an issue in a server where 12000 concurrent users were trying to access an application but only 120 SQL Server connections were available.
The basic issue I've found is in the deployment architecture of the application and database, as below:
DB & App on Same Server
Data and log files of all databases, whether system or user, are on the system drive, i.e. C:\
Questions:
By looking at which metrics in perfmon, or by taking which steps, can I prove that the above points are the basic cause?
Other than the two causes mentioned above, how do I correlate metrics/stats in perfmon with a particular SQL Server query?
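For the second question, one starting point (a hedged sketch, not specific to this environment) is the per-query statistics SQL Server already keeps, whose CPU and read figures can be lined up against the server-level perfmon counters:

```sql
-- Top statements by total CPU since their plans were cached; worker time and
-- logical reads here can be compared against perfmon's CPU and disk counters.
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```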
I am using the SQL Azure Migration Wizard to migrate one of my databases to a different instance. It literally took more than 12 hours to do the BCP out alone. The only change I have made is to increase the packet size from 4096 to 65535 (the maximum). Is that wrong? I am doing this from an AWS server which is part of the same subnet where the SQL Server RDS instance is hosted.
Analysis completed at 7/16/2016 1:53:31 AM -- UTC -> 7/16/2016 1:53:31 AM
Any issues discovered will be reported above.
Total processing time: 12 hours, 3 minutes and 14 seconds
There is a blog post from the SQL Server Customer Advisory Team (CAT) that goes into a few details about optimal settings to get data into and out of Azure SQL databases.
Best Practices for loading data to SQL Azure
When loading data to SQL Azure, it is advisable to split your data into multiple concurrent streams to achieve the best performance.
Vary the BCP batch size option to determine the best setting for your network and dataset.
Add nonclustered indexes after loading data to SQL Azure.
If, while building large indexes, you see a throttling-related error message, retry using the online option.
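As an illustration of the last two points, a hedged sketch (the table, column, and index names are placeholders) of adding an index after the load, with the online option as the retry path:

```sql
-- Build the supporting index only after the bulk load has finished.
-- If the build hits throttling on a busy database, retry with ONLINE = ON
-- so the table stays available while the index is created.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = ON);
```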
I have a very simple setup using MSF (Microsoft Sync Framework) from one client PC to one remote PC. Both are running SQL Server 2008 Express. The purpose is simply to send data from a local server, where the data is being collected at a competition, to a web server for display.
However, this weekend I hit a situation where the local database is now continuously uploading 4 records to the remote database. There are no error messages at this point and the sync apparently completes, but the next time I run it, it again reports 4 records uploaded, even though there have been no changes to the local database.
The data is transferred over a 3G/4G link and there are occasional dropouts on the link, but in the past Sync has recovered on the next iteration. The only unusual error message I saw was "Cannot Enumerate Changes at the RelationalSyncProvider for table 'Horses'." Ironically this is one table that doesn't change during the day! But it does contain 4 records - hmmm.
I have tried a backup and restore to another SQL instance on my development machine (SQL Server 2008 R2), run a fixup and then a sync to the remote, and I get the same situation of 4 records uploaded (and now 1 downloaded), with no errors, but again the same record counts every time I run it, and I know nothing is being updated.
As far as I can tell the data is in Sync as I can't see any obvious anomalies, but that is hard to be certain of.
So ...
Can I tell which records are the issue/being uploaded?
What can I do to ensure the data has in fact synced?
What can I do to "reset" so that the two databases know they are actually in Sync, preferably short of backup and restore!
We have an application which has a SQL Server 2000 database attached to it. Every couple of days the application hangs, and we have to restart the SQL Server service, after which it works fine. The SQL Server logs show nothing about the problem. Can anyone tell me how to identify this issue? Is it an application problem or a SQL Server problem?
Thanks.
Is it an application problem or a SQL Server problem?
Is it possible to connect to MS SQL Server using Query Analyzer or another instance of your application?
General tips:
Use Activity Monitor to find information about concurrent processes, locks and resource utilization.
Use SQL Server Profiler to trace server and database activity, and to capture and save data to a table or file for later analysis.
You can use Dynamic Management Views (under \Database name\Views\System Views in Management Studio) to get more detailed information about SQL Server internals; a sketch of one such query follows this list.
If you have problems with performance (not your case), you can use Performance Monitor and Data Collector Sets to gather performance information.
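As an illustration of the DMV point above, a hedged sketch of checking what sessions are doing and who is blocking whom. Note this assumes SQL Server 2005 or later, since DMVs are not available on SQL Server 2000:

```sql
-- Currently executing requests, their wait state, and the session (if any)
-- that is blocking them; a hang caused by blocking shows up here immediately.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
```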
It is hard to predict the issue; I suggest you check your application first. Check which operations you are performing against the database, and whether you are taking care of connection pooling: unused open connections can create issues.
Check if you can get any logs from your application. Without any log information we can hardly suggest anything.
Read this
The application may be hanging due to a deadlock. Check which stored procedures run at that time using Profiler, check the table manipulation (consider the NOLOCK hint where dirty reads are acceptable), check the buffer size, and consider segregating the DB into two or three modules.
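If deadlocks are suspected, a hedged sketch of one way to confirm them on an older instance such as SQL Server 2000 is to enable the deadlock trace flag so the deadlock details are written to the error log:

```sql
-- Write deadlock details to the SQL Server error log for all sessions.
-- Trace flag 1204 works on SQL Server 2000 and later; newer versions also
-- offer trace flag 1222 for more detailed output.
DBCC TRACEON (1204, -1);
-- ... reproduce the hang, inspect the error log, then turn it off:
DBCC TRACEOFF (1204, -1);
```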