Running the same sync process on a local DB vs Azure SQL Server

I have a WCF service on Azure which performs multiple tasks.
Tasks:
1. Download a file from a remote server (about 50 MB).
2. Perform a bulk insert into an Azure database (about 360,000 records at once).
3. Run 2 different stored procedures (about 15 sec total for both).
The whole process takes about 1 minute against my local SQL Server 2012 database,
but when I run it from the cloud-hosted WCF service (not a Cloud Service) it takes longer than the connection timeout can handle (4-30 min).
I still don't understand why there is such a significant difference in performance.
Is it cloud resources? And if so, how can I make it perform as close as possible to the way it runs locally?
Regards,
Avishai
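One common mitigation (echoed in the BCP best-practice answer further down) is to break the 360,000-row insert into smaller batches so that each round trip stays well under the connection timeout. A minimal chunking sketch in Python; the batch size of 5,000 is an assumption to tune, and `insert_batch` stands in for whatever bulk-insert call the service actually uses (e.g. SqlBulkCopy):

```python
def chunked(rows, batch_size=5000):
    """Yield successive batches of at most batch_size rows."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def bulk_load(rows, insert_batch, batch_size=5000):
    """Send rows in batches; insert_batch is the caller's bulk-insert callable."""
    sent = 0
    for batch in chunked(rows, batch_size):
        insert_batch(batch)  # one round trip per batch, each well under the timeout
        sent += len(batch)
    return sent
```

Smaller batches also make it possible to retry just the failed batch on a transient error instead of restarting the whole load.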

Related

Slow query results transfer from MS SQL Server

I have two virtual servers in a cloud: DB and Web. There is a SQL Server Express 2012 instance on the DB server, and in one of its databases there is a table 'Job' with relatively big records (10 KB per record).
When I run a simple query from the DB server, it takes less than one second to execute. But when I run it from the Web server it takes more than one minute.
SELECT TOP 10000 * FROM Job
There is a direct relationship between the retrieved record count and the execution time: it takes about seven minutes to retrieve 100,000 records from the Web server.
In SQL Server Profiler the query takes the same CPU time whether it is run from the DB or the Web server, but the total time differs.
The same effect shows up on any query that returns a large number of bytes.
I have other application servers in this cloud and they show the effect too.
But in my development environment and other deployments everything is fine: queries execute on the Web server only slightly slower than on the DB server.
I have checked the network speed between the Web and DB servers. It is stable at about 1 Gbit/s. Turning off Windows Firewall has no effect.
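A quick back-of-the-envelope check makes the gap concrete: 100,000 rows at roughly 10 KB each is about 1 GB of payload, which a 1 Gbit/s link should move in well under ten seconds, so a seven-minute transfer implies the effective throughput is a small fraction of the nominal link speed. That points at something like per-round-trip latency, TDS packet sizing, or a virtual NIC issue rather than raw bandwidth. The arithmetic, as a sketch:

```python
rows = 100_000
row_bytes = 10 * 1024            # ~10 KB per record, per the question
payload_bytes = rows * row_bytes  # ~1 GB of result data

link_bits_per_s = 1_000_000_000   # nominal 1 Gbit/s link
ideal_seconds = payload_bytes * 8 / link_bits_per_s   # ~8.2 s at wire speed

observed_seconds = 7 * 60         # ~7 minutes reported
effective_bits_per_s = payload_bytes * 8 / observed_seconds  # ~20 Mbit/s
```

The ~50x gap between the nominal and effective rate is the number worth chasing.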

How to achieve real-time reporting in Azure SQL Server in one database?

We have a stored procedure in an Azure SQL database (pricing tier Premium with 250 DTU) which processes around 1.3 billion records and inserts the results into tables which we display on a reporting page. The stored procedure takes around 15 minutes to run, and we have scheduled it weekly as an Azure WebJob because we use the same database for writing actual user logs.
But now we want real-time reporting with at most 5 minutes of lag, and if I schedule the WebJob to execute the stored procedure every 5 minutes my application will shut down.
Is there any other approach to achieve real-time reporting?
Are there any Azure services available for it?
Can I use Azure Databricks to execute the stored procedure? Will it help?
Yes, you can route read queries to the readable replicas of Premium databases by adding this to your connection string:
ApplicationIntent=ReadOnly;
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-read-scale-out
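For context, a full ADO.NET-style connection string with read-scale routing might look like the following; the server, database, and user names are placeholders:

```
Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=...;Encrypt=True;ApplicationIntent=ReadOnly;
```

Connections carrying `ApplicationIntent=ReadOnly` are routed to a read-only replica, so the reporting queries stop competing with the user-log writes on the primary.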

Tomcat application runs slower when moved to PaaS platform on Azure

Currently my web application runs on two cloud-based Windows 2012 R2 servers: one running Tomcat and one running SQL Server Express. The Windows servers have 4 GB RAM and an Intel Xeon CPU E5-2660 v2 @ 2.20 GHz.
I am testing my application on PaaS (Platform as a Service) in Azure.
I created a Linux Web App with Tomcat 9.0 to hold the application (P1V2) and an Azure SQL Server to hold the database (S2)
One test I did was to compare the time it takes to produce an Excel report (using Apache POI) on both systems.
On the cloud system (running SQL Server Express) it took about 10 seconds. On Azure it took about 35 seconds.
Obviously I would like the Azure system to be at least as fast as the current one, especially as the cloud system runs SQL Server Express, which is capped at 1 GB of RAM and a single core.
I have tried the following:
Checked to see if there are any spikes in the dashboard chart for the database. There are no significant ones; DTUs are at most 25%.
Added Query Performance Insight / Recommendations / Automate to automatically tune the database. This did speed it up somewhat, but by no means enough.
I read Help, My Azure Site Performance Sucks! and Why is running a query on SQL Azure so much slower? and Azure SQL query slow
I checked that the database and application were in the same location. (West Europe)
I imagine that the problem is the database.
As an example, I found a query (using the Query Performance Insight / Long Running Queries) that runs in 2 seconds on Azure and in 0 seconds on SQL Server Express. Note that I am NOT asking how to optimize this query. Rather I am imagining that the fact that this query takes longer on Azure - with the same database schema, the same data and the same indexes - might be a clue as to how to speed up my application as a whole.
SELECT cp.*,
       (SELECT MIN(market_date)
        FROM mydb.rates ms
        WHERE ms.curr1 = cp.curr1
          AND ms.curr2 = cp.curr2) AS MIN_MARKETDATE
FROM pairs cp
ORDER BY curr1, curr2
The easiest way to do an apples-to-apples comparison is to use the vCore model for your Azure SQL database. You say you are using an S2 database, which is 50 DTU, or roughly half a core; you would need to scale up to at least an S3 to be the equivalent of a 1-core VM.
That will ensure you are testing with the same general setup and should help you match performance.
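As a sketch, the scale-up from S2 to S3 can be done in T-SQL from the master database (the database name below is a placeholder; the same change is also available in the portal):

```sql
ALTER DATABASE mydb MODIFY (SERVICE_OBJECTIVE = 'S3');
```

The operation runs online; existing connections stay up until the switchover completes.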

Azure sql server size resets to 1GB

I am working on a web application based on EF with over 1 GB of seeded data. The application is hosted in Azure under a BizSpark subscription account.
Some time back I created an App Service Plan with the web application associated with an App Service. I started uploading data to SQL Server but this failed. I realized that the default size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTU and 10 GB yesterday, having uploaded the data around 5 days back.
After that, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following:
The database tables got wiped out
The database pricing structure was reset to Basic
I upgraded the database plan once again to 10 GB and 10 DTU yesterday night. I see that the change has not taken effect yet.
How long does it take to get the size fixed?
Will the tables have to be recreated?
9/11
I just tried uploading data via bcp tool. But I got the following error:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new as yesterday before I changed the db size I got the following error:
9/10
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency between the failure messages, or why the upload failed for the same data file.
Regards,
Lalit
Scaling a database up from Basic to a higher service tier such as Standard should not take more than a few minutes. The schemas and tables inside the database are left unchanged.
You may want to look into the Activity Log of your Azure SQL server to understand who initiated the scale-down from Standard to Basic. Furthermore, you may want to turn on the Auditing feature to record all the operations that are performed on your database.
For the connectivity issues, you can start by looking at this documentation page. It also looks like you have inserted rows several times into your database through the BCP command, and this caused a space issue on the Basic tier.
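The "Communication link failure" errors above are typical transient faults on Azure SQL, and the usual advice is to retry the failed batch with exponential backoff rather than restart the whole load. A minimal retry sketch in Python; `send_batch` and the retry parameters are placeholders for whatever loader is actually in use:

```python
import time

def retry_batch(send_batch, batch, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry one batch send with exponential backoff on transient errors."""
    for attempt in range(attempts):
        try:
            return send_batch(batch)
        except ConnectionError:           # stand-in for a transient SQL error
            if attempt == attempts - 1:
                raise                     # give up after the final attempt
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

The `sleep` parameter is injected only so the backoff is easy to test; in production the default `time.sleep` is used.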

SQL Azure migration wizard taking long time

I am using the SQL Azure Migration Wizard to migrate one of my databases to a different instance. It literally took more than 12 hours to do the BCP out alone. The only change I have made is to increase the packet size from 4096 to 65535 (the maximum). Is that wrong? I am doing this from an AWS server which is part of the same subnet where the SQL Server RDS instance is hosted.
Analysis completed at 7/16/2016 1:53:31 AM -- UTC -> 7/16/2016 1:53:31 AM
Any issues discovered will be reported above.
Total processing time: 12 hours, 3 minutes and 14 seconds
There is a blog post from the SQL Server Customer Advisory Team (CAT) that goes into a few details about optimal settings to get data into and out of Azure SQL databases.
Best Practices for loading data to SQL Azure
1. When loading data to SQL Azure, split your data into multiple concurrent streams to achieve the best performance.
2. Vary the BCP batch size option to determine the best setting for your network and dataset.
3. Add non-clustered indexes after loading the data to SQL Azure.
4. If, while building large indexes, you see a throttling-related error message, retry using the online option.
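The "multiple concurrent streams" advice above can be sketched as: split the rows into N contiguous chunks and load each chunk on its own connection. A Python illustration, where `load_stream` stands in for one BCP invocation or bulk-insert connection and the stream count of 4 is an assumption to tune:

```python
from concurrent.futures import ThreadPoolExecutor

def split_streams(rows, n_streams):
    """Split rows into n_streams contiguous chunks of near-equal size."""
    size, extra = divmod(len(rows), n_streams)
    chunks, start = [], 0
    for i in range(n_streams):
        end = start + size + (1 if i < extra else 0)
        chunks.append(rows[start:end])
        start = end
    return chunks

def parallel_load(rows, load_stream, n_streams=4):
    """Load chunks concurrently; load_stream handles one bulk-load connection."""
    with ThreadPoolExecutor(max_workers=n_streams) as pool:
        results = list(pool.map(load_stream, split_streams(rows, n_streams)))
    return sum(results)
```

Contiguous chunks (rather than round-robin) keep each stream's rows in source order, which matters when the target table has a clustered index on an ordered key.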
