We've been using Google's Datastream service for a little over a year, with two Streams covering about 150 tables in total.
Yesterday, 2022/10/21, around 18:30, both Streams stopped replicating. No error messages appear in the logs.
No operations were performed on the MySQL databases.
No user permission changes were performed in MySQL.
I'm developing a Django API using an Azure SQL database. After making requests about 1,200 times it throws the error "The session limit for the database is 1200 and has been reached". This is because Azure allows only 1,200 concurrent sessions at a time, so after reaching this limit, if I restart my server, all the concurrent sessions are dropped and the count starts again from 0.
I'm not using any sessions or authentication in the database; it's just a plain Django application.
I even tried the CONN_MAX_AGE parameter in the settings file with the values None, 0, 200, and 500.
I found out that each time I make a request, Azure SQL creates a new session entry; I can see the row created with the command EXEC sp_who.
Can anyone please help me with this?
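To see where those sessions come from, one option is to aggregate sys.dm_exec_sessions, the DMV behind sp_who. Below is a minimal, untested Python sketch using pyodbc; the driver name and connection details are placeholders, not values from this question.

import pyodbc

# Placeholder connection string; replace server, database and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server.database.windows.net;"
    "DATABASE=your-database;"
    "UID=your-user;PWD=your-password"
)

# sys.dm_exec_sessions backs sp_who; grouping it shows how many sessions
# each client host / program is keeping open.
sql = """
SELECT host_name, program_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name, program_name
ORDER BY session_count DESC;
"""

for host, program, session_count in conn.cursor().execute(sql):
    print(host, program, session_count)

conn.close()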
These are the Azure SQL Database resource limits.
Your SQL database pricing tier is 'Standard/S2', for which the maximum number of concurrent sessions is 1,200.
For more details, please see Resource limits for single databases using the DTU-based purchasing model.
To increase the maximum concurrent workers (requests), you need to scale up the service tier.
Hope this helps.
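If scaling up is the route you take, the tier can also be changed with plain T-SQL. The sketch below is untested, uses placeholder names, and assumes 'S3' as the target tier; it issues the same change the portal would make.

import pyodbc

# Connect to the logical server's master database; placeholder credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server.database.windows.net;"
    "DATABASE=master;"
    "UID=your-admin;PWD=your-password",
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)

# Request the scale-up; the operation is asynchronous and completes in the
# background (progress is visible in the portal or sys.dm_operation_status).
conn.cursor().execute(
    "ALTER DATABASE [your-database] "
    "MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');"
)
conn.close()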
Your globally distributed auction application allows users to bid on items. Occasionally, users place identical bids at nearly identical times, and different application servers process those bids. Each bid event contains the item, amount, user, and timestamp. You want to collate those bid events into a single location in real time to determine which user bid first. What should you do? (Choose one.)
(A) Create a file on a shared file system and have the application servers write all bid events to that file. Process the file with Apache Hadoop to identify which user bid first.
(B) Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom endpoint that writes the bid event information into Cloud SQL.
(C) Set up a MySQL database for each application server to write bid events into. Periodically query each of those distributed MySQL databases and update a master MySQL database with bid event information.
(D) Have each application server write the bid events to Google Cloud Pub/Sub as they occur. Use a pull subscription to pull the bid events using Google Cloud Dataflow. Give the bid for each item to the user in the bid event that is processed first.
Personally, I would choose (D), for the following reasons:
Cloud Pub/Sub is a managed, message-oriented middleware service; in other words, it delivers messages.
"users place identical bids at nearly identical times, and different application servers process those bids."
With Pub/Sub, you just need to configure the publishers on the end-user side; they send the bids to the topic, and you can process that data later. So I would eliminate (A) and (C) first: you don't want to manage your own Hadoop or MySQL servers when you have a better option, and that option is Cloud Pub/Sub.
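For illustration only, publishing a bid event from an application server could look roughly like this sketch with the google-cloud-pubsub client; the project, topic, and field names are made up for the example.

import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic names.
topic_path = publisher.topic_path("my-project", "bid-events")

def publish_bid(item, amount, user, timestamp):
    # Serialize the bid event and hand it to Pub/Sub; the subscriber
    # (for example a Dataflow pipeline) decides later which bid was first.
    payload = json.dumps(
        {"item": item, "amount": amount, "user": user, "timestamp": timestamp}
    ).encode("utf-8")
    future = publisher.publish(topic_path, payload)
    return future.result()  # message ID once Pub/Sub has accepted it

publish_bid("item-42", 100.0, "alice", "2022-10-21T18:30:00Z")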
There is another key sentence:
"collate those bid events into a single location in real time."
Cloud Dataflow (Apache Beam) supports both streaming and batch processing. There is a feature called triggers: you can trigger on the data's event time, which is the same as the time at which the user placed the bid.
You also don't want to store this real-time data in Cloud SQL, which rules out (B).
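To make the event-time idea concrete, here is a rough, untested sketch of option (D) with the Apache Beam Python SDK. The project, subscription, and JSON field names are assumptions; taking min over (timestamp, user, amount) tuples simply picks the earliest bid per item in each window.

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

def parse_bid(message):
    # Each Pub/Sub message is assumed to carry the JSON bid event
    # {"item": ..., "amount": ..., "user": ..., "timestamp": ...}.
    bid = json.loads(message.decode("utf-8"))
    return bid["item"], (bid["timestamp"], bid["user"], bid["amount"])

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        # timestamp_attribute tells Beam to use the bid's own event time
        # (the attribute must be in a format Pub/Sub accepts, e.g. RFC 3339)
        # instead of the publish time.
        | "ReadBids" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/bid-events-sub",
            timestamp_attribute="timestamp",
        )
        | "KeyByItem" >> beam.Map(parse_bid)
        | "Window" >> beam.WindowInto(window.FixedWindows(60))
        # The earliest (timestamp, user, amount) tuple per item wins the bid.
        | "FirstBidPerItem" >> beam.CombinePerKey(min)
        | "Emit" >> beam.Map(print)
    )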
I've set up two SQL DBs on Azure with geo-replication. The primary is in Brazil and a secondary in West Europe.
Similarly, I have two web apps running the same web API: a Brazilian web app that reads and writes on the Brazilian DB, and a European web app that reads on the European DB and writes on the Brazilian DB.
When I test response times on read-only queries with Postman from Europe, I first notice that on a first "cold" call the European web app is twice as fast as the Brazilian one. However, on immediately following calls the response times of the Brazilian web app drop to 10% of the initial "cold" call, whereas response times on the European web app remain the same. I also notice that after a few minutes of inactivity, results are back to the "cold" case.
So:
1. Why do query response times drop in Brazil?
2. Whatever the answer to 1 is, why doesn't it happen in Europe?
3. Why doesn't the response-time optimization from 1 last after a few minutes of inactivity?
Note that both web apps and DBs are created as copies of each other (except for geo-replication) from an Azure ARM JSON file.
Both web apps have Always On enabled.
Thank you.
UPDATE
Actually, there are several parts in action in what I see as an end user: the web apps and the DBs. I wrote this question thinking the issue was around the DBs and geo-replication; however, after trying @Alberto's script (see below), I couldn't see any difference in wait times when querying Brazil or Europe, so the problem may be in the web apps. I don't know how to further analyse/test that.
UPDATE 2
This may (or may not) be related to the Query Store. I asked a new, more specific question on that subject.
UPDATE 3
Queries on the secondary database are not slower. My question was based on false conclusions. I won't delete it, as others took the time to answer it, and I thank them.
I was comparing query response times through REST calls to a web API running EF queries on a SQL Server DB. Since REST calls to the web API located in the region querying the DB replica are slower than REST calls to the same web API deployed in another region targeting the primary DB, I concluded the problem was on the DB side. However, when I run the queries directly in SSMS, bypassing the web API, I observe almost no difference in response times between the primary and the replica DB.
I still have a problem, but it's not the one raised in this question.
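For anyone in the same situation, one way to separate the web tier from the database tier is to time the same query directly from a small script against both databases and compare cold vs. warm runs. A rough pyodbc sketch, with placeholder connection strings and a hypothetical test query:

import time
import pyodbc

# Placeholder connection strings for the primary (Brazil) and the geo-replica
# (West Europe); the query stands in for whatever the web API runs via EF.
SERVERS = {
    "brazil-primary": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=br-server.database.windows.net;DATABASE=appdb;UID=reader;PWD=secret",
    "europe-replica": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=eu-server.database.windows.net;DATABASE=appdb;UID=reader;PWD=secret",
}
QUERY = "SELECT TOP (100) * FROM dbo.SomeTable;"  # hypothetical test query

for name, conn_str in SERVERS.items():
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    # The first run shows the "cold" cost; later runs show whether the
    # database itself warms up, independently of the web app.
    for attempt in range(1, 4):
        start = time.perf_counter()
        cursor.execute(QUERY).fetchall()
        print(f"{name} run {attempt}: {time.perf_counter() - start:.3f}s")
    conn.close()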
On Azure SQL Database, your database's memory allocation may be dynamically reduced after some minutes of inactivity; in this behavior Azure SQL differs from on-premises SQL Server. If you then run a query two or three times, it starts to execute faster again.
If you examine the query execution plan and its wait stats, you may find a wait named MEMORY_ALLOCATION_EXT for queries executed after the memory allocation has been shrunk by the Azure SQL Database service. Databases with a lot of activity and query execution may not see their memory allocation reduced. For more detailed information, please read this Stack Overflow thread.
Also take into consideration that both databases should have the same service tier assigned.
Use the script below to measure query waits and see the difference in waits between the two regions.
DROP TABLE IF EXISTS #before;
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms], [max_wait_time_ms],
[signal_wait_time_ms]
INTO #before
FROM sys.[dm_db_wait_stats];
-- Execute test query here
SELECT *
FROM [dbo].[YourTestQuery]
-- Finish test query
DROP TABLE IF EXISTS #after;
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms], [max_wait_time_ms],
[signal_wait_time_ms]
INTO #after
FROM sys.[dm_db_wait_stats];
-- Show accumulated wait time
SELECT [a].[wait_type], ([a].[wait_time_ms] - [b].[wait_time_ms]) AS [wait_time]
FROM [#after] AS [a]
INNER JOIN [#before] AS [b] ON
[a].[wait_type] = [b].[wait_type]
ORDER BY ([a].[wait_time_ms] - [b].[wait_time_ms]) DESC;
I am working on a web application based on EF with over 1 GB of seeded data. The application is hosted in Azure under a BizSpark subscription account.
Some time back I created an App Service Plan with the web application associated with an App Service. I started uploading data to SQL Server, but this failed. I realized that the default database size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTUs and 10 GB and uploaded the data around 5 days back.
After that, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following:
The database tables got wiped out.
The database pricing tier was reset to Basic.
I upgraded the database plan once again to 10 GB and 10 DTUs yesterday night. I see that the change has not taken effect yet.
How long does it take to get the size fixed?
Will the tables have to be recreated?
9/11
I just tried uploading data via the bcp tool, but I got the following error:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new; yesterday, before I changed the DB size, I got the following error:
9/10
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency in the failure messages, or why the upload failed for the same data file.
Regards,
Lalit
Scaling up a database from Basic to a higher service tier such as Standard should not take more than a few minutes. The schema and tables inside the database are left unchanged.
You may want to look into the Activity log of your Azure SQL server to understand who initiated the scale-down from Standard to Basic. Furthermore, you may want to turn on the Auditing feature to track all the operations performed on your database.
On the connectivity issues, you can start by looking at this documentation page. It also looks like you have inserted rows into your database several times through the bcp command, and this causes a space issue on the Basic tier.
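As a quick check that a scale-up has actually taken effect, the database can report its own edition, service objective, and size cap. A small, untested sketch with placeholder connection details:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server.database.windows.net;"
    "DATABASE=db;UID=your-user;PWD=your-password"
)

# DATABASEPROPERTYEX returns sql_variant, so cast the values explicitly.
row = conn.cursor().execute(
    """
    SELECT CAST(DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS nvarchar(64))          AS edition,
           CAST(DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS nvarchar(64)) AS service_objective,
           CAST(DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS bigint)         AS max_size_bytes
    """
).fetchone()

print(row.edition, row.service_objective, row.max_size_bytes / 1024**3, "GB")
conn.close()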
I'm running an integration flow whose processing actions are on hold due to the following error:
com.sybase.jdbc4.jdbc.SybSQLWarning: The transaction log in database <database_name> is almost full. Your transaction is being suspended until space is made available in the log.
How can I erase the log or increase its size?
Thank you
From my understanding, this message relates to your SybSQL database, not to HCI, so you should clear the database log on the Sybase side.
On the HCI side you cannot delete any logs or influence log sizes. I had quite a similar request a while ago; I clarified with SAP Support that it is not possible to delete any log entries manually. Furthermore, in the meantime I found that the log messages are deleted automatically after 6 months.
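If you (or the database owner) can reach the Sybase ASE server directly, the log check and cleanup would happen there, not in HCI. Below is a rough sketch, assuming an ODBC DSN for the ASE server and placeholder names; note that dumping the transaction log with truncate_only discards log records and breaks the log backup chain, so confirm with the DBA of <database_name> first.

import pyodbc

# Hypothetical DSN and credentials for the Sybase ASE server.
conn = pyodbc.connect("DSN=SYBASE_ASE;UID=sa;PWD=secret", autocommit=True)
cursor = conn.cursor()

cursor.execute("USE your_database")

# sp_spaceused on syslogs reports how full the transaction log is.
for row in cursor.execute("EXEC sp_spaceused syslogs"):
    print(row)

# Frees the inactive part of the log without taking a backup copy;
# a regular DUMP TRANSACTION to a dump device is the safer routine option.
cursor.execute("DUMP TRANSACTION your_database WITH truncate_only")
conn.close()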