How is pricing calculated for the Heroku Postgres database add-on?

Heroku has recently upgraded its plans, and I have a question about the Heroku Postgres Mini plan. I have a site that uses a Heroku Postgres database. The site is just for practice, and I work on it in my free time. The pricing note for the Mini plan reads: "Heroku prorates costs to the second. For example, if you have 1 Mini Postgres database provisioned for 1 hour, it costs you ~$0.004."
What exactly does "provisioned" mean here? Since I have a Postgres database add-on, does that mean it runs (and is billed) all 24 hours? Or do the hours only count when I open my website, which fetches data from the DB, or when I connect to my database? Please help!
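"Provisioned" refers to the time the add-on exists and is attached to your app, not the time anyone is connected to it; billing stops only when the add-on is destroyed. A rough sketch of the proration arithmetic (the $5.00 monthly price and the 30-day month below are illustrative assumptions, not official figures):

```python
# Sketch of per-second proration. MONTHLY_PRICE and the 30-day month
# are assumptions for illustration only.
MONTHLY_PRICE = 5.00                 # USD, hypothetical plan price
SECONDS_PER_MONTH = 30 * 24 * 3600

def prorated_cost(seconds_provisioned: int) -> float:
    """Cost for the time the add-on exists, whether or not anything connects."""
    return MONTHLY_PRICE * seconds_provisioned / SECONDS_PER_MONTH

print(round(prorated_cost(3600), 4))               # one hour -> 0.0069
print(prorated_cost(SECONDS_PER_MONTH))            # full month -> 5.0
```

So a database left provisioned for a whole month costs the full monthly price, regardless of how often your site queries it.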

Related

Requests from docker container in Google Cloud Run Service to Google Cloud SQL takes up to 2 minutes

I'm using the Google Cloud Run service to host my Spring application in a Docker container. The database is running in the Google Cloud SQL service. My problem is that requests from the application to the database can take up to 2 minutes. See the Google Cloud Run log (the long requests are painted yellow), and here are the Dockerfile and Docker Compose file.
The database is quite empty: it contains about 20 tables, but each has only a few rows, so no request is bigger than a few kB. To make it stranger, after re-deploying the application the requests are fast again, but after a few minutes, hours, or even a whole day, they slow down again. When I start the application on my local machine the requests are always fast (against both my local SQL instance and the Google Cloud SQL instance); I have never had a slow connection. All actions within my application that don't require any DB request are still fast and take only a few ms.
Both services run in the same region (europe-west). CPU usage of the Cloud Run service is never higher than 15%, and of Cloud SQL never above 3%. The Cloud SQL instance uses 1 CPU and 3.75 GB of RAM; the Cloud Run service has 4 GB of RAM and 2 CPUs. Increasing the power of the Cloud Run service and Cloud SQL doesn't improve the request latency. Cloud SQL is running MySQL 5.7 (like my local DB).
Looking at the logs, only warnings are shown in the filtered Cloud SQL log (I really don't know why this happens). Additionally, here are my DB connection settings in the Spring config. I don't think this has any impact, though: the config works perfectly when connecting my local application to my local SQL instance or to the Google Cloud SQL instance.
But maybe one of you has an idea?
While not a real answer, there is a bug filed at Google that is tracking the issue:
https://issuetracker.google.com/issues/186313545
This is really hurting our customers' experience and makes us lose trust in the service quality of Cloud Run. Even more so when there is no feedback from Google to know whether they are even addressing the issue.
Edit:
The issue now seems to be resolved, according to the interactions in https://issuetracker.google.com/issues/186313545

Persistent disc on VM vs managed databases for Kubernetes cluster

Migrating a Postgres database from Heroku to Google Cloud in a Kubernetes and Docker setup.
Trying to decide what is a better approach.
1st approach - Use a persistent disc on the VM that is used by a deployed Postgres instance in the Kubernetes cluster.
2nd approach - Use a managed PostgreSQL database that the cluster deployments connect to.
I assume the main differences would be for the maintenance and updating of the database? Are there any big trade-offs of one setup vs the other?
This is an opinion question so I'll answer with an option.
Kubernetes Postgres
Pros:
You can manage your own Postgres cluster.
No vendor lock-in.
Postgres is local to your cluster (though this may not make much of a difference).
Do your own maintenance.
Raw cost is less.
Cons:
If you run into any Postgres cluster problems, you are responsible for fixing them.
You have to manage your own storage.
No vendor lock-in, but you still need to move the data if you decide to switch providers.
You have to do your own backups.
Managed PostgreSQL database
Pros:
GCP does it all for you.
Any problems will be handled by GCP.
Maintenance is handled by GCP.
Storage is handled by GCP.
Backups are performed by GCP.
Cons:
Vendor lock-in.
Postgres is not local to your cluster.
Will probably cost more.

Querying Amazon Web Services RDS via Google App engine Standard Environment

Querying Amazon Web Services RDS from the Google App Engine Standard Environment seems to be very slow, and the time is proportional to the number of records being fetched.
For example, if the query returns one record it takes 100 ms, and if it returns 10 records it takes 1 second. This is slowing our APIs down a lot.
Has anyone else faced this? If so, what have you done to sort out the issue?
PS: We were on Google Cloud SQL prior to migrating to AWS RDS. We migrated due to financial constraints.
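Latency that grows linearly with the row count usually points to a per-row round trip (e.g. an N+1 query pattern from lazily loaded associations) rather than to result-set size, and every such round trip here crosses from GCP to AWS. A minimal sketch of the difference, using an in-memory SQLite database as a stand-in for RDS (the table and column names are made up for illustration):

```python
import sqlite3

# Stand-in for the remote RDS instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(1, 11)])

ids = list(range(1, 11))

# N+1 pattern: one query (and one network round trip) per row,
# so total latency grows with the number of records.
per_row = [conn.execute("SELECT total FROM orders WHERE id = ?",
                        (i,)).fetchone()[0] for i in ids]

# Batched: a single query fetches all rows in one round trip.
placeholders = ",".join("?" * len(ids))
batched = [r[0] for r in conn.execute(
    f"SELECT total FROM orders WHERE id IN ({placeholders}) ORDER BY id",
    ids)]

assert per_row == batched  # same data, one round trip instead of ten
```

If the per-record cost really is ~100 ms of network latency, batching like this turns 10 round trips into one and would explain (and largely remove) the proportional slowdown.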

Azure database latency killing app

I've done a lot of reading about the latency of Azure databases on Stack Overflow and various blogs around the web. However, I cannot figure out what is going on with the high latency I'm experiencing between my Azure website and Azure database.
I noticed my app was running very slowly, so I clocked the time to run a query (the queries themselves take ~0 ms for the DB to execute). On average, it takes 175 ms to execute a query and get a response from the DB. If I do 10 queries in a single page load, that's 1.75 seconds in latency alone! I get much better performance than that from a budget host running MySQL.
Any advice on how to address this issue is appreciated.
It looks like the database was in a different region than my website. Moving it into the same datacenter took the latency down from ~175ms to ~30ms.
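Since the server-side execution time is ~0 ms, the 175 ms observed per query is almost entirely network round-trip time, which is why co-locating the database and the website helps so much. Timing a trivial query is a quick way to measure that round trip; a sketch (the connection is a runnable SQLite stand-in, not the real Azure connection):

```python
import sqlite3
import time

# Stand-in connection; in the real app this would be the Azure DB connection.
conn = sqlite3.connect(":memory:")

# Time a query that costs the server essentially nothing: the elapsed
# time is then dominated by round-trip latency between app and database.
start = time.perf_counter()
conn.execute("SELECT 1").fetchone()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"round trip: {elapsed_ms:.1f} ms")

# With ~175 ms per round trip, 10 sequential queries in one page load
# pay that latency 10 times over:
page_load_ms = 10 * 175
print(f"latency per page load: {page_load_ms} ms")
```

The same measurement run before and after moving the database into the website's region makes the ~175 ms → ~30 ms improvement easy to verify.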

Where does Heroku store its databases?

In light of the recent AWS outage (and thus the Heroku outage), I started thinking a bit more about the Heroku stack. I was wondering: where exactly does Heroku store my DB? I have the free shared Postgres DB that they give with each app... does that sit on an EC2 instance somewhere? It's not RDS, right, since those are all MySQL DBs...?
Thanks,
Ringo
EC2, with the write-ahead logs continuously shipped to S3 via WAL-E.
They are on EC2 instances running PostgreSQL - they are not RDS, as that is MySQL-only, as you say.
Usually you can tell they are Amazon instances from the DATABASE_URL config variable that Heroku sets.
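You can check this yourself by parsing the DATABASE_URL config var: the hostname typically ends in amazonaws.com. A sketch (the URL below is a made-up example in the shape Heroku uses, not a real credential):

```python
from urllib.parse import urlparse

# Hypothetical DATABASE_URL value; not a real database or credential.
database_url = "postgres://user:secret@ec2-12-34-56-78.compute-1.amazonaws.com:5432/dbname"

parsed = urlparse(database_url)
on_aws = parsed.hostname.endswith(".amazonaws.com")
print(parsed.hostname)  # the EC2 hostname
print(on_aws)           # True for an AWS-hosted database
```

On a real app you would read the value with `heroku config:get DATABASE_URL` rather than hard-coding it.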
