Why is my Google Cloud SQL instance being billed every hour? - google-app-engine

It seems like I'm being overbilled but I want to make sure I am not misunderstanding how Per Use billing works. Here are the details:
I'm running a small test PHP application on Google App Engine with no visitors other than myself every once in a while.
I periodically reset the database via cron: originally every hour, then every 3 hours last month, now every 6 hours.
Pricing plan: Per Use
Storage Used: 0.1% of 250 GB
Type: First Generation
IPv4 address: None
File system replication: Synchronous
Tier: D0
Activation Policy: On demand
Here's the billing through the first 16 days of this month:
Google SQL Service D0 usage - hour 383 hour(s) $9.57
16 days * 24 hours = 384 hours * $0.025 = $9.60. So it appears I've been charged for every hour this month. This also happened last month.
I understand that I am charged the full hour for every part of an hour that the SQL instance is active.
Still, with the minimal app usage and the database reset 4 times a day, I would expect the charges (even allowing for a couple extra hours of usage each day) to be closer to:
16 days * 6 hours/day = 96 hours * $0.025 = $2.40.
Any explanation for the discrepancy?
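For reference, here is the arithmetic above as a quick sketch (the $0.025/hour D0 rate and the hour counts are the figures quoted in the question, not official pricing):

```python
# Quick comparison of billed vs. expected Cloud SQL D0 hours.
# All numbers are taken from the question; the $0.025/hour D0 rate
# is the asker's figure, not an authoritative price.
D0_RATE = 0.025          # dollars per D0 instance hour (per the question)
DAYS = 16

billed_hours = 383                # from the billing line item
expected_hours = DAYS * 6         # roughly 6 active hours per day

print(f"billed:   {billed_hours} h -> ${billed_hours * D0_RATE:.2f}")     # the ~$9.57 line item
print(f"expected: {expected_hours} h -> ${expected_hours * D0_RATE:.2f}")  # the ~$2.40 estimate
```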

The logs are usually the source of truth. Check them to see whether you are being visited by an aggressive crawler, a stuck task that keeps retrying, etc.
Or you may have a cron job running and performing work. You can view that in the "Task queues / Cron jobs" section of the control panel.

You might have assigned an IPv4 address to your instance; the Google Developers Console clearly states:
You will be charged $0.01 each hour the instance is inactive and has an IPv4 address assigned.
This might be the reason for your extra bill.

Related

How to explain the high front end instance cost?

My simple Python website hosted on App Engine got some increase in traffic. It went from a total of 447 visitors and 860 views in September (peak of 33 visitors in a day) to 1K visitors and 1.5K views in October (peak of 61 visitors in a day).
Meanwhile the cost went from $0.00 in September to $10.66 USD in October. The cost breakdown shows that the entire amount is attributed to front end instances, totaling 930.15 hours of usage. That is about 30 hours a day.
I had already set my max_instances and max_idle_instances to 1. With a single instance running, how is it possible to have 30 hours of usage in a day that lasts 24 hours?
I am using the F4 instance class: once a month I parse an Excel sheet (which doesn't depend on the number of visits), and smaller instance classes were exceeding the soft memory limit of 256 MB. My front end is also optimized and fits in less than 30 KB. So with only 1.5K views a month, how can I have that many front end instance hours?
It is possible to consume more than 24 instance hours in a day because an F4 instance consumes four instance hours per clock hour.
See here.
For example, if you use an F4 instance for one hour, you see "Frontend Instance" billing for four instance hours at the F1 rate.
App Engine bills based on how many hours your instances are up. Even if you have no traffic, you may still be billed for idle instances.
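As a rough sanity check of the numbers in the question under that 4x multiplier (a sketch, assuming all 930.15 billed hours came from the single F4 instance):

```python
# F4 hours are billed as 4x F1-rate instance hours, so convert the
# billed figure back to wall-clock uptime of the single F4 instance.
billed_f1_hours = 930.15     # "Frontend Instances" hours from the October bill
F4_MULTIPLIER = 4            # one F4 hour = four F1-rate instance hours
DAYS_IN_MONTH = 31

uptime_hours = billed_f1_hours / F4_MULTIPLIER
print(f"actual F4 uptime: {uptime_hours:.1f} h")                      # ~232.5 h
print(f"per day:          {uptime_hours / DAYS_IN_MONTH:.1f} h/day")  # ~7.5 h/day
```

About 7.5 hours of real uptime per day is plausible once you account for the instance lingering after each request and the one idle instance you allow, and at the F4 rate it shows up as roughly 30 billed hours a day.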

Go - AppEngine - Performance

I have a Go app that receives JSON via POST and stores it in the Datastore (App Engine).
The statistics for the first 24 hours:
40 entities were stored in the Datastore (every entity is small, less than 1 KB: JSON with 7-10 fields).
7.20 instance hours consumed.
7 hours is much more than I expected. I expected to see 7 seconds, or even 1 second.
Is that normal?
Instance hours measure how long your app's instance stays up, not how long it spends handling requests. GAE shuts an instance down after about 15 minutes with no incoming requests, so in your case, if a request arrives roughly every 15 minutes, 40 requests could keep the instance up for as much as 40 req * 15 min / 60 = 10 instance hours. So 7.2 instance hours is possible.
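A quick sketch of that worst-case calculation (it assumes the roughly 15-minute idle window mentioned above; the actual shutdown delay can vary):

```python
# Worst case: each request arrives just as the previous idle window is
# about to expire, keeping the instance alive for another ~15 minutes.
REQUESTS = 40
IDLE_WINDOW_MIN = 15   # approximate time an idle instance stays up

max_instance_hours = REQUESTS * IDLE_WINDOW_MIN / 60
print(f"max instance hours: {max_instance_hours:.1f}")   # 10.0, so 7.2 is plausible
```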

Backend instance hours count

I'm using the AppEngine's Backend instances and the daily free quota is 9 instance hours. However, I've been using a Backend with 10 instances for around 16-17 minutes and my usage has already crossed 66%.
The calculation I had in mind was 17 min * 10 instances = 170 min ≈ 2.8 hours, which is definitely less than 66% of 9 hours.
Can someone explain the billing scheme here?
From https://developers.google.com/appengine/docs/quotas#Requests:
Instance Hours (billable): In general, instance usage is billed on an hourly basis based on the instance's uptime. Billing begins when the instance starts and ends fifteen minutes after the instance shuts down. You will be billed only for idle instances up to the number of maximum idle instances set in the Performance Settings tab of the Admin Console. Runtime overhead is counted against the instance memory.
In your case, you'd have 17 min of activity + 15 min after shutdown = 32 minutes per instance. So 320 minutes (32 * 10 instances) is pretty close to 2/3 of 9 hours (360 minutes).
You should be able to see the details in the Usage History of your application.
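The same check as a short sketch, using the 15-minute post-shutdown window from the quoted documentation:

```python
# Backend billing: uptime plus ~15 minutes after shutdown, per instance.
ACTIVE_MIN = 17
POST_SHUTDOWN_MIN = 15
INSTANCES = 10
FREE_QUOTA_HOURS = 9

billed_min = (ACTIVE_MIN + POST_SHUTDOWN_MIN) * INSTANCES        # 320 minutes
quota_used = billed_min / (FREE_QUOTA_HOURS * 60)
print(f"billed: {billed_min} min = {quota_used:.0%} of the free quota")  # ~59%, close to the reported 66%
```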

Apache2: server-status reported value for "requests/sec" is wrong. What am I doing wrong?

I am running Apache2 on Linux (Ubuntu 9.10).
I am trying to monitor the load on my server using mod_status.
There are 2 things that puzzle me (see cut-and-paste below):
1) The CPU load is reported as a ridiculously small number, whereas "uptime" reports a number between 0.05 and 0.15 at the same time.
2) The "requests/sec" is also ridiculously low (0.06) when I know there are at least 10 requests coming in per second right now. (You can see there are close to a quarter million "accesses" - this sounds right.)
I am wondering whether this is a bug (if so, is there a fix/workaround),
or maybe a configuration error (but I can't imagine how).
Any insights would be appreciated.
-- David Jones
- - - - -
Current Time: Friday, 07-Jan-2011 13:48:09 PST
Restart Time: Thursday, 25-Nov-2010 14:50:59 PST
Parent Server Generation: 0
Server uptime: 42 days 22 hours 57 minutes 10 seconds
Total accesses: 238015 - Total Traffic: 91.5 MB
CPU Usage: u2.15 s1.54 cu0 cs0 - 9.94e-5% CPU load
.0641 requests/sec - 25 B/second - 402 B/request
11 requests currently being processed, 2 idle workers
- - - - -
After I restarted my Apache server, I realized what is going on. The "requests/sec" figure is calculated over the lifetime of the server, so if your Apache server has been running for 3 months, it tells you nothing at all about the current load. It simply reports the total number of requests divided by the total number of seconds of uptime.
It would be nice if there was a way to see the current load on your server. Any ideas?
Anyway, ... answered my own question.
-- David Jones
The Apache status value "Total Accesses" is the total access count since the server started; "requests per second" in the current-load sense is the per-second delta of that value.
Here is one way to get it:
1) Use the Apache monitor script for Zabbix:
https://github.com/lorf/zapache/blob/master/zapache
2) Install and configure the Zabbix agent (zabbix_agentd):
UserParameter=apache.status[*],/bin/bash /path/apache_status.sh $1 $2
3) In Zabbix, create an Apache template and a monitored item:
Key: apache.status[{$APACHE_STATUS_URL}, TotalAccesses]
Type: Numeric(float)
Update interval: 20
Store value: Delta (speed per second) (this is the key option)
Zabbix will calculate the increment of the Apache request counter and store the delta value, which is the requests per second.
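If you just want a quick number without setting up Zabbix, you can sample mod_status yourself and take the delta over a short interval. A minimal sketch, assuming the machine-readable endpoint at http://localhost/server-status?auto is reachable and ExtendedStatus is on (adjust the URL for your server):

```python
# Sample Apache mod_status twice and compute requests/sec over the
# interval, instead of the lifetime average shown on the status page.
import re
import time
import urllib.request

STATUS_URL = "http://localhost/server-status?auto"  # adjust for your setup

def total_accesses(url=STATUS_URL):
    """Return the 'Total Accesses' counter from the machine-readable status page."""
    body = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return int(re.search(r"Total Accesses:\s*(\d+)", body).group(1))

INTERVAL = 20  # seconds between samples
first = total_accesses()
time.sleep(INTERVAL)
second = total_accesses()
print(f"current load: {(second - first) / INTERVAL:.2f} requests/sec")
```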

How do I measure response time in seconds given the following benchmarking data?

We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble measuring requests to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served in 5 minutes, then:
1500 / 5 = 300 transactions per minute can be served
300 / 60 = 5 transactions per second can be served
So how are they getting 17 completed transactions per second? Last time I checked, 5 < 17!
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200 ms (1/5 s) per transaction.
If they can serve 17 per second, then it takes about 59 ms (1/17 s) per transaction.
That is all we can tell from the given data. Perhaps ask them to clarify how many transactions are actually being completed per second.
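To make the arithmetic explicit, here is a short sketch of the two readings of the vendor's numbers (the per-transaction time here is just the inverse of throughput for a single sequential user; real response times also depend on concurrency, which isn't given):

```python
# Two ways to read the vendor's figures.
SERVED = 1500          # transactions reportedly served...
WINDOW_SEC = 5 * 60    # ...in a 5-minute window
CLAIMED_TPS = 17       # the vendor's "17 transactions per second"

derived_tps = SERVED / WINDOW_SEC   # 5 transactions/sec
for tps in (derived_tps, CLAIMED_TPS):
    print(f"{tps:.0f} tx/sec -> ~{1000 / tps:.0f} ms per transaction (single sequential user)")
```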
