I have a Go app on App Engine that receives JSON via POST and stores it in the Datastore.
The statistics for the first 24 hours:
40 entities were stored in the Datastore (every entity is small, less than 1 KB: JSON with 7-10 fields).
7.20 instance hours were consumed.
7 hours is much more than I expected. I expected to see 7 seconds, or even 1 second.
Is that normal?
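For reference, a minimal sketch of the kind of handler described above (the "Entry" kind and its fields are placeholders, and it assumes the legacy google.golang.org/appengine packages):

    package app

    import (
        "encoding/json"
        "net/http"

        "google.golang.org/appengine"
        "google.golang.org/appengine/datastore"
    )

    // Entry stands in for the small JSON payload (7-10 fields, under 1 KB).
    type Entry struct {
        Name  string `json:"name"`
        Value string `json:"value"`
    }

    func init() {
        http.HandleFunc("/store", storeHandler)
    }

    // storeHandler decodes the POSTed JSON and writes one entity to the Datastore.
    func storeHandler(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)

        var e Entry
        if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }

        key := datastore.NewIncompleteKey(ctx, "Entry", nil)
        if _, err := datastore.Put(ctx, key, &e); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusNoContent)
    }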
Instance hours measure how long your app's instances stay up, not how long your code runs. GAE only shuts an instance down after it has been idle for about 15 minutes, so in your case, if a request arrives roughly every 15 minutes, you could use at most 40 req * 15 min / 60 = 10 instance hours. So 7.2 instance hours is plausible.
My simple Python website hosted on App Engine got an increase in traffic. It went from a total of 447 visitors and 860 views in September (peak of 33 visitors in a day) to 1K visitors and 1.5K views in October (peak of 61 visitors in a day).
Meanwhile, the cost went from $0.00 in September to $10.66 in October. The cost breakdown shows that the complete amount is attributed to frontend instances, totaling 930.15 hours of usage. That is about 30 hours a day.
I had already set max_instances and max_idle_instances to 1. With a single instance running, how is it possible to have 30 hours of usage in a day that only lasts 24 hours?
I am using the F4 instance class: once a month I parse an Excel sheet (a job that doesn't depend on the number of visits), and smaller instance classes were exceeding the 256 MB soft memory limit. My frontend is also optimized and fits in less than 30 KB. So with only 1.5K views a month, how can I have that many frontend instance hours?
It is possible to consume more than 24 instance hours in a day because an F4 instance consumes 4 instance hours for every clock hour it runs.
See the App Engine documentation on instance classes:
For example, if you use an F4 instance for one hour, you see "Frontend Instance" billing for four instance hours at the F1 rate.
App Engine bills based on how many hours your instances are up; even if you have no traffic, you may still be billed for idle instance time.
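As a rough check against the numbers in the question (a sketch; the 4x multiplier comes from the quote above):

    package main

    import "fmt"

    func main() {
        billedHours := 930.15 // frontend instance hours billed for October
        days := 31.0
        f4Multiplier := 4.0 // an F4 hour is billed as four F1 hours

        billedPerDay := billedHours / days          // ~30 billed hours per day
        uptimePerDay := billedPerDay / f4Multiplier // ~7.5 clock hours of uptime per day

        fmt.Printf("billed per day: %.1f h, actual uptime per day: %.1f h\n",
            billedPerDay, uptimePerDay)
    }

About 7.5 hours of real uptime per day is plausible for a single instance that is woken by sporadic traffic and then kept warm as an idle instance.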
It seems like I'm being overbilled but I want to make sure I am not misunderstanding how Per Use billing works. Here are the details:
I'm running a small test PHP application on Google App Engine with no visitors other than myself every once in a while.
I periodically reset the database via cron: originally every hour, then every 3 hours last month, now every 6 hours.
Pricing plan: Per Use
Storage Used: 0.1% of 250 GB
Type: First Generation
IPv4 address: None
File system replication: Synchronous
Tier: D0
Activation Policy: On demand
Here's the billing through the first 16 days of this month:
Google SQL Service D0 usage: 383 hour(s), $9.57
16 days * 24 hours = 384 hours, and 384 hours * $0.025 = $9.60. So it appears I've been charged for every hour this month. This also happened last month.
I understand that I am charged the full hour for every part of an hour that the SQL instance is active.
Still, with the minimal app usage and the database reset 4 times a day, I would expect the charges (even allowing for a couple extra hours of usage each day) to be closer to:
16 days * 6 hours = 96 hours, and 96 hours * $0.025 = $2.40.
Any explanation for the discrepancy?
The logs are usually the source of truth. Check them to see whether you are being visited by an aggressive crawler, hit by a stuck task that keeps retrying, etc.
Or you may have a cron job running and performing work. You can view that in the "task queue / cron jobs" section of the control panel.
You might have assigned an IPv4 address to your instance; the Google Developers Console clearly states:
You will be charged $0.01 each hour the instance is inactive and has an IPv4 address assigned.
This might be the reason for your extra bill.
I am currently running a test to estimate how much work my Google App Engine app can do without going over quota.
This is my test:
I have an entity in the Datastore that, according to my local dashboard, needs 18 write operations. I have 5 entities of this kind.
Every 30 seconds, I fetch those 5 entities. I do NOT use memcache for these.
That means 5 * 18 = 90 read operations per fetch, right?
In 1 minute that means 180, and in 1 hour that means 10,800 read operations, which is about 20% of the daily quota.
However, after running the test for 1 hour, I noticed on my online dashboard that only 2% of the read operations quota had been used. Why is that? Where is the flaw in my calculations?
Also, where in the online dashboard can I see how many read/write operations an entity needs?
Thanks
A write on your entity may need 18 write operations, but a get on your entity will cost you only 1 read.
So if you get 5 entities every 30 seconds for one hour, you'll have about 5 reads * 120 = 600 reads.
That is the case when you do a get on your 5 entities (fetching each entity by its id).
If you run a query to fetch them instead, the cost is "1 read + 1 read per entity retrieved", i.e. 1 + 5 = 6 reads per fetch, so around 720 reads in one hour.
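To illustrate the two access patterns, here is a sketch using the legacy Go google.golang.org/appengine/datastore API (the kind name "Entry", its fields, and the numeric IDs are made up for the example):

    package app

    import (
        "net/http"

        "google.golang.org/appengine"
        "google.golang.org/appengine/datastore"
    )

    // Entry is a placeholder for the entity kind being fetched.
    type Entry struct {
        Name  string
        Value string
    }

    func fetchExamples(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)

        // Get by key: 1 read operation per entity, so 5 reads per fetch here.
        keys := make([]*datastore.Key, 0, 5)
        for id := int64(1); id <= 5; id++ {
            keys = append(keys, datastore.NewKey(ctx, "Entry", "", id, nil))
        }
        entries := make([]Entry, len(keys))
        if err := datastore.GetMulti(ctx, keys, entries); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Query: 1 read for the query itself + 1 read per entity retrieved,
        // so 1 + 5 = 6 reads per fetch here.
        var results []Entry
        if _, err := datastore.NewQuery("Entry").GetAll(ctx, &results); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
    }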
For more detailed information, here is the documentation for estimating costs.
You can't see on the dashboard how many write/read operations an entity needs, but I invite you to check Appstats for that.
I'm using App Engine backend instances, and the daily free quota is 9 instance hours. However, I've been running a backend with 10 instances for around 16-17 minutes, and my usage has already crossed 66%.
The calculation I had in mind was 17 min * 10 instances = 170 min ≈ 2.8 hours, which is definitely less than 66% of 9 hours.
Can someone explain me the billing scheme here?
From https://developers.google.com/appengine/docs/quotas#Requests:
Instance Hours (billable) In general, instance usage is billed on an
hourly basis based on the instance's uptime. Billing begins when the
instance starts and ends fifteen minutes after the instance shuts
down. You will be billed only for idle instances up to the number of
maximum idle instances set in the Performance Settings tab of the
Admin Console. Runtime overhead is counted against the instance
memory.
In your case, you'd have 17 min of activity + 15 min after shutdown = 32 minutes per instance. So 320 minutes (32 * 10 instances), about 5.3 hours, is pretty close to 2/3 of 9 hours.
You should be able to see the details in the Usage History of your application.
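The same arithmetic as a quick check, using the numbers from the question (a rough sketch that ignores startup overhead):

    package main

    import "fmt"

    func main() {
        const (
            instances      = 10
            activeMinutes  = 17 // time the backend instances were actually running
            billingTailMin = 15 // billing continues 15 minutes after shutdown
        )

        billedMinutes := (activeMinutes + billingTailMin) * instances // 320 minutes
        billedHours := float64(billedMinutes) / 60                    // ~5.3 hours

        freeQuotaHours := 9.0
        fmt.Printf("billed: %.1f h (%.0f%% of the %.0f h free quota)\n",
            billedHours, billedHours/freeQuotaHours*100, freeQuotaHours)
    }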
We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including Apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble relating request rates to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes, then:
1500 / 5 = 300 transactions per minute can be served
300 / 60 = 5 transactions per second can be served
So how are they getting 17 completed transactions per second? Last time I checked, 5 < 17!
This doesn't seem to fit. Or am I looking at it wrong?
I presume that by user response time you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200 ms (1/5 s) per transaction.
If they can serve 17 per second, then it takes 59 ms (1/17 s) per transaction.
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
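As a quick check using the vendor's figures (a sketch; it assumes transactions are handled strictly one at a time, which is the only case where response time is simply the inverse of throughput):

    package main

    import "fmt"

    func main() {
        // Figures from the vendor's report.
        servedIn5Min := 1500.0 // requests served in a 5-minute window
        claimedPerSec := 17.0  // claimed completed transactions per second

        perSecFromReport := servedIn5Min / (5 * 60) // 1500 / 300 = 5 tx/s

        fmt.Printf("from 1500 per 5 min: %.0f tx/s -> %.0f ms per transaction\n",
            perSecFromReport, 1000/perSecFromReport)
        fmt.Printf("from the claimed rate: %.0f tx/s -> %.0f ms per transaction\n",
            claimedPerSec, 1000/claimedPerSec)
    }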