I see some enigmatic time fields in the standard Google App Engine logs which make me curious:
2011-05-09 21:56:00.577 /whatever 200 211ms 775cpu_ms 589api_cpu_ms
0.1.0.1 - - [09/May/2011:21:56:00 -0700] "GET /whatever HTTP/1.1"
200 34 - "AppEngine-Google; (+http://code.google.com/appengine)"
"****.appspot.com" ms=212 cpu_ms=776 api_cpu_ms=589 cpm_usd=0.021713
queue_name=__cron task_name=dc4d411120bc75ea8bbea773d23eaecc
In particular: ms, cpu_ms and api_cpu_ms, each of which appears twice with slightly different values.
Additionally, when I log timing information myself for the GET request with the simple structure below, it prints a considerably lower value: in this case 182 msecs, against the official 775.
protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    long t0 = System.currentTimeMillis();
    // ... do the stuff ...
    long t1 = System.currentTimeMillis();
    log.info("Completed in " + (t1 - t0) + " msecs.\n");
}
So, my questions are: Why is there a difference between my measured time and the cpu_ms value, and how can I lower it? What is the exact meaning of the time values in the GAE log fields?
I want to optimize my code, and based on the figures above I realized that most of the time (nearly 600 msecs!) is not spent directly in the doGet request. (I use JPA and URLFetch, and this is a cron task.)
211ms: This is the response time as it will be perceived by the user who requested the page. You want to decrease it in order to improve the speed of your website.
775cpu_ms: According to the App Engine documentation, "CPU time is reported in "seconds," which is equivalent to the number of CPU cycles that can be performed by a 1.2 GHz Intel x86 processor in that amount of time. The actual number of CPU cycles spent varies greatly depending on conditions internal to App Engine, so this number is adjusted for reporting purposes using this processor as a reference measurement."
So it's normal that this is not the "real" time: it differs from what you measured with System.currentTimeMillis() because it is adjusted. To monitor CPU usage from your own code, you can use the Quota API (see the documentation, and the sketch after this list). CPU time is billable (the free quota is 6.5 CPU-hours per day, and you can pay for more CPU time), so you want to decrease it in order to pay less.
589api_cpu_ms: This is the adjusted CPU time spent on API usage (Datastore, User API, etc.).
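To illustrate the difference between wall-clock time and adjusted CPU time, here is a minimal sketch using the Quota API of the old App Engine Java SDK (com.google.appengine.api.quota). The servlet name is hypothetical and the API has since been deprecated, so treat this as a sketch rather than a drop-in solution.

import java.io.IOException;
import java.util.logging.Logger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.quota.QuotaService;
import com.google.appengine.api.quota.QuotaServiceFactory;

// Hypothetical servlet, for illustration only.
public class TimingServlet extends HttpServlet {
    private static final Logger log = Logger.getLogger(TimingServlet.class.getName());

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        QuotaService qs = QuotaServiceFactory.getQuotaService();

        long wallStart = System.currentTimeMillis();
        long cpuStart = qs.getCpuTimeInMegaCycles();   // adjusted CPU so far, in megacycles

        // ... do the stuff (JPA, URLFetch, etc.) ...

        long wallEnd = System.currentTimeMillis();
        long cpuEnd = qs.getCpuTimeInMegaCycles();

        // The wall-clock figure roughly corresponds to the "ms" log field,
        // the converted megacycles to the "cpu_ms" field.
        double cpuSeconds = qs.convertMegacyclesToCpuSeconds(cpuEnd - cpuStart);
        log.info("Wall clock: " + (wallEnd - wallStart) + " ms, adjusted CPU: "
                + (cpuSeconds * 1000) + " ms");

        resp.setContentType("text/plain");
        resp.getWriter().println("done");
    }
}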
Using the Gmail API, I keep getting hundreds of 429 "User-rate limit exceeded" errors.
My script sends 100 emails at a time, every 10 minutes.
Shouldn't this be within the API limits?
The error pops up after only 5 or 6 successful sends.
Thanks
Have a look at the Gmail usage limits: sending a message (messages.send) costs 100 quota units, and the per-user rate limit is 250 quota units per second (a moving average that allows short bursts).
So if you send 100 emails in a very short time, that corresponds to 10,000 units, which is 40 times more than the allowed quota per second.
So while short bursts are allowed, exceeding the quota this significantly is likely beyond what the burst allowance covers.
In this case you should implement exponential backoff as recommended by Google.
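For example, here is a minimal exponential-backoff sketch (in Java for illustration; sendMessage() is a placeholder for whatever call your script actually makes against the Gmail API):

import java.util.Random;

public class BackoffExample {
    private static final Random RANDOM = new Random();

    // Retries sendMessage() with exponentially growing waits plus random jitter.
    static void sendWithBackoff(int maxRetries) throws Exception {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                sendMessage();              // placeholder for the real Gmail API call
                return;                     // success, stop retrying
            } catch (Exception e) {         // in real code, only retry on 429 / rate-limit errors
                if (attempt == maxRetries) {
                    throw e;                // give up after the last attempt
                }
                long sleepMs = (1000L << attempt) + RANDOM.nextInt(1000); // 1s, 2s, 4s, ... plus jitter
                Thread.sleep(sleepMs);
            }
        }
    }

    static void sendMessage() throws Exception {
        // ... call the Gmail API here ...
    }
}

Spreading the 100 sends out with a small delay between messages, instead of firing them all at once, also helps keep you under the per-second quota.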
How can I check my requirement that 100 requests are processed in less than 1 second in my Gatling 3 report? I ran this using Jenkins.
My simulation setup looks like this:
rampConcurrentUsers(1) to (100) during (161 second),
constantConcurrentUsers(100) during (1 minute)
Below is my response-time percentiles graph for two executions, with an interval of one second.
[response time percentiles graph]
What do the min and max here tell us? I am assuming the percentiles 25%-99% refer to the completion of the requests.
Those graph sections are not what you're after - they show the distribution of response times and the number of active users.
So min is the fastest response time
max is the longest
95% is the response time that 95% of your requests came in under
and so on...
So what you could do is look at the section of your graph corresponding to the 100-constant-concurrent-users injection stage. In this part, you would require that the max response time always stays under 1 second.
(Note: there's something odd with your 2nd report - I assume it didn't come from running the stated injection profile as it has more than 100 concurrent users active)
It seems like I'm being overbilled but I want to make sure I am not misunderstanding how Per Use billing works. Here are the details:
I'm running a small test PHP application on Google App Engine with no visitors other than myself every once in a while.
I periodically reset the database via cron: originally every hour, then every 3 hours last month, now every 6 hours.
Pricing plan: Per Use
Storage Used: 0.1% of 250 GB
Type: First Generation
IPv4 address: None
File system replication: Synchronous
Tier: D0
Activation Policy: On demand
Here's the billing through the first 16 days of this month:
Google SQL Service D0 usage - hour 383 hour(s) $9.57
16 days * 24 hours = 384 hours; 384 hours * $0.025 = $9.60. So it appears I've been charged for every hour this month. This also happened last month.
I understand that I am charged the full hour for every part of an hour that the SQL instance is active.
Still, with the minimal app usage and the database reset 4 times a day, I would expect the charges (even allowing for a couple extra hours of usage each day) to be closer to:
16 days * 6 hours = 96 hours; 96 hours * $0.025 = $2.40.
Any explanation for the discrepancy?
The logs are usually the source of truth. Check them to see if you are being visited by an aggressive crawler, a stuck task that keeps retrying, etc.
Or you may have a cron job that is running and performing work. You can view that in the "task queue/cron jobs" section in the control panel.
You might have assigned an IPv4 address to your instance, and the Google Developers Console clearly states:
You will be charged $0.01 each hour the instance is inactive and has an IPv4 address assigned.
This might be the reason for your extra bill.
My program fetches ~100 entries in a loop. All entries are fetched using get_by_key_name(). Appstats shows that some get_by_key_name() requests take as much as 750ms! (Other big values are 355ms, 260ms, 230ms.) The average for the other fetches ranges from 30ms to 100ms. These times are real time and hence contribute towards 'ms' and not 'cpu_ms'.
Due to the above, the total time taken to return the web page is very high: ms=5754, while cpu_ms=1472. (The above times are seen repeatedly for back-to-back requests.)
Environment: Python 2.7, webapp2, jinja2, High Replication, No other concurrent requests to the server, Frontend Instance Class is F1, No memcache set yet, max idle instances is automatic, min pending latency is automatic, using db (NOT NDB).
Any help will be greatly appreciated, as I based the whole database design on fetching entries from the datastore using only get_by_key_name()!
Update:
I tried profiling using time.clock() before and immediately after every get_by_key_name() method call. The difference I get from time.clock() for every single call is 10ms! (Just want to clarify that the get_by_key_name() is called on different Kinds).
According to time.clock() the total execution time (in wall-clock time) is 660ms. But the real-time is 5754 (=ms), and cpu_ms is 1472 per GAE logs.
Summary of Questions:
[Update: This was addressed by passing a list of keys.] Why is get_by_key_name() taking that long?
Why is ms (5754) so much more than cpu_ms (1472)? Is task execution halted/waiting for 75% (1 - 1472/5754) of the time, which is why the real (wall-clock) time is so long as far as the end user is concerned?
If the above is true, then why does time.clock() show that only 660ms (wall-clock time) elapsed between the start of the first get_by_key_name() call and the last (~100th) one, although GAE reports this time as 5754ms?
We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including Apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble measuring requests to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per min can be served
300 / 60 = 5 transactions per second can be served
so how are they getting 17 completed transactions per second? Last time I checked 5 < 17 !
This doesn't seem to fit. Or am I looking at it wrongly?
I presume that by user response time you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200ms (1/5 s) per transaction.
If they can serve 17 per second, then it takes about 59ms (1/17 s) per transaction.
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
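As a back-of-the-envelope check (assuming requests are handled one at a time, so per-transaction time is simply 1 / throughput), the same arithmetic in code:

public class ThroughputCheck {
    public static void main(String[] args) {
        // 1500 requests served in 5 minutes
        double fromFiveMinutes = 1500.0 / (5 * 60);                          // = 5 transactions per second
        System.out.println(1000 / fromFiveMinutes + " ms per transaction");  // 200 ms

        // the vendor's claimed 17 transactions per second
        double claimed = 17.0;
        System.out.println(1000 / claimed + " ms per transaction");          // ~59 ms
    }
}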