Appengine over quota - google-app-engine

I've been using App Engine for a long time, but today my application went over quota (exceeding my daily budget) after only 3 hours. The dashboard, however, shows that almost no resources have been used, so it should be nowhere near the daily limit.
Also strange: despite the fact that the dashboard says my daily limit is reached, I have no problem retrieving data. Only writing to the datastore gives an over quota exception (com.google.apphosting.api.ApiProxy$OverQuotaException: The API call datastore_v3.Put() required more quota than is available). The statistics below, however, show there were not many writes. If I look at the quota details, all indicators are Okay.
Billing Status: Enabled (Daily budget: $2.00). Quotas reset every 24 hours. Next reset: 21 hrs
Resource Usage Billable Price Cost
Frontend Instance Hours 4.30 Instance Hours 0.00 $0.08/ Hour $0.00
Backend Instance Hours 0.00 Instance Hours 0.00 $0.08/ Hour $0.00
Datastore Stored Data 2.86 GBytes 1.86 $0.008/ GByte-day $0.02
Logs Stored Data 0.04 GBytes 0.00 $0.008/ GByte-day $0.00
Task Queue Stored Task Bytes 0.00 GBytes 0.00 $0.008/ GByte-day $0.00
Blobstore Stored Data 0.00 GBytes 0.00 $0.0043/ GByte-day $0.00
Code and Static File Storage 0.12 GBytes 0.00 $0.0043/ GByte-day $0.00
Datastore Write Operations 0.06 Million Ops 0.01 $1.00/ Million Ops $0.02
Datastore Read Operations 0.01 Million Ops 0.00 $0.70/ Million Ops $0.00
Datastore Small Operations 0.00 Million Ops 0.00 $0.10/ Million Ops $0.00
Outgoing Bandwidth 0.01 GBytes 0.00 $0.12/ GByte $0.00
Recipients Emailed 0 0 $0.01/ 100 Recipients $0.00
Stanzas Sent 0 0 $0.10/ 100K Stanzas $0.00
Channels Created 0% 0 of 95,040 0 $0.01/ 100 Opens $0.00
Logs Read Bandwidth 0.00 GBytes 0.00 $0.12/ GByte $0.00
PageSpeed Outgoing Bandwidth 0.01 GBytes 0.01 $0.39/ GByte $0.01
SSL VIPs 0 0 $1.30/ Day $0.00
SSL SNI Certificates 0 0 $0.06/ Day $0.00
Estimated cost for the last 3 hours: $2.00* / $2.00

Related

How to explain the high front end instance cost?

My simple Python website hosted on App Engine got an increase in traffic. It went from a total of 447 visitors and 860 views in September (peak of 33 visitors in a day) to 1K visitors and 1.5K views in October (peak of 61 visitors in a day).
Meanwhile the cost went from $0.00 in September to $10.66 USD in October. The cost breakdown shows that the complete amount is attributed to frontend instances, totaling 930.15 hours of usage. That is about 30 hours a day.
I have set max_instances and max_idle_instances to 1. With a single instance running, how is it possible to have 30 hours of usage in a day that lasts 24 hours?
I am using the F4 instance class: once a month I parse an Excel sheet (which doesn't depend on the number of visits), and smaller instance classes exceeded the soft memory limit of 256 MB. My front end is also optimized and fits in less than 30 KB. So with only 1.5K views a month, how can I have that many frontend instance hours?
It is possible to consume more than 24 instance hours in a day because an F4 instance consumes 4 instance hours per wall-clock hour.
See here.
For example, if you use an F4 instance for one hour, you see "Frontend Instance" billing for four instance hours at the F1 rate.
App Engine bills based on how many hours your instances are up. Even if you have no traffic, you may be billed while an instance stays resident.
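The arithmetic can be sanity-checked quickly. A rough sketch, assuming the 4× F4 multiplier and the 930.15 billed hours from the question:

```python
# Back-of-the-envelope check of the F4 billing math (figures from the question).
F4_MULTIPLIER = 4            # F4 bills 4 instance hours per wall-clock hour
billed_hours_month = 930.15  # frontend instance hours on the October bill
days = 31

# Real wall-clock hours the instance was actually up during the month.
clock_hours = billed_hours_month / F4_MULTIPLIER
print(round(clock_hours / days, 1))  # ~7.5 wall-clock hours resident per day
```

So roughly 30 billed "hours" per day corresponds to the instance being resident only about 7.5 real hours a day, which idle-instance retention after sporadic traffic can plausibly explain.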

Formula for a conditional cumulative sum in Mac Numbers

I have a spreadsheet in Mac[1] Numbers[2] that tracks amounts due and amounts paid, ordered by date.
Transaction Date | Customer Name | Amount Due | Amount Paid
...              | ...           | ...        | ...
16 Nov 2022      | Name1         | $70.00     | $70.00
16 Nov 2022      | Name2         | $70.00     | $0.00
16 Nov 2022      | Name3         | $0.00      | $70.00
16 Nov 2022      | Name2         | $0.00      | $70.00
...              | ...           | ...        | ...
I would like to add an additional column, Running Total, that shows for each transaction, the accumulated credit/amount due for that row's customer up to that row's date.
Transaction Date | Customer Name | Amount Due | Amount Paid | Running Total
...              | ...           | ...        | ...         | ...
16 Nov 2022      | Name1         | $70.00     | $70.00      | $0.00
16 Nov 2022      | Name2         | $70.00     | $0.00       | -$70.00
16 Nov 2022      | Name3         | $0.00      | $70.00      | $70.00
16 Nov 2022      | Name2         | $0.00      | $70.00      | $0.00
...              | ...           | ...        | ...         | ...
I have a separate sheet that shows the complete running sum for each customer from the beginning of time through the present, but I'm at a loss for how to create a row-by-row, grouped running sum (or even whether Numbers allows for it).
I have tried to create formulas with SUMIF, but have not made headway in expressing the kind of filter I need, so I don't even have a runnable test formula. Various Google searches for creating running sums by group/category in Numbers, Excel, and Google Sheets have not yielded results.
In a database, this would be trivial, but I'm restricted to Numbers.
[1] macOS Monterey 12.5
[2] Numbers 12.2
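To pin down the logic before translating it into a spreadsheet formula, here is a minimal sketch in Python of the grouped running total being asked for: the cumulative (Amount Paid − Amount Due) per customer, in row order, using the sample rows above.

```python
# Sketch: row-by-row running total, grouped by customer.
# Running Total = cumulative (Amount Paid - Amount Due) for that row's
# customer, accumulated from the top of the table down to the current row.
from collections import defaultdict

rows = [
    ("16 Nov 2022", "Name1", 70.00, 70.00),
    ("16 Nov 2022", "Name2", 70.00, 0.00),
    ("16 Nov 2022", "Name3", 0.00, 70.00),
    ("16 Nov 2022", "Name2", 0.00, 70.00),
]

totals = defaultdict(float)   # per-customer balance so far
running = []
for date, customer, due, paid in rows:
    totals[customer] += paid - due
    running.append(totals[customer])

print(running)  # [0.0, -70.0, 70.0, 0.0]
```

In spreadsheet terms this corresponds to two SUMIFs over expanding ranges anchored at the top of the table, something like `SUMIF(B$2:B2, B2, D$2:D2) − SUMIF(B$2:B2, B2, C$2:C2)` filled down the Running Total column (the exact cell references and Numbers syntax are assumptions to be adapted to your sheet).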

Create a running total in SQL with hours but only using work hours

This might be a strange question, but I will try to explain it as best I can.
BTW: there is no chance of implementing this through stored procedures; it should be done in a SQL query only. But if a stored procedure is the only option, then I will have to adapt to that.
I have a table with the following elements :
RUN  | WORKORDER | LOCATION   | TRAVELTIME | NUMEQUIP | TOT_TIME
NO99 | 1         | Start      |            |          |
NO99 | 2         | Customer 1 | 112        | 1        | 8
NO99 | 3         | Customer 2 | 18         | 11       | 88
NO99 | 4         | Customer 3 | 22         | 93       | 744
NO99 | 5         | Customer 4 | 34         | 3        | 24
I need to add a running DATE and TIME, calculated from the time it takes to get from one line to the next, BUT (and this is important) taking into account working hours only: from 9:00 to 13:00 and from 14:00 to 18:00 (in US format: from 9 am to 1 pm, and from 2 pm to 6 pm). As an example, assuming my start date and time is 10/May/2022 9:00:
RUN  | WORKORDER | LOCATION   | TRAVELTIME | NUMEQUIP | TOT_TIME | DATE     | TIME
NO99 | 1         | Start      |            |          |          | 10/05/22 | 9:00
NO99 | 2         | Customer 1 | 112        | 1        | 8        | 10/05/22 | 10:52
NO99 | 3         | Customer 2 | 18         | 11       | 88       | 10/05/22 | 11:18
NO99 | 4         | Customer 3 | 22         | 93       | 744      | 10/05/22 | 14:08
NO99 | 5         | Customer 4 | 34         | 3        | 24       | 12/05/22 | 10:06
This result is achieved by calculating the estimated travel time between customers (TRAVELTIME) and, after arrival, adding the time spent on maintenance (TOT_TIME, which is the number of equipments (NUMEQUIP) times 8 minutes per equipment). Since Customer 3 has 744 minutes (12 hours and 24 minutes) of maintenance, and those minutes span three days, the result should be as shown.
With the following query I get almost the desired effect, but it cannot take only work hours into account; all time is continuous:
SELECT
    RUN, WORKORDER, LOCATION, TRAVELTIME,
    DATEADD(mi, temprunningtime - TOT_TIME, '9:00') AS TIME,
    NUMEQUIP, NUMEQUIP * 8 AS TOT_TIME,
    SUM(MYTABLE.TRAVELTIME + MYTABLE.TOT_TIME)
        OVER (ORDER BY MYTABLE.ORDER) AS temprunningtime
FROM MYTABLE
With this query (slightly altered) I get a running TIME, but it does not take into account the 13:00-14:00 stop or the 18:00-9:00 stop.
It might be a bit confusing, but any ideas on this would be much appreciated; I will clarify wherever I can.
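The clock arithmetic the query needs can be stated precisely outside SQL. This sketch (Python, not SQL) advances a timestamp by working minutes only, with the working day split into 9:00-13:00 and 14:00-18:00; it reproduces the expected table above and can serve as a reference before encoding the same logic in SQL (for example with a calendar/numbers table).

```python
# Advance a timestamp by N minutes of *working* time, where the working
# day is 9:00-13:00 and 14:00-18:00 (as in the question).
from datetime import datetime, timedelta

WINDOWS = [(9 * 60, 13 * 60), (14 * 60, 18 * 60)]  # minutes from midnight

def advance(ts: datetime, minutes: int) -> datetime:
    """Move ts forward by `minutes` of working time."""
    while minutes > 0:
        now = ts.hour * 60 + ts.minute
        for start, end in WINDOWS:
            if start <= now < end:
                step = min(minutes, end - now)   # stay inside this window
                ts += timedelta(minutes=step)
                minutes -= step
                break
        else:
            # Outside working hours: jump to the next window start,
            # or to 9:00 the next day if the day is over.
            later = [s for s, _ in WINDOWS if s > now]
            if later:
                ts = ts.replace(hour=later[0] // 60, minute=later[0] % 60)
            else:
                ts = (ts + timedelta(days=1)).replace(hour=9, minute=0)
    return ts

# Replay the example: start 10 May 2022 9:00.
ts = datetime(2022, 5, 10, 9, 0)
for travel, work in [(112, 8), (18, 88), (22, 744), (34, 24)]:
    ts = advance(ts, travel)          # arrival time at the customer
    print(ts.strftime("%d/%m/%y %H:%M"))
    ts = advance(ts, work)            # then maintenance time on site
```

Run as written, this prints 10/05/22 10:52, 10/05/22 11:18, 10/05/22 14:08, and 12/05/22 10:06, matching the expected result (Customer 3's 744-minute job crosses the lunch break and two overnight stops).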

POST key=value pair in siege

First of all, I cannot get POST to work in siege:
siege https://apicdntest.fadv.com/orders/kelly POST
This does not hit the URL, but when I remove the POST keyword it hits the URL with GET.
What I actually need is to hit the URL with a key/value pair, just like below:
siege https://apicdntest.fadv.com/orders/kelly post MyXML='<root><test>test</test></root>'
Can somebody please help me do this in siege?
The GET result is below:
Transactions: 4 hits
Availability: 100.00 %
Elapsed time: 2.52 secs
Data transferred: 0.00 MB
Response time: 0.26 secs
Transaction rate: 1.59 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.41
Successful transactions: 0
Failed transactions: 0
Longest transaction: 0.26
Shortest transaction: 0.00
The POST result:
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 16.20 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 4
Longest transaction: 0.00
Shortest transaction: 0.00
Try reformatting your request like so:
siege "https://apicdntest.fadv.com/orders/kelly POST"
Note the quotes, which according to Jeffrey Fulmer (author of Siege) are required: "Because POST urls contain spaces before and after POST, you must always quote them at the commandline."
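Independently of siege's quoting, it can help to confirm the endpoint accepts the POST body at all. A minimal standard-library sketch (the URL and the MyXML parameter are taken from the question; adjust to your real payload):

```python
# Build the same key/value POST outside siege as a sanity check.
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({"MyXML": "<root><test>test</test></root>"}).encode()
req = Request(
    "https://apicdntest.fadv.com/orders/kelly",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method())  # urllib switches to POST once data is attached
# urllib.request.urlopen(req) would actually send it; omitted to stay offline.
```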

Mongodb - 100% cpu, 95%+ lock, low disk activity

I have a MongoDB setup against which I am running a lot of findAndModify queries. At first it was performing fine, doing ~400 queries and ~1000 updates per second (according to mongostat). This caused an 80-90% lock percentage, but that seemed reasonable given the amount of data throughput.
After a while it has slowed to a crawl and is now doing a meager ~20 queries / ~50 updates per second.
All of the queries are on one collection. The majority of the documents have a set of basic data (just key: value entries, no arrays or similar) that is untouched, plus a downloads array holding the format and the number of bytes downloaded. Example:
downloads: [
    {
        'bytes': 123131,
        'format': 'extra'
    },
    {
        'bytes': 123131,
        'format': 'extra_hd'
    },
    ...
]
A bit of searching tells me that big arrays are not good, but if the majority of documents only have 10-15 entries in this array (with a few outliers that have 1000+), should it still affect my instance this badly?
CPU load is near 100% constantly, and lock % is near 100% constantly. The queries I use are indexed (I confirmed via explain()), so that should not be the issue.
Running iostat 1 gives me the following:
disk0 cpu load average
KB/t tps MB/s us sy id 1m 5m 15m
56.86 122 6.80 14 5 81 2.92 2.94 2.48
24.00 9 0.21 15 1 84 2.92 2.94 2.48
21.33 3 0.06 14 2 84 2.92 2.94 2.48
24.00 3 0.07 15 1 84 2.92 2.94 2.48
33.14 7 0.23 14 1 85 2.92 2.94 2.48
13.68 101 1.35 15 2 84 2.92 2.94 2.49
30.00 4 0.12 14 1 84 2.92 2.94 2.49
16.00 4 0.06 14 1 85 2.92 2.94 2.49
28.00 4 0.11 14 2 84 2.92 2.94 2.49
33.60 5 0.16 14 1 85 2.92 2.94 2.49
I am using MongoDB 2.4.8, and while upgrading is an option, I would prefer to avoid it. It is running on my local SSD on OS X. It will be moved to a server, but I would like to fix, or at least understand, the performance issue before I move it.
