I am using the Google App Engine flexible environment (Node.js). Is there any reason why both the liveness and readiness checks fire 6 times each at every configured interval? (These entries all share the same timestamp.)
/readiness_check  GET 200  2 B  2 ms  GoogleHC/1.0
/readiness_check  GET 200  2 B  2 ms  GoogleHC/1.0
/readiness_check  GET 200  2 B  1 ms  GoogleHC/1.0
/readiness_check  GET 200  2 B  1 ms  GoogleHC/1.0
/readiness_check  GET 200  2 B  3 ms  GoogleHC/1.0
/readiness_check  GET 200  2 B  1 ms  GoogleHC/1.0
/liveness_check   GET 200  2 B  2 ms  GoogleHC/1.0
/liveness_check   GET 200  2 B  2 ms  GoogleHC/1.0
/liveness_check   GET 200  2 B  2 ms  GoogleHC/1.0
/liveness_check   GET 200  2 B  2 ms  GoogleHC/1.0
/liveness_check   GET 200  2 B  1 ms  GoogleHC/1.0
/liveness_check   GET 200  2 B  1 ms  GoogleHC/1.0
And is it normal for the readiness checks to continue indefinitely? I would have thought they would stop once an instance was deemed "ready." It doesn't seem necessary to have both readiness and liveness checks continuously hitting my instance when the liveness checks alone would seemingly suffice. If anyone knows of a better way to configure this so that it isn't so redundant, I would greatly appreciate it. The relevant pieces of my app.yaml are below:
runtime: nodejs
env: flex

readiness_check:
  path: '/readiness_check'
  check_interval_sec: 20
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300

liveness_check:
  path: '/liveness_check'
  check_interval_sec: 30
  timeout_sec: 4
  failure_threshold: 3
  success_threshold: 2
  initial_delay_sec: 300
Thanks!
Readiness checks continue forever so that you can, at your own will, take a VM out of traffic rotation (e.g. when you want to reload some cache). The same applies to liveness. It also lets the load balancer detect problems and remove a VM from rotation (if your app crashed and is restarting, for instance) or heal it (if, say, the VM died altogether).
Regarding the number of requests: it depends on the number of instances you deployed (are you running 2 instances?), and each instance is probed by 3 health checkers in parallel, to ensure availability and reliability of the results. That would explain 2 instances × 3 checkers = 6 identical entries per interval.
In the grand scheme of things, unless you implement custom health-check handlers that are too expensive, this should be minimal traffic for your VM.
Hope it helps,
Andre
Related
This might be a strange question, but I'll try to explain as best I can...
BTW: there is no chance of implementing this through stored procedures; it has to be done in a plain SQL query. But if an SP is the only option, then I'll have to adapt.
I have a table with the following columns:

RUN   WORKORDER  LOCATION    TRAVELTIME  NUMEQUIP  TOT_TIME
NO99  1          Start
NO99  2          Customer 1  112         1         8
NO99  3          Customer 2  18          11        88
NO99  4          Customer 3  22          93        744
NO99  5          Customer 4  34          3         24
I need to add a running DATE and TIME by calculating the amount of time it takes to get from one line to the next, BUT (and this is important) taking working hours into consideration: from 9:00 to 13:00 and from 14:00 to 18:00 (in US format: 9 am to 1 pm, and 2 pm to 6 pm). As an example, assuming my start date and time is 10/May/2022 9:00:
RUN   WORKORDER  LOCATION    TRAVELTIME  NUMEQUIP  TOT_TIME  DATE      TIME
NO99  1          Start                                       10/05/22  9:00
NO99  2          Customer 1  112         1         8         10/05/22  10:52
NO99  3          Customer 2  18          11        88        10/05/22  11:18
NO99  4          Customer 3  22          93        744       10/05/22  14:08
NO99  5          Customer 4  34          3         24        12/05/22  10:06
This result is achieved by calculating the estimated travel time between customers (TRAVELTIME), and after arriving, adding the time spent on maintenance (TOT_TIME, which is the number of equipments (NUMEQUIP) times 8 minutes per equipment). Because Customer 3 requires 744 minutes (12 hours and 24 minutes) of maintenance, and those minutes span 3 working days, the result should be as shown.
With the following query I can get almost the desired effect, but it cannot take only work hours into account; all time is treated as continuous:
SELECT
    RUN, WORKORDER, LOCATION, TRAVELTIME,
    NUMEQUIP, NUMEQUIP * 8 AS TOT_TIME,
    DATEADD(mi, temprunningtime - TOT_TIME, '9:00') AS TIME
FROM (
    SELECT *,
           SUM(TRAVELTIME + TOT_TIME) OVER (ORDER BY WORKORDER) AS temprunningtime
    FROM MYTABLE
) t
With this query (slightly altered) I get a running TIME, but it does not take into account the 13:00-14:00 break or the 18:00-9:00 overnight stop...
It might be a bit confusing, but any ideas on this would be greatly appreciated, and I will try to explain it any way I can...
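To make the required arithmetic concrete before attempting it in SQL, here is a minimal sketch (in Python; the function name and row layout are invented for illustration) of a "working-hours clock" that only counts minutes inside the 9:00-13:00 and 14:00-18:00 windows. It reproduces the DATE/TIME column from the sample table, including Customer 4 landing on 12/05 at 10:06:

```python
from datetime import datetime, timedelta

# Working windows per day, as minutes from midnight: 9:00-13:00 and 14:00-18:00.
WINDOWS = [(9 * 60, 13 * 60), (14 * 60, 18 * 60)]

def add_working_minutes(start, minutes):
    """Advance `start` by `minutes`, counting only time inside WINDOWS."""
    day = start.date()
    pos = start.hour * 60 + start.minute  # current minute-of-day
    while True:
        for lo, hi in WINDOWS:
            if pos < lo:
                pos = lo  # jump to the start of the next window
            if lo <= pos < hi:
                avail = hi - pos
                if minutes <= avail:
                    return datetime(day.year, day.month, day.day) + timedelta(minutes=pos + minutes)
                minutes -= avail
                pos = hi  # window exhausted, carry the remainder
        day += timedelta(days=1)  # roll to the next day (weekends ignored in this sketch)
        pos = 0

# Sample rows from the question: (location, TRAVELTIME, NUMEQUIP * 8 minutes)
rows = [("Customer 1", 112, 1 * 8),
        ("Customer 2", 18, 11 * 8),
        ("Customer 3", 22, 93 * 8),
        ("Customer 4", 34, 3 * 8)]

t = datetime(2022, 5, 10, 9, 0)  # start: 10/May/2022 9:00
for loc, travel, work in rows:
    t = add_working_minutes(t, travel)  # arrival time at the customer
    print(loc, t.strftime("%d/%m/%y %H:%M"))
    t = add_working_minutes(t, work)    # then add maintenance time
```

The same logic would have to be expressed in T-SQL (e.g. via a calendar/minutes helper table or a recursive CTE), but the sketch pins down what "skip 13:00-14:00 and 18:00-9:00" means for each row.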
New to R and new to this forum. I tried searching; I hope I don't embarrass myself by failing to find previous answers.
I have my data, and I intend to fit some kind of GLMMs in the end, but that's far in the future; first I'm going to fit some simple glm/lm models to learn what I'm doing.
First, about my data:
I have data sampled from 2 general areas on opposite sides of the country.
In each general area there are roughly 50 trakts placed (in a grid, with a random starting point).
Trakts have been revisited each year for 4 years.
A trakt contains 16 sample plots; I intend to work at the trakt level, so I use the means of the 16 sample plots for each trakt.
2 × 4 × 50 = 400 rows (the actual number is 373 rows, after removing trakts where not enough plots could be sampled due to terrain, etc.).
The data in my Excel file is currently organized like this:
rows = trakts
columns = the measured variables
I have 8-10 columns I want to use.
A short example of how the data looks now:
V1 - predictor, 4 different columns
V2 - response variable: proportional data, 1-4 columns depending on which hypothesis I end up testing.
The GLMM in the end would look something like V2 ~ V1 + V1 + V1, (area, year).
Area Year Trakt V1 V2
A 2015 1 25.165651 0
A 2015 2 11.16894652 0.1
A 2015 3 18.231 0.16
A 2014 1 3.1222 N/A
A 2014 2 6.1651 0.98
A 2014 3 8.651 1
A 2013 1 6.16416 0.16
B 2015 1 9.12312 0.44
B 2015 2 22.2131 0.17
B 2015 3 12.213 0.76
B 2014 1 1.123132 0.66
B 2014 2 0.000 0.44
B 2014 3 5.213265 0.33
B 2013 1 2.1236 0.268
How should I get started on this?
8 different files?
Nested by trakts (do I start nesting now, or later when I'm doing the GLMMs)?
I load my data into R through the read.table function.
If I run sapply(dataframe, class), V1 and V2 are factors and everything else is integer.
If I run sapply(dataframe, mode), everything is numeric.
So, finally, to my actual problems: I have been trying to run normality tests (only tried Shapiro-Wilk so far), but I keep getting errors implying my data is not numeric.
Also, when I run a normality test, do I run one column and evaluate it before moving on to the next column, or should I run several columns? The entire dataset?
Should I, in my case, run independent normality tests for each of my areas and years?
I hope it didn't end up too cluttered.
best regards
I'm using ReportBuilder 2.0 / SQL Server 2008.
I have a report that uses visibility settings on the row groups which results in some row group headings being hidden, which in turn makes report totals seem incorrect. I can't change the visibility settings (for business reasons); what I'm looking for is a way to test EITHER for hidden items, OR for apparently incorrect totals. Consider the following dataset:
ItemCode SubPhaseCode SubPhase BidItem XTDPrice
1 1 Water Utility 1 5000
2 1 Water Utility 2 4000
3 2 Electrical Utility 3 75000
4 2 Electrical Utility 3 75000
5 2 Electrical Utility 3 100000
6 2 Electrical Utility 4 2500
7 2 Electrical Utility 4 2500
8 2 Electrical Utility 4 5064
9 2 Electrical Utility 5 3000
10 2 Electrical Utility 5 3000
11 2 Electrical Utility 5 5796
12 3 Gas Utility 6 60000
13 3 Gas Utility 6 60000
14 3 Gas Utility 6 61547
15 4 Other Utility 7 6000
16 4 Other Utility 7 7000
There are 3 Row Groups on the report, one for SubPhaseCode ("Group1"), and two for BidItem("Group2" and "DetailsGroup"):
Link to Design View Screenshot
The Row Visibility property for Group1 (SubPhaseCode) is:
=IIF(Fields!SubPhaseCode.Value = 3, true, false)
This results in the heading for the SubPhase "Gas" being hidden. This means that, when the report is run, I get something like the following:
Total 475407
Water 9000
-Utility 1 5000
-Utility 2 4000
Electrical 271860
-Utility 3 250000
-Utility 4 10064
-Utility 5 11796
-Utility 6 181547
Other 13000
-Utility 7 13000
The fact that SubPhase 3 ("Gas") is hidden results in 2 apparent errors:
1) The sum for "Electrical" (271860) appears incorrect for the 4 items below it (because there should be another row heading above "Utility 6")
2) The total of 475407 appears incorrect for the 3 groups below it (9000 + 271860 + 13000).
What I am looking for is a way to change the formatting of the headings (especially the Group Headings) if the numbers below them apparently don't add up. I understand how to implement conditional formatting and have done this for the Total. I am unclear how this could be implemented for the Row Group.
I would basically need some kind of a test, for each Row Heading, to see if the following heading would be hidden, according to the rules. This sounds to me like a "NEXT" function, which I know doesn't exist.
Other searches have indicated that I might need to add the desired data to the dataset or modify the underlying SP. Just wondering if there are any simpler solutions.
Thanks much for the help!
I'd exclude the hidden group's values from the SubPhase group's SUM().
Try:
=SUM(IIF(Fields!SubPhaseCode.Value=3,0,Fields!XTDPrice.Value))
Let me know if this helps.
I should run only 30 tasks per hour and at most 2 tasks per minute. I am not sure how to configure these 2 conditions at the same time. Currently I have the following setup:
- name: task-queue
  rate: 2/m # 2 tasks per minute
  bucket_size: 1
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0
    min_backoff_seconds: 10
But I don't understand how to add the first condition there.
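Task queues use token-bucket semantics: `rate` is the steady refill rate and `bucket_size` caps how many tokens can accumulate (the burst size). One way to approximate both limits is to make the refill match the hourly budget and the bucket match the per-minute cap; since tokens then arrive only every 2 minutes, no minute should see more than 2 tasks. A sketch (whether the resulting latency is acceptable for your workload is worth testing):

```yaml
queue:
- name: task-queue
  rate: 30/h           # steady-state budget: 30 tasks per hour
  bucket_size: 2       # at most 2 tokens on hand, so bursts stay within 2
  max_concurrent_requests: 2
  retry_parameters:
    task_retry_limit: 0
    min_backoff_seconds: 10
```

Note this is an approximation: the bucket starts full, so the very first minute can dispatch 2 tasks back to back before the slower refill takes over.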
I know the syntax for using the RANK and DENSE_RANK functions, but I can't find any real-world uses for them.
For example, DENSE_RANK:
ranking userid
1 500
1 500
2 502
2 502
and Rank
Ranking UserID
1 500
1 500
1 500
1 500
1 500
1 500
1 500
8 502
8 502
8 502
8 502
8 502
8 502
8 502
15 504
I can't understand how the 1,1 2,2 values would be useful in the real world.
On the other hand, I understand very clearly what the real-world uses of ROW_NUMBER() OVER (PARTITION BY ...) are; I just can't see what I can do with this kind of information (DENSE_RANK and regular RANK).
You could use it to find the top n rows for each group.
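The key difference from ROW_NUMBER is that RANK keeps ties: "top 1 per group" returns every row tied for first, instead of an arbitrary one of them. A small runnable sketch (table and data invented; SQLite 3.25+ supports window functions, and modern Python ships with it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (student TEXT, subject TEXT, score INT);
INSERT INTO scores VALUES
  ('ann', 'math', 90), ('bob', 'math', 90), ('cat', 'math', 80),
  ('ann', 'art', 70), ('bob', 'art', 60), ('cat', 'art', 60);
""")

# RANK() OVER (PARTITION BY ...) numbers rows within each subject; ties share
# a rank, so "rnk = 1" keeps BOTH 90-point math students. ROW_NUMBER would
# have silently dropped one of them.
rows = conn.execute("""
SELECT subject, student, score
FROM (
  SELECT *, RANK() OVER (PARTITION BY subject ORDER BY score DESC) AS rnk
  FROM scores
)
WHERE rnk = 1
ORDER BY subject, student
""").fetchall()
print(rows)  # both math students tied at 90 are returned
```

DENSE_RANK is the same idea without gaps after ties, which is what you want for questions like "rows in the top 3 distinct price levels".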
There is a very good explanation here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2920665938600
Specific examples would be arbitrary and not necessarily helpful. So long as you understand what they do (q.v. @Kevin Burton's link), and can remember at least vaguely that this functionality exists, then if or when a situation comes up where they would be useful, if not critical, you'll be able to pull them out of the database developer's bag of tricks. (I've used RANK once, maybe twice, and it was very useful each time, but I can't, and don't need to, recall the details without looking them up.)