POST key=value pair in siege - benchmarking

First of all, I cannot get POST working in siege:
siege https://apicdntest.fadv.com/orders/kelly POST
This does not hit the URL, but when I remove the POST keyword, it hits the URL with GET.
What I actually need is to hit the URL with a key/value pair, like below:
siege https://apicdntest.fadv.com/orders/kelly POST MyXML='<root><test>test</test></root>'
Can somebody please help me do this in siege?
The GET result is below:
Transactions: 4 hits
Availability: 100.00 %
Elapsed time: 2.52 secs
Data transferred: 0.00 MB
Response time: 0.26 secs
Transaction rate: 1.59 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.41
Successful transactions: 0
Failed transactions: 0
Longest transaction: 0.26
Shortest transaction: 0.00
The POST result:
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 16.20 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 4
Longest transaction: 0.00
Shortest transaction: 0.00

Try reformatting your request like so:
siege "https://apicdntest.fadv.com/orders/kelly POST"
Note the quotes, which according to Jeffrey Fulmer (author of Siege) are required: "Because POST urls contain spaces before and after POST, you must always quote them at the commandline."
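For example, sending the key/value pair from the question looks like this (quote the whole argument so the shell passes it to siege in one piece; the payload is just the question's sample XML):
siege "https://apicdntest.fadv.com/orders/kelly POST MyXML=<root><test>test</test></root>"
The siege man page also documents reading the POST body from a file, which sidesteps shell-quoting problems with XML; the path below is a placeholder:
siege "https://apicdntest.fadv.com/orders/kelly POST </path/to/payload.xml"
Note that the < sits inside the quotes, so it is interpreted by siege rather than as a shell redirect.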

Related

AWS BlazingText supervised hyperparameter tuning not logging objective metric

I am running a hyperparameter tuning job using SageMaker's built-in training image for BlazingText (blazingtext:latest); however, when my jobs complete they only log out #train_accuracy:
...
06:00:36 ##### Alpha: 0.0000 Progress: 100.00% Million Words/sec: 0.00 #####
06:13:19 Training finished.
06:13:19 Average throughput in Million words/sec: 0.00
06:13:19 Total training time in seconds: 1888.88
06:13:19 #train_accuracy: 0.4103
06:13:19 Number of train examples: 55783
The hyperparameter tuning job does not let me pick #train_accuracy as the objective metric; only validation:accuracy and train:mean_rho appear in the dropdown.
After the job completes, under the "Best training job" tab I see:
Best training job summary data is available when you have completed training jobs that are emitting metrics.
Am I missing something obvious?
Ensure there is a validation channel in addition to the "train" channel:
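A minimal sketch of a tuning job with both channels, using the SageMaker Python SDK; the role ARN, S3 paths, and hyperparameter range are placeholders, and the key part is the second entry in the dictionary passed to fit():

import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

session = sagemaker.Session()
image = sagemaker.image_uris.retrieve("blazingtext", session.boto_region_name)

estimator = sagemaker.estimator.Estimator(
    image,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.c5.xlarge",
    sagemaker_session=session,
)
estimator.set_hyperparameters(mode="supervised")

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:accuracy",  # only emitted when a validation channel exists
    hyperparameter_ranges={"learning_rate": ContinuousParameter(0.005, 0.05)},
    max_jobs=10,
    max_parallel_jobs=2,
)

# Without the "validation" channel BlazingText only logs #train_accuracy,
# so the tuning job has no objective metric to track.
tuner.fit({
    "train": TrainingInput("s3://my-bucket/blazingtext/train"),            # placeholder paths
    "validation": TrainingInput("s3://my-bucket/blazingtext/validation"),
})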

Merge pull replication fails with error

2017-07-19 09:04:17.542 [0%] [944896 sec remaining] Web synchronization progress: 94% complete.
Article Upload Statistics:
FILE_REPLICA:
Relative Cost: 4.87%
PUBLISH_DOCUMENTS:
Updates: 827
Relative Cost: 76.73%
WF_ACTIVE_ROUTING_HISTORY:
Relative Cost: 4.29%
WF_RUN_ROUTING_HISTORY_REV:
Relative Cost: 1.87%
WF_RUN_STAGE_RES_LIST_PRES:
Relative Cost: 1.83%
WF_RUN_STAGE_STATUS_PRES:
Relative Cost: 1.83%
ORDER_RES_GROUP:
Relative Cost: 5.54%
WF_RUN_ROUTING_HISTORY:
Relative Cost: 3.04%
Article Download Statistics:
FILE_REPLICA:
Relative Cost: 7.61%
PUBLISH_DOCUMENTS:
Relative Cost: 4.18%
WF_ACTIVE_ROUTING_HISTORY:
Relative Cost: 29.20%
WF_RUN_ROUTING_HISTORY_REV:
Relative Cost: 13.25%
WF_RUN_STAGE_RES_LIST_PRES:
Relative Cost: 19.39%
WF_RUN_STAGE_STATUS_PRES:
Relative Cost: 6.54%
ORDER_RES_GROUP:
Relative Cost: 9.05%
WF_RUN_ROUTING_HISTORY:
Relative Cost: 10.78%
Session Statistics:
Upload Updates: 827
Deadlocks encountered: 18
Change Delivery Time: 753 sec
Schema Change and Bulk Insert Time: 5 sec
Delivery Rate: 1.10 rows/sec
Total Session Duration: 6556 sec
=============================================================
2017-07-19 09:04:17.596 Connecting to Subscriber 'VMSQL2014'
2017-07-19 09:04:17.609 The upload message to be sent to Publisher 'VMSQL2014' is being generated
2017-07-19 09:04:17.613 The merge process is using Exchange ID '86D0215F-E4E3-4FC1-99F4-BC9E05ACDA21' for this web synchronization session.
2017-07-19 09:04:20.168 Uploading data changes to the Publisher
2017-07-19 09:04:22.980 A query executing on Subscriber 'VMSQL2014' failed because the connection was chosen as the victim in a deadlock. Please rerun the merge process if you still see this error after internal retries by the merge process.
2017-07-19 09:04:25.513 [0%] [1227049 sec remaining] Request message generated, now making it ready for upload.
2017-07-19 09:04:25.561 [0%] [1227049 sec remaining] Upload request size is 260442 bytes.
2017-07-19 09:04:27.462 [0%] [1227049 sec remaining] Uploaded a total of 55 chunks.
2017-07-19 09:04:27.466 [0%] [1227049 sec remaining] The request message was sent to 'https://webserver/SQLReplication/replisapi.dll'
2017-07-19 09:09:28.676 The operation timed out
2017-07-19 09:09:28.679 Category:NULL
Source: Merge Process
Number: -2147209502
Message: The operation timed out
2017-07-19 09:09:28.680 Category:NULL
Source: Merge Process
Number: -2147209502
Message: The processing of the response message failed.
The log says deadlocks were encountered. A deadlock occurs when two transactions each hold a lock that the other needs, so neither can proceed; SQL Server resolves it by killing one of them (the deadlock "victim", as in the message above). Most likely another program or user is writing to the same rows your merge needs, and your session is the one getting killed.
You can:
Implement a retry procedure so your merge tries again when it is chosen as the victim (see the sketch below).
Lock other programs and users out of the database while the merge runs.
There are likely other ways around this issue; searching for "avoid deadlock" is a good start.
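If you drive the offending statements yourself (rather than relying on the merge agent's internal retries), the usual pattern is to catch the deadlock-victim error and retry with a backoff. A rough sketch in Python with pyodbc; the connection string and procedure name are hypothetical, and SQL Server reports a deadlock victim as error 1205 with SQLSTATE 40001:

import time
import pyodbc

def run_with_deadlock_retry(conn_str, sql, max_attempts=5):
    # Run a statement, retrying when this session is chosen as the deadlock victim.
    for attempt in range(1, max_attempts + 1):
        try:
            conn = pyodbc.connect(conn_str)
            try:
                conn.execute(sql)
                conn.commit()
                return
            finally:
                conn.close()
        except pyodbc.Error as exc:
            # A deadlock victim surfaces as SQLSTATE 40001 (native error 1205).
            if "40001" in str(exc) and attempt < max_attempts:
                time.sleep(2 ** attempt)  # back off before retrying
                continue
            raise

# Hypothetical usage:
# run_with_deadlock_retry("DRIVER={ODBC Driver 17 for SQL Server};SERVER=VMSQL2014;...",
#                         "EXEC dbo.UploadPendingChanges")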

How to properly configure task queue to run 30 tasks per hour?

I need to run at most 30 tasks per hour and at most 2 tasks per minute, and I am not sure how to configure both conditions at the same time. Currently I have the following setup:
- name: task-queue
  rate: 2/m  # 2 tasks per minute
  bucket_size: 1
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0
    min_backoff_seconds: 10
But I don't understand how to add the first condition there.
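queue.yaml cannot express two rates on one queue directly, but it implements a token bucket: rate is the sustained refill rate and bucket_size caps the burst. One possible configuration, my reading of those semantics rather than an official recipe, is to make the hourly limit the sustained rate and the per-minute limit the burst:

- name: task-queue
  rate: 30/h       # sustained: at most 30 tasks per hour
  bucket_size: 2   # burst: at most 2 tasks can fire back-to-back
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0
    min_backoff_seconds: 10

With a bucket of 2 and a refill of one token every two minutes, no one-minute window can start more than 2 tasks, and the hourly total stays at 30.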

MongoDB - 100% CPU, 95%+ lock, low disk activity

I have a MongoDB setup against which I am running a lot of findAndModify queries. At first it was performing fine, doing ~400 queries and ~1000 updates per second (according to mongostat). This caused an 80-90% lock percentage, but that seemed reasonable given the amount of data throughput.
After a while it slowed to a crawl and is now doing a meager ~20 queries / ~50 updates per second.
All of the queries are on one collection. The majority of the documents have a set of basic data (just key: value entries, no arrays or similar) that is untouched, plus an array of downloads with the format and the number of bytes downloaded. Example:
downloads: [
  {
    'bytes': 123131,
    'format': 'extra'
  },
  {
    'bytes': 123131,
    'format': 'extra_hd'
  }
  ...
]
A bit of searching tells me that big arrays are not good, but if the majority of documents only have 10-15 entries in this array (with a few outliers that have 1000+), should it still affect my instance this badly?
CPU load is near 100% constantly, and lock % is near 100% constantly. The queries I use are indexed (I confirmed via explain(); a sketch of the check is below), so this should not be the issue.
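For reference, this is roughly how the plan can be checked from Python; the connection string, database, collection, and query below are placeholders (on a 2.4-era server the explain output carries cursor, nscanned, and n fields):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
coll = client["mydb"]["videos"]                     # placeholder db/collection

plan = coll.find({"video_id": "abc123"}).explain()  # placeholder query
# "BtreeCursor <index>" means an index was used; "BasicCursor" means a full scan.
print(plan.get("cursor"), "scanned:", plan.get("nscanned"), "returned:", plan.get("n"))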
Running iostat 1 gives me the following:
disk0 cpu load average
KB/t tps MB/s us sy id 1m 5m 15m
56.86 122 6.80 14 5 81 2.92 2.94 2.48
24.00 9 0.21 15 1 84 2.92 2.94 2.48
21.33 3 0.06 14 2 84 2.92 2.94 2.48
24.00 3 0.07 15 1 84 2.92 2.94 2.48
33.14 7 0.23 14 1 85 2.92 2.94 2.48
13.68 101 1.35 15 2 84 2.92 2.94 2.49
30.00 4 0.12 14 1 84 2.92 2.94 2.49
16.00 4 0.06 14 1 85 2.92 2.94 2.49
28.00 4 0.11 14 2 84 2.92 2.94 2.49
33.60 5 0.16 14 1 85 2.92 2.94 2.49
I am using MongoDB 2.4.8, and while upgrading is an option, I would prefer to avoid it. It is running on my local SSD on OS X. It will eventually be moved to a server, but I would like to fix, or at least understand, the performance issue before I move it.

App Engine over quota

I have been using App Engine for a long time, but today my application went over quota after only 3 hours (exceeding my daily budget). The dashboard, however, shows that almost no resources have been used, so it should be nowhere near the daily limit.
Also strange: despite the dashboard saying my daily limit is reached, I have no problem retrieving data. Only writing to the datastore gives an over-quota exception (com.google.apphosting.api.ApiProxy$OverQuotaException: The API call datastore_v3.Put() required more quota than is available). The statistics below, however, show there were not a lot of writes, and if I look at Quota Details, all indicators are Okay.
Billing Status: Enabled (Daily budget: $2.00). Quotas reset every 24 hours. Next reset: 21 hrs
Resource                      Usage                Billable  Price                 Cost
Frontend Instance Hours       4.30 Instance Hours  0.00      $0.08/Hour            $0.00
Backend Instance Hours        0.00 Instance Hours  0.00      $0.08/Hour            $0.00
Datastore Stored Data         2.86 GBytes          1.86      $0.008/GByte-day      $0.02
Logs Stored Data              0.04 GBytes          0.00      $0.008/GByte-day      $0.00
Task Queue Stored Task Bytes  0.00 GBytes          0.00      $0.008/GByte-day      $0.00
Blobstore Stored Data         0.00 GBytes          0.00      $0.0043/GByte-day     $0.00
Code and Static File Storage  0.12 GBytes          0.00      $0.0043/GByte-day     $0.00
Datastore Write Operations    0.06 Million Ops     0.01      $1.00/Million Ops     $0.02
Datastore Read Operations     0.01 Million Ops     0.00      $0.70/Million Ops     $0.00
Datastore Small Operations    0.00 Million Ops     0.00      $0.10/Million Ops     $0.00
Outgoing Bandwidth            0.01 GBytes          0.00      $0.12/GByte           $0.00
Recipients Emailed            0                    0         $0.01/100 Recipients  $0.00
Stanzas Sent                  0                    0         $0.10/100K Stanzas    $0.00
Channels Created              0% (0 of 95,040)     0         $0.01/100 Opens       $0.00
Logs Read Bandwidth           0.00 GBytes          0.00      $0.12/GByte           $0.00
PageSpeed Outgoing Bandwidth  0.01 GBytes          0.01      $0.39/GByte           $0.01
SSL VIPs                      0                    0         $1.30/Day             $0.00
SSL SNI Certificates          0                    0         $0.06/Day             $0.00
Estimated cost for the last 3 hours: $2.00* / $2.00
