httperf benchmarking for file upload using POST - benchmarking

I want to benchmark uploading of files to a remote server using httperf.
I know that there is a wsesslog option where I can give entries for form data.
But can I send a POST request for a file as well?
Something like:
httperf --hog --server 127.0.0.1 --port 5000 \
  --add-header="Content-Type: application/x-www-form-urlencoded\n" \
  --wsesslog=150,0,httperf_content
The httperf_content file contains:
/ method=POST contents="file=PATH_TO_FILE"
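The workaround I have in mind (assuming httperf sends the contents= string literally and does not open PATH_TO_FILE itself) is to inline the url-encoded file body into the session file, roughly:
# url-encode the file and use it as the POST body
BODY="file=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.stdin.buffer.read()))' < PATH_TO_FILE)"
# write a one-request session log, then run the same httperf command as above
printf '/ method=POST contents="%s"\n' "$BODY" > httperf_content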

Related

Failed to load resource: net::ERR_CONNECTION_TIMED_OUT on remote but works fine on localhost

I have a React with ASP.NET Core website. It worked fine on localhost, but when published to the remote IIS server the timeout error occurs.
The front-end (React client) and back-end (ASP.NET Core Web API) work independently.
Before uploading I changed the following in Program.cs of the Web API:
UseUrls("https://localhost:4000")
to UseUrls("https://www.virtualcollege.pk:4000")
I also changed the front-end base URL similarly.
Moreover, the connection strings in appsettings.json are correct for both databases.
I added migrations and updated the databases successfully.
The website is live but the timeout error occurs:
virtualcollege.pk
I also tried the URL with "https://my-ip-address:4000".
Thanks in advance for help.
If I remove the port number from the URL, publish to a local folder and then upload it to the remote server, webapi.exe on the local machine runs as follows:
You have to open incoming requests on port 4000. Try one of the methods below.
Windows Server
Please check this link or this one
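A minimal sketch from an elevated prompt, assuming Windows Firewall is what is blocking the port (the rule name is arbitrary):
netsh advfirewall firewall add rule name="WebApi4000" dir=in action=allow protocol=TCP localport=4000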
Ubuntu/Debian
sudo ufw allow 4000/tcp
sudo ufw status   # check status
CentOS
First, you should disable SELinux. Edit the file /etc/sysconfig/selinux so it looks like this:
SELINUX=disabled
SELINUXTYPE=targeted
Save file and restart system.
Then you can add the new rule to iptables:
iptables -A INPUT -m state --state NEW -p tcp --dport 4000 -j ACCEPT
and restart iptables with /etc/init.d/iptables restart

Error in Google App Engine - Log Service - SQLite

I am using Google App Engine on Ubuntu within Linux Subsystem for Windows.
When I start dev_appserver.py I receive errors, with the following line resulting in what I understand to be a corrupted SQLite data file:
File "/../google-cloud-sdk/platform/google_appengine/google/appengine/api/logservice/logservice_stub.py", line 181, in start_request
host, start_time, method, resource, http_version, module))
DatabaseError: database disk image is malformed
Based upon this post, my understanding is that a log.db is referenced:
GoogleAppEngineLauncher: database disk image is malformed
However, when I run the script referenced, the resultant path does not contain a log.db, leading me to believe this is a different issue.
Any help in identifying the appropriate database, for the purposes of removing it, would be appreciated.
Per a comment, I added --clear_datastore=1 and did not notice a change:
dev_appserver.py --host 127.0.0.1 --port 8080 --admin_port 8082 --storage_path=temp/storage --skip_sdk_update_check true --clear_datastore=1 main/app.yaml main/sync.yaml
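A sketch for locating candidate SQLite files under the --storage_path used above (the exact filename may well differ):
# list SQLite files under the storage path passed to dev_appserver.py
find temp/storage -type f -name '*.db'
# move any candidate aside rather than deleting it, then restart dev_appserver.py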

Max size of a fat JAR for Apache Flink

I've built an Apache Flink app and packaged it in a fat JAR with the Gradle Shadow Plugin. The resulting file size is ~114 MiB. When I try to upload it with Flink's web UI it gets stuck in the "Saving…" phase. And if I use curl to upload it manually, the result is "413 Request Entity Too Large":
$ curl -X POST -H "Expect:" -i -F "jarfile=@flink-all.jar" http://ec2-18-204-247-166.compute-1.amazonaws.com:8081/jars/upload
HTTP/1.1 413 Request Entity Too Large
content-length: 0
What are the options, then?
UPD: I can see the JAR in /tmp/flink-web-UUID/flink-web-upload/UUID/flink-all.jar, but it is not recognized by Flink (not visible in the UI).
OK, it was easy to fix.
First, I scanned their repo for the "Too Large" string and found this class. It looks like SERVER_MAX_CONTENT_LENGTH is responsible for the maximum object size. It is set here from the configuration option rest.server.max-content-length. The default is 100 MiB.
TL;DR:
Setting rest.server.max-content-length in flink-conf.yaml to 209715200 (200 MiB) solved the issue.
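For reference, that is a single line in flink-conf.yaml, picked up the next time the cluster is restarted:
rest.server.max-content-length: 209715200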

How to upload file using loopback api explorer?

I'm using the LoopBack API Explorer and I need to upload a file through the explorer. How can I do that? I don't find any option to upload a file; please refer to the screenshot.
Simply put, the answer is that you can't. Uploading a file requires multipart form data, and this isn't currently possible via loopback-component-explorer. You should check out loopback-component-storage instead. There is an example here; I recommend using example-2.0.
You can test it with something like Postman.
But the only thing that you need is the path of the file, not the file itself.
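As a minimal sketch of the storage setup (the "storage" datasource name and root directory below are illustrative, not taken from the question), a filesystem-backed container datasource in server/datasources.json looks roughly like:
{
  "storage": {
    "name": "storage",
    "connector": "loopback-component-storage",
    "provider": "filesystem",
    "root": "./server/storage"
  }
}
A container model attached to this datasource then exposes upload/download endpoints that accept multipart form data.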
Simpler than using Postman would be using curl directly in the terminal.
Here is the command I use when needed (I work with some services using loopback/explorer as well):
curl -i -X POST -H "Content-Type: multipart/form-data" -F "blob=@/path/to/your/file.jpg" -v http://HOST:PORT/pathToYourEndpoint?access_token=xxxxxxxxxxx

Google compute engine returned 399 internal server error

The question "Google compute engine console return 399 error code" already asks my question, but the solution is not as suggested there. Since that URL is a little old, I am starting a new thread.
I am trying to do a wget using:
wget https://console.developers.google.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
I see the error:
Resolving console.developers.google.com (console.developers.google.com)... 216.239.32.27
Connecting to console.developers.google.com (console.developers.google.com)|216.239.32.27|:443... connected.
HTTP request sent, awaiting response... 399 Internal Server Error
2014-08-26 20:02:18 ERROR 399: Internal Server Error.
I am new to Linux commands, so I wanted to know if I am missing something obvious.
The address works when I use the Chrome downloader but it fails with wget for me as well.
I have never seen this behaviour before.
You can also use cURL to download files. I used the -v switch and got a DNS error (no idea why):
curl -v http://console.developers.googlO.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
You cannot download these with traditional tools; you have to use the gsutil utility provided by Google, with which automation is also possible.
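For example, assuming the object lives at the same path in the public m-lab bucket, something like:
gsutil cp gs://m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz .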
You need to use the following URI pattern:
http://storage.googleapis.com/<bucket>/<object>
In this case, you can download that file using the command:
wget http://storage.googleapis.com/m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
