I've built an Apache Flink app and packaged it in a fat JAR with the Gradle Shadow plugin. The resulting file is ~114 MiB. When I try to upload it through Flink's web UI, it gets stuck in the "Saving…" phase, and if I use curl to upload it manually the result is "413 Request Entity Too Large":
$ curl -X POST -H "Expect:" -i -F "jarfile=@flink-all.jar" http://ec2-18-204-247-166.compute-1.amazonaws.com:8081/jars/upload
HTTP/1.1 413 Request Entity Too Large
content-length: 0
What are the options, then?
UPD: I can see the JAR in /tmp/flink-web-UUID/flink-web-upload/UUID/flink-all.jar, but it is not recognized by Flink (not visible in the UI).
OK, it was easy to fix.
First, I scanned the Flink repository for the string "Too Large" and found this class. It looks like SERVER_MAX_CONTENT_LENGTH is responsible for the maximum upload size. It is set here from the configuration option rest.server.max-content-length, and the default is 100 MiB.
TLDR:
Setting rest.server.max-content-length in flink-conf.yaml to 209715200 (200 MiB) solved the issue.
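For reference, the change is a single line in flink-conf.yaml (the value is in bytes); the JobManager has to be restarted afterwards so the REST endpoint picks it up:
rest.server.max-content-length: 209715200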
I am trying to configure file upload limits in application.yml for a Spring Boot application, but uploads fail with an error. Nginx is already configured to allow uploads up to 50m, yet any image larger than 1 MB fails to upload. Why? Any help is appreciated, thanks.
Please try the following:
Before Spring Boot 2.0:
spring.http.multipart.max-file-size=50MB
spring.http.multipart.max-request-size=50MB
After Spring Boot 2.0:
spring.servlet.multipart.max-file-size=50MB
spring.servlet.multipart.max-request-size=50MB
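Since the question uses application.yml, the equivalent YAML form (for Spring Boot 2.0+) would be:
spring:
  servlet:
    multipart:
      max-file-size: 50MB
      max-request-size: 50MB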
I am using Google App Engine on Ubuntu within the Windows Subsystem for Linux.
When I start dev_appserver.py I get errors; the traceback ends with the following lines, which I understand to indicate a corrupted SQLite data file.
File "/../google-cloud-sdk/platform/google_appengine/google/appengine/api/logservice/logservice_stub.py", line 181, in start_request
host, start_time, method, resource, http_version, module))
DatabaseError: database disk image is malformed
Based on this post, I understand there is a log.db involved:
GoogleAppEngineLauncher: database disk image is malformed
However, when I run the script referenced there, the resulting path does not contain a log.db, which leads me to believe this is a different issue.
Any help identifying the appropriate database, so that I can remove it, would be appreciated.
Per a comment, I added --clear_datastore=1 and did not notice a change:
dev_appserver.py --host 127.0.0.1 --port 8080 --admin_port 8082 --storage_path=temp/storage --skip_sdk_update_check true --clear_datastore=1 main/app.yaml main/sync.yaml
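For anyone debugging something similar, one rough way to find which SQLite file under the storage path is damaged is to run SQLite's integrity check over every candidate file (a sketch only; it assumes the sqlite3 CLI is installed and that dev_appserver keeps its databases under the --storage_path shown above):
# Print each .db/.sqlite file under the storage path and run an integrity check on it.
find temp/storage -type f \( -name '*.db' -o -name '*.sqlite*' \) -print -exec sqlite3 {} 'PRAGMA integrity_check;' \;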
I have an internal tool that lets me edit configuration files; the config files then get synced to Google Cloud Storage (* * * * * gsutil -m rsync -d /data/www/config_files/ gs://my-site.appspot.com/configs/).
How can I use these config files across multiple instances in Google App Engine? (I don't want to use the Google PHP SDK to read / write to the config files in the bucket).
The only thing I can come up with is adding a cron.yaml entry that downloads the configs from the bucket to /app/configs/ every minute, but then I'd have to reload php-fpm every minute as well.
app.yaml:
runtime: custom
env: flex
service: my-site
env_variables:
  CONFIG_DIR: /app/configs
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 20
  cpu_utilization:
    target_utilization: 0.5
Dockerfile:
FROM eu.gcr.io/google-appengine/php71
RUN mkdir -p /app;
ADD . /app
RUN chmod -R a+r /app
I am assuming you are designing a solution where your applications pull their configuration from the GCS bucket so you can update them all en masse quickly.
There are many points in the process, depending on your exact flow, where you can insert a "please update now" command. For example, why not simply queue a task as you update the configuration in your GCS bucket? That task would basically download the configuration and redeploy your application.
Unless you are thinking about multiple applications that have access to that bucket, and you want to be able to update them all centrally at the same time. In that case, your cron job solution makes sense. Dan's suggestion definitely works, but I think you can make it easier by using version numbers. Simply keep another file with a version number in it; the cron job pulls that file, compares it, and performs an update only if the version is newer. It's very similar to Dan's solution, except you don't really need to hash anything. If you are updating GCS with your configurations, you might as well tag on another file with the version information (see the sketch below).
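A minimal sketch of that cron job, assuming a VERSION file is kept next to the configs in the bucket (the file name is an assumption; the bucket and config directory come from the question):
#!/usr/bin/env bash
# Only sync when the version file in the bucket differs from the local copy.
REMOTE_VERSION=$(gsutil cat gs://my-site.appspot.com/configs/VERSION)
LOCAL_VERSION=$(cat /app/configs/VERSION 2>/dev/null || echo "none")
if [ "$REMOTE_VERSION" != "$LOCAL_VERSION" ]; then
  gsutil -m rsync -c -r gs://my-site.appspot.com/configs/ /app/configs/
  # Reload the application (e.g. php-fpm) here so it picks up the new configs.
fi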
Another solution is to expose a handler in all those applications, for example an "/update" handler. Whenever it's hit, the application performs the update. You can hit that handler whenever you actually update the configuration in GCS. This is more of a push solution. The advantage is that you have more control over which applications get the updates, which can be useful if you aren't sure about a certain configuration yet and don't want to update everything at once.
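With that approach, the internal tool could simply hit the handler right after syncing the bucket, e.g. (the hostname is hypothetical; the /update path is the handler described above):
curl -fsS https://my-site.example.com/update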
We did not want to add a handler in our application for this. We thought it was best to use supervisord.
additional-supervisord.conf:
[program:sync-configs]
command = /app/scripts/sync_configs.sh
startsecs = 0
autorestart = false
startretries = 1
sync_configs.sh:
#!/usr/bin/env bash
while true; do
# Sync configs from Google Storage.
gsutil -m rsync -c -r ${CONFIG_BUCKET} /app/config
# Reload PHP-FPM gracefully (pgrep is less fragile than parsing ps output).
pgrep php-fpm | xargs kill -s USR2
# Wait 60 seconds.
sleep 60
done
Dockerfile:
COPY additional-supervisord.conf /etc/supervisor/conf.d/
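Depending on how the script ends up in the image, it may also need to be marked executable; if so, an extra line in the Dockerfile such as the following would cover it (an assumption, not part of the original setup):
RUN chmod +x /app/scripts/sync_configs.sh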
"Google compute engine console return 399 error code" already asks my question, but the solution is not as suggested there. Since that thread is a little old, I'm starting a new one.
I am trying to do a wget using:
wget https://console.developers.google.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
I see the error:
Resolving console.developers.google.com (console.developers.google.com)... 216.239.32.27
Connecting to console.developers.google.com (console.developers.google.com)|216.239.32.27|:443... connected.
HTTP request sent, awaiting response... 399 Internal Server Error
2014-08-26 20:02:18 ERROR 399: Internal Server Error.
I am new to Linux commands so wanted to know if am missing something obvious.
The address works when I use Chrome's downloader but fails with wget for me as well. I have never seen this behaviour before.
You can also use cURL to download files. I used the -v switch and got a DNS error (no idea why):
curl -v http://console.developers.googlO.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
We cannot download this with traditional tools; we have to use the gsutil utility provided by Google, which also makes automation possible.
You need to use the following URI pattern:
http://storage.googleapis.com/<bucket>/<object>
In this case, you can download that file using the command:
wget http://storage.googleapis.com/m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
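If gsutil is installed (as suggested in the earlier answer), the equivalent download should work as well:
gsutil cp gs://m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz .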
I've been running nagios for about two years, but recently this problem started appearing with one of my services.
I'm getting
CRITICAL - Socket timeout after 10 seconds
for a check_http -H my.host.com -f follow -u /abc/def check, which used to work fine. No other services are reporting this problem. The remote site is up and healthy, and I can do a wget http://my.host.com/abc/def from the nagios server, and it downloads the response just fine. Also, check_http -H my.host.com -f follow works just fine, i.e. it's only when I use the -u argument that things break. I also tried passing a different user agent string, no difference. I tried increasing the timeout, no luck. I tried with -v, but all I get is:
GET /abc/def HTTP/1.0
User-Agent: check_http/v1861 (nagios-plugins 1.4.11)
Connection: close
Host: my.host.com
CRITICAL - Socket timeout after 10 seconds
... which does not tell me what's going wrong.
Any ideas how I could resolve this?
Thanks!
Try using the -N option of check_http.
I ran into similar problems, and in my case the web server didn't terminate the connection after sending the response (https was working, http wasn't). check_http tries to read from the open socket until the server closes the connection. If that doesn't happen then the timeout occurs.
The -N option tells check_http to receive only the header, but not the content of the page / document.
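For the check from the question, that would look something like this (the same host and URI as above, just with -N added):
check_http -H my.host.com -f follow -u /abc/def -N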
I tracked my issue down to the security providers configured in the most recent version of openSUSE.
From a summary of other web pages, it appears to be an issue with an attempt to use the TLSv2 protocol, which does not appear to work correctly or is missing something in the default configuration to allow it to work.
To overcome the problem I commented out the security provider in question from the JRE security configuration file.
#security.provider.10=sun.security.pkcs11.SunPKCS11
The security.provider. value may be different in your configuration, but essentially the SunPKCS11 provider is at issue.
This configuration is normally found in
$JAVA_HOME/lib/security/java.security
of the JRE that you are using.
Fixed with this URL in nrpe.cfg (on Debian 6.0 Squeeze using nagios-nrpe-server):
command[check_http]=/usr/lib/nagios/plugins/check_http -H localhost -p 8080 -N -u /login?from=%2F
For whoever is interested, I stumbled into this problem too, and it ended up being caused by mod_itk on the web server.
A patch is available, even if it seems it's not included in the current CentOS or Debian packages:
https://lists.err.no/pipermail/mpm-itk/2015-September/000925.html
In my case the /etc/postfix/main.cf file was not configured correctly.
My mail relay host was not defined, and the relay restrictions were too restrictive.
I had to add:
relayhost = mailrelay.ext.example.com
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
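After editing main.cf, Postfix needs to be reloaded for the change to take effect, e.g.:
postfix reload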