GAE Standard PHP, Linux development server error - google-app-engine

Using the Google Cloud SDK (as opposed to the App Launcher, which is being phased out), I'm trying to set up the PHP development environment on a Linux host. I have the recommended PHP version installed; here are the results of attempting to start a server:
INFO 2017-09-24 02:44:31,139 devappserver2.py:115] Skipping SDK update check.
INFO 2017-09-24 02:44:31,305 api_server.py:299] Starting API server at: http://localhost:42195
INFO 2017-09-24 02:44:31,408 dispatcher.py:224] Starting module "default" running at: http://localhost:8080
INFO 2017-09-24 02:44:31,410 admin_server.py:116] Starting admin server at: http://localhost:8000
ERROR 2017-09-24 02:44:32,434 module.py:1588]
INFO 2017-09-24 02:44:33,412 shutdown.py:45] Shutting down.
INFO 2017-09-24 02:44:33,413 api_server.py:940] Applying all pending transactions and saving the datastore
INFO 2017-09-24 02:44:33,413 api_server.py:943] Saving search indexes
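The blank ERROR line from module.py suggests the PHP runtime process failed to start before it could log anything. One common cause on Linux is the dev server not finding a php-cgi binary; if that's the case here, pointing it at one explicitly may help (the path below is only an example for a typical install, not your actual location):

```shell
# Point the dev server at an explicit php-cgi binary; the path is an
# assumption -- use `which php-cgi` to find yours.
dev_appserver.py --php_executable_path=/usr/bin/php-cgi ./my-app
```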

Related

Apache Flink Kubernetes Job Arguments

I'm trying to set up a cluster (Apache Flink 1.6.1) with Kubernetes and get the following error when I run a job on it:
2018-10-09 14:29:43.212 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2018-10-09 14:29:43.214 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.flink.runtime.entrypoint.ClusterConfiguration.<init>(Ljava/lang/String;Ljava/util/Properties;[Ljava/lang/String;)V
at org.apache.flink.runtime.entrypoint.EntrypointClusterConfiguration.<init>(EntrypointClusterConfiguration.java:37)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfiguration.<init>(StandaloneJobClusterConfiguration.java:41)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfigurationParserFactory.createResult(StandaloneJobClusterConfigurationParserFactory.java:78)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfigurationParserFactory.createResult(StandaloneJobClusterConfigurationParserFactory.java:42)
at org.apache.flink.runtime.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:55)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:153)
My job takes a configuration file (file.properties) as a parameter. This works fine in standalone mode, but apparently the Kubernetes cluster cannot parse it.
job-cluster-job.yaml:
args: ["job-cluster", "--job-classname", "com.test.Abcd", "-Djobmanager.rpc.address=flink-job-cluster",
"-Dparallelism.default=1", "-Dblob.server.port=6124", "-Dquery.server.ports=6125", "file.properties"]
How to fix this?
Update: The job was built for Apache Flink 1.4.2, and this might be the issue; looking into it.
The job was built for 1.4.2, while the class raising the error (EntrypointClusterConfiguration.java) appears to have been added in 1.6.1 (https://github.com/apache/flink/commit/ab9bd87e521d19db7c7d783268a3532d2e876a5d#diff-d1169e00afa40576ea8e4f3c472cf858), so this caused the issue.
We updated the job's dependencies to point to the new 1.6.1 release and the arguments are now parsed correctly.
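The fix amounts to building the job against the same Flink version the cluster runs. In a Maven build that is a one-line version bump; a sketch of the relevant pom.xml fragment (the artifact name and the `_2.11` Scala suffix are the usual ones for 1.6.x, but verify them against your own build):

```xml
<!-- pom.xml fragment: align the job's Flink dependencies with the
     cluster version (1.6.1 here). Artifact id and Scala suffix are
     assumptions -- check your existing dependencies. -->
<properties>
  <flink.version>1.6.1</flink.version>
</properties>

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
```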

zeppelin | 0.8.0 | Disable Helium

We are running Zeppelin in Docker containers in a locked-down enterprise environment. When Zeppelin starts, it tries to connect to AWS, times out after a while, but starts successfully. The log trace is below -
INFO [2018-09-03 14:26:25,131] ({main} Notebook.java[<init>]:128) - Notebook indexing finished: 0 indexed in 0s
INFO [2018-09-03 14:26:25,133] ({main} Helium.java[loadConf]:103) - Add helium local registry /opt/zeppelin-0.8.0/helium
INFO [2018-09-03 14:26:25,134] ({main} Helium.java[loadConf]:100) - Add helium online registry https://s3.amazonaws.com/helium-package/helium.json
WARN [2018-09-03 14:26:25,138] ({main} Helium.java[loadConf]:111) - /opt/zeppelin-0.8.0/conf/helium.json does not exists
ERROR [2018-09-03 14:28:32,864] ({main} HeliumOnlineRegistry.java[getAll]:80) - Connect to s3.amazonaws.com:443 [s3.amazonaws.com/54.231.81.59] failed: Connection timed out
INFO [2018-09-03 14:28:33,840] ({main} ContextHandler.java[doStart]:744) - Started o.e.j.w.WebAppContext#ef9296d{/,file:/opt/zeppelin-0.8.0/webapps/webapp/,AVAILABLE}{/opt/zeppelin-0.8.0/zeppelin-web-0.8.0.war}
INFO [2018-09-03 14:28:33,846] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector#1b1c538d{HTTP/1.1}{0.0.0.0:9991}
INFO [2018-09-03 14:28:33,847] ({main} Server.java[doStart]:379) - Started #145203ms
We have no use case for Helium (as of now), and the delay in the Zeppelin restart affects us. Is there a way we can disable this dependency on Helium?
Thanks!
There was PR3082 ([ZEPPELIN-3636] Add timeout for s3 amazon bucket endpoint) that adds a timeout so Zeppelin does not wait on Amazon.
The PR was merged to master and may be backported to branch-0.8.
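Until that lands in a release, one possible workaround is to restrict the Helium registry list to the local directory only, so startup never contacts s3.amazonaws.com. This assumes your build exposes a `zeppelin.helium.registry` property; check conf/zeppelin-site.xml.template for the exact name before relying on it:

```xml
<!-- conf/zeppelin-site.xml fragment: keep only the local Helium
     registry. Property name is an assumption -- verify it against
     your zeppelin-site.xml.template. -->
<property>
  <name>zeppelin.helium.registry</name>
  <value>helium</value>
</property>
```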

Apache Zeppelin - Disconnected status

I have successfully installed and started Zeppelin on an EC2 cluster with Spark 1.3 and Hadoop 2.4.1 on YARN (as described in https://github.com/apache/incubator-zeppelin).
However, Zeppelin starts with a 'disconnected' status (shown in the top-right corner).
According to the log, both the Zeppelin port and the websocket port (Zeppelin port + 1) have started without error. Neither port is used by any other process, and I can see the
Zeppelin process (pid) listening on both ports. The IP table is blank.
log:
INFO [2015-06-30 03:20:31,294] ({main} QuartzScheduler.java[initialize]:305) - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
INFO [2015-06-30 03:20:31,294] ({main} StdSchedulerFactory.java[instantiate]:1339) - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
INFO [2015-06-30 03:20:31,294] ({main} StdSchedulerFactory.java[instantiate]:1343) - Quartz scheduler version: 2.2.1
INFO [2015-06-30 03:20:31,295] ({main} QuartzScheduler.java[start]:575) - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
INFO [2015-06-30 03:20:31,510] ({main} ServerImpl.java[initDestination]:94) - Setting the server's publish address to be /
INFO [2015-06-30 03:20:31,625] ({main} StandardDescriptorProcessor.java[visitServlet]:284) - NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
INFO [2015-06-30 03:20:32,374] ({main} AbstractConnector.java[doStart]:338) - Started SocketConnector#0.0.0.0:8083
INFO [2015-06-30 03:20:32,374] ({main} ZeppelinServer.java[main]:108) - Started
INFO [2015-06-30 03:20:30,181] ({main} ZeppelinConfiguration.java[create]:98) - Load configuration from file:/home/ec2-user/incubator-zeppelin/conf/zeppelin-site.xml
INFO [2015-06-30 03:20:30,336] ({main} NotebookServer.java[creatingwebSocketServerLog]:65) - Create zeppelin websocket on 0.0.0.0:8084
INFO [2015-06-30 03:20:30,537] ({main} ZeppelinServer.java[main]:106) - Start zeppelin server
INFO [2015-06-30 03:20:30,539] ({main} Server.java[doStart]:272) - jetty-8.1.14.v20131031
zeppelin-env.sh:
export ZEPPELIN_PORT=8083
export HADOOP_CONF_DIR=/mnt/disk1/hadoop-2.4.1/etc/hadoop
export SPARK_HOME=/mnt/disk2/spark
In zeppelin-site.xml, I have only set the server IP address and port, and -1 for the websocket port.
When I access the websocket port through Chrome, I get "no data received... ERR_EMPTY_RESPONSE" and "Unable to load the webpage because the server sent no data" errors.
Am I missing anything during installation or in the configuration? Any help is appreciated. Thanks.
I have some experience using Apache Zeppelin with IE and Chrome. Just add your IP address to the trusted sites in the Internet options, then close IE or Chrome and restart it. When you reopen the browser, you should see the main page of Apache Zeppelin.
Try setting the property zeppelin.server.allowed.origins to * in conf/zeppelin-site.xml and check whether it's a websocket issue. Afterwards, you can narrow it down to the list of origins you would like to allow.
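As a conf/zeppelin-site.xml fragment, that looks like this (the * wildcard disables origin checking entirely, so tighten it once the websocket issue is confirmed):

```xml
<!-- conf/zeppelin-site.xml fragment: allow all origins while
     diagnosing the websocket handshake; replace * with an explicit
     comma-separated origin list afterwards. -->
<property>
  <name>zeppelin.server.allowed.origins</name>
  <value>*</value>
</property>
```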

Pydev + Google App Engine + Eclipse = dev_appserver terminates right away

I've been a happy user of PyDev, Eclipse, and GAE.
Now I've changed my computer and reinstalled the latest JEE Eclipse, PyDev, and GAE. As usual, I configured my PyDev GAE project (I use App Engine Modules) and have it launched by the Eclipse debugger.
The issue is that as soon as I launch a cron task through the admin interface (localhost:8000/cron), every thread terminates and dev_appserver ends (exit value: 137).
Everything runs smoothly when launching dev_appserver.py by hand and with PyCharm (I'd like to keep using PyDev!).
versions
Google Cloud SDK 0.9.33
app-engine-python 1.9.12
eclipse 4.4.1
pydev 3.8.0.201425
Debug window content
<terminated>xxxx xxxx (1) [PyDev Google App Run]
<terminated>dev_appserver.py
dev_appserver.py
dev_appserver.py
dev_appserver.py
<terminated, exit value: 137>dev_appserver.py
console output
pydev debugger: starting (pid: 20240)
INFO 2014-10-15 10:27:19,522 api_server.py:171] Starting API server at: http://localhost:34966
INFO 2014-10-15 10:27:19,523 dispatcher.py:174] Starting dispatcher running at: http://localhost:8080
INFO 2014-10-15 10:27:19,553 dispatcher.py:186] Starting module "default" running at: http://localhost:8081
INFO 2014-10-15 10:27:19,586 dispatcher.py:186] Starting module "static-backend" running at: http://localhost:8082
INFO 2014-10-15 10:27:19,589 admin_server.py:117] Starting admin server at: http://localhost:8000
pydev debugger: starting (pid: 20264)
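The exit value 137 is itself a clue: exit codes above 128 encode 128 + signal number, so 137 means the process died from SIGKILL (signal 9), i.e. something force-killed the dev server's process tree rather than it crashing on its own (the Eclipse/PyDev launcher or the OS OOM killer are plausible suspects). A small sketch, in plain Python with nothing PyDev-specific, showing how a SIGKILL death surfaces:

```python
import os
import signal
import subprocess
import sys

# Launch a child that SIGKILLs itself, mimicking whatever is killing
# dev_appserver, and inspect how the death is reported.
proc = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)

# subprocess reports death-by-signal as a negative returncode;
# a shell (or the Eclipse console) reports it as 128 + signal.
print(proc.returncode)        # -9 on POSIX
print(128 - proc.returncode)  # 137, the value seen in the debug window
```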

dev_appserver returns error "unexpected port response from runtime"

Last week, I successfully tried the Helloworld example with SDK release 1.7.6 and Python 2.7 on Windows XP SP3. Today it will not run at all and generates this error.
Can anybody help?
D:\helloworld>dev_appserver.py d:\helloworld
INFO 2013-03-24 20:16:18,187 sdk_update_checker.py:244] Checking for updates to the SDK.
INFO 2013-03-24 20:16:19,062 sdk_update_checker.py:272] The SDK is up to date.
INFO 2013-03-24 20:16:19,421 api_server.py:152] Starting API server at: http://localhost:1868
INFO 2013-03-24 20:16:19,437 dispatcher.py:98] Starting server "default" running at: http://localhost:8080
INFO 2013-03-24 20:16:19,483 admin_server.py:117] Starting admin server at: http://localhost:8000
ERROR 2013-03-24 20:16:29,717 http_runtime.py:221] unexpected port response from runtime ['before instance\r\n']; exiting the development server
INFO 2013-03-24 20:16:30,546 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-24 20:16:30,546 api_server.py:520] Saving search indexes
Could you please file a bug at:
https://code.google.com/p/googleappengine/issues/list
Also, have you added any print statements to the libraries in your Python installation?
Tim Hoffman's response:
"Check you do not have any print statements in your code. If you do they will write to stdout which the new dev server doesn't like as it uses stdin/stdout to talk between the main task and the workers."
is not correct. Your application can print to stdout and stderr. In your case it looks like something is printing to stdout before your application is loaded.
Check you do not have any print statements in your code. If you do, they will write to stdout, which the new dev server doesn't like, as it uses stdin/stdout to talk between the main task and the workers. You can read more on how the new dev server functions and how debugging with pdb etc. will have to work.
You can run the old version of the server by running old_dev_appserver.py instead.
