Go AppEngine remote_api sample not working

Are the Go AppEngine samples up to date?
I'm running into issues getting example/remote_api/datastore_info.go working for my test AppEngine running on localhost.
I've changed the client.PostForm from:
resp, err := client.PostForm("https://www.google.com/accounts/ClientLogin", v)
to:
resp, err := client.PostForm("http://localhost:35058/_ah/remote_api", v)
(35058 is the port reported for api_server during startup).
I've tried both 1.9.3 and latest 1.9.4 versions.
The api server reports:
ERROR 2014-05-06 20:57:56,378 api_server.py:215] Exception while handling
Traceback (most recent call last):
File "/root/go_appengine/google/appengine/tools/devappserver2/api_server.py", line 194, in _handle_POST
request.ParseFromString(wsgi_input)
File "/root/go_appengine/google/net/proto/ProtocolBuffer.py", line 88, in ParseFromString
self.MergeFromString(s)
File "/root/go_appengine/google/net/proto/ProtocolBuffer.py", line 95, in MergeFromString
self.MergePartialFromString(s)
File "/root/go_appengine/google/net/proto/ProtocolBuffer.py", line 109, in MergePartialFromString
self.TryMerge(d)
File "/root/go_appengine/google/appengine/ext/remote_api/remote_api_pb.py", line 210, in TryMerge
d.skipData(tt)
File "/root/go_appengine/google/net/proto/ProtocolBuffer.py", line 529, in skipData
self.skipData(t)
File "/root/go_appengine/google/net/proto/ProtocolBuffer.py", line 529, in skipData
self.skipData(t)
File "/root/go_appengine/google/net/proto/ProtocolBuffer.py", line 537, in skipData
raise ProtocolBufferDecodeError, "corrupted"
ProtocolBufferDecodeError: corrupted

There were some bug fixes in 1.9.6; can you try with the latest SDK?

I've been having exactly the same problem with every call to my development server.
Traceback (most recent call last):
File "/home/mike/go_appengine/google/appengine/tools/devappserver2/api_server.py", line 238, in _handle_POST
request.ParseFromString(wsgi_input)
File "/home/mike/go_appengine/google/net/proto/ProtocolBuffer.py", line 140, in ParseFromString
self.MergeFromString(s)
File "/home/mike/go_appengine/google/net/proto/ProtocolBuffer.py", line 152, in MergeFromString
self.MergePartialFromString(s)
File "/home/mike/go_appengine/google/net/proto/ProtocolBuffer.py", line 168, in MergePartialFromString
self.TryMerge(d)
File "/home/mike/go_appengine/google/appengine/ext/remote_api/remote_api_pb.py", line 210, in TryMerge
d.skipData(tt)
File "/home/mike/go_appengine/google/net/proto/ProtocolBuffer.py", line 677, in skipData
raise ProtocolBufferDecodeError, "corrupted"
ProtocolBufferDecodeError: corrupted
I have go version go1.4.2 (appengine-1.9.24) linux/amd64.
The problem was that I was using the address of the "API" server instead of the address of the default module.
INFO 2015-08-13 19:42:03,901 devappserver2.py:763] Skipping SDK update check.
INFO 2015-08-13 19:42:03,947 api_server.py:205] Starting API server at: http://localhost:60852
INFO 2015-08-13 19:42:03,971 dispatcher.py:197] Starting module "default" running at: http://localhost:49333
INFO 2015-08-13 19:42:03,972 admin_server.py:118] Starting admin server at: http://localhost:8000
You must use the module host/port to place calls to your Go application (e.g. http://localhost:49333/_ah/remote_api given the startup log above), not the API server port; the API server address is for the internal remote API, I think.

Related

VOLTTRON Simple Web agent

On release 8.1.1 I am trying to experiment with the simple web agent.
Running through the setup process
volttron -vv -l volttron.log --bind-web-address http://0.0.0.0:8080 &
Everything seems to install OK for the http protocol in vcfg, and the agent starts fine, but when I go to the browser I get an empty page response.
And in the terminal there's an error; here's the full traceback:
.do_close of <WSGIServer, (<gevent._socket3.socket [closed] at 0x7f64342242c)> failed with SSLError
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/baseserver.py", line 34, in _handle_and_close_when_done
return handle(*args_tuple)
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/server.py", line 233, in wrap_socket_and_handle
with _closing_socket(self.wrap_socket(client_socket, **self.ssl_args)) as ssl_socket:
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 793, in wrap_socket
return SSLSocket(sock=sock, keyfile=keyfile, certfile=certfile,
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 311, in init
raise x
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 307, in init
self.do_handshake()
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 663, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: HTTP_REQUEST] http request (_ssl.c:1131)
2021-09-29T13:38:34Z <Greenlet at 0x7f64341fc480: _handle_and_close_when_done(<bound method StreamServer.wrap_socket_and_handle , <bound method StreamServer.do_close of <WSGIServer, (<gevent._socket3.socket [closed] at 0x7f643419195)> failed with SSLError
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/baseserver.py", line 34, in _handle_and_close_when_done
return handle(*args_tuple)
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/server.py", line 233, in wrap_socket_and_handle
with _closing_socket(self.wrap_socket(client_socket, **self.ssl_args)) as ssl_socket:
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 793, in wrap_socket
return SSLSocket(sock=sock, keyfile=keyfile, certfile=certfile,
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 311, in init
raise x
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 307, in init
self.do_handshake()
File "/home/ben/Desktop/volttron/env/lib/python3.8/site-packages/gevent/_ssl3.py", line 663, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: HTTP_REQUEST] http request (_ssl.c:1131)
2021-09-29T13:38:34Z <Greenlet at 0x7f643423c6a0: _handle_and_close_when_done(<bound method StreamServer.wrap_socket_and_handle , <bound method StreamServer.do_close of <WSGIServer, (<gevent._socket3.socket [closed] at 0x7f64342242c)> failed with SSLError
EDIT
So if I do nano ~/.volttron/config it looks like this below. I did change the bind-web-address to the IP address of my test bench instance. Hopefully that wasn't a mistake; it looked like the initial bind-web-address was the computer's hostname: --bind-web-address http://ben-hp-probook-6550b:8080
message-bus = zmq
vip-address = tcp://127.0.0.1:22916
instance-name = benshome
bind-web-address = http://192.168.0.105:8080
web-ssl-cert = /home/ben/.volttron/certificates/certs/platform_web-server.crt
web-ssl-key = /home/ben/.volttron/certificates/private/platform_web-server.pem
web-secret-key = 0e3b19770c0a8c0a08f274fcdabaf939fecc16601283266934c5ab258a1ed20cf440fde2c83cb8660dac569d31b5cdaf3ab7354a39b0640f355f9c5407c5fce619
I think I first tried HTTPS and then resorted to HTTP. Anyway, when I start VOLTTRON do I still need a --bind-web-address arg if ~/.volttron/config is already set up with one?
I've tried starting VOLTTRON both with and without the --bind-web-address flag, but I'm still unable to bring up a webpage on the IP address of the machine running VOLTTRON (192.168.0.105). This would be the simple web agent, right?
I was able to reproduce this when I ran through vcfg and specified https, but then did what you did and passed the bind-web-address to the volttron command itself.
However, you shouldn't do this. The instructions assume you haven't gone through the vcfg process and therefore you would have to specify the bind web address on the command line.
Since you went through the vcfg process your config file (~/.volttron/config) will have your hostname:port as the bind-web-address. If it has https in it that is the reason it is not working for you.
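For reference, after re-running vcfg and choosing plain http for the web option, I'd expect ~/.volttron/config to carry only the http lines and no web-ssl entries; roughly (just a sketch based on the keys already shown above, not the literal file vcfg writes):
message-bus = zmq
vip-address = tcp://127.0.0.1:22916
instance-name = benshome
bind-web-address = http://192.168.0.105:8080
With a config like that, you shouldn't also need --bind-web-address on the command line; that flag is only needed when vcfg hasn't already written one for you.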

S3 permission error when running sagemaker python sdk sklearn in local mode

I created a training script with hard-coded input. It works as expected as a training job, but I couldn't make it work using local mode.
It brings up a container on my local Docker and exits with code 1.
Code:
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(entry_point="train_model.py",
                    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder execution role, omitted in the original snippet
                    train_instance_type="local")
estimator.fit()
Here is the exception:
2020-02-22 06:21:05,470 sagemaker-containers INFO Imported framework sagemaker_sklearn_container.training
2020-02-22 06:21:05,480 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)
2020-02-22 06:21:05,504 sagemaker_sklearn_container.training INFO Invoking user training script.
2020-02-22 06:21:06,407 sagemaker-containers ERROR Reporting training FAILURE
2020-02-22 06:21:06,407 sagemaker-containers ERROR framework error:
Traceback (most recent call last):
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_trainer.py", line 81, in train
entrypoint()
File "/miniconda3/lib/python3.7/site-packages/sagemaker_sklearn_container/training.py", line 36, in main
train(framework.training_env())
File "/miniconda3/lib/python3.7/site-packages/sagemaker_sklearn_container/training.py", line 32, in train
training_environment.to_env_vars(), training_environment.module_name)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_modules.py", line 301, in run_module
_files.download_and_extract(uri, _env.code_dir)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_files.py", line 129, in download_and_extract
s3_download(uri, dst)
File "/miniconda3/lib/python3.7/site-packages/sagemaker_containers/_files.py", line 164, in s3_download
s3.Bucket(bucket).download_file(key, dst)
File "/miniconda3/lib/python3.7/site-packages/boto3/s3/inject.py", line 246, in bucket_download_file
ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
File "/miniconda3/lib/python3.7/site-packages/boto3/s3/inject.py", line 172, in download_file
extra_args=ExtraArgs, callback=Callback)
File "/miniconda3/lib/python3.7/site-packages/boto3/s3/transfer.py", line 307, in download_file
future.result()
File "/miniconda3/lib/python3.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/miniconda3/lib/python3.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
File "/miniconda3/lib/python3.7/site-packages/s3transfer/tasks.py", line 255, in _main
self._submit(transfer_future=transfer_future, **kwargs)
File "/miniconda3/lib/python3.7/site-packages/s3transfer/download.py", line 345, in _submit
**transfer_future.meta.call_args.extra_args
File "/miniconda3/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/miniconda3/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
An error occurred (403) when calling the HeadObject operation: Forbidden
tmpe_msr8pi_algo-1-kt1vh_1 exited with code 1
I found out that restarting Docker solved the issue.
After a while it happened again, and restarting Docker solved it again.
I'm using Docker for Windows, and the issue is probably related to the configuration of the created container.
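In case it helps with debugging: the 403 is raised inside the container while it downloads the sourcedir tarball from S3, so one quick check is to issue the same HeadObject call from the host with the same credentials. A sketch (the bucket and key below are placeholders; use the object from the failing download in the container log):
import boto3

# Placeholders -- substitute the bucket/key of the sourcedir.tar.gz the SDK
# uploaded for this training job.
s3 = boto3.client("s3")
print(s3.head_object(Bucket="my-sagemaker-bucket", Key="path/to/sourcedir.tar.gz"))
If this succeeds on the host while the container still gets a 403, the problem sits inside the Docker VM (stale credentials or clock drift are common suspects), which would also fit the observation that restarting Docker clears it.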

Executable path error while using chrome driver with selenium

I am trying to open Chrome using Selenium but I'm getting this error. I have tried many ways but the issue is still not resolved.
Traceback (most recent call last):
File "sl.py", line 11, in <module>
chrome.Initialize()
File "/home/daffolap/api_test/api_test/utils.py", line 14, in Initialize
driver = webdriver.Chrome(executable_path=r"/home/daffolap/api_test/chromedriver2.exe")
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
self.service.start()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'chromedriver2.exe' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
from selenium import webdriver

driver = None

class Webbrowser:
    @staticmethod
    def Initialize():
        global driver
        driver = webdriver.Chrome(executable_path=r"/home/daffolap/api_test/chromedriver2.exe")
        driver.maximize_window()
        driver.implicitly_wait(5)
        return driver
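For what it's worth, on Linux the ChromeDriver binary is just chromedriver (no .exe extension) and it must be executable (chmod +x). A minimal sketch, assuming the driver was unpacked to /home/daffolap/api_test/chromedriver and matches the installed Chrome version:
from selenium import webdriver

# Point Selenium at the Linux chromedriver binary rather than a Windows .exe.
driver = webdriver.Chrome(executable_path="/home/daffolap/api_test/chromedriver")
driver.get("https://example.com")
driver.quit()
Alternatively, put the directory containing chromedriver on PATH and call webdriver.Chrome() with no arguments, which is what the exception message suggests.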

How to set up CKAN in RHEL

I was following the steps given here to install CKAN from source on my Red Hat Enterprise Linux 6 system.
At step 6, I get the following output:
> (default)[hrishi#rd ~]$ cd /usr/lib/ckan/default/src/ckan
(default)[hrishi#rd ckan]$ paster db init -c /etc/ckan/default/development.ini
2015-05-15 11:23:35,695 ERROR [ckan.lib.search.common] HTTP code=404, reason=Not Found
Traceback (most recent call last):
File "/usr/lib/ckan/default/src/ckan/ckan/lib/search/common.py", line 51, in is_available
conn.query("*:*", rows=1)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 703, in query
return self.select(*args, **params)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 798, in __call__
xml = self.raw(**params)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 823, in raw
rsp = conn._post(self.selector, request, conn.form_headers)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 639, in _post
return check_response_status(self.conn.getresponse())
File "/usr/lib/python2.6/site-packages/solr/core.py", line 1097, in check_response_status
raise ex
SolrException: HTTP code=404, reason=Not Found
2015-05-15 11:23:35,697 WARNI [ckan.lib.search] Problems were found while connecting to the SOLR server
2015-05-15 11:23:35,927 ERROR [ckan.lib.search.common] HTTP code=404, reason=Not Found
Traceback (most recent call last):
File "/usr/lib/ckan/default/src/ckan/ckan/lib/search/common.py", line 51, in is_available
conn.query("*:*", rows=1)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 703, in query
return self.select(*args, **params)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 798, in __call__
xml = self.raw(**params)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 823, in raw
rsp = conn._post(self.selector, request, conn.form_headers)
File "/usr/lib/python2.6/site-packages/solr/core.py", line 639, in _post
return check_response_status(self.conn.getresponse())
File "/usr/lib/python2.6/site-packages/solr/core.py", line 1097, in check_response_status
raise ex
SolrException: HTTP code=404, reason=Not Found
2015-05-15 11:23:35,928 WARNI [ckan.lib.search] Problems were found while connecting to the SOLR server
Initialising DB: SUCCESS
Could someone tell me where I went wrong?
Also, I followed these steps to set up Solr on my system.
(just a wild guess) A common pitfall on RHEL systems is that the default SELinux configuration might not allow internal http connections. So try
/usr/sbin/setsebool httpd_can_network_connect 1
(or simply disable SELinux).
And, of course, check your config ini to make sure you really have the correct address and port of the Solr server. ;-)
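To check the Solr side directly from the CKAN host, you can hit the same URL that CKAN uses. A sketch, assuming the default solr_url of http://127.0.0.1:8983/solr (substitute whatever your development.ini actually contains):
import urllib2

# A 200 response means Solr is reachable and the request handler exists;
# an HTTPError 404 here reproduces the SolrException shown above.
resp = urllib2.urlopen("http://127.0.0.1:8983/solr/select?q=*:*&rows=1")
print resp.getcode()
print resp.read()[:200]
A 404 usually points at a wrong webapp path or core name in solr_url rather than Solr being down.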

Nose GAE: Cannot import dev_appserver, but app engine is still in PYTHONPATH

I am trying to run nosetests from my GAE project:
nosetests --nologcapture --with-gae --without-sandbox --gae-lib-root=/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine
but I get the following error:
Traceback (most recent call last):
File "/usr/local/bin/nosetests", line 8, in <module>
load_entry_point('nose==1.3.4', 'console_scripts', 'nosetests')()
File "/Library/Python/2.7/site-packages/nose/core.py", line 121, in __init__
**extra_args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/main.py", line 94, in __init__
self.parseArgs(argv)
File "/Library/Python/2.7/site-packages/nose/core.py", line 145, in parseArgs
self.config.configure(argv, doc=self.usage())
File "/Library/Python/2.7/site-packages/nose/config.py", line 346, in configure
self.plugins.configure(options, self)
File "/Library/Python/2.7/site-packages/nose/plugins/manager.py", line 284, in configure
cfg(options, config)
File "/Library/Python/2.7/site-packages/nose/plugins/manager.py", line 99, in __call__
return self.call(*arg, **kw)
File "/Library/Python/2.7/site-packages/nose/plugins/manager.py", line 167, in simple
result = meth(*arg, **kw)
File "/Library/Python/2.7/site-packages/nosegae.py", line 87, in configure
from google.appengine.tools import old_dev_appserver as dev_appserver
ImportError: cannot import name old_dev_appserver
The sys.path reads:
'/Users/dsinha/Downloads/eclipse/plugins/org.python.pydev_3.9.0.201411111611/pysrc', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/antlr3', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/fancy_urllib', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/ipaddr', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/yaml-3.10', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/rsa', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/pyasn1', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/pyasn1_modules', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/simplejson', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/lib/django-1.4', '/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/goo...
so the App Engine libraries should be getting pulled in. Actually it's first failing to import dev_appserver, and then trying and failing to import old_dev_appserver.
The contents of the appengine tools directory are:
bash-3.2$ ls
__init__.py appcfg.py backends_xml_parser.py dispatch_xml_parser.py handler_generator.py php_cli.py value_mixin.pyc
__init__.pyc appcfg_java.py boolean_action.py docker handler_generator.pyc queue_xml_parser.py web_xml_parser.py
adaptive_thread_pool.py appengine_rpc.py boolean_action.pyc dos_xml_parser.py jarfile.py queue_xml_parser.pyc web_xml_parser.pyc
api_server.py appengine_rpc.pyc bulkload_client.py download_appstats.py java_quickstart.py remote_api_shell.py xml_parser_utils.py
app_engine_config_exception.py appengine_rpc_httplib2.py bulkloader.py endpointscfg.py java_quickstart.pyc requeue.py xml_parser_utils.pyc
app_engine_config_exception.pyc augment_mimetypes.py cron_xml_parser.py gen_protorpc.py java_utils.py sdk_update_checker.py yaml_translator.py
app_engine_web_xml_parser.py augment_mimetypes.pyc dev-channel-js.js handler.py java_utils.pyc sdk_update_checker.pyc yaml_translator.pyc
app_engine_web_xml_parser.pyc backends_conversion.py devappserver2 handler.pyc os_compat.py value_mixin.py
bash-3.2$ pwd
/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine/google/appengine/tools
I also tried to find the modules available inside the google.appengine.tools package:
>>> import pkgutil
>>> [name for _, name, _ in pkgutil.iter_modules(['testpkg'])]
[]
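Note that pkgutil.iter_modules(['testpkg']) only scans a relative directory literally named testpkg, which is why it returns an empty list. To list what the SDK package really contains you would point it at the package's own __path__, roughly (SDK root taken from the sys.path dump above):
import sys
import pkgutil

# SDK root from the question; adjust to your install location.
sys.path.insert(0, "/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine")

import google.appengine.tools as tools
print([name for _, name, _ in pkgutil.iter_modules(tools.__path__)])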
This problem started occurring after I upgraded to App Engine 1.9.10 (to use the async search features). In what I think is a related problem, when I try to run the debug server from PyDev, it just silently terminates on any page request (localhost:8080).
Running dev_appserver . from the command line works fine though.
Nose-GAE broke with App Engine 1.9.17: https://github.com/Trii/NoseGAE/issues/6
Downgrading to 1.9.15 made the problem go away temporarily while waiting for the issue to be resolved in NoseGAE.
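If you want to confirm which situation you are in before downgrading, the import that NoseGAE falls back to (see the traceback above) can be tried by hand; a sketch, again using the SDK path from the question:
import sys

sys.path.insert(0, "/Users/dsinha/Dropbox/code/google-cloud-sdk/platform/google_appengine")

try:
    # The NoseGAE version in the traceback expects this module; newer SDKs
    # no longer ship it (see the GitHub issue linked above).
    from google.appengine.tools import old_dev_appserver
    print("old_dev_appserver available: %s" % old_dev_appserver.__file__)
except ImportError as exc:
    print("old_dev_appserver missing from this SDK: %s" % exc)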
