Starting and stopping App Engine instances with Docker - google-app-engine

I've set up an App Engine project locally using Docker (on OSX), and have been running a server using the usual "gcloud preview app run app.yaml" command. From what I can tell, this keeps creating new images over and over again. After an hour or so of work I end up with something like 30 docker images, each taking 130MB.
Eventually I'm told I can no longer bind to localhost:8080. I tried killing all containers and images, but still cannot use localhost:8080 until I reboot.
It seems like I'm not using Docker/gcloud correctly. Does anyone have an idea what I might be doing wrong? Is there another way I should be restarting App Engine instances other than pressing Ctrl+C and running the "run" command again?
UPDATE: After looking closer, I noticed I'm getting this message when I run an app locally and a container is created: "http: Hijack is incompatible with use of CloseNotifier". I'm not familiar enough with Docker to understand what's going on here. All searches seem to point to Go, which I am not using.
UPDATE 2: Here is the trace:
Creating container...
INFO 2015-05-05 02:23:28,293 containers.py:560] Container 1564ce4344957114312d6d1dc696ffbb4176b40ace6dcff5e4239e13ee04a8f6 created.
Exception in thread Thread-2:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Users/judeosborn/google-cloud-sdk/platform/google_appengine/google/appengine/tools/docker/containers.py", line 643, in _ListenToLogs
for line in log_lines:
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 225, in _multiplexed_response_stream_helper
socket = self._get_raw_response_socket(response)
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 167, in _get_raw_response_socket
self._raise_for_status(response)
File "/Users/judeosborn/google-cloud-sdk/./lib/docker/docker/client.py", line 119, in _raise_for_status
raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
INFO 2015-05-05 02:23:28,606 module.py:1745] New instance for module "default" serving on:
http://localhost:8080

There's an ongoing issue with Docker 1.6.x [reference] that prevents gcloud from working well with Managed VMs (which you seem to be using). The easiest workaround until it gets fixed is to downgrade Docker on your development machine to version 1.5.0, the latest version known to work.
For Ubuntu, you can do something like:
$ curl -sSL https://get.docker.com/ubuntu | sed 's/lxc-docker/lxc-docker-1.5.0/' | sudo sh
For other Linux distros, you might have to modify that sed pattern, though.
On the other hand, if you're using Boot2Docker under Mac OS X, follow these steps:
Fully uninstall your previous Boot2Docker/Docker setup; there is a nice guide here
Reinstall Boot2Docker/Docker following the instructions here. IMPORTANT: You MUST stop right after completing the "Install Boot2Docker" step and before "Start the Boot2Docker Application". Once you get there, open up a terminal and execute the following commands:
$ mkdir ~/.boot2docker
$ echo 'ISOURL="https://github.com/boot2docker/boot2docker/releases/download/v1.5.0/boot2docker.iso"' > ~/.boot2docker/profile
At this point, you can continue with the "Start the Boot2Docker Application" section and finish the installation. You should now have a working Docker setup with which to start Managed VMs. It's worth double-checking that you have the right versions installed by issuing:
$ boot2docker ssh docker version | egrep "(Client|Server) version"
The output should look like:
Client version: 1.5.0
Server version: 1.5.0
Now you can try your original command again:
$ gcloud preview app run app.yaml
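If the old containers and images from those earlier runs are still hanging around (the ~30 images you mentioned), it's worth clearing them out before retrying. A rough cleanup sketch; note that this removes every container and all untagged images, so double-check before running:
$ docker rm -f $(docker ps -a -q)                   # remove all containers, running or stopped
$ docker rmi $(docker images -q -f dangling=true)   # remove untagged (dangling) images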

Try running:
$ ps uax | egrep "gcloud|appserver"
If you see anything running, kill it... you may even need to kill -9 it.
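For example (dev_appserver is an assumption about the process name, based on what gcloud preview app run typically launches; adjust it to match whatever the ps output actually shows):
$ pkill -f dev_appserver    # or note the PID from the ps output and run: kill -9 <PID>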

Related

VOLTTRON install on Raspbian Buster

Can I get a tip for installing on Raspbian Buster? I'm hung up on the install directions at the step that checks the status of the RabbitMQ server. Trace from the bash console:
(volttron) pi@raspberry:~/Desktop/volttron $ echo 'export RABBITMQ_HOME=$HOME/rabbitmq_server/rabbitmq_server-3.7.7'|sudo tee --append ~/.bashrc
export RABBITMQ_HOME=$HOME/rabbitmq_server/rabbitmq_server-3.7.7
(volttron) pi@raspberry:~/Desktop/volttron $ source ~/.bashrc
pi@raspberry:~/Desktop/volttron $ RABBITMQ_HOME/sbin/rabbitmqctl status
bash: RABBITMQ_HOME/sbin/rabbitmqctl: No such file or directory
There are a few tracebacks earlier on the installation...
Whether or not it makes a difference, here is the entire bash console process. For the git gist link, I just used the name install.py even though it's just bash commands copy-pasted per the install directions...
pi@raspberry:~/Desktop $ git clone https://github.com/VOLTTRON/volttron --branch releases/7.x
It looks like there are a couple of different issues going on here:
The issue you quote above (RABBITMQ_HOME/sbin/rabbitmqctl: No such file or directory) is that your shell isn't finding the rabbitmqctl command. It looks like you added the RABBITMQ_HOME environment variable to your .bashrc, but used the string RABBITMQ_HOME instead of the variable expansion $RABBITMQ_HOME when you tried to run the command. Try running it as $RABBITMQ_HOME/sbin/rabbitmqctl status instead.
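For example, the corrected sequence might look like this (assuming RabbitMQ really is unpacked under $HOME/rabbitmq_server as in your transcript):
$ echo 'export RABBITMQ_HOME=$HOME/rabbitmq_server/rabbitmq_server-3.7.7' >> ~/.bashrc
$ source ~/.bashrc
$ $RABBITMQ_HOME/sbin/rabbitmqctl status    # note the leading $ so the variable gets expanded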
The rabbitmqctl status command will check the status of the rabbitmq application, but I don't think you've done anything to start it yet (that happens when you bootstrap the platform and/or start the platform configured to use the RMQ broker)
I think the traces earlier in the installation process are problematic (it appears to be the same error hit in two different ways), but you just haven't run into their consequences yet. I haven't seen any issues building gevent on the RPi 4 with Buster (though it is pretty slow), but the ctypes error makes me wonder if there's an issue with the underlying C library it is trying to build on top of. I did notice that you're getting amd64 Erlang packages; are you running Raspbian on an x86 processor? (If so, this isn't a permutation we've tried, and you may be hitting some package compatibility edge case we haven't seen.)
One thing to try is to manually install cython into your virtual environment and then run the bootstrap script again with that virtual environment activated. You could also try to pip install gevent==20.6.1 directly in that virtual environment (this is what the bootstrap script was doing at the failure point). VOLTTRON depends on gevent, so if that isn't installing, the platform won't be able to run.
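If you want to try that by hand before re-running bootstrap, a rough sketch (the env/ path is an assumption about where bootstrap.py created your virtual environment; adjust as needed):
$ cd ~/Desktop/volttron
$ source env/bin/activate                 # activate the virtual environment
(volttron) $ pip install cython
(volttron) $ pip install gevent==20.6.1   # the version from your failure log
(volttron) $ python3 bootstrap.py         # then re-run the bootstrap with the env active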

ConEmu doesn't work with WSL since Windows update

Since I updated Windows, my ConEmu terminal gives me the following error each time a session is created:
wslbridge error: failed to start backend process
note: backend error output: -v: -c: line 0: unexpected EOF while looking for matching `''
-v: -c: line 1: syntax error: unexpected end of file
ConEmuC: Root process was alive less than 10 sec, ExitCode=0.
Press Enter or Esc to close console...
Does anyone have an idea how to get ConEmu working with a WSL terminal again? Thank you.
A similar error is caused by upgrading WSL from v1 to v2.
If you read through the discussion on this GitHub issue for ConEmu, you'll find a variety of instructions that can be distilled into:
Change the command for the task {Bash::bash} to the following:
wsl.exe
A GitHub user posted this workaround which worked for me:
I've fixed the issue by doing this:
Download the latest cygwin1-20200531.dll.xz from https://cygwin.com/snapshots/ and unpack the file as cygwin1.dll into ConEmu\wsl\ (replacing the original file there)
Download @Biswa96's wslbridge2 from https://github.com/Biswa96/wslbridge2/releases and unpack it to the same directory
Replace the {WSL::bash} task's Command with:
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe %ConEmuBaseDirShort%\wsl\wslbridge2.exe -cur_console:pm:/mnt -eConEmuBuild -eConEmuPID -eConEmuServerPID -l
I can now access my Ubuntu under W10 just like before the W10 upgrade. Backscroll and arrows in VIM work as expected.
The key part of step 3 is to replace conemu-cyg-64.exe --wsl with conemu-cyg-64.exe %ConEmuBaseDirShort%\wsl\wslbridge2.exe.
Longer term, it looks like the author of ConEmu is working on switching to the new Windows PTY API, which will eliminate the need for the wslbridge hack (and many others) entirely.
I had the same issue with the last Windows update
(Feature update to Windows 10, version 2004, successfully installed on 9/1/2020).
The error does not seem to be related to upgrading WSL from v1 to v2:
$ wsl -l -v
  NAME            STATE           VERSION
  Ubuntu-20.04    Running         1
Nevertheless, this workaround worked for me as well, thank you so much!
Exactly this happens when upgrading WSL from v1 to v2. You have to open Cmder and, in the startup command or Task, enter wsl.exe, and that's it: Cmder is working again.
Yes, the new command for WSL 2 is much simpler, but just running wsl does not cause .profile to be read, because launching this way does not request a login shell, and it launches as root.
A better command specifies the user and invokes the shell of your choice (bash is most common) with the appropriate option. For bash, a login shell is desirable so that .profile, .bashrc and .bash_aliases get sourced, if present. The -l option (lowercase L) does that:
wsl -u yourUserName bash -l

Get catalina.out in Apache Tomcat log

Currently trying to open the file gives this error:
C:\Users\....\....\apache-tomcat-8.0.45\logs>catalina.out
The process cannot access the file because it is being used by another process.
What I have done is put the application in webapps and start Tomcat using the following command:
catalina.bat jpda start
Now I want to see the logs on Windows. On Ubuntu, tail -f catalina.out can be used, but how can I watch the Tomcat logs on Windows?
As stated in answers here, you can try more catalina.out or type catalina.out
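If you have PowerShell available, Get-Content can also follow the file as it grows, much like tail -f:
PS> Get-Content .\catalina.out -Wait -Tail 100    # run from the logs directory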

Managed VM Deployment hangs on "Copying certificates for secure access..."

I'm running the following command to deploy my Managed VMs app (on Windows 10):
gcloud preview app deploy app.yaml --project=<PROJECT> --promote
The deployment starts but hangs on the following line:
Copying certificates for secure access. You may be prompted to create an SSH keypair.
And after some time I get the error:
ERROR: (gcloud.preview.app.deploy) Unable to copy certificates.
I've already:
Made sure that there are SSH keys in ~\.ssh\google_compute_engine
Tried to run with --quiet - same results
Renamed ssh-term.exe to ssh.exe - same results
Run the command as an administrator.
Run the command with --verbosity debug, which prints the following line multiple times: DEBUG: File [f] does not exist locally.
Any help will be much appreciated!
Found the cause! It was the project's firewall that blocked SSH by default. Fixed that and it worked.
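For anyone else who hits this, re-allowing inbound SSH on the project's network is usually a one-liner; something like the following (the rule name and network are only examples, adjust to your setup):
$ gcloud compute firewall-rules create default-allow-ssh --network default --allow tcp:22 --source-ranges 0.0.0.0/0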
Glad you fixed it; I had the same problem and will use your fix. I did happen across a workaround: using the Container Build API to perform the build.
Enter the command
gcloud config set app/use_cloud_build true
before you run
gcloud preview app deploy
Cite: https://github.com/isusanin/google-cloud-sdk/issues/533

OpenDaylight (ODL) ovs-vsctl not found error

I am following this tutorial: https://wiki.opendaylight.org/view/Getting_started
I am trying to use the following command in OpenDaylight using Karaf:
ovs-vsctl show
But the command window says Command not found: ovs-vsctl
I have installed all the necessary libraries and the localhost server (http://localhost:8181/dlux/index.html) is running fine. But somehow ODL can't find OVS.
Can anyone tell me what the error is? I am running Windows 8.
Thank you
You need to run this command outside of the Karaf terminal.
First, you should have OVS (Open vSwitch) or Mininet installed, and then create one or two switches.
Basically, you started the SDN controller in Karaf, and in the step where you are encountering the problem, the switches need to be assigned the ODL controller as their manager; see the sketch below.
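If you don't have any switches yet, a quick sketch of getting one talking to the controller (run where OVS is installed; <controller-ip> is a placeholder, and 6633 is the usual OpenFlow port ODL listens on):
$ sudo ovs-vsctl add-br br0                                   # create a test bridge
$ sudo ovs-vsctl set-controller br0 tcp:<controller-ip>:6633  # point it at the ODL controller
$ sudo ovs-vsctl show                                         # br0 should now list its controller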
You should also check that ovsdb is installed in Karaf.
For that, try executing the following command:
feature:list | grep ovsdb
That command will display all the ovsdb components/features that are available in your Karaf distribution. The third column indicates whether a given component is already installed (an x means it is installed). If you want to install a component/feature:
feature:install <name_of_the_feature>
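For example (exact feature names vary between ODL releases, so pick one from the feature:list output; odl-ovsdb-southbound-impl is a common one):
feature:install odl-ovsdb-southbound-impl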
After that, try executing it outside of Karaf, as Sidhant01 indicated before.
Try to do it with sudo:
sudo ovs-vsctl show.
If you want to configure ovsdb in an active mode:
tools-vm:~$ sudo ovs-vsctl set-manager tcp:127.0.0.1:6640
tools-vm:~$ sudo ovs-vsctl show
98d8cf7a-44b1-4b02-a60c-7d832409d06f
    Manager "tcp:127.0.0.1:6640"
        is_connected: true
    ovs_version: "2.0.2"
Cheers
