I'm using the Managed VM functionality to run a WebSocket server that I'd like to expose to the Internet on any port (preferably port 80) through a URL like: mvm.mydomain.com
I'm not having much success yet.
Here are the relevant parts of various files I'm using to accomplish this:
Dockerfile:
EXPOSE 8080 8081
At the end of the Dockerfile, a Python app is started: it responds to health checks on port 8080 (I can verify this works) and responds to WebSocket requests on port 8081.
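In case the shape helps anyone, a stripped-down sketch of such a Dockerfile (the base image and entrypoint here are placeholders, not the originals):

FROM some/python-base            # placeholder base image
ADD . /app
WORKDIR /app
EXPOSE 8080 8081
CMD ["python", "server.py"]      # placeholder: serves /_ah/health on 8080, WebSockets on 8081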
app.yaml:
module: mvm
version: 1
runtime: custom
vm: true
api_version: 1
network:
  forwarded_ports: ["8081"]
I deploy this app to the cloud using:
$ gcloud preview app deploy .
In the cloud console, I make sure TCP ports 8080 and 8081 are accepted for incoming traffic. I also observe the IP address assigned to the GCE instance (mvm:1) is: x.y.z.z.
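For reference, opening those ports from the console is equivalent to something like this (the rule name is illustrative):

$ gcloud compute firewall-rules create allow-mvm-ports \
    --allow tcp:8080,tcp:8081 \
    --source-ranges 0.0.0.0/0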
$ curl http://x.y.z.z:8080/_ah/health
$ curl http://mvm.my-app-id.appspot.com/_ah/health
Both respond with 200 OK.
Connecting to the WebSocket server with some JavaScript works as well:
new WebSocket('ws://x.y.z.z:8081');
So far so good. Except this didn't work (timeout):
new WebSocket('ws://mvm.my-app-id.appspot.com:8081');
I'd like to know why the above WebSocket command doesn't work.
Perhaps something I don't understand in the GAE/GCE port forwarding interaction?
If this could be made to work somehow, I envision the following would be the last steps to finish it.
dispatch.yaml:
dispatch:
  # Send all websocket traffic to the ManagedVM module.
  - url: "mvm.mydomain.com/*"
    module: mvm
I also set up the GAE custom domain CNAME at mvm.mydomain.com.
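For reference, that mapping comes down to a CNAME record along these lines (ghs.googlehosted.com is the usual App Engine alias target; the exact value comes from the custom domain setup flow):

mvm.mydomain.com.  CNAME  ghs.googlehosted.com.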
Connecting to the WebSocket server from JavaScript should then work like this:
new WebSocket('ws://mvm.mydomain.com:8081');
It may very well be that port forwarding from appspot.com isn't performed, given that prior to the (relatively recent) release of managed VMs, the only traffic that went to appspot.com was on port 80 or 443. I'd suggest using the IP-of-instance method you found to work.
If you don't find that fully satisfying, you should go to the public issue tracker for App Engine and post a feature request to have the appspot.com router detect whether a request is heading for a module that corresponds to a Managed VM and attempt the port forwarding in that case.
The thing is, putting the raw port at the end of the domain like that means your browser uses it as the connection port to appspot.com, not as a query parameter, so appspot.com would have to listen on every port and forward requests when valid. That could be insecure and inefficient, so perhaps the port number could instead be a query parameter or part of the domain string, similar to how version and module can be specified...
At any rate, given the way ports work, if your very simple example failed, I would highly doubt that App Engine's appspot.com domain is even set up at present to handle port forwarding to Managed VM containers at all.
Related
I read through this article:
How to properly configure VPC firewall for App Engine instances?
This was a huge help in getting the firewall set up in the first place, so for those who have found this and are struggling with that: follow along. https://cloud.google.com/appengine/docs/flexible/python/using-shared-vpc is a good reference, as there are some service accounts that need permissions added to make the magic happen.
My issue: I have two containerized services running in App Engine, one default (the website) and one API. I've configured the API to run in a VPC/subnet separate from the default one. I have not made any changes to the firewall settings hanging directly off the App Engine settings, as those are global and don't let you target a specific instance; the website needs to remain public, while the API should require whitelisted access.
dispatch.yaml for configuring subdomain mapping
dispatch:
  - url: "www.example.com/*"
    service: default
  - url: "api.example.com/*"
    service: api
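Worth remembering that dispatch rules only take effect once the file itself is deployed:

$ gcloud app deploy dispatch.yaml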
API yaml settings:
network:
  name: projects/mycool-12345-project/global/networks/apis
  subnetwork_name: apis
  instance_tag: myapi
Create a VPC network
name - apis
subnet name - apis
creation mode - automatic
routing mode - regional
dns policy - none
max MTU - 1460
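In gcloud terms, those settings amount to roughly this (a sketch of what the console does):

$ gcloud compute networks create apis --subnet-mode=auto \
    --bgp-routing-mode=regional --mtu=1460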
Add firewall rules
allow 130.211.0.0/22, 35.191.0.0/16 port 10402,8443 tag aef-instance priority 1000
deny 0.0.0.0/0 port 8443 tag myapi priority 900
allow 130.211.0.0/22, 35.191.0.0/16 port 8443 tag myapi priority 800
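For reference, those three rules translate to gcloud roughly as follows (rule names are illustrative):

$ gcloud compute firewall-rules create allow-hc-aef --network apis \
    --allow tcp:10402,tcp:8443 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags aef-instance --priority 1000
$ gcloud compute firewall-rules create deny-all-myapi --network apis \
    --action DENY --rules tcp:8443 --source-ranges 0.0.0.0/0 \
    --target-tags myapi --priority 900
$ gcloud compute firewall-rules create allow-hc-myapi --network apis \
    --allow tcp:8443 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags myapi --priority 800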
This works, but I cannot specify the whitelist IP.
If I disable the "allow 130.211.0.0/22, 35.191.0.0/16 port 8443 tag myapi priority 800" rule and instead add:
allow my.ip.number.ihave port 8443 tag myapi priority 800
it never trips this rule; it never recognizes my IP.
What change is needed, i.e. how do you configure the firewall in the VPC so it sees the client's public IP? When I reviewed the logs, it said my request was denied because my IP address was 35.x.x.x.
I would recommend contacting GCP support in that case. If I'm not mistaken, you can whitelist the IP addresses directly at the App Engine level, but it's not a standard procedure.
I am working on an angular app using the angular cli to set things up. Running the ng serve command spawns a server at this address <my_ec2_host_name>:4200. When I try to access the page on the browser it doesn't work (connection timed out error). I believe this is because of security reasons so I added the following rule to my security groups for the ec2 instance:
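(The rule referenced above is missing here; presumably it was an inbound rule of this shape, shown as a reconstruction rather than the original:)

Type: Custom TCP Rule | Protocol: TCP | Port Range: 4200 | Source: 0.0.0.0/0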
Port 4200 should now be accessible but I still can't get the page to load. Can someone think of how to get this to work?
Start Angular with the command below.
ng serve --host=0.0.0.0 --disable-host-check
This disables the host check and allows access via the instance's IP.
You can set the host option like this:
ng serve --host 0.0.0.0
The steps you are doing are correct for opening a port via Security Groups in the EC2 console. Make sure you are modifying the correct security group, and make sure that your changes have been saved.
Your container may have additional firewalls in place, so you will want to check the OS documentation. For Example, RHEL uses iptables as a further security measure: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-IPTables.html.
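For example, on an iptables-based distro the check and fix might look like this (a sketch; the rule is not persisted across reboots unless saved):

$ sudo iptables -L -n                                  # list current rules
$ sudo iptables -I INPUT -p tcp --dport 4200 -j ACCEPT # open the dev-server port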
That looks correct. Are you sure that your server is running and listening for connections?
You should ssh to that server and verify that the page can be loaded locally. Eg:
curl http://<YOUR HOST IP ADDRESS>:4200
eg: curl http://54.164.10.123:4200
You should be careful to use the public IP address (e.g. the IPv4 Public IP shown in the EC2 console). I've run into problems in the past where I've had a server listening on one IP address (often localhost) and not the public one.
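A quick way to see which address the dev server is actually bound to (assuming a Linux host):

$ sudo ss -tlnp | grep 4200    # or: sudo netstat -tlnp | grep 4200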
Also maybe a problem: Is your host inside a VPC of some sort?
I have a double project solution: 1) Angular front-end 2) WebAPI back end.
We are deploying to an Amazon EC2 instance. On that box I create one website on port 80 (stopping the default) for the Angular code, and a second website on a non-80 port for the WebAPI. The solution doesn't work on the EC2 box at the moment, only on my dev box with dev-type settings. Before I choose a remedy path, I was wondering what best practice is.
Obviously one puts the Angular app on port 80 because it is HTML content, but what about the API: does one put it on another port, or use a DNS subdomain and keep port 80? [At some point I'll need to do HTTPS as well, so that is a factor; too many ports?]
Ultimately, both the HTML and the web APIs should be served by a single server.
This is because browsers enforce the same-origin policy (relaxed only via CORS): if you receive HTML content from 'http://domainname:80/index.html', you cannot make AJAX and HTTP PUT/GET requests to 'http://domainname:8080/api/feature' and so on, since a different port counts as a different origin.
That being said, you can have a front-end listener like nginx or Tomcat on port 80 and serve the Angular app plus all other static HTML directly on port 80.
That is, you get your page at http://domainname:80/index.html and host all the API calls on a different port, but have nginx redirect those calls, based on some rule you define (a subdomain, or anything that isn't asking for index.html), to your other server running on port 8080. Make sure to block public access to that port in your production environment so that nobody can call your APIs directly.
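A minimal nginx sketch of that layout (paths, names and ports are illustrative):

server {
    listen 80;
    server_name domainname;

    # Serve the built Angular app as static files.
    root /var/www/angular/dist;
    index index.html;

    # Proxy API calls to the backend on 8080; keep 8080 closed to the public.
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}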
I have installed my web application on 2 Windows-based VMs on GCE. My application runs on port 8080.
Steps followed for the Network Load Balancer (sketched in gcloud form below):
1) I created health checks for port 8080.
2) Added both my VMs and the health checks to a target pool.
3) In forwarding rules, I created a rule for port 8080 for that particular target pool.
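A hedged gcloud sketch of those three steps (names, region and zone are illustrative):

$ gcloud compute http-health-checks create hc-8080 --port 8080 --request-path /
$ gcloud compute target-pools create web-pool --region us-central1 --http-health-check hc-8080
$ gcloud compute target-pools add-instances web-pool --instances vm-1,vm-2 --instances-zone us-central1-a
$ gcloud compute forwarding-rules create fr-8080 --region us-central1 --ports 8080 --target-pool web-pool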
After this, go to Target Pools and check the health of the VMs.
Here a red symbol is shown against both instances, with the message "instance is unhealthy for ".
I have added port 8080 in Firewall rules.
If anyone can help: am I doing anything wrong, or is there some other way to set up the load balancer?
I believe this issue is not related to the fact that you are listening on port 8080. The health check will pass as long as your instances are able to communicate with the Metaserver (169.254.169.254 [1]) and respond with a valid HTTP page.
You must be sure you have allowed communication on port 8080 both in the Google firewall and in your Windows instance's firewall [2]. For debugging, you can try to ping the Metaserver and capture IP packets to confirm there is a 3-way handshake between the Metaserver and your GCE instance. Additionally, you might want to try the same setup with the same instances on port 80, to confirm whether it is actually related to the port.
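On the Windows side, opening the port looks something like this (a sketch):

netsh advfirewall firewall add rule name="Allow TCP 8080" dir=in action=allow protocol=TCP localport=8080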
[1] https://cloud.google.com/compute/docs/metadata
[2] https://cloud.google.com/compute/docs/networking
I'm using OpenVZ Web Panel to manage my VPS servers and when I scanned my server with nmap I saw:
PORT     STATE    SERVICE      VERSION
22/tcp   open     ssh          OpenSSH 6.0p1 Debian 4 (protocol 2.0)
135/tcp  filtered msrpc
139/tcp  filtered netbios-ssn
445/tcp  filtered microsoft-ds
3000/tcp open     http         **WEBrick httpd 1.3.1 (Ruby 1.8.7 (2012-02-08))**
Service Info: OS: Linux; CPE: cpe:/o:linux:kernel
How do I hide the **WEBrick httpd 1.3.1 (Ruby 1.8.7 (2012-02-08))**?
Late to the party as I am, I encountered this question so I might as well answer it. I don't find your requirements entirely clear, so I'll give a conditional answer:
If you don't want WEBrick to be visible at all, remove or comment its virtual host entry
If you don't want WEBrick to be running on :3000, you have two choices:
Change the virtual host entry so that it listens on :80 instead
Put nginx in front of it, proxying somedomain:3000 to 127.0.0.1:80, and change WEBrick's virtual host entry so that it listens on 127.0.0.1:80 (you will need a domain name pointed at this machine); see the sketch after this list
If you want WEBrick to be running but only accessible locally, change its virtual host entry so that it listens on 127.0.0.1:3000
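For the nginx option, a minimal sketch (domain and ports as described above; server_tokens off also keeps nginx's own version out of its headers):

server {
    listen 3000;                 # the public port nmap probes
    server_name somedomain;
    server_tokens off;           # don't advertise nginx's version either

    location / {
        proxy_pass http://127.0.0.1:80;   # WEBrick, now bound to localhost only
        proxy_set_header Host $host;
    }
}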
You cannot have WEBrick running and publicly accessible without nmap being able to discover it, because nmap discovers it the same way any client discovers it: by attempting to establish a connection with the indicated IP address and port.