Connect to on-premises device through GCP App Engine - google-app-engine

I would like to know how to send a TCP request from the App Engine flexible environment (a Python application) to TCP port 9701 on an on-premises device and get the data back from the device.
Option 1 - Set up Cloud VPN and put firewall hardware in front of the existing on-premises router (if the router does not support IPsec VPN).
Option 2 - Put the on-premises router in DMZ mode with IP mapping and port forwarding.
Has anyone tried this? I would appreciate an idea of how it works, and of any hardware firewall that has worked with GCP Cloud VPN.
Thanks in advance!
Thanks in advance!

Your question is actually very complex. I will briefly touch upon both of your options.
Option 1 - Set up Cloud VPN and put firewall hardware in front of the existing on-premises router (if the router does not support IPsec VPN).
Setting up Google Cloud VPN requires a router on your side that supports Google's requirements. Most cheap routers will not meet the minimum requirements.
This method is called site-to-site and you are basically connecting your internal network to your Google Cloud networks (VPCs). This requires a good understanding of VPNs and routing. The benefit is that all your traffic is secure and encrypted. Your internal systems can access your Google systems using their private IP addresses.
Your router must have a reliable, static, public IPv4 address.
Your internal network addressing cannot overlap with your VPCs.
If you put a firewall in front of your VPN router, the firewall must allow ESP (IPsec) and IKE traffic to pass through.
Your router must support prefragmentation.
Dynamic routing (BGP) is preferred; static routing is supported.
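Whichever of your two options you end up with, the App Engine side of the exchange is just a plain TCP client. A rough sketch in Python (the 10.0.0.50 address and the request bytes are hypothetical; over Cloud VPN you would use the device's private IP, and with Option 2 your router's public IP and the forwarded port):

    # Minimal sketch: open a TCP connection from the Flex app to the device,
    # send a request, and read the reply. The address below is hypothetical.
    import socket

    DEVICE_ADDR = ("10.0.0.50", 9701)

    with socket.create_connection(DEVICE_ADDR, timeout=10) as sock:
        sock.sendall(b"STATUS?\n")    # whatever request frame your device actually expects
        reply = sock.recv(4096)       # adjust read size/framing to your device's protocol
        print("Device replied:", reply)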
Option 2 - Put the on-premises router in DMZ mode with IP mapping and port forwarding.
This method does not involve Cloud VPN. Your side is public, and your Google resources (App Engine) simply access your public IP address. There is no added encryption or security in this configuration unless you add it yourself. For low-cost setups that do not require traffic security beyond HTTPS, this is usually acceptable. However, you have not provided your network map, services, etc., so it is hard to say how you should NAT/PAT and secure your traffic.
A word about DMZ. Most people assume that it is secure. It is not, unless you also have an intelligent firewall in front of your DMZ. A DMZ just passes traffic blindly from port A on the public side to port B on the private side. Many a system has been hacked because the admin thought that DMZ translated to security. Any system connected through a DMZ should be considered at high risk of attack and breach.
What is the best solution? Redesign your requirements so that App Engine does not need to get into your internal network.

To complement the previous answer:
If you cannot buy, or do not know which hardware is supported by, the Cloud VPN service, you can use a VM or an on-premises server as a firewall running pfSense. This is a FreeBSD-based distribution that can manage your network security like an NGFW and can be installed on bare metal or in a VM.
Having said that, to configure a site-to-site VPN connection between your own network and the GCP network you can follow this tutorial, which explains step by step the configuration you need to perform both in GCP and in pfSense.


How can I host my React website publicly on my Raspberry Pi?

I made a React website and everything is working fine, but I can't figure out how to host it on my Raspberry Pi 4 and make it publicly reachable by other people. I also bought a domain. So my question is: how can I make my React site public and running on my Pi? Thanks!
You have to configure a web server like Apache and build the React app with npm or yarn, then copy the build files to the /var/www/html/ directory. You will also have to set up port forwarding on your router to your local web server to open it to the public. Just be careful with security.
You need to configure a web server to host the site: either Apache2 or Nginx (I have a personal preference for Nginx, but either works fine for this). Under Debian/Raspberry Pi OS, /var/www/html is served on port 80 on all of the Pi's IP addresses once the web server is installed. Place the site files there and make sure you can access the site from the Pi's IP address.
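If you want a quick sanity check that the build itself works before wiring up Apache/Nginx, you could temporarily serve it with Python's built-in HTTP server (this assumes the production build lives in ./build, the default output of npm run build):

    # Temporary test server for the React build output; not for production use.
    # Assumes the build output is in ./build relative to where this script runs.
    import http.server
    import socketserver

    PORT = 8000

    class BuildHandler(http.server.SimpleHTTPRequestHandler):
        def __init__(self, *args, **kwargs):
            # Serve files from ./build instead of the current working directory.
            super().__init__(*args, directory="build", **kwargs)

    with socketserver.TCPServer(("0.0.0.0", PORT), BuildHandler) as httpd:
        print(f"Serving ./build on http://0.0.0.0:{PORT}")
        httpd.serve_forever()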
For making the site available outside of your network, you will either need to look at port forwarding, hosting a VPS with a public IP, or using a tunnel. Port forwarding is likely going to be the hardest option and may not always work, but doesn't require any external services outside of your DNS provider. Tunneling is probably the second easiest, and using a VPS is almost certainly the easiest.
For port forwarding, you will need to verify that your ISP does not use CGNAT, otherwise this will not work. Assuming they don't, you will need to access your router's configuration and set up TCP port 80 to forward to your Pi's IP address. I would assign your Pi a static IP address, either on the Pi itself or using a DHCP reservation. Next, you need to see if your DNS provider offers Dynamic DNS. If not, you will need to manually update your DNS settings whenever your network's public IP changes (unless you purchased a static IP from your provider). In this setup, you point your domain at your network's public IP, and traffic goes directly between the client's browser and your Pi.
Tunneling is a fair bit easier. I personally use Cloudflare for my DNS: I set my domain at my registrar to point to Cloudflare, then used their tunneling tool (cloudflared) to tunnel traffic from their servers to my Pi. There are other tunneling services, but I think Cloudflare's is the best of the ones I have used. In this setup, you point your domain at Cloudflare, which forwards the traffic to you via the tunnel; traffic goes from the browser to Cloudflare to your Pi.
Using a VPS is probably the easiest, and your knowledge of working with the Pi applies to working with a VPS, assuming you run Debian Linux or similar on it. You would install the web server on the VPS, put the app on it, and point your domain at the VPS's public IP. In this setup, traffic goes from the client browser to your VPS. This is the only non-free option (excluding the price of the domain itself), and it keeps your local private network safer by not putting public services on it. You can also run a tunnel between your Pi and your VPS if you want (see https://www.jeffgeerling.com/blog/2022/ssh-and-http-raspberry-pi-behind-cg-nat for an example), but I don't personally see the point unless you really want the app to be served from your Pi.

Cloud Run static outbound IP address does not go through Google App Engine firewall

I have a Python (Flask) application running on Google App Engine (flex); the application is protected by the GAE firewall, where:
Default rule is 'Deny' all ingress
There is a whitelist of IP addresses from which traffic is allowed.
I have some microservices deployed on Cloud Run (fully managed) which:
Receive requests from the GAE app (e.g. for heavy duty tasks)
Send the results of whatever they process as http requests back to handlers/endpoints in the GAE app
Thus the GAE app is the main point of interaction with clients and a dispatcher of heavy tasks, while the processing of those tasks is carried out by the microservices. I have set up a static outbound IP address for the Cloud Run hosted service, which I have verified works: traffic is routed through the NAT gateway as required by the documentation. The respective NAT IP address is on the firewall whitelist.
The problem is that the firewall still does not let in the Cloud Run >>> GAE app requests, which bounce back with 403 statuses (of course, if I change the default firewall rule to 'Allow', traffic goes through). If I host the same microservice in a Docker container on a GCE VM with a static IP address like this, everything works flawlessly. This makes me hypothesize that, although Cloud Run outbound traffic is indeed routed through the static IP address when the destination is outside GCP, when I try to reach an internal (project-wise) asset it still goes through some dynamically selected IP (i.e. the static IP solution simply does not work). Unfortunately the logs don't show the 403-ed attempts, so I can't see what IP addresses those requests appear to come from (from a GAE standpoint).
I would be very grateful for ideas how this can be fixed as it greatly diminishes the value of the otherwise wonderful idea to have static outbound IP addresses for Cloud Run.
First, thank you both for your help and suggestions, they are very helpful. I found the solution with some kind help from Google:
When the Cloud Run microservice and the GAE app are hosted in the same project, traffic is still routed through internal channels and appears to come from IP address 0.0.0.0, which can be whitelisted (so it would work), as long as one considers that this address also encompasses GCP assets that are parts of other projects (to the best of my understanding)
A more robust solution seems to be setting up an externally facing load balancer as described here and putting it in front of the GAE app; in such a case, Cloud Run will indeed consistently use its static outbound IP address as described in the documentation
You are correct in saying that the static IP is not honoured when packets are routed internally within GCP.
I think this is what you want. You have to allow one of the IPs mentioned there in the firewall (not sure which one right now).
Just as you and #Ema mentioned, this is expected behavior, bearing in mind that the traffic from Cloud Run to App Engine is internal.
When you use Cloud NAT to send all traffic through it, this is what happens: if you create a container and ping, say, www.github.com, you will find that the traffic goes out through the IP you set. On the other hand, if you ping www.google.com, given that the traffic is internal and the site you reach out to is in the same infrastructure, the request doesn't even go through the public internet.
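A quick way to see this from inside the container is to ask an external echo service what source address it sees (api.ipify.org is just one example of such a service); for an external destination like that, the answer should be your Cloud NAT IP, while internal GCP-to-GCP traffic never takes that path:

    # Print the source IP an external host sees for this container's egress traffic.
    import urllib.request

    with urllib.request.urlopen("https://api.ipify.org") as resp:
        print("Outbound IP as seen externally:", resp.read().decode())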
Additionally, keep in mind that the static outbound IP address feature is still in Beta, and it is not recommended to use Beta features/products in production environments.
As you mentioned and as it is stated in Allowing requests from your services:
Creating a rule for IP 0.0.0.0 will apply to all Compute Engine instances with Private Google Access enabled, not only the ones you own. Similarly, allowing requests from 0.1.0.40 or 10.0.0.1 will allow any App Engine app to make URL Fetch requests to your app.
These questions might be of interest to you:
What are the outbound IP ranges for GCP managed Cloud Run?
Possible to get static IP address for Google Cloud Functions?

How to restrict public access to google app engine flexible environment?

I have many microservices in App Engine intended only for internal use. But, by default, App Engine opens the service-project.appspot.com domain to the public, and anyone can access them via HTTP or HTTPS.
Is there a way to restrict access to only certain IP addresses?
The trivial way I can think of is checking the source IP address in application code.
Or I could create a custom Docker image with an nginx configuration that checks the source IP address. But these are not very clean solutions, because access control is really independent of the application, and I don't want to hard-code static IP addresses inside the container.
I assumed there was a way to set up a firewall rule for App Engine, but I could not find one. Identity-Aware Proxy seems like another option, but it is not available for App Engine flex.
I know this is cold comfort, but we're working on re-enabling App Engine flex support for IAP. It's going to be more than just a few days, though.
https://cloud.google.com/appengine/docs/flexible/java/migrating#users has some options that might be more palatable than hardcoding IPs. You won't be able to use GCE firewall rules because the appspot.com traffic is coming through Cloud HTTP Load Balancer, so the GCE instance firewall only sees the IP of the load balancer. If you do want to verify IPs within your app, use X-Forwarded-For as described at https://cloud.google.com/compute/docs/load-balancing/http/#components .
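For reference, a minimal sketch of such an in-app check with Flask (the allowlisted addresses are hypothetical, and you should confirm how many proxy hops sit in front of your app before trusting a particular entry in the header):

    # Sketch of an in-app IP allowlist for a Flask app behind the HTTP(S) load balancer.
    from flask import Flask, request, abort

    app = Flask(__name__)

    ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical allowed client IPs

    @app.before_request
    def restrict_by_ip():
        # X-Forwarded-For is a comma-separated list to which the load balancer appends
        # the client IP it saw. This sketch takes the first entry; in production, pick
        # the entry that matches your proxy topology, since clients can prepend values.
        forwarded_for = request.headers.get("X-Forwarded-For", "")
        client_ip = forwarded_for.split(",")[0].strip() if forwarded_for else request.remote_addr
        if client_ip not in ALLOWED_IPS:
            abort(403)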
Hope this helps! --Matthew, Cloud IAP engineer

What measures does google cloud take to protect the instances from IP spoofing?

I am running my server on Google App Engine, and all of my services (e.g. MongoDB, Redis, Elasticsearch) are deployed on Compute Engine. I wanted my Compute Engine instances to be reachable from App Engine only, so I deleted all of the firewall rules that allowed connections from external IPs; now only instances within the internal network of my Google Cloud project can connect to them. I am just wondering about IP spoofing: since nobody from outside my internal network can connect to my instances, could they fake their IP and tell my firewall that their IP is the IP of one of my instances? If that can happen, my whole security would be breached.
Now one question: does the Google Cloud project firewall implement any measures to secure our instances from IP spoofing, or do we have to set something up ourselves in order to avoid that?
If any of you have any idea about this please enlighten me.
Thanks
It's not quite clear which spoofing scenario you are concerned about. These two come to mind:
External parties spoofing packets for your internal network, i.e. the 10.0.0.0/8 range. This is not possible, as packets inside your network can only come from VMs and VPNs in that private network.
Spoofing packets from other Google / GCE IP ranges, e.g. the ones used for external addresses: this should be caught by Google's network ACLs.
I would however not recommend to authenticate based on IP address. For example, if you are communicating over external IP addresses between GCE/GAE entities, it's easy to be too broad, also allowing other GCE/GAE customers. Even if you only whitelist single IP addresses there is a risk that over time, your setup becomes more complex. Imagine for example, if an employee deletes a GCE instance without also removing the IP from the whitelist. In that case, the IP would be released and available to other GCE customers who could then access your service.
Therefore, it's usually safer to use an application level authentication mechanism such as SSL client certificates.
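For illustration, a minimal Python sketch of a server that only accepts clients presenting a certificate signed by your own CA (the certificate and key file names are placeholders for your own files):

    # Minimal HTTPS server that requires a client certificate issued by internal-ca.crt.
    import ssl
    from http.server import HTTPServer, BaseHTTPRequestHandler

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"client certificate accepted\n")

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("server.crt", "server.key")   # this server's own identity
    context.load_verify_locations("internal-ca.crt")      # CA that issues client certs
    context.verify_mode = ssl.CERT_REQUIRED               # reject clients without a valid cert

    server = HTTPServer(("0.0.0.0", 8443), Handler)
    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()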

Connecting to device behind firewall

I have a WPF app that needs to communicate (exchange data) with a custom-designed device (we can modify the code for the device). Do I have any options to connect to the device via HTTP if it is behind a firewall? I was hoping there would be a method where the admin would not have to forward any specific ports or do anything on his end. I assume the issue is how I would address the device from my app. I know SOAP over SMTP is one option. Is another option to have the device chatter out to my application via HTTP?
This problem is solved by relay services like Yaler or My-devices (I have not tested the latter).
UPnP is supported by some firewalls to simplify this. Otherwise you are usually stuck opening ports on the firewall manually or using some third-party proxy as a rendezvous server.
A lot of firewalls are set up to allow access on port 80 (HTTP), otherwise the users wouldn't be able to browse web sites on the internet. You can try and see if port 80 is open to traffic. If you can modify the code for both the device and the client, you can use port 80 to communicate with your own protocol - you don't necessarily need to use HTTP.
Any kind of RESTful architecture over HTTP will do it. Whether this is the best option for you depends on what APIs / libraries are available on your custom device.
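As a sketch of the "device chatters out" idea: the device makes periodic outbound HTTP requests to your server, so the firewall only ever sees outgoing connections and nothing has to be forwarded. The URL and payload below are hypothetical, and the same pattern can be written in whatever language the device actually runs:

    # Hypothetical device-side polling loop: report status and pick up commands
    # over outbound HTTP, so no inbound firewall ports are needed.
    import json
    import time
    import urllib.request

    SERVER_URL = "https://example.com/api/device/checkin"  # your application's endpoint

    while True:
        payload = json.dumps({"device_id": "dev-001", "status": "ok"}).encode()
        req = urllib.request.Request(
            SERVER_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = resp.read().decode()
            commands = json.loads(body) if body else {}   # server may return commands to run
            print("Server says:", commands)
        time.sleep(30)  # poll interval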
