React PWA without https - what are the limitations? - reactjs

Service workers require HTTPS ... If your production web server does
not support HTTPS, then the service worker registration will fail, but
the rest of your web app will remain functional.
quoted from the docs at https://create-react-app.dev/docs/making-a-progressive-web-app.
What does it mean that the rest of your web app will remain functional while service worker registration fails? In other words, if my app remains functional, do I really care that the service worker has failed? (What are the limitations?)

So your app would still work, but you would lose all of the functionality provided by the service worker. At the bottom of the "Why Opt-in?" section of Making a Progressive Web App, it states:
The workbox-webpack-plugin is integrated into production configuration, and it will take care of generating a service worker file that will automatically precache all of your local assets and keep them up to date as you deploy updates. The service worker will use a cache-first strategy for handling all requests for local assets, including navigation requests for your HTML, ensuring that your web app is consistently fast, even on a slow or unreliable network.
So you could use it as normal but you would lose:
Offline support,
Precaching of your local assets,
Cache-first approach for your local assets and navigation requests,
Performance improvements of your application in slow or unreliable network conditions.
Whether or not you care if it fails is directly related to if you value these features in your application. If they are critical to your application, you probably care a lot. If it doesn't matter to you either way or affect your end-user, it's probably not a big deal.
You can find out more about service workers, and why they only work over HTTPS, in the Service Worker API documentation.
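To see the failure mode concretely, here's a minimal sketch (not the actual CRA source, just an illustration of the same idea) of the kind of guard a registration module performs: registration is only attempted over HTTPS or on localhost, and any failure is caught so the rest of the app keeps running.

```javascript
// Decide whether service worker registration can even be attempted.
// Browsers allow service workers only over HTTPS, with an exception
// for localhost during development.
function canRegisterServiceWorker(protocol, hostname) {
  const isLocalhost =
    hostname === 'localhost' ||
    hostname === '[::1]' ||
    /^127(\.\d{1,3}){3}$/.test(hostname);
  return protocol === 'https:' || isLocalhost;
}

// Registration sketch: a failure here only costs the offline/caching
// features listed above -- the rest of the app keeps working.
function register() {
  if (!('serviceWorker' in navigator)) return;
  if (!canRegisterServiceWorker(window.location.protocol, window.location.hostname)) {
    console.warn('No HTTPS: skipping service worker; app still works, online-only.');
    return;
  }
  navigator.serviceWorker
    .register('/service-worker.js')
    .catch((err) => console.error('Service worker registration failed:', err));
}
```

If `register()` bails out or the promise rejects, nothing else in the bundle is affected, which is exactly why "the rest of your web app will remain functional".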

Related

Securing a Google App Engine service deployed in Node/Java against a scripting attack

I have an App Engine service deployed in GAE (written in Node) to accept a series of click-stream events from my website. The data is pushed as a CORS Ajax call. Since the POST request can be seen in the browser through the developer tools, somebody could use the App Engine URL to post similar data from the browser console (in Firefox, we can resend the request; Chrome has this feature too, I guess).
A few options I see here:
Use the firewall settings to allow only my domain to send data to the GAE service. This can still fail me, since requests can be made from the browser console repeatedly.
Use a WAF (Web Application Firewall) as a proxy to create some custom rules.
What should be my approach to secure my GAE service?
I don't think the #1 approach would actually work for you: if the post requests are visible in the browser's development tools it means they're actually made by the client, not by your website, so you can't actually use firewall rules to secure them.
Since requests are coming into the service after fundamentally originating on your own website/app (where I imagine the clicks in the mentioned sequences happen) I'd place the sanity check inside that website/app (where you already have a lot more context info available to make decisions) rather than in the service itself (where, at best, you'd have to go through loops to discover/restore the original context required to make intelligent decisions). Of course, this assumes that the website/app is not a static/dumb site and has some level of intelligence.
For example, for the replay-from-developer-tools case you described, the website/app could attach an 'executed' flag to the respective request, set on the first invocation (when the external service is also triggered), and could thus simply reject any subsequent clone/copy of the request.
With the above in place, I'd then stop sending the service request from the client. Instead, I'd have your website/app create and make the service request itself, after it passes the above-mentioned sanity checks. That is almost trivial to secure with simple firewall rules: valid requests can only come in from your website/app, not from the clients. I suspect this is closer to what you had in mind when you listed the #1 approach.

With AngularJS based single page apps hosted on premise, how to connect to AWS cloud servers

Maybe this is a really basic question, but how do you architect your system so that your single-page application is hosted on premise with some hostname, say mydogs.com, while your application services code (as well as the database) is hosted in the cloud? For example, let's say you spin up an Amazon EC2 Container Service cluster using Docker, running a Node.js server. The hostnames will all be like ec2_some_id.amazon.com. What sits in front of the Amazon EC2 instance that my AngularJS app connects to? What architecture facilitates this type of app, especially with AWS-based services?
One of the important aspects of setting up the web application and the backend is to serve them from a single domain, avoiding cross-origin (CORS) requests. To do this, you can use AWS CloudFront as a proxy, where routing happens based on URL paths.
For example, you can point the root domain to index.html while routing /api/* requests to the backend endpoint running in EC2. A sample diagram of the architecture is shown below.
It's also important for your Angular application to use full URL paths. One challenge with these is that for routes such as /home, /about, etc., the browser will request a page from the backend for that particular path. Since it's a single-page application, you won't have server pages for /home, /about, etc. This is where you can set up error pages in CloudFront, so that all not-found routes are also forwarded to index.html (which serves the AngularJS app).
The only thing you need to care about is CORS on whatever server you use to host your backend in AWS.
More Doc on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Hope it helps.
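The routing rules described above (send /api/* to the backend, serve known static assets directly, and fall back to index.html for everything else so client-side routes like /home and /about resolve) can be sketched as a small pure function. This is an illustration of the behavior, not CloudFront configuration itself:

```javascript
// Decide where an incoming request path should be routed, mirroring
// the CloudFront setup described above:
//  - /api/* goes to the backend service,
//  - known static assets are served as-is,
//  - everything else falls back to index.html for the SPA router.
function resolveRoute(path, staticAssets) {
  if (path.startsWith('/api/')) return { target: 'backend', path };
  if (staticAssets.has(path)) return { target: 'static', path };
  return { target: 'static', path: '/index.html' };
}
```

In CloudFront the same effect comes from a cache behavior on the `/api/*` path pattern plus a custom error response mapping 404/403 to `/index.html`.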
A good approach is to have two separate instances: one to serve your API (Application Programming Interface) and another to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it's the one that will suffer the most, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load on it); maybe not, but it's something to keep in mind if your application is supposed to grow fast. So you may invest a little bit more in this one.
The SPA server, on the other hand, only serves static resources (if you're not using server-side rendering), so it can be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files end up cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits best for this type of application can't really be answered, because it doesn't say much about how your application will be consumed by the clients: how many requests and downloads there will be, or how much storage your app needs.
Amazon EC2 instance types

Firebase: How to awake App Engine when client changes db?

I'm running a backend app on App Engine (still on the free plan), and it supports client mobile apps in a Firebase Realtime Database setup. When a client makes a change to the database, I need my backend to review that change, and potentially calculate some output.
I could have my App Engine instance stay awake and listen on Firebase ports all the time, waiting for a change anywhere in the database, but that would keep my instance awake 24/7 and won't support load balancing.
Before I switched to Firebase, my clients would manually wake up the backend by sending a REST request with the change they wanted to perform. Now that Firebase allows the clients to make changes directly, I was hoping they won't need to issue a manual request. I could continue to produce a request from the client, but that solution isn't robust: it would fail to inform the server if, for some reason, the request didn't come through and the user switched off the client before it succeeded in sending it. Firebase has its own mechanism to retain changes, but my request would need a similar mechanism. I'm hoping there's an easier solution than that.
Is there a way to have Firebase produce a request automatically and wake up my App Engine when the db is changed?
Look at the new (beta) Firebase Cloud Functions. With that, you can have Node.js code run on database events, pre-process the change, and call your App Engine service.
https://firebase.google.com/docs/functions/
Firebase currently does not have support for webhooks.
Have a look at https://github.com/holic/firebase-webhooks
From Listening to real-time events from a web browser:
Posting events back to App Engine
App Engine does not currently support bidirectional streaming HTTP
connections. If a client needs to update the server, it must send an
explicit HTTP request.
The alternative doesn't quite help you, as it would not fit in the free quota, but here it is anyway. From Configuring the App Engine backend to use manual scaling:
To use Firebase with App Engine standard environment, you must use
manual scaling. This is because Firebase uses background threads to
listen for changes and App Engine standard environment allows
long-lived background threads only on manually scaled backend
instances.
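For reference, a minimal app.yaml for such a manually scaled standard-environment service might look like the following sketch (the service name, runtime, and instance count are placeholders, not taken from the question):

```yaml
# app.yaml -- manual scaling, so long-lived background threads
# (the Firebase listener) are allowed to run. Names are illustrative.
service: firebase-listener
runtime: python27
api_version: 1
manual_scaling:
  instances: 1
```

Note that a manually scaled instance runs (and bills) continuously, which is exactly why this option won't fit in the free quota.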

Advantage of Deploying AngularJS application with Restful services on different servers

I want to deploy an AngularJS app on some web server and RESTful services on an application server like Tomcat.
Can anyone please let me know the advantages and disadvantages of deploying the AngularJS app and the RESTful services on different servers versus the same server?
Which option will be better, considering authorization and performance?
Since the HTML/AngularJS code will be downloaded to the clients' devices, and the web service will then be called by those clients, there is no response-time gain from having the app and the web service on the same server.
For the rest, it all depends on the load on your website. Distributing the HTML code to the clients does not create much load, but you will have an Apache (or nginx, or whatever) + a Tomcat + your database running on the same server. That will be OK for most cases; it depends on the success of your website, but usually by the time you have to ask yourself how to manage such a load, you have the means to rethink the architecture!
The most important thing is to have your DB and your Tomcat on the same server!
For authorization, if you use a REST web service you will have to deal with those damn CORS headers, whether or not the app and the web service are on the same server.
Overall, having two servers will be more flexible and share the load more evenly, but it will also increase the cost, so you will probably be fine with only one!
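The CORS handling mentioned above boils down to a few response headers on the REST side. Here's a minimal sketch in Node (the allowed origin is a placeholder; on Tomcat the equivalent would be a CORS servlet filter):

```javascript
// Build the CORS headers a REST endpoint must return so that the
// Angular app, served from a different origin (different host or even
// just a different port), may call it. The origin list is a placeholder.
function corsHeaders(requestOrigin, allowedOrigins) {
  if (!allowedOrigins.includes(requestOrigin)) return {};
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type, Authorization',
  };
}
```

Note that Apache on port 80 and Tomcat on port 8080 already count as different origins, which is why the headers are needed even on a single machine.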

Google App Engine internal network

Is it possible to route HTTP traffic between google app engine applications without going through the public internet?
For example, if I'm running a Web Service API on one application and want to build a second application on top of it without traffic going through the internet - for performance reasons.
Between separate apps running on different domains? I suspect not.
But you can use backends to do different work behind the scenes:
Backends are special App Engine instances that have no request deadlines, higher memory and CPU limits, and persistent state across requests. They are started automatically by App Engine and can run continously for long periods. Each backend instance has a unique URL to use for requests, and you can load-balance requests across multiple instances.
When I look at the logs between the backend and the front end instances I see IPs like
0.1.0.3
So yes, those communication paths are internal. I'd hazard a guess that, since so much of the internet's infrastructure is Google's, requests between different apps might not travel on the public internet either.
The logs indicate low-latency communication between the front and back ends, though not under any particular load. Your mileage may vary.
Backends in Python