Background
We have a large Ionic v1 app that runs on Android (installed from the Google Play Store) and also on our development machines via "ionic serve".
The app uses a Google App Engine (GAE) website as a backend server.
The server maintains sessions for each user by means of a cookie. We don't store much data in the session, but we need to securely identify the user making each request. When the app runs successfully, the GAE server code creates a cookie that contains the session ID and sends it to the Ionic client code when responding to each HTTP request.
Note: The Ionic code does not access the cookie in any way. It is only necessary that the same cookie be sent back to the GAE server with each subsequent request so that the GAE code recognizes the user.
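To make the round trip concrete, here is a rough sketch of the pattern (illustrative only: the real backend is GAE code, not the Express handler shown here, and the cookie name is made up):

```
// Illustrative sketch only: an Express-style handler showing the
// session-cookie round trip that the real GAE backend performs.
const express = require('express');
const crypto = require('crypto');

const app = express();
const sessions = {}; // session ID -> per-user session data (kept server-side)

app.get('/api/whoami', (req, res) => {
  // Look for the session cookie the browser should echo back to us.
  const match = /(?:^|;\s*)SESSIONID=([^;]+)/.exec(req.headers.cookie || '');
  let sid = match && match[1];

  if (!sid || !sessions[sid]) {
    // First request of a session: mint an ID and hand it out as a cookie.
    sid = crypto.randomBytes(16).toString('hex');
    sessions[sid] = { user: null };
    res.setHeader('Set-Cookie', `SESSIONID=${sid}; Path=/; HttpOnly`);
  }

  // Until the cookie comes back, every request looks like a new session.
  res.json({ user: sessions[sid].user });
});

app.listen(8080);
```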
The goal
We would like to serve the Ionic code using Firebase Hosting. We can in fact do so in either of two ways:
Keeping the Ionic code on our dev machine, running "firebase serve", and going to "localhost:5000" in the browser
Deploying the Ionic code to the Firebase host and going to "xxxx.firebaseapp.com" in the browser
Everything works! Uh, except for one little thing, which we've been trying to solve for weeks...
The problem
The cookie that the GAE code uses to manage session continuity, which it sends in its responses to HTTP requests, does not come back in the next request from the Ionic app running on Firebase. So the GAE app always responds as though the user is not yet logged in.
In fact, further testing shows that the session cookie sent in responses to HTTP requests does not even get set in the browser (so of course it's not sent back to the GAE code with the next HTTP request). The GAE code on the backend server always responds as if this is the first HTTP request of a session.
What we've already ruled out
The problem is not a lack of cookie support in Ionic. We know this because the app runs fine as an Android app and also via "ionic serve"; in both cases, the GAE backend is able to maintain sessions using a cookie to store the session ID from one request to the next.
The problem does not get solved by using "memcache" instead of cookies for GAE session support, because even if you use memcache, you still need the cookie for the session ID. If you wish, you can go with the default and let GAE session support use cookies; in that case, it will use the same cookie for both the session ID and any other session data.
The problem does not get solved by using "__session" as the name of the cookie. Firebase does in fact support using such a cookie name, but apparently only in the context of running Firebase Hosting with Cloud Functions. Cloud Functions are for running backend code, not client code that the user interacts with. We could see no way to make an Ionic app run as a Cloud Function. And without Cloud Functions, the "__session" cookie set by the GAE backend apparently gets stripped by the browser client running the app, along with all other cookies.
Adding "Access-Control-Allow-Origin/-Credentials/-Methods/-Headers" headers to the GAE-code generated response, and setting crossDomain: true xhrFields: { withCredentials: true } on the client side, does not improve the situation. Still, no cookie from the GAE code gets set on the browser.
Any help will be appreciated.
Related
I have an in-development ReactJS application that I run locally from my computer (on localhost). I have also set up local certs so that the application runs on HTTPS (https://localhost).
I also have a backend application located at an HTTP endpoint hosted in the cloud. Of course, the backend application will eventually be located at an HTTPS endpoint, but that process hasn't been started yet.
I am trying to hit that HTTP endpoint from my local HTTPS ReactJS application by using fetch. However, something is upgrading the connection from HTTP to HTTPS automatically.
The only relevant information I have found on this is this post. I have tried the accepted answer (setting the referrerPolicy to unsafe-url) but that did not work for me.
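For reference, the call looks roughly like this (the endpoint URL is a placeholder, not my real backend):

```
// HTTPS page (https://localhost) calling a plain-HTTP backend endpoint.
// Despite referrerPolicy: 'unsafe-url', the request still gets upgraded to HTTPS.
fetch('http://my-backend.example.com/api/data', {
  method: 'GET',
  referrerPolicy: 'unsafe-url', // suggestion from the accepted answer; didn't help
})
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error('Request failed:', err));
```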
Any other suggestions?
I have a React frontend with a Rails 7 API backend, and I'm using Linode for storage. The app allows users to create accounts and update their avatars. Everything works fine in development: the user first sees the default avatar when the account is first signed in, and can later update it. I have gone into config/environments/development.rb and config/environments/production.rb and set Active Storage to use Linode.
In config/environments/development.rb:
config.active_storage.service = :linode
In config/environments/production.rb:
config.active_storage.service = :linode
The problem occurs in production. I have created two Heroku apps: one for the frontend and one for the backend. I am able to upload images to Linode storage, but when I request a particular image, I get an error in the browser:
GET https://backend-api-app-name.herokuapp.com/rails/active_storage/blobs/redirect/sdkjkdsldIddlDkdhyKSDJjdjdYkskdfldoisdYkdksldfskdj-kjdhasdfmdfswqTj172494048372/image.jpg 404 (Not Found)
The Rails backend API consists of a User model, Devise, and Doorkeeper. The backend is really modest at this time. I think the problem is that Rails is not redirecting the frontend's request for the image. Rails redirects this request in development, but not in production. If the redirecting is the problem, how can I fix this?
I'd really appreciate any help on this matter. This is the only error I'm receiving.
I am developing a React project for study purposes and would like to publish it.
I tried a few approaches, but the site comes up blank; there is no data from the NEWS-API I am using.
There doesn't seem to be any error.
It is a frontend-only application, just React with the API.
If it helps, here's the repository link.
https://github.com/carlos-souza-dev/apinews
I visited your Vercel deployment from your GitHub repo and noticed this issue.
You're requesting data from the API over HTTP, which is insecure, while your page hosted by Vercel uses HTTPS.
Modern browsers do not allow a page served over HTTPS to request HTTP data.
It might be fixed just by changing your URLs to use HTTPS; if the API doesn't have HTTPS, you might have to use other workarounds (although it's better to use an API with HTTPS support).
I noticed this by opening the console after visiting your page and seeing the "Mixed Content" blocked-request errors.
The reason for the blank page after loading is that the Promise that fetches the data from the API gets rejected but the rejection is never handled, so the code that would render the data never runs.
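As a minimal sketch (the URL and the renderArticles / showErrorMessage helpers below are placeholders, not code from your repo), the request could look like this once it uses https and handles the rejection:

```
// Request the data over https and handle failures instead of leaving the
// rejected promise unhandled (renderArticles / showErrorMessage are placeholders).
fetch('https://some-news-api.example.com/top-headlines?apiKey=YOUR_KEY')
  .then((res) => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  })
  .then((data) => renderArticles(data.articles))
  .catch((err) => showErrorMessage(err.message));
```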
[EDIT 1]
I read through some of the code in your repository and noticed a link pointing to localhost. It looks like you tried to set up a Node.js server to proxy data over HTTPS.
The API you're using does seem to have HTTPS support.
Conclusion:
Try changing the links to the API from http to https in your React code and see if it works. If it does, there's no need for a backend server of your own.
If the API doesn't have HTTPS support, however, do one of the following:
Migrate to a different API that has HTTPS support
Serve your static React app through the backend, point the app at /path/to/api/route without an absolute URL, and use the proxy setting in package.json (as described here) for development
Point to the full path of your backend server on the internet (i.e. remove the localhost links)
Also note that you cannot deploy a backend to Vercel, but it does support serverless functions.
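To illustrate that last point, a hypothetical Vercel serverless function (a file under api/) could proxy the news API over HTTPS; the endpoint and environment variable name below are placeholders:

```
// api/news.js -- hypothetical Vercel serverless function that proxies the
// news API over HTTPS (endpoint and env var name are placeholders).
const fetch = require('node-fetch');

module.exports = async (req, res) => {
  const url = `https://some-news-api.example.com/top-headlines?apiKey=${process.env.NEWS_API_KEY}`;
  const apiRes = await fetch(url);
  const data = await apiRes.json();
  res.status(200).json(data);
};
```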
I have an API made with Node.js (Node.js v10 + Express v4.16 + node-fetch v2.3), and in this API I have one endpoint that needs to consume content from a third-party API/service via an HTTP POST request.
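To illustrate, the endpoint is shaped roughly like this (the URLs and payload are placeholders, not the real third-party service):

```
// Rough shape of the endpoint; the third-party URL and payload are placeholders.
const express = require('express');
const fetch = require('node-fetch'); // node-fetch v2

const app = express();
app.use(express.json());

app.post('/my-endpoint', async (req, res) => {
  try {
    // Outgoing POST to the third-party service that only accepts requests from Brazil.
    const response = await fetch('https://third-party.example.com.br/api', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(req.body),
    });
    res.status(response.status).json(await response.json());
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

app.listen(process.env.PORT || 8080);
```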
The problem is: this third-party API only accepts requests coming from Brazil.
In the past, my API was hosted on Digital Ocean, but because of this rule I migrated to GCP (since DO doesn't have hosts in Brazil) and created my App Engine application under the region southamerica-east1 (Sao Paulo, Brazil, according to this document).
And yeah... it works on my machine ¯\_(ツ)_/¯
What's happening: sometimes the requests run OK, working fine, but after some version updates (I'm using CI/CD for deployment) the requests start failing.
The question: is there a way to make my application use only the hosted region for outgoing requests?
PS: I'm not using the flex env, purposely to prevent auto-scaling (and cost increases). (I don't know if I'm right about that; I'm new to GCP.)
The IPs of Google Cloud Platform share the same geolocation (US), so I would say that it's expected for the requests to fail. You can have a look at this and this question for more info and potential workarounds.
I have a React-based SPA that is hosted via S3 on one subdomain, react.mydomain.com. It communicates with a PHP REST API hosted on a VPS on another subdomain, api.mydomain.com. The api.mydomain.com subdomain is behind CloudFlare; the web app is behind CloudFront, since it is on AWS.
I am having issues with bot requests directly to the API flooding my VPS, and I would like to use the JS challenge functionality in CloudFlare to mitigate this.
However, what seems to be happening is that users are able to load the React webapp (which is not behind CloudFlare). Then, the request that will prompt the JS challenge will fail with a 503 response instantly, because it is an AJAX request and it is incompatible with the Javascript challenge.
I thought I may be able to handle this by catching the error and redirecting. However, if I manually force my own browser to navigate to the api.mydomain.com URL, I will see the CloudFlare challenge and pass it. However, if I then navigate back to my react.mydomain.com SPA, the OPTIONS requests will fail because it cannot attach the cookie that tells CloudFlare it has passed.
I don't understand how to adjust my infrastructure so that I can take advantage of the JS challenge. At the moment I am restricted to rate limiting, but I have found that by the time the limits are severe enough that users start complaining, I am still letting through what seems like ~75% or more of the unwanted bot traffic.
If you have backend access, you may be able to use NPM and process.kill(process.pid) upon detection of a bot as a temporary solution.
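A rough sketch of that idea, assuming a Node/Express layer in front of the API and a purely illustrative User-Agent check (your backend is PHP, so this only shows the shape of the suggestion):

```
// Purely illustrative: kill the process when a request looks like a bot.
// The User-Agent test is a placeholder heuristic, and killing the process
// takes the whole server down, so treat this as a temporary measure only.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  const ua = req.headers['user-agent'] || '';
  if (/curl|python-requests|scrapy/i.test(ua)) {
    res.status(503).end();
    // Give the response a moment to flush, then take the server down.
    setImmediate(() => process.kill(process.pid));
    return;
  }
  next();
});

app.listen(3000);
```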
I suggest not hosting the SPA in S3; use S3 only for file uploads or attachments.
Host it on EC2 instead and block all access through the Security Group policy, allowing only Cloudflare's IPs, which are listed here: https://www.cloudflare.com/ips/
You can also use AWS Lambda (serverless) for hosting instead of S3:
https://aws.amazon.com/lambda/?c=ser&sec=srv