React on AWS S3: Loading chunk X failed

I have a React app hosted on AWS S3; these are the commands I use to deploy it:
npm i
npm run build
aws s3 sync build s3://bucket/path --acl=public-read --delete
aws cloudfront create-invalidation --distribution-id XXX --paths "/*"
Then I have many errors like
Loading chunk X failed
or
Loading CSS chunk X failed
I thought it was due to a cache issue after deployment, but now I haven't deployed for a week and the error rate has not decreased.
For example, the error is Loading CSS chunk 6 failed. (/static/css/6.32a7316b.chunk.css), yet when I go to https://my-website.com/static/css/6.32a7316b.chunk.css the file loads without issue.
I have read many posts but cannot find a solution. Some mention CORS configuration, but that should not be an issue here since the files come from the same domain.
There may be some cache rule to define, but I do not really know which one or where.
I am also using Cloudflare between the client and S3; I do not know if that has an impact.

This error occurs because some users are requesting a version of the files that no longer exists on the server.
When you invalidate the output directory with CloudFront's create-invalidation, users who still have the old HTML cached will try to load chunk files that no longer exist, and you will see this kind of error.
One option to stop these errors is app version tracking (using AJAX to read the current version from a database or a dynamic config file): when the app detects a newer version, notify the user and ask them to reload the page, as you can see in this answer here.
Also, check the output.publicPath option in webpack.config.js; changing the default to / or /dist may solve your issue.
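The version-tracking idea can be sketched roughly like this. Everything here is an assumption about how you might wire it up: the /meta.json endpoint and the APP_VERSION constant are hypothetical names, and the version file would need to be deployed alongside the app with caching disabled.

```javascript
// APP_VERSION would be baked into the bundle at build time
// (e.g. injected by webpack's DefinePlugin). Hypothetical value:
const APP_VERSION = "1.0.0";

function isNewerVersion(current, latest) {
  // Compare dotted version strings numerically, e.g. "1.9.0" < "1.10.0".
  const a = current.split(".").map(Number);
  const b = latest.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if ((b[i] || 0) > (a[i] || 0)) return true;
    if ((b[i] || 0) < (a[i] || 0)) return false;
  }
  return false;
}

async function checkForUpdate() {
  // Cache-busting query param so Cloudflare/CloudFront don't serve a stale copy.
  const res = await fetch(`/meta.json?t=${Date.now()}`);
  const { version } = await res.json();
  if (isNewerVersion(APP_VERSION, version)) {
    // Ask or force the user to reload so fresh chunk URLs are fetched.
    window.location.reload();
  }
}
```

Calling this on an interval (e.g. setInterval(checkForUpdate, 60000)) or on route change keeps long-lived tabs from requesting chunks that no longer exist.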

netlify deployment build is failing for gatsby-contentful site

I'm trying to deploy a Gatsby-Contentful site to Netlify, and it gives me a build error while deploying. When testing on localhost it works perfectly.
Here is the screenshot of the error:
Meanwhile, running npm run build locally works perfectly.
I have tried this solution but it doesn't solve the issue.
Invalid plugin options for "gatsby-source-contentful"
Link to the code on github is Here
I think there is something wrong with the Contentful API keys during deployment, but I can't figure out what it is.
You must set up environment variables on Netlify.
Netlify environment variables are accessible during your build. This allows you to change behaviors based on deploy parameters or to include information you don't want to save in your repository.
Go to your site > Site Settings > Build & deploy > Environment variables, and add your variables.
More details in this doc
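For reference, a typical gatsby-config.js wires those variables in like this. This is a sketch: the variable names CONTENTFUL_SPACE_ID and CONTENTFUL_ACCESS_TOKEN and the .env file layout are assumptions; match them to your own setup.

```javascript
// gatsby-config.js (sketch)
// Locally, the keys come from a .env.development / .env.production file;
// on Netlify, they must be added under
// Site Settings > Build & deploy > Environment variables.
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
});

module.exports = {
  plugins: [
    {
      resolve: "gatsby-source-contentful",
      options: {
        // If either value is undefined at build time, the plugin fails with
        // "Invalid plugin options for gatsby-source-contentful".
        spaceId: process.env.CONTENTFUL_SPACE_ID,
        accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
      },
    },
  ],
};
```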
Update 2021-06-29
The problem turned out to be the process being killed while querying during the build on Netlify. AVIF transformation while querying Gatsby image data causes resource limits to be exceeded on Netlify. The Netlify and Gatsby teams are still working on this.
The temporary solution for now is to exclude AVIF from the query.
Discussion post
Try profiling your application's memory usage during npm run build on your local setup. In my case it was peaking at 1.5 GB, and Netlify triggered a build error during deployment.
Adjusting the CPU count to a lower number did the trick for me: Site Settings > Build & deploy > Environment variables
GATSBY_CPU_COUNT = 2
Reference: https://www.gatsbyjs.com/docs/how-to/performance/resolving-out-of-memory-issues/#try-reducing-the-number-of-cores

react hosted on S3 having trouble using env files for Auth0 values

This is more of a lack of understanding on my part but I cannot seem to debug this.
I have a React app that uses Auth0 for some authentication.
I recently moved my production site from a docker container to running from S3. I thought this was working, but it's clearly not: today it times out when I click login.
Request URL: https://undefined/authorize?
This just times out
It works from localhost: it no longer says undefined and has my Auth0 domain.
In my React app I store that Auth0 domain value in a .env file. I am assuming my issue is that the React build in my S3 bucket does not have my .env file because of gitignore. I thought maybe at build time it was somehow pulled into the build output. So when I do npm run build, does it do anything with that .env?
How do I use my .env with React running in S3?
This assumes that is what my issue is, which it seems to be.
I had multiple issues going on.
CloudFront was caching my site, so as I was debugging I was not viewing my latest code. I had to invalidate the CloudFront cache after each push.
I added my Auth0 secrets to GitHub Actions secrets and then used them with an env: section in my GitHub Actions workflow. That allowed the build run by GitHub Actions to use the environment variables.
All works now; I was just tangled up for a short time.
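A detail worth spelling out here: Create React App only inlines variables whose names start with REACT_APP_, and it does so when npm run build runs, not at runtime on S3. A minimal sketch of failing loudly when a variable was missing at build time, with readBuildVar and REACT_APP_AUTH0_DOMAIN as hypothetical names:

```javascript
// Hypothetical helper: read a build-time variable and fail loudly if missing.
// In a CRA bundle, process.env.REACT_APP_* is replaced with a literal string
// during `npm run build`, so an unset variable shows up as undefined.
function readBuildVar(name, env = process.env) {
  const value = env[name];
  if (value === undefined) {
    throw new Error(`${name} was not set at build time`);
  }
  return value;
}

// Usage in the app (REACT_APP_AUTH0_DOMAIN is a hypothetical name):
// const auth0Domain = readBuildVar("REACT_APP_AUTH0_DOMAIN");
```

An "https://undefined/authorize?" request URL is exactly the symptom of the variable being absent when the GitHub Actions build ran.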

Chrome keeps loading source map even after GENERATE_SOURCEMAP=false

I've built an app with CRA and I'm trying to prevent Chrome from loading source maps.
Here's what I've tried so far:
Build react app with the command GENERATE_SOURCEMAP=false react-scripts build.
Ensure that no .map files are present in /myprojectfolder/build/.
Delete all static files in my AWS S3 bucket.
Deploy build folder to S3 using aws s3 sync ./build s3://mybucket --profile=s3-admin.
Invalidate AWS Cloudfront using aws cloudfront create-invalidation --paths / /build /css/* /index.html /error.html /service-worker.js /manifest.json /favicon.ico.
Clear all browser cache at Chrome settings.
Hard refresh with ⇧⌘R.
But it's still showing the Source Map detected message with the exact code I wrote. I also tested on Windows Chrome with the same result. Safari, on the other hand, seems to have stopped loading source maps.
Did I do anything wrong? Or is there anything else I can do (maybe reboot)?
Any advice would be appreciated. Thank you.
I had the same issue. To fix it, I tried removing/adding hosting multiple times and noticed that one of the old S3 hosting buckets still had .map files. So I removed the obsolete buckets and verified that the current bucket doesn't have any .js.map files. Now I no longer see the original .js source files in dev tools, nor the individual .css files.
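One way to double-check a bucket is to list its keys and filter for leftover maps. This is a sketch over a plain array of keys (the file names are made up); the real listing would come from aws s3 ls --recursive or the S3 SDK.

```javascript
// Sketch: find leftover source maps in a deployed file listing.
function findSourceMaps(keys) {
  return keys.filter((key) => key.endsWith(".map"));
}

// Example keys as they might appear in a CRA build bucket (hypothetical names):
const keys = [
  "static/js/main.4f2c1a.js",
  "static/js/main.4f2c1a.js.map", // leftover -- should have been deleted
  "static/css/main.8b1d2e.css",
];
console.log(findSourceMaps(keys)); // -> ["static/js/main.4f2c1a.js.map"]
```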

Google App Engine: connection interrupted while running gradle appengineDeploy and now application will not work (including firebase cloud messaging)

While I was deploying my google app engine project (Java) using gradle appengineDeploy my internet connection was interrupted.
I re-deployed the project. Although the console said BUILD SUCCESSFUL, the app engine instance no longer works. No matter how many times I re-deploy or update my application, the logs show nothing but errors:
java.lang.NoClassDefFoundError: com/google/appengine/api/ThreadManager
    at com.google.api.control.extensions.appengine.GoogleAppEngineControlFilter.createClient (GoogleAppEngineControlFilter.java:61)
    at com.google.api.control.ControlFilter.init (ControlFilter.java:141)
    at org.eclipse.jetty.servlet.FilterHolder.initialize (FilterHolder.java:139)
    at org.eclipse.jetty.servlet.ServletHandler.initialize (ServletHandler.java:873)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext (ServletContextHandler.java:349)
    at org.eclipse.jetty.webapp.WebAppContext.startWebapp (WebAppContext.java:1406)
    at com.google.apphosting.runtime.jetty9.AppEngineWebAppContext.startWebapp (AppEngineWebAppContext.java:175)
    at org.eclipse.jetty.webapp.WebAppContext.startContext (WebAppContext.java:1368)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart (ContextHandler.java:778)
    at org.eclipse.jetty.servlet.ServletContextHandler.doStart (ServletContextHandler.java:262)
    at org.eclipse.jetty.webapp.WebAppContext.doStart (WebAppContext.java:522)
    at com.google.apphosting.runtime.jetty9.AppEngineWebAppContext.doStart (AppEngineWebAppContext.java:120)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:68)
    at com.google.apphosting.runtime.jetty9.AppVersionHandlerMap.createHandler (AppVersionHandlerMap.java:240)
    at com.google.apphosting.runtime.jetty9.AppVersionHandlerMap.getHandler (AppVersionHandlerMap.java:178)
    at com.google.apphosting.runtime.jetty9.JettyServletEngineAdapter.serviceRequest (JettyServletEngineAdapter.java:120)
    at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.dispatchServletRequest (JavaRuntime.java:747)
    at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.dispatchRequest (JavaRuntime.java:710)
    at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.run (JavaRuntime.java:680)
    at com.google.apphosting.runtime.JavaRuntime$NullSandboxRequestRunnable.run (JavaRuntime.java:872)
    at com.google.apphosting.runtime.ThreadGroupPool$PoolEntry.run (ThreadGroupPool.java:270)
    at java.lang.Thread.run (Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.google.appengine.api.ThreadManager
    at java.net.URLClassLoader.findClass (URLClassLoader.java:381)
    at com.google.apphosting.runtime.ApplicationClassLoader.findClass (ApplicationClassLoader.java:135)
    at java.lang.ClassLoader.loadClass (ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass (ClassLoader.java:357)
Is there some way for me to clear out the partial build (or whatever there may be) in google app engine?
I tried deleting all versions and instances (you cannot delete them all; it won't let you delete the serving one).
I tried increasing the version number in appengine-web.xml.
I tried gradle clean.
I tried disabling and enabling the application in the google cloud console.
No luck.
One idea I have is to somehow force all the files for the app to be uploaded again, because with gradle appengineDeploy only the changed files get uploaded.
EDIT:
I managed to fix one part of this by upgrading classpath 'com.google.cloud.tools:appengine-gradle-plugin:2.2.0' in my Gradle file to the latest version. (It felt kind of hackish, and not reliable if this happens again in the future.) Maybe doing that flushed whatever was stuck in App Engine. I still have one problem remaining:
I am using the firebase admin SDK to send firebase cloud messages (authenticated with a .json credentials file) like this:
FirebaseMessaging.getInstance().send(msg);
When I do, I get this error:
com.google.firebase.messaging.FirebaseMessagingException: Unexpected
HTTP response with status: 401; body: null
Which is strange because other parts of the firebase admin SDK are working (like writing/reading from firestore).
So there is still something weird going on and I think it has to do with the fact that as I was uploading my google app engine project the connection was interrupted.
EDIT 2:
Here is something interesting: when I deploy my application using gcloud app deploy appengine-web.xml, the application again will not work, while deploying with gradle appengineDeploy does work. I also noticed that the size of the application shown in the GCP console is much smaller when deploying with gcloud app deploy appengine-web.xml. So something is messed up here. I tried looking for some sort of gcloud command to clear a cache, or something like that, but no luck.
EDIT 3:
Additional Info:
My app was already deployed and working before the failed attempt. I changed one small piece of code in a function and upon uploading the app, the connection was interrupted because the internet went out.
I am on app engine standard environment.
I am deploying my application from the macOS terminal and using android studio to develop.
I have tried the stopPreviousVersion, promote, and version configs in Gradle (actually the first two are true by default, and version is auto-generated if you do not set it).
Running gcloud app deploy appengine-web.xml --verbosity=debug shows a lot of sensitive information, but one thing I am seeing is that all the files in WEB-INF are being skipped:
DEBUG: Skipping upload of [WEB-INF/....
INFO: Incremental upload skipped 100.0% of data
DEBUG: Uploading 0 files to Google Cloud Storage
DEBUG: Using [16] threads
So perhaps not all files are being uploaded? This SO post raises a similar problem but has no solution: stackoverflow.com/q/42137452/3075340
It's weird, but when I do gcloud app deploy, the app won't work at all: there are run-time errors all over the place. Doing gradle appengineDeploy fixes that, but I am still having the firebase-admin issue.
To actually answer your question: GAE uses a staging bucket to cache files.
You can delete this bucket, and it will be recreated the next time you deploy.
The bucket is always named staging.[PROJECT-ID].appspot.com. It should show up on the buckets overview page.
Just to be clear, I'm not at all sure that this will actually resolve your issue, but it will most definitely clear whatever files GAE has cached.
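Given the staging.[PROJECT-ID].appspot.com naming, the deletion is a one-line gsutil command. A small sketch that builds the command string (the project id is a placeholder; substitute your own):

```javascript
// Sketch: build the gsutil command that deletes the GAE staging bucket.
function stagingBucketCommand(projectId) {
  return `gsutil rm -r gs://staging.${projectId}.appspot.com`;
}

console.log(stagingBucketCommand("my-project"));
// -> gsutil rm -r gs://staging.my-project.appspot.com
```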
Here there was a discussion of a more or less similar App Engine issue from last year.
You can use it as a source of inspiration for fixing your current issue, because I think they are really similar conceptually. Based on that thread, it may be worth trying to put the appengine-api-1.0-sdk-1.9.63.jar (or whatever your version is) file under WEB-INF/lib.
In the same post there were some people who found that there was a problem with gcloud v194 and that switching to another version fixed the issue.
Here there is a similar issue which was resolved by upgrading the gcloud tool.
Anyhow, this behaviour is not normal and things should not work like this. You can try to find a workaround, but my recommendation is to report the issue and let an App Engine engineer take a look at it. There is a good chance it is something internal.

Deploying ReactJS (w/ webpack) to S3 & CloudFront

I have a ReactJS application that produces a dist folder when I run npm build. This can be uploaded to Amazon S3 and everything is fine.
I'm looking to continuously deploy this application, so my thoughts were to deploy to s3://RANDOM_STRING/, producing:
RANDOM_STRING/js
RANDOM_STRING/css
RANDOM_STRING/index.html
I can't tell S3, to the best of my knowledge, to use a subdirectory as the web root, so I looked into CloudFront and updating the origin path to the directory. This takes a long time to update and actually can't be done through the aws-cli, so continuous deployment would be ruined.
I've looked at using file-loader to rewrite my url() calls to include the RANDOM_STRING in the deployment assets, but this feels pretty "ugly".
Does anyone have experience of this kind of deployment and could help me out?
Have you tried setting your Default Root Object? You can configure it to point into a subdirectory.
See here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html
