I am new to the development world, and I am trying to understand how the Heroku filesystem works.
I built an Express project that uses multer to upload images.
In production, everything worked well including fetching the images from my static folder.
However, when I did the same thing with React (frontend = React, backend = Express), the images do not display, even though the console shows no errors.
According to my research, Heroku says:
Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted
and that I should use a dedicated file storage service such as AWS S3 (for static files).
How does this apply to my React project, given that I didn't use S3 in the Express project?
In production, everything worked well including fetching the images from my static folder.
In fact, it probably didn't.
Files can be saved to Heroku's ephemeral filesystem, and even read back, but, as you have seen, this isn't permanent. Whenever your dyno restarts, the filesystem resets, and that happens frequently (at least once per day).
Express vs. React is irrelevant. You should always use something like S3 for user uploads on Heroku.
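As a rough sketch of what that looks like with multer, you can point its storage engine at S3 via the multer-s3 package instead of writing to the dyno's disk. The bucket name, region, and route below are placeholders, not anything from your project:

```javascript
// Sketch: store multer uploads in S3 rather than on the ephemeral filesystem.
// Bucket name, region, and route are assumptions for illustration only.
const express = require('express');
const multer = require('multer');
const multerS3 = require('multer-s3');
const { S3Client } = require('@aws-sdk/client-s3');

const app = express();
const s3 = new S3Client({ region: 'us-east-1' }); // credentials come from Heroku config vars

const upload = multer({
  storage: multerS3({
    s3,
    bucket: 'my-upload-bucket', // hypothetical bucket
    key: (req, file, cb) => cb(null, `images/${Date.now()}-${file.originalname}`),
  }),
});

app.post('/images', upload.single('image'), (req, res) => {
  // multer-s3 exposes the stored object's URL on req.file.location
  res.json({ url: req.file.location });
});

app.listen(process.env.PORT || 3000);
```

Your frontend (React or otherwise) then loads the images from the S3 URLs rather than from a folder inside the dyno.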
Related
I have created and built (react-scripts build) a simple React application. I want to deploy it to my Ubuntu server via a Docker image, and I am looking for advice. Is it better to use Express.js and pm2 to serve the React app, or would it be more useful to serve it via nginx? What are the advantages and disadvantages?
Thanks a lot for your advice.
When you're going to deploy a React application, you've typically used a tool like Webpack to compile it down to a set of static files. Once you've done that, serving it via nginx will be smaller and faster than an Express server, and since you don't have the entire Node interpreter involved, there are fewer moving parts that could have security issues.
In fact, even if the rest of your application is in containers, it can be beneficial to host the front-end somewhere external (if you're otherwise deploying to AWS, for example, put the built front-end into an S3 bucket). This can simplify the deployment mechanics a little, and with Webpack's file hashing, you can keep older versions of the code around for browsers that haven't reloaded the page recently. You still probably need the nginx proxy (to serve the /index.html page and to proxy the back-end services), but you don't necessarily need to redeploy it when the front-end code changes.
The only real advantage to an Express-based deployment setup is being able to live-reload your code in production. That would come with a high risk of human error, though: make an accidental typo and save the file, and end users see an error page. I'd avoid this path.
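For comparison, the Express route from the question usually amounts to little more than the sketch below (assuming create-react-app's default build/ output folder); nginx replaces all of it with a short config block and no Node runtime:

```javascript
// Sketch of the Express-based option, shown for comparison only.
// Assumes the compiled output sits in build/ (create-react-app's default).
const express = require('express');
const path = require('path');

const app = express();
const buildDir = path.join(__dirname, 'build');

// Serve the hashed static assets produced by the build.
app.use(express.static(buildDir));

// Fall back to index.html so client-side routing still works on refresh.
app.use((req, res) => {
  res.sendFile(path.join(buildDir, 'index.html'));
});

app.listen(process.env.PORT || 3000);
```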
I use CloudFront from AWS and I have an S3 static website.
I use ReactJS, and I changed some text on most pages.
The problem I have now is that I use
npm run build
to produce the production build. I want to update the content in the S3 bucket on AWS (I previously uploaded the same files); however, two things happen:
- When I access the site in incognito mode, everything works fine; I am given the updated version of the website.
- When I access it in normal mode, with web browsers that I used to visit the website before, I am still given the old version of the files.
I went through the AWS documentation and found two solutions:
- Wait up to 24 hours for the files cached in CloudFront's edge locations to expire.
- Use versioned names for the files (for example, change image.jpg to image_1.jpg, image_2.jpg, etc.).
I would definitely go with the second option, which is indeed time-consuming but certainly takes less than 24 hours. Should I change the name of EACH file that I have in build, or just those in static?
Any other solutions?
Something I haven't tried: before uploading to AWS S3, create a folder such as V1 and upload my React files there. When I make a change, I call the folder V2, and so on.
Using versioned names is the most robust method. It gives you full control over the cache behavior without messing around with CloudFront.
So yes, each time there's a new version, update your file names.
By the way, if you bootstrapped your React app using create-react-app, then the build process does that by default: it names each bundle with a unique hash every time the bundle's contents change. This way you can take advantage of long-term caching in the browser and in CloudFront for your files.
You'd probably still have to invalidate the root index.html on each deployment, as its name doesn't change between versions.
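Invalidating that single file is cheap. As a sketch, with the AWS SDK for JavaScript it looks roughly like this (the distribution ID below is a placeholder):

```javascript
// Sketch: invalidate only /index.html after a deploy so browsers pick up the
// new hashed bundle names. The distribution ID is a placeholder.
const {
  CloudFrontClient,
  CreateInvalidationCommand,
} = require('@aws-sdk/client-cloudfront');

const client = new CloudFrontClient({ region: 'us-east-1' });

async function invalidateIndex() {
  await client.send(new CreateInvalidationCommand({
    DistributionId: 'E1234567890ABC', // placeholder
    InvalidationBatch: {
      CallerReference: `deploy-${Date.now()}`, // must be unique per request
      Paths: { Quantity: 1, Items: ['/index.html'] },
    },
  }));
}

invalidateIndex().catch(console.error);
```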
I am using the create-react-app CLI to build my application. What I noticed is that images take longer to load from the CDN than from the local assets folder residing in the src folder. But everybody says that a CDN is faster, which is not what I am seeing. The same image takes 200 ms to load through the CDN versus 4 ms from the local folder.
What do you think is the best way?
Files that are local will always be faster, but only for the local machine. Think about it: you are hosting and using the site locally, and the image files are also on that machine. You have no external calls to outside sources; you could work with no internet connection at all.
Now, if you use a CDN, it needs the internet to go and fetch that file for you, so while developing locally it will be slower. But as soon as your app is being used by clients, they will need that image too; getting it from your own server will work, but it may be slower.
The power of a CDN is that it is a network: it has redundancy, caching, and instances all over the world, and it also takes load off your server itself.
Let's say that on my local machine, in the folder which contains my GAE project, I have an images folder.
When I upload the app to GAE with the correct .yaml information, the images folder and its contents will be uploaded.
Now let's say I'm running the app online and I upload an image to the images folder that is now on Google's servers. The contents of the images folder on the web and on my local development machine are now different.
My question is this:
The next time I upload the app to GAE, how will the discrepancy between the different contents of the images folder be resolved?
You can't add files to the application like that after deployment. The local file system is read-only to GAE applications.
If you were to upload an image (via a handler you create) while the app is deployed, you can't save it in the images folder inside your application; you can only save it to the datastore/blobstore. The files you uploaded with your app are static: they cannot be changed, either by you or by the application, outside of the deployment tool. You can read them, sure, but not write to them once deployed.
So the situation will never arise that a deployed version has different files to the local version - they are always identical.
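If your upload handler happens to be in Node, a minimal sketch of writing to Cloud Storage (the modern counterpart of the blobstore) instead of the read-only application folder could look like the following; the bucket name and form field name are assumptions:

```javascript
// Sketch: save an uploaded image to Cloud Storage instead of the application's
// read-only images folder. Bucket name and form field name are assumptions.
const express = require('express');
const multer = require('multer');
const { Storage } = require('@google-cloud/storage');

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the upload in memory
const bucket = new Storage().bucket('my-app-uploads'); // hypothetical bucket

app.post('/upload', upload.single('image'), async (req, res) => {
  const file = bucket.file(`images/${Date.now()}-${req.file.originalname}`);
  await file.save(req.file.buffer, { contentType: req.file.mimetype });
  res.json({ path: file.name });
});

app.listen(process.env.PORT || 8080);
```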
I am working on a RESTful web application. I am maintaining separate projects for the web client code and the Google App Engine server code.
Whenever I make changes in the client code, I rebuild it and place it inside the war folder of the server project through build scripts.
I don't want to place all the files directly in the war folder; I want to put them under a folder called 'Publish' for better maintenance. How can I do that?
Is there a better way of maintaining the client code and the Google App Engine server code?
The structure should also work well for a mobile application in the future.
I am still new to this too, but there is versioning. If you change the version number in your project's manifest file, the new version does not become the default (i.e., the one visible at your original public URL), but it is still public and accessible for you to test. When you are ready to "publish", just switch the new version to be the default: use the Manage section of the Dashboard and set that version as the default when you are ready.
To test any of the earlier versions, go through the Manage section and click on the specific version. I don't know whether the persistent storage is versioned with this same mechanism -- I can imagine problems if you have a huge DB.