Patterns to show content in a React and AWS website

Sorry about the "stupid" title, but I don't really know how to explain this.
I want to have a webpage on my site (built in React), that will show the release notes for each version of my site/product. I can hardcode the content of the release notes in the page, but I want to do something that allows me not to have to recompile my site just to change content.
My site is hosted on AWS, so I was wondering whether there are any patterns for storing the content of the page in an S3 bucket as a text file, or as an entry in DynamoDB.
Does this make sense?
These are the options that come to mind, but I would like to ask how you have done this in the past.
Thank you in advance!

You could really use either S3 or DynamoDB, though S3 ends up being more favorable for a few reasons.
For S3, the general pattern would be to store your formatted release notes as one or more HTML files (S3 objects) and have your site make AJAX requests to those objects, loading the returned HTML as your formatted release notes (see the sketch after the list of benefits below).
Benefits:
You can make the request client-side and asynchronously via AJAX, so the rest of the page load time isn't negatively impacted.
If you want to change the formatting of the release notes, you can do so by just changing the S3 object. No site recompilation required.
S3 is cheap.
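For illustration, here is a minimal sketch of that pattern in a React component. The bucket URL, object key, and the use of dangerouslySetInnerHTML are assumptions; point the fetch at whatever public (or CloudFront-fronted) S3 object holds your release notes, and make sure the bucket's CORS policy allows requests from your site's origin.

    import { useEffect, useState } from "react";

    // Hypothetical public S3 URL for the pre-formatted release-notes HTML.
    const RELEASE_NOTES_URL =
      "https://my-site-assets.s3.amazonaws.com/release-notes/latest.html";

    export default function ReleaseNotes() {
      const [html, setHtml] = useState("<p>Loading release notes...</p>");

      useEffect(() => {
        // Fetch client-side so the rest of the page renders immediately;
        // changing the S3 object changes the content, no redeploy needed.
        fetch(RELEASE_NOTES_URL)
          .then((res) => (res.ok ? res.text() : Promise.reject(res.status)))
          .then(setHtml)
          .catch(() => setHtml("<p>Release notes are unavailable right now.</p>"));
      }, []);

      // Only inject HTML you fully control (you author the S3 object yourself).
      return <div dangerouslySetInnerHTML={{ __html: html }} />;
    }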
If you were to use DynamoDB, you would have to request the contents server-side and format them server-side (the format would not be changeable without site recompilation). You get 25 read capacity units for free, but if your site sees a lot of traffic, you could end up paying much more than you would with S3.

Related

Public s3 bucket security

I have a simple MERN stack application that allows users to upload gallery images. I host all static assets, including all gallery images, in a public S3 bucket. I did this on purpose: I want every image to be public, and load time is much faster because we don't need to retrieve and encode the images server-side before displaying them on the website. However, I'm worried about unexpected costs in case someone tries to generate unwanted traffic to my bucket. As we know, AWS charges for every request and for data transfer, so my question is: what are my options to prevent this? Although serving the images through the server, as mentioned before, is probably the best option, I don't want to do it. I read somewhere that I could use budget alerts to trigger a Lambda function that makes the S3 bucket private, but I can't find information on how to implement it.
Maybe there is some other way to do it?
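One rough sketch of the budget-alert idea mentioned above: an AWS Budgets alert publishes to an SNS topic, and a Lambda subscribed to that topic blocks public access on the bucket. The bucket name and wiring below are placeholders, not a tested setup:

    // Lambda handler subscribed to the SNS topic that the budget alert publishes to.
    const { S3Client, PutPublicAccessBlockCommand } = require("@aws-sdk/client-s3");

    const s3 = new S3Client({});
    const BUCKET = process.env.GALLERY_BUCKET; // placeholder bucket name

    exports.handler = async () => {
      // Turn on every public-access block so the bucket stops serving anonymous traffic.
      await s3.send(
        new PutPublicAccessBlockCommand({
          Bucket: BUCKET,
          PublicAccessBlockConfiguration: {
            BlockPublicAcls: true,
            IgnorePublicAcls: true,
            BlockPublicPolicy: true,
            RestrictPublicBuckets: true,
          },
        })
      );
      return { status: `public access blocked on ${BUCKET}` };
    };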

Reduce initial server response time with Netlify and Gatsby

I'm running PageSpeed Insights on my website and one big error that I get sometimes is:
Reduce initial server response time
Keep the server response time for the main document short because all other requests depend on it. Learn more.
React: If you are server-side rendering any React components, consider using renderToNodeStream() or renderToStaticNodeStream() to allow the client to receive and hydrate different parts of the markup instead of all at once. Learn more.
I looked up renderToNodeStream() and renderToStaticNodeStream() but I didn't really understand how they could be used with Gatsby.
It looks like a problem others are having also
The domain is https://suddenlysask.com if you want to look at it
My DNS records
Use a CNAME record on a non-apex domain. By using the bare/apex domain you bypass the CDN and force all requests through the load balancer. This means you end up with a single IP address serving all requests (fewer simultaneous connections), the server proxies to the content without caching, and the distance to the user is likely to be greater.
EDIT: Also, your HTML file is over 300KB. That's obscene. It looks like you're including Bootstrap in it twice, you're repeating the same inline <style> tags over and over with slightly different selector hashes, and you have a ton of (unused) utility classes. You only want to inline critical CSS if possible; serve the rest from an external file if you can't treeshake it.
Well, the behavior is unexpected. I ran PageSpeed Insights against your site and on the first test it gave me a warning, with an initial response time of 0.74 seconds. Then I used my developer tools to look at the initial response time on the document root, which was consistently between 300 and 400 ms. So I ran the PageSpeed test again and the response was 140 ms; the test passed. After that it was 120 ms.
I don't think there is any problem with the site. Still, if you want to try something, I would recommend changing the server or hosting for once and going for something different; I don't know what kind of server the site is deployed on right now. You can try AWS S3 and CloudFront, which works well for me.

CDN serving private images / videos

I would like to know how CDNs serve private data such as images and videos. I came across this Stack Overflow answer, but it seems to be an Amazon CloudFront-specific answer.
As a popular example, let's say the problem in question is serving content inside Facebook. There is access-controlled content at the individual-user level and also at the level of a group of users. Besides that, there is some publicly accessible data.
All logic of what can be served to whom resides on the server!
The first request to the CDN will go to the application server and get validated for access rights. But there is a catch; keep this in mind:
Assume that the first request is successful; after that, anyone will be able to access the image with that CDN URL. I tested this with a restricted, user-uploaded image on Facebook, and it was accessible via the CDN URL by others too, even after I logged out. So the image will be accessible until the CDN cache expires.
I believe this should work: all requests first come to the main application server. After determining whether access is allowed, it can either redirect to the CDN server or show an access-denied error.
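As a rough illustration of that flow (an Express-style sketch; the route, the canUserSee check, and the CDN hostname are hypothetical):

    const express = require("express");
    const app = express();

    // Stand-in for whatever access-control logic the application server runs.
    async function canUserSee(user, mediaId) {
      return Boolean(user); // hypothetical check
    }

    // Every media URL hits the application server first for validation.
    app.get("/media/:id", async (req, res) => {
      if (!(await canUserSee(req.user, req.params.id))) {
        return res.status(403).send("Access denied");
      }
      // Validated: redirect to the CDN copy. Note the catch described above:
      // this CDN URL stays reachable by anyone who has it until the cache expires.
      res.redirect(`https://cdn.example.com/media/${req.params.id}`);
    });

    app.listen(3000);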
Each CDN works differently, so unless you specify which CDN you are looking at, it's hard to tell.

Amazon S3 and resource protection

I am building a mobile application with a server-side backend.
The application has a lot of different functionality, plus some social-network features.
The problem I am experiencing is about protecting resources on Amazon S3.
Let's imagine that a user has received a URL to Amazon S3 where his friend's picture is stored. The URL is something like:
https://bucket-name.amazon.com/some-unique-id-of-the-picture
So now the user can randomly (or sequentially) start generating values for some-unique-id-of-the-picture and get access to other users' private pictures. How can I restrict this?
Actually, I am more or less OK with that (those pictures aren't very confidential in our application); however, some really confidential user data is stored on Amazon S3 as well (in another bucket), and I definitely need to secure it.
How can I secure this data?
Using time-limited URLs is not an option.
Here is why:
1) Downloading the resource via a time-limited URL and then just using the local copy slows down the application (reading BLOB data from SQLite on Android is slow) and makes the application take up a lot of storage.
2) Caching the resources and storing just the URLs won't work either, since the user can empty the cache, and by then the URL will already have expired.
At the moment the only solution I see:
Make all the requests via our server: check whether the user can see the requested resource, then read the data from Amazon and return it to the user.
Unfortunately, this would put quite a significant load on our server(s).
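For reference, a bare-bones sketch of that proxy approach, assuming a Node/Express backend and the AWS SDK v3; the bucket name and permission check are placeholders:

    const express = require("express");
    const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

    const app = express();
    const s3 = new S3Client({});
    const BUCKET = "confidential-user-data"; // placeholder bucket name

    // Placeholder for the real permission check against your user/friendship data.
    async function userMayAccess(user, key) {
      return Boolean(user);
    }

    app.get("/files/:key", async (req, res) => {
      if (!(await userMayAccess(req.user, req.params.key))) {
        return res.sendStatus(403);
      }
      const obj = await s3.send(
        new GetObjectCommand({ Bucket: BUCKET, Key: req.params.key })
      );
      // Streaming S3 -> server -> client is what creates the extra server load.
      obj.Body.pipe(res);
    });

    app.listen(3000);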
Any help/advice or suggestions would be highly appreciated.
Thank you in advance.

Multiple data sources: data storage and retrieval approaches

I am building a website (probably in WordPress) which takes data from a number of different sources for display on various pages.
The sources:
A Twitter feed
A Flickr feed
A database on a remote server
A local database
From each source I will mainly retrieve
A short string, e.g. the tweet text from Twitter, or the title of a blog page from the local database.
An associated image, if one exists
A link identifying the content at its source
My question is:
What is the best way to a) store the data and b) retrieve the data?
My thinking is:
i) Write a script that is run every 2 or so minutes on a cron job
ii) the script retrieves data from all sources and stores it in the local database
iii) application code can then retrieve all data from the one source, the local database
This should make application code easier to manage - we only ever draw data from one source in application code - and that's the main appeal. But is it overkill for a relatively small site?
I would recommend putting the Twitter feed and Flickr feed in JavaScript. Both Flickr and Twitter have REST APIs. By putting it on the client you free up resources on your server, create less complexity, your users won't be waiting around for your server to fetch the data, and you can let Twitter and Flickr cache the data for you.
This assumes you know JavaScript. Once you get past JavaScript's quirks, it's not a bad language. Give jQuery a try; there are a jQuery Twitter plugin and a Flickery jQuery plugin, among others, and those are just the first results from Google.
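To make the client-side idea concrete, here is a tiny jQuery-style sketch of fetching a feed in the browser and rendering the short string, image and link described above; the endpoint and response fields are placeholders, since the real Twitter/Flickr APIs need a key or one of the plugins mentioned:

    // Placeholder JSONP endpoint; swap in the real Twitter/Flickr call or plugin.
    $.getJSON("https://api.example.com/feed?callback=?", function (items) {
      $.each(items, function (_, item) {
        // item.title / item.image / item.link are assumed field names.
        $("#feed").append(
          '<li><a href="' + item.link + '">' + item.title + "</a>" +
          (item.image ? '<img src="' + item.image + '" alt="">' : "") +
          "</li>"
        );
      });
    });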
As for your data on the local server and the remote server, that will depend more on the data being fetched. I would go with whatever you can develop the fastest that gives acceptable results. If that means making a REST call from server to server, then go for it. If the remote server is slow to respond, I would go with the AJAX REST API method.
And for the local database, you are going to have to write server-side code for that anyway, so I would do it inside the WordPress "framework".
Hope that helps.
