We have an AWS Amplify React project associated with our domain, which means all files and content are served through the underlying React Router.
To support backend communications with Microsoft APIs, we need to host a specific JSON file at a particular location within our domain: mydomain.com/.well-known/microsoft-identity-association.json.
I am unsure how to do this. My first question is whether this is best accomplished via static routes within React Router or, instead, by configuring CloudFront and Route 53 to serve the JSON file for this exact URL.
I have been trying the second approach and have created a CloudFront distribution for a specific S3 bucket storing the JSON file. I have named the S3 bucket "mydomain" with a subfolder ".well-known" containing a JSON file named "microsoft-identity-association.json". My problem is that I do not know how to configure Route 53 to route to this distribution, as my root domain (mydomain.com) is associated with my Amplify project and is handled by React Router. I'm not sure whether I can configure a specific route or alias to serve up that exact JSON file.
I have reviewed this post ("How do I return a json file from s3 to a specific url, but only that url"), but it seems to be addressing a slightly different problem.
Any and all guidance appreciated.
Addressed this issue by splitting my site. I used a static S3-hosted site for public pages (including the JSON file) and redirected the React app to a subdomain.
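For anyone hitting the same requirement: the verification file itself is just a small static JSON document along the following lines (the application ID shown is a placeholder for your Azure AD app registration's ID), and Microsoft's validation generally expects it to be served with a Content-Type of application/json, which is worth checking on the uploaded S3 object's metadata:

```json
{
  "associatedApplications": [
    {
      "applicationId": "00000000-0000-0000-0000-000000000000"
    }
  ]
}
```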
Related
I am trying to build a static React app which displays API documentation. For the backend I am using Swagger (hosted separately in a Lambda), which generates the JSON data model. When new endpoints are created, I have a Lambda that gives me the details of each endpoint, such as headers, requests and responses. The data comes as JSON, which is dropped into an S3 bucket. Could my React app, deployed in the same bucket, consume that JSON and render the newly added API documentation details? How can I do this without a Node backend?
Here is an example of what I am trying to build:
A React app with cooking recipes is statically hosted in S3.
The recipes for this app follow a JSON model, which is common to any upcoming recipes.
This app lives in an AWS S3 bucket.
A new recipe needs to be added to my app, but my app is already built and hosted.
Can I build my app in such a way that, if I drop a new JSON file into the bucket, it will consume that file and render/refresh its frontend, without a server?
Given that your app is statically hosted in S3, you will have to write client-side JS code to check for the new recipes.
You can call your Lambda from the client-side to get the endpoint and send a request to get all the recipes. If the folder path in S3 is known, you can do this check without needing the Lambda.
This way, your app will refresh the frontend without a server when you add a new JSON file.
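As a rough sketch of the second option (a known S3 path), something like the following could run in the browser; the bucket URL, object key and Recipe shape are placeholders:

```typescript
// Sketch only: fetch the recipes JSON straight from a public S3 object URL.
// RECIPES_URL is a placeholder for wherever the Lambda drops the file.
const RECIPES_URL =
  "https://my-recipes-bucket.s3.amazonaws.com/recipes/recipes.json";

interface Recipe {
  name: string;
  ingredients: string[];
}

export async function loadRecipes(): Promise<Recipe[]> {
  // Cache-busting query string so the browser re-fetches after a new upload.
  const response = await fetch(`${RECIPES_URL}?t=${Date.now()}`);
  if (!response.ok) {
    throw new Error(`Failed to load recipes: ${response.status}`);
  }
  return (await response.json()) as Recipe[];
}
```

For this to work the object must be publicly readable (or served through CloudFront), and the bucket's CORS configuration has to allow GET requests from your site's origin.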
After some thorough research I found that you can drop all your JSON files into the public folder of the React app. The files in this folder are not processed by the build, so you can fetch them from a React component with the useEffect hook. Here is the reference article I took this implementation from:
How to add Static JSON files in React App post deployment
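A minimal sketch of that pattern, assuming a recipes.json file in the public folder (the file name and the Recipe shape are placeholders):

```tsx
import { useEffect, useState } from "react";

// Placeholder shape for a recipe entry.
interface Recipe {
  name: string;
  ingredients: string[];
}

export function RecipeList() {
  const [recipes, setRecipes] = useState<Recipe[]>([]);

  useEffect(() => {
    // Files in public/ are copied to the deployment untouched and served from the site root.
    fetch("/recipes.json")
      .then((res) => res.json())
      .then((data: Recipe[]) => setRecipes(data))
      .catch((err) => console.error("Could not load recipes", err));
  }, []);

  return (
    <ul>
      {recipes.map((r) => (
        <li key={r.name}>{r.name}</li>
      ))}
    </ul>
  );
}
```

Because the file is fetched by URL at runtime, uploading a newer recipes.json to the same path in the S3 bucket shows up on the next page load without rebuilding the app.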
I am creating a web app using React where the user can upload files to a folder in my S3 bucket. This folder will have a unique passcode name. The user (or someone else) can use this passcode to retrieve these files from the S3 folder. So basically there is no login/authentication system.
My current issue is how to safely allow read/write access to my S3 bucket. Almost every tutorial stores the access keys in the client code, which I have read is very bad practice, but I also don't want to create a backend for something this simple. Someone suggested presigned URLs but I have no idea how to set that up (do I use Lambda? IAMs?). I'm really new to AWS (and web dev in general). Does anyone have any pointers on what I could look into?
do I use Lambda? IAMs?
The setup and process is fully explained in this AWS blog post:
Uploading to Amazon S3 directly from a web or mobile application
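The short version: a small Lambda (behind API Gateway or a Lambda function URL) generates a pre-signed URL, and the browser uploads straight to S3 with it, so no credentials ever reach the client. A hedged sketch of such a Lambda using the AWS SDK for JavaScript v3 (the bucket name, key scheme and expiry are placeholders):

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});
const BUCKET = "my-upload-bucket"; // placeholder bucket name

// Lambda handler: returns a short-lived URL the browser can PUT a file to.
export const handler = async (event: {
  queryStringParameters?: { passcode?: string; filename?: string };
}) => {
  const passcode = event.queryStringParameters?.passcode ?? "default";
  const filename = event.queryStringParameters?.filename ?? "upload.bin";

  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: `${passcode}/${filename}`, // one "folder" per passcode
  });

  // Valid for 5 minutes; the Lambda's IAM role needs s3:PutObject on the bucket.
  const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 300 });

  return { statusCode: 200, body: JSON.stringify({ uploadUrl }) };
};
```

The client then simply does fetch(uploadUrl, { method: "PUT", body: file }); downloads by passcode work the same way with GetObjectCommand.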
I would like to create a public-read AWS S3 bucket with some files read-restricted by an IAM role.
First of all:
I am using the Amplify CLI to deploy my «static» website.
The website is a React app.
This app has public pages/React components and an admin area.
I would like to restrict the admin area/admin pages/admin React components with an AWS IAM role.
More details:
The React app is very big, so I split the components using the asyncComponent feature, e.g. const Dashboard = asyncComponent(() => import('./pages/Dashboard')).
So when I build the app, instead of one big file I get several small files, and all these files are in the same bucket.
Now I want to build the admin pages. Still using asyncComponent, we get a collection of «Admin» files, and they are hosted in the same bucket. But for security reasons I want to restrict access to authenticated users with a certain IAM role (e.g. AdminRole).
I have gone through a lot of documentation on the Amplify config and on AWS::S3::Bucket in CloudFormation, and I saw different things suggesting it is possible, but I am quite lost in it.
So finally I ask:
How can I protect some files/objects from read access in an S3 bucket using an IAM role?
And how can I «tag» the admin components in the React app, or via Amplify, so that this read restriction is applied? Maybe by using a regex to match files, or a specific folder?
Thank you in advance for your reply.
Content in Amazon S3 is private by default.
Therefore, anything you are happy for everyone in the world to view can be made publicly accessible via a Bucket Policy (whole bucket or part of a bucket) or via Access Control Lists (ACLs) on the objects themselves.
To serve content that should be restricted to specific users, take advantage of Pre-Signed URLs. These are time-limited URLs that provide temporary access to private objects in Amazon S3. They are easy to generate (no API calls required).
The way it would work is:
Users would authenticate with your application
When they wish to access restricted content, the application would determine whether they are permitted access
If they are permitted access, the application would generate a pre-signed URL. These can also be used in <a> and <img> tags to refer to pages and images.
Users will receive/view the content just like normal web content
Once the expiry time has passed, the pre-signed URLs will no longer work
See: Share an Object with Others - Amazon Simple Storage Service
(I'm not an Amplify person, so I can't speak to how Amplify would specifically generate/use pre-signed URLs.)
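A rough sketch of steps 2 and 3, using the AWS SDK for JavaScript v3 (the bucket name, the admin/ key prefix and the role check are placeholders; in an Amplify app, Cognito would supply the actual authentication):

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});
const BUCKET = "my-app-bucket"; // placeholder bucket name

// Runs in backend code (e.g. a Lambda) whose credentials can read the admin/ prefix.
export async function getAdminAssetUrl(
  userRole: string,
  assetKey: string
): Promise<string> {
  // Step 2: decide whether this user may access restricted content.
  if (userRole !== "AdminRole") {
    throw new Error("Not authorized to access admin assets");
  }

  // Step 3: generate a time-limited URL for the private object.
  const command = new GetObjectCommand({ Bucket: BUCKET, Key: `admin/${assetKey}` });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // valid for 15 minutes
}
```

The objects under admin/ stay private in the bucket; only someone holding a fresh pre-signed URL can fetch them, which is what keeps the admin chunks away from anonymous visitors.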
I would like to host my ReactJS app as a static site on Azure Blob Storage. The problem is that Azure Blob Storage doesn't support a default document. To overcome this, I have set up Azure CDN with URL rewrite rules:
for the first source pattern, set to ((?:[^\?]*/)?)($|\?.*)
for the first destination pattern, set to $1index.html$2
for the second source pattern, set to ((?:[^\?]*/)?[^\?/.]+)($|\?.*)
for the second destination pattern, set to $1/index.html$2
This is from Hao's tutorial.
This successfully resolves myapp.azureedge.net, but when a client-side route is requested directly, e.g. myapp.azureedge.net/react/route, the app returns ResourceNotFound.
Meaning when the user enters myapp.azureedge.net/react/route as the URL and tries to navigate to that page, they get an error.
I suspect I need to redirect every path that does not point to a specific static file to index.html. However, I do not know whether that is the right solution or how to achieve it.
Thank you for any help!
Azure Blob Storage supports static website hosting now. More information here:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-static-website
You can host a single-page app without using URL rewrites by setting both the index document and the error document to index.html.
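For illustration, the same two settings can also be applied programmatically. This is only a sketch, assuming the @azure/storage-blob package, a connection string in an environment variable, and that setProperties exposes the static-website settings (worth double-checking against the current SDK docs):

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

async function enableSpaHosting(): Promise<void> {
  // Assumes a storage-account connection string in an environment variable.
  const client = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );

  // Enable static website hosting and point 404s back at the SPA shell,
  // so a deep link such as /react/route still loads index.html.
  await client.setProperties({
    staticWebsite: {
      enabled: true,
      indexDocument: "index.html",
      errorDocument404Path: "index.html",
    },
  });
}
```

The site is then served from the storage account's web endpoint (the $web container), so the CDN or custom domain should point at that endpoint rather than the regular blob endpoint.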
I encountered a similar issue before. Assume the structure of your static files in the Azure Blob container is as follows: cdn is the container name, with your scripts under cdn/scripts and your images under cdn/images.
You could configure URL rewrite rules that set the default page and rewrite all other requests to index.html (carrying along any query string), while the scripts and images under cdn/scripts and cdn/images can still be accessed correctly.
Additionally, you could use an Azure Web App to host your static website and choose an appropriate pricing tier; see the Pricing calculator for details.
There is a new Azure Static Web Apps service, currently in preview, and it makes deploying a modern frontend SPA super easy. You can set up a fallback route (routes.json) to redirect everything to index.html; you can see more here: https://learn.microsoft.com/en-us/azure/static-web-apps/
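A minimal sketch of such a fallback in the preview-era routes.json format (newer releases of the service use staticwebapp.config.json with a navigationFallback section instead):

```json
{
  "routes": [
    {
      "route": "/*",
      "serve": "/index.html",
      "statusCode": 200
    }
  ]
}
```

Any path that does not match a physical file is answered with index.html and a 200, so client-side routing keeps working on deep links and refreshes.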
I am working with AngularJS, and I am using an Amazon S3 bucket.
I am trying to remove the '#' from the URLs of my website, which I did successfully.
Then I came across a problem where I was not able to reload any of the pages of my website.
I received an error saying it cannot get the filename.
After some searching I learned that I am supposed to rewrite URLs in the server configuration.
I found a few links explaining how to make such configurations, but I couldn't find any reference covering how to do this on an Amazon S3 bucket.
How can I make these configuration changes in this situation? Is there any useful reference/documentation for this?
S3 is not a full-featured web application server like nginx or Apache. You cannot rewrite URLs. The only thing you can do with an S3 bucket is handle error pages, such as 404s.
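The usual single-page-app workaround builds on exactly that: configure the bucket's static website hosting so the error document is index.html, and the client-side router then handles deep links. A hedged sketch with the AWS SDK for JavaScript v3 follows (the bucket name and region are placeholders; the same settings can be applied in the S3 console under Static website hosting):

```typescript
import { S3Client, PutBucketWebsiteCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

async function enableSpaFallback(): Promise<void> {
  // Point both the index and the error document at index.html so that any
  // unknown path (e.g. an Angular route) still loads the app shell.
  await s3.send(
    new PutBucketWebsiteCommand({
      Bucket: "my-angular-site-bucket", // placeholder bucket name
      WebsiteConfiguration: {
        IndexDocument: { Suffix: "index.html" },
        ErrorDocument: { Key: "index.html" },
      },
    })
  );
}
```

Note that the S3 website endpoint still returns the error status code with that page; putting CloudFront in front and mapping 403/404 custom error responses to /index.html with a 200 gives cleaner behaviour.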