I have a React app using react-router, hosted in an S3 bucket, with Route53 as the DNS provider. The app worked fine with the Route53 config pointing to the S3 bucket.
Since I want to use SSL, I created a CloudFront distribution pointing to the bucket, with an SSL cert, and pointed the DNS to it. Since doing that, none of the links work (example.com works, but example.com/foo does not). It just returns a NoSuchKey error. I know this is incorrect, as the key is definitely there, and it was working before.
Problems
Like most web servers, CloudFront/S3 will return a 404 if the bucket doesn't contain the object specified by the URL.
Requests for unknown or missing objects will instead return a 403 if the bucket is not publicly accessible or the request doesn't have the right permissions.
Solution
You can have CloudFront return an object to the viewer (for example, an HTML file) when your Amazon S3 or custom origin returns an HTTP 4xx or 5xx status code to CloudFront. You can also specify how long an error response from your origin or a custom error page is cached in CloudFront edge caches.
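In practice, for a React Router app that means mapping 403 and 404 back to /index.html with a 200 response code so the client-side router can handle the path. A minimal sketch of that error-response mapping, written with the AWS CDK (the stack, bucket, and construct names here are illustrative, and the certificate/domain settings are omitted):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

// Sketch: serve a React SPA from S3 through CloudFront and rewrite
// 403/404 responses to /index.html so client-side routes resolve.
export class SpaStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const siteBucket = new s3.Bucket(this, 'SiteBucket'); // placeholder bucket

    new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultRootObject: 'index.html',
      defaultBehavior: {
        origin: new origins.S3Origin(siteBucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
      },
      errorResponses: [
        // S3 returns 403 for missing keys when the caller cannot list the bucket,
        // so map both 403 and 404 to the SPA shell.
        { httpStatus: 403, responseHttpStatus: 200, responsePagePath: '/index.html', ttl: cdk.Duration.seconds(0) },
        { httpStatus: 404, responseHttpStatus: 200, responsePagePath: '/index.html', ttl: cdk.Duration.seconds(0) },
      ],
    });
  }
}
```

The same mapping can be created in the CloudFront console under the distribution's Error Pages tab; the key settings are the error code, the response page path (/index.html), and the response code (200).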
More resources
How CloudFront Processes and Caches HTTP 4xx and 5xx Status Codes from Your Origin
Creating a Custom Error Page for Specific HTTP Status Codes
How can I troubleshoot the HTTP 404 error "NoSuchKey" from Amazon S3?
CloudFront: Custom Error Pages and Error Caching
Related
I am using the ReactS3Client and I am having issues with uploading to a bucket. I have all the config files ready. I am using a 3rd-party library that allows a user to upload media to a designated cloud server, in this case S3. Everything works well on the iOS side for the configuration and I am able to upload accordingly; for that, I am using the Amplify CLI.
The error I am getting is: POST BUCKETURL 403 (Forbidden)
On the S3 side, I configured the following S3 bucket policy config:
I know the error stems from the S3 CORS policy. I've tried making the bucket public and editing the policy to allow all sorts of requests, but I'm still receiving the 403 Forbidden error.
Any help would be really appreciated.
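If the 403 really does come from CORS, the bucket needs a CORS configuration (separate from the bucket policy) that allows the browser's origin and methods. A minimal sketch using the AWS SDK for JavaScript v3; the bucket name, region, and allowed origin below are placeholders:

```typescript
import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3';

// Sketch: allow browser uploads from the app's origin.
// "my-upload-bucket", "us-east-1", and the origin below are placeholders.
const s3 = new S3Client({ region: 'us-east-1' });

async function configureCors(): Promise<void> {
  await s3.send(new PutBucketCorsCommand({
    Bucket: 'my-upload-bucket',
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ['https://www.example.com'],
          AllowedMethods: ['GET', 'PUT', 'POST'],
          AllowedHeaders: ['*'],
          ExposeHeaders: ['ETag'],
          MaxAgeSeconds: 3000,
        },
      ],
    },
  }));
}

configureCors().catch(console.error);
```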
For example, the website URL is https://www.myreactapp.com. It has some other pages with dynamic GET parameters.
https://www.myreactapp.com/category/1
https://www.myreactapp.com/category/2
Requesting those URLs gives me an Access Denied error.
I had the same issue when trying to access content at run time using AJAX.
Set the S3 bucket access to "Objects can be public"; there is no need to set "Public" access for static website hosting.
Use an S3 origin if you want CloudFront to deliver objects placed in the S3 bucket. But if you generate run-time content, it is better to use a custom origin.
For a custom origin, note the S3 website endpoints listed here: https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_website_region_endpoints
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
The solution for the OP was to change the origin from the S3 origin to the custom origin (website endpoint) domain. This restored the expected behaviour.
It was validated that the bucket was public; the 403 error was caused by the key not existing.
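To make the fix concrete: the console dropdown points CloudFront at the bucket's REST endpoint (mybucket.s3.amazonaws.com), while the "custom origin" is the bucket's static website hosting endpoint, which is what applies the index and error document rules. A small sketch of that origin with the AWS CDK; the bucket name and region are placeholders:

```typescript
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

// Custom origin pointing at the S3 *website* endpoint rather than the REST endpoint.
// "mybucket" and "us-east-1" are placeholders.
export const websiteOrigin = new origins.HttpOrigin('mybucket.s3-website-us-east-1.amazonaws.com', {
  // S3 website endpoints only speak HTTP, so CloudFront must connect over HTTP.
  protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
});

// Used as the default behavior's origin, e.g.:
// new cloudfront.Distribution(this, 'Dist', { defaultBehavior: { origin: websiteOrigin } });
```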
I have a React app that I have already deployed to GitHub Pages.
But now I have a problem: when I request the auth status from the server, I don't get any response. What is the problem?
I get this error in the console about my requests:
has been blocked by CORS policy:
Response to preflight request doesn't pass access control check:
No 'Access-Control-Allow-Origin' header is present on the requested resource
GitHub Pages has supported CORS since 2015, so you can follow "Fix CORS Error | React Tutorial", which points to:
"Run Chrome browser without CORS" (not recommended, just for testing)
axios/axios issue 853
That last issue mentions:
cURL does not enforce CORS rules. Those rules are enforced by browsers for security purposes.
When you mention that you added the relevant header, I assume you mean you added those headers to the request.
Actually, the header is expected in the response headers from the server, indicating that the resource is allowed to be accessed by other websites directly.
FYI, CORS stands for Cross-Origin Resource Sharing. Since your API does not support it, you have two options:
Use a proxy server on the same domain as your webpage to access 4chan's API, or
Use a proxy server on any other domain, but modify the response to include the necessary headers (a minimal sketch of this follows below).
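A minimal sketch of that second option, using Express and axios; the upstream URL, path prefix, and port are placeholders:

```typescript
import express from 'express';
import axios from 'axios';

// Placeholder for the API you are proxying (e.g. the 4chan API host).
const UPSTREAM = 'https://api.example.com';

const app = express();

// Forward GET requests server-side, then add the CORS header to the
// response so the browser accepts it. CORS is not enforced server-to-server.
app.get('/proxy/*', async (req, res) => {
  try {
    const upstreamPath = req.params[0]; // everything after /proxy/
    const upstream = await axios.get(`${UPSTREAM}/${upstreamPath}`, { params: req.query });

    // The header belongs on the *response*, not the request.
    res.set('Access-Control-Allow-Origin', '*');
    res.json(upstream.data);
  } catch {
    res.status(502).json({ error: 'upstream request failed' });
  }
});

app.listen(3001, () => console.log('CORS proxy listening on port 3001'));
```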
I want to be able to make a GET request with axios to the Google Places API, with a URL like the following:
https://maps.googleapis.com/maps/api/place/textsearch/json?query=pizza+&type=restaurant&location=-21.8029127,142.9766041&radius=10000&key=MYAPIKEY
But I get a CORS error.
I've searched to try to find how to achieve this and I can't seem to find a simple solution. I don't want any of the maps or autocomplete functionality that the current npm libraries offer. I just want to be able to get results from the Places API based on the query entered by the user.
If you are getting a CORS error, it means that your browser is restricting a cross-origin request originating from your application script. One solution is for the server to provide the CORS header on the response; however, you do not have access to the API server to add it. So you could specify the origin in your Google Maps API call using the origin param:
https://maps.googleapis.com/maps/api/place/textsearch/json?query=pizza+&type=restaurant&location=-21.8029127,142.9766041&radius=10000&key=MYAPIKEY&origin=*
Notice that I have provided origin=*, but you could use your own domain instead of *, in case you have one set up.
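For reference, the same request expressed with axios, following the answer's suggestion; MYAPIKEY and the origin value are placeholders, and whether the Places web service accepts the browser request still depends on Google's CORS policy:

```typescript
import axios from 'axios';

// Sketch of the text search request described above; MYAPIKEY and origin are placeholders.
async function searchPlaces(query: string) {
  const response = await axios.get(
    'https://maps.googleapis.com/maps/api/place/textsearch/json',
    {
      params: {
        query,
        type: 'restaurant',
        location: '-21.8029127,142.9766041',
        radius: 10000,
        key: 'MYAPIKEY',
        origin: '*', // or your own domain, as suggested above
      },
    },
  );
  return response.data.results;
}
```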
Below is an excerpt from the MDN Web Docs about CORS:
For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts. For example, XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using those APIs can only request HTTP resources from the same origin the application was loaded from, unless the response from the other origin includes the right CORS headers.
So I have a React/Redux application that is currently being served through Amazon S3. We have configured S3 to render the index.html page on a 4xx error and serve our bundle.js. This allowed React Router to be bootstrapped and take over from there. Until recently this worked without issue. Now when I try to visit the page in IE or Edge I get the IE or Edge 404 page.
If I turn off the "Show friendly HTTP error messages" option in the IE browser, everything works without issue. From the research I have done, this is my theory on what is happening:
When the client hits the requested route, React Router has not been bootstrapped yet. This results in the 404 that has to be rescued by rendering the /index.html page. When the 404 is returned, IE/Edge steps in and renders its own 404 page, which prevents the index.html from ever being rendered.
I am kind of at a loss as to how to solve this issue without actually using a full-on backend. I can configure a redirect in the S3 settings to replace the root URL with the index.html, but this will break all of the sub-routes off of the main route. Is there a way to configure the application so that it works on all major browsers without actually implementing a backend?
EDIT: So I found this article, which shows how this issue can be solved using CloudFront by rendering the index on a custom error: https://medium.com/@omgwtfmarc/deploying-create-react-app-to-s3-or-cloudfront-48dae4ce0af. Does anyone know if this is possible using Cloudflare? I don't personally have access to Cloudflare, so I am not sure of the possibilities.
If it works with "Show friendly HTTP error messages" disabled, then the issue can also be fixed by making your error pages longer. By default, IE will show its own page if the error response is less than 512 bytes.
OK, so I discovered:
1) You cannot rescue this error in Cloudflare.
2) You can put CloudFront in between S3 and Cloudflare and simply implement a custom error rule.
3) Do not use the standard dropdown option when selecting the bucket for CloudFront. If you give the bucket name when configuring the error rule, CloudFront will treat it like a normal file store, not a static site, and thus it fails to intercept the error. You have to use the fully qualified URL for the bucket (its website endpoint). I hope this saves someone some time. I lost some sleep over this issue.