I have deployed a ReactJS application on GitHub Pages, but I'm getting these errors.
All of the APIs are working fine, though.
The issue is that your adblocker recognizes the word "advertisement" in the URL and thinks it's an ad.
Many applications work this way (for example, Kaspersky once blocked a page on a site I own because it contained the words "toss" and "ban"), so you should be careful about the URLs you write and take into account that visitors might be running applications that block certain flagged words.
This is a follow-up question to the one found here: CloudFront + S3 Website: "The specified key does not exist" when an implicit index document should be displayed
I am trying to host a React single-page app (static website) through S3, and I want to allow HTTPS access only (using a custom SSL certificate). I have everything configured with CloudFront, and my website shows up at the CloudFront URL just fine. But when I navigate around the app, I get the error shown in the link above.
According to that post, the error is fixed by switching from a REST endpoint to a website endpoint. But in the process, you have to make your S3 bucket public. My question: is there a way to fix this error without switching to a website endpoint and, in the process, making all my S3 content public? Is there some kind of workaround within the AWS ecosystem where I can combine private S3 content with a process that returns the HTML document instead of the XML-formatted error? According to this reference (https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff), this seems like it may not be possible, but I'm hoping someone can prove me wrong.
Thanks!
The error you're getting usually occurs when your application tries to access something it isn't privileged to.
Since you mentioned that the app loads normally but you get this error while you move around, it may be that a component is trying to load a private resource that you haven't covered in the policies you have defined.
My question: is there a way to fix this error without switching to a website endpoint and, in the process, making all my S3 content public?
Definitely! But you need to pinpoint the resource that is being accessed when you get the error. I would ask you to provide more info about that.
Lastly, if you switch to website endpoints, you won't be able to serve private S3 content; you'll have to make it all public. You can find more info about this here: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff
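For illustration, here is what such a workaround can look like while the bucket stays private behind CloudFront: a Lambda@Edge function on the origin-request event (Python runtime) that rewrites extension-less SPA routes to /index.html, so the REST origin returns the app shell instead of the XML error. This is a minimal sketch, and the "no file extension means a client-side route" rule is an assumption you'd need to check against your own asset URLs.

import os

# Hypothetical Lambda@Edge origin-request handler. Requests for real
# assets (main.js, logo.png, ...) carry a file extension; client-side
# routes like /about or /users/42 do not, so those get rewritten to
# the app shell before the request reaches the private S3 origin.
def handler(event, context):
    request = event['Records'][0]['cf']['request']
    _, ext = os.path.splitext(request['uri'])
    if not ext:
        request['uri'] = '/index.html'
    return request

A simpler variant that also keeps the bucket private is a CloudFront custom error response that maps the origin's 403/404 to /index.html with a 200 status.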
I have an app running on Google App Engine (Flask, Python 3, flexible environment) using the Identity-Aware Proxy to control access, letting in everyone in our organization (which uses G Suite). Recently we've been getting 413 errors.
When I looked at the cookies of the failing requests, I expected to see one request cookie prefixed with GCP_IAAP_AUTH_TOKEN. Instead I see 11, each one slightly different. Their combined size puts us over the 15 KB header size limit indicated in the link below, causing a 413 error.
https://cloud.google.com/appengine/docs/flexible/go/how-requests-are-handled
I don't understand why there are so many cookies, or how to make them go away. Our users all use Chrome, and many (but not all) of them intermittently run into this error. Those that don't, when their cookies are inspected, show only a couple of cookies with this prefix. See below for an example of what this collection of cookies looks like:
[Screenshot: eleven IAP cookies in a single header]
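For anyone debugging something similar, a small Flask hook like this one (a sketch added for illustration; it is not from the original setup) will log the Cookie header size and the number of GCP_IAAP_AUTH_TOKEN copies on every request, which makes the 413 threshold easy to spot:

from flask import Flask, request

app = Flask(__name__)

@app.before_request
def log_cookie_size():
    # The 413 comes from the combined header size, so log the raw
    # Cookie header length along with the number of IAP cookies.
    raw = request.headers.get('Cookie', '')
    iap = [c for c in raw.split('; ') if c.startswith('GCP_IAAP_AUTH_TOKEN')]
    app.logger.warning('Cookie header: %d bytes, %d IAP cookies',
                       len(raw), len(iap))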
Posting what ended up solving this particular instance of the problem in case something like it occurs to other people in the future.
The original IAP code for our project was written in 2018. At the time, IAP had a known issue that forced users to log in again every hour. The suggested workaround from the thread below was to use a hidden iframe.
https://issuetracker.google.com/issues/69386592?pli=1
We followed that guidance, but Google fixed the underlying issue in June of 2019. Now, following that guidance causes a gradual accumulation of session cookies in the headers. Removing the no-longer-needed offending iframe code solved the problem.
I would like to know how CDNs serve private data such as images and videos. I came across this Stack Overflow answer, but it seems to be specific to Amazon CloudFront.
As a popular example, let's say the problem in question is serving content inside Facebook. There is access-controlled content at the individual-user level and also at the group level. Besides that, there is some publicly accessible data.
All logic of what can be served to whom resides on the server!
The first request to the CDN will go to the application server and get validated for access rights. But there is a catch; keep this in mind:
Assume that first request is successful. After that, anyone will be able to access the image with that CDN URL. I tested this with a restricted user-uploaded Facebook image, and it was accessible to others via the CDN URL even after I logged out. So the image will remain accessible until the CDN cache expires.
I believe this should work: all requests first come to the main application server. After determining whether access is allowed, it either redirects to the CDN server or shows an access-denied error.
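A minimal sketch of that flow, assuming Flask; current_user() and sign_cdn_url() are hypothetical stand-ins for a real session lookup and for whatever signed-URL mechanism the CDN provides (CloudFront signed URLs, Akamai edge-auth tokens, and so on):

from flask import Flask, abort, redirect

app = Flask(__name__)

def current_user():
    # Hypothetical stand-in for the real session / auth lookup.
    return 'alice'

def user_may_view(user, image_id):
    # Placeholder for the real per-user / per-group access check.
    return user is not None

def sign_cdn_url(image_id, expires_in):
    # Hypothetical stand-in for the CDN's signed-URL mechanism.
    return 'https://cdn.example.com/images/%s?expires=%d' % (image_id, expires_in)

@app.route('/images/<image_id>')
def serve_image(image_id):
    if not user_may_view(current_user(), image_id):
        abort(403)
    # A short expiry narrows the window described in the answer above,
    # where a cached CDN URL stays usable by anyone who has it.
    return redirect(sign_cdn_url(image_id, expires_in=300))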
Each CDN works differently, so unless you specify which CDN you are looking at, it's hard to say more.
I have a fairly big application which went through a major overhaul.
The newer version uses a lot of JSONP calls, and I notice 500 server errors. Nothing is logged in the logs section to determine the cause. It happens for JS, PNG, and even Jersey (servlet) requests.
Searching SO and groups suggests that these errors are common during deployment. But they happen even hours after deployment.
BTW, the application has become slightly bigger, and in a few rare cases it even hits a deadline exception while starting instances. Sometimes an instance starts and serves within 6-10 seconds; sometimes it takes more than 75 seconds, causing a timeout for the corresponding request. I see the same behavior for warmup requests too. Nothing custom is loaded during app warmup.
I feel like you should be seeing the errors in your logs. Are you exceeding quotas or hitting deadline errors? Perhaps there is an error in your error handler, such as its file not being found, or the path to the error handler overlapping with another static file route?
To troubleshoot, I would implement custom error pages so you can determine the actual error code. I'm assuming Python, since you never specified which language you are using. Add the following to your app.yaml, create static HTML pages that give the visitor some idea of what's going on, and then report back with your findings:
error_handlers:
- file: default_error.html
- error_code: over_quota
  file: over_quota.html
- error_code: dos_api_denial
  file: dos_api_denial.html
- error_code: timeout
  file: timeout.html
If you already have custom error handlers, can you provide some of your app.yaml so we can help you?
Some 500s are not logged in your application logs; they are failures at the front end of GAE. If, for some reason, you have a spike in requests and new instances of your application cannot be started fast enough to serve them, your clients may see 500s even though those 500s never appear in your application's logs. The GAE team is working on providing visibility into those front-end logs.
I just saw this myself. I was researching logs of visitors who loaded only half of the graphics files on a page, so I clicked the same link on a blog that they used to reach our site. In my case, I saw a 500 error in the Chrome developer console for a JS file, yet the GAE logs said the file was served correctly with a 200 status. That JS file loads other images, which were not loaded. In my case, it was an HTTPS request.
It is really important for us to know our customers' experience (obviously), so I wanted to let you know that this problem still occurs. Just having it show up in the logs would be great; even attaching a warm-up error to it or something, so we know it is an unavoidable artefact of a complex server system (totally understandable). I just need to know whether I should be adding instances or something else. This error did not wait for 60 seconds, maybe 5 to 10; it is as if the round trip for the SSL handshake failed in the middle, yet the logs showed it as a success.
So can I increase any timeout for the handshake, or is that handled on the browser side?
My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a directory after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very high by itself, and it doesn't get the boost it ought to be getting from the better-known la.truxmap.com.
So a mistake I made a year ago is now haunting my page rank. I'd like to know the most painless way of resolving this dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la (and do the same for all supported cities) without trashing the current, satisfactory page rank I'm getting from la.truxmap.com?
EDIT
I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301-redirect la.truxmap.com to truxmap.com/la as suggested below.
Thanks so much!
You could send a "301 Moved Permanently" redirect to cause the Google crawler to update its references to your site, no?
See this article on 301 redirects and SEO.
You'll need to modify your app as follows:
Add www.truxmap.com as an alias for the app (you can't serve naked domains in App Engine, so just truxmap.com won't work)
Add support to your app for handling URLs of the form www.truxmap.com/something/, routing them to the same handlers as the subdomains. You'll need to make sure you've debugged any relative-path issues before continuing.
Modify your app to serve 301 (permanent) redirects from every URL under something.truxmap.com/whatever to www.truxmap.com/something/whatever, as in the sketch below.
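Here is a minimal sketch of that last step, assuming webapp2 (the framework bundled with App Engine's Python runtime); the city list and the catch-all route are hypothetical:

import webapp2

CITIES = ('la', 'ny')  # hypothetical list of supported subdomains

class SubdomainRedirect(webapp2.RequestHandler):
    def get(self, path):
        # la.truxmap.com/whatever -> www.truxmap.com/la/whatever
        city = self.request.host.split('.')[0]
        if city in CITIES:
            self.redirect('http://www.truxmap.com/%s/%s' % (city, path),
                          permanent=True)
        else:
            self.abort(404)

app = webapp2.WSGIApplication([(r'/(.*)', SubdomainRedirect)])

permanent=True makes webapp2 issue a 301 rather than a 302, which is what tells the crawler to transfer its references to the new URLs.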