My problem is that the crawler in Google Search Console can't find the sub-routes of my React app.
The URL is https://huynhsamha.github.io/crypto. The crawler can fetch and render the homepage (route /) and static files such as /robots.txt and /favicon.ico, but it can't find the sub-routes rendered by React (an SPA using Redux), such as /algorithm/sha256. For example, https://huynhsamha.github.io/crypto/algorithm/sha256 can't be found by the crawler, even though it is accessible in a browser.
Here is a screenshot of what I've tried in Google Search Console.
Can anyone explain why this happens and how to fix it? I'm using react-router-dom with react-redux. My repository is on GitHub here.
Edit 1
I've also tried the answer https://stackoverflow.com/a/53966338/8828489 to this question, but it isn't working. I've added the script to index.html (https://github.com/huynhsamha/crypto/blob/gh-pages/index.html), but Search Console still can't find the page, so it also doesn't render any error on screen.
Edit 2
I've also tried the answers https://stackoverflow.com/a/54040745/8828489 and https://stackoverflow.com/a/54048119/8828489 to this question, but they aren't working either. I've created a 404.html file and added the scripts as the answers instruct, but that didn't work.
Edit 3
I've also tried the answer https://stackoverflow.com/a/54044148/8828489 to this question by creating a simple sitemap.xml. Googlebot can find this file and discover all the URLs I defined in the sitemap, but it still cannot fetch and render the URLs mentioned above.
I found that when I opened https://huynhsamha.github.io/crypto/algorithm/sha256, I actually received a 404 response. I think your workaround for hosting an SPA on GitHub Pages using 404.html is the issue here. While we humans see your app rendered correctly in the browser, Googlebot just looks at the response code and sees that it has received a 404. You'll need a different workaround that doesn't involve using 404.html as the entry point to your app directly.
Try following this workaround by rafrex instead. It redirects the browser to index.html from 404.html while keeping the original route, and it claims that Googlebot registers this as a 301 instead of a 404. For your case, that means adding the changes below to your site; pay attention to the script below the <!-- ------Single Page Apps GitHub Pages Workaround------ --> comment:
<!-- 404.html -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Cryptography</title>
    <!-- ------Single Page Apps GitHub Pages Workaround------ -->
    <script type="text/javascript">
      // Single Page Apps for GitHub Pages
      // https://github.com/rafrex/spa-github-pages
      // Copyright (c) 2016 Rafael Pedicini, licensed under the MIT License
      // ----------------------------------------------------------------------
      // This script takes the current url and converts the path and query
      // string into just a query string, and then redirects the browser
      // to the new url with only a query string and hash fragment,
      // e.g. http://www.foo.tld/one/two?a=b&c=d#qwe, becomes
      // http://www.foo.tld/?p=/one/two&q=a=b~and~c=d#qwe
      // Note: this 404.html file must be at least 512 bytes for it to work
      // with Internet Explorer (it is currently > 512 bytes)
      // If you're creating a Project Pages site and NOT using a custom domain,
      // then set segmentCount to 1 (enterprise users may need to set it to > 1).
      // This way the code will only replace the route part of the path, and not
      // the real directory in which the app resides, for example:
      // https://username.github.io/repo-name/one/two?a=b&c=d#qwe becomes
      // https://username.github.io/repo-name/?p=/one/two&q=a=b~and~c=d#qwe
      // Otherwise, leave segmentCount as 0.
      var segmentCount = 1;
      var l = window.location;
      l.replace(
        l.protocol + '//' + l.hostname + (l.port ? ':' + l.port : '') +
        l.pathname.split('/').slice(0, 1 + segmentCount).join('/') + '/?p=/' +
        l.pathname.slice(1).split('/').slice(segmentCount).join('/').replace(/&/g, '~and~') +
        (l.search ? '&q=' + l.search.slice(1).replace(/&/g, '~and~') : '') +
        l.hash
      );
    </script>
  </head>
  <body>
  </body>
</html>
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="theme-color" content="#000000">
<meta name="description" content="Cryptography Algorithms: Secure Hash Algorithm (sha256, sha512, ...), Message Digest Algorithm (md5, ripemd160), HMAC-SHA, HMAC-MD, pbkdf2, Advanced Encryption Standard (AES), Triple Data Encryption Standard, (TripleDES, DES), RC4, Rabbit, ...">
<meta name="keywords" content="crypto, algorithms, secure hash, sha, sha512, sha256, message digest, md5, hmac-sha, aes, des, tripledes, pbkdf2, rc4, rabbit, encryption, descryption">
<meta name="author" content="huynhsamha">
<!-- Open Graph -->
<meta property="fb:app_id" content="440168923127908">
<meta property="og:url" content="https://huynhsamha.github.io/crypto">
<meta property="og:title" content="Cryptography Algorithms">
<meta property="og:description" content="Cryptography Algorithms: Secure Hash Algorithm (sha256, sha512, ...), Message Digest Algorithm (md5, ripemd160), HMAC-SHA, HMAC-MD, pbkdf2, Advanced Encryption Standard (AES), Triple Data Encryption Standard, (TripleDES, DES), RC4, Rabbit, ...">
<meta property="og:type" content="website">
<meta property="og:image" content="%PUBLIC_URL%/img/main.jpeg">
<meta property="og:site_name" content="Cryptography">
<meta property="og:locale" content="vi_VN">
<!-- Twitter Card -->
<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="#huynhsamha">
<meta name="twitter:creator" content="#huynhsamha">
<meta name="twitter:url" content="https://huynhsamha.github.io/crypto">
<meta name="twitter:title" content="Cryptography Algorithms">
<meta name="twitter:description" content="Cryptography Algorithms: Secure Hash Algorithm (sha256, sha512, ...), Message Digest Algorithm (md5, ripemd160), HMAC-SHA, HMAC-MD, pbkdf2, Advanced Encryption Standard (AES), Triple Data Encryption Standard, (TripleDES, DES), RC4, Rabbit, ...">
<meta name="twitter:image:src" content="%PUBLIC_URL%/img/main.jpeg">
<!--
manifest.json provides metadata used when your web app is added to the
homescreen on Android. See https://developers.google.com/web/fundamentals/engage-and-retain/web-app-manifest/
-->
<link rel="manifest" href="%PUBLIC_URL%/manifest.json">
<link rel="shortcut icon" href="%PUBLIC_URL%/favicon.ico">
<link rel="author" href="//github.com/huynhsamha">
<link rel="canonical" href="//huynhsamha.github.io/crypto">
<!--
Notice the use of %PUBLIC_URL% in the tags above.
It will be replaced with the URL of the `public` folder during the build.
Only files inside the `public` folder can be referenced from the HTML.
Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
work correctly both with client-side routing and a non-root public URL.
Learn how to configure a non-root public URL by running `npm run build`.
-->
<link href="//fonts.googleapis.com/css?family=Open+Sans:400,600,700&subset=vietnamese" rel="stylesheet">
<link rel="stylesheet" href="%PUBLIC_URL%/css/bootstrap.min.css">
<link rel="stylesheet" href="%PUBLIC_URL%/lib/font-awesome/css/font-awesome.min.css">
<!-- ------Single Page Apps GitHub Pages Workaround------ -->
<script type="text/javascript">
// Single Page Apps for GitHub Pages
// https://github.com/rafrex/spa-github-pages
// Copyright (c) 2016 Rafael Pedicini, licensed under the MIT License
// ----------------------------------------------------------------------
// This script checks to see if a redirect is present in the query string
// and converts it back into the correct url and adds it to the
// browser's history using window.history.replaceState(...),
// which won't cause the browser to attempt to load the new url.
// When the single page app is loaded further down in this file,
// the correct url will be waiting in the browser's history for
// the single page app to route accordingly.
(function(l) {
if (l.search) {
var q = {};
l.search.slice(1).split('&').forEach(function(v) {
var a = v.split('=');
q[a[0]] = a.slice(1).join('=').replace(/~and~/g, '&');
});
if (q.p !== undefined) {
window.history.replaceState(null, null,
l.pathname.slice(0, -1) + (q.p || '') +
(q.q ? ('?' + q.q) : '') +
l.hash
);
}
}
}(window.location))
</script>
<title>Cryptography</title>
</head>
<body>
<noscript>
You need to enable JavaScript to run this app.
</noscript>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
<script src="%PUBLIC_URL%/js/jquery-3.3.1.slim.min.js" type="text/javascript"></script>
<script src="%PUBLIC_URL%/js/popper.min.js" type="text/javascript"></script>
<script src="%PUBLIC_URL%/js/bootstrap.min.js" type="text/javascript"></script>
<!-- Google Adsense -->
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
</body>
</html>
More info and discussion on GitHub Pages' support for single-page apps here.
I poked around in your source code and don't see anything alarming; however, I found a few posts about similar issues (1) (2). The second seems particularly helpful, so I'll repeat it here. Shout out to #Zerotorescue on Reddit.
Open Google Search Console and go to Crawl -> Fetch as Google and do a fetch and render.
Add this to your site, either inside a <script> tag in your HTML file or as part of the bundle:
https://gist.github.com/mstijak/715fa2dd3f495a98386c3ebbadbabb8c
I recommend the former since that makes it easier to change if you need to make it more readable (no need to recompile your app).
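The linked gist boils down to a global error handler that prints any uncaught error onto the page so the Fetch as Google screenshot can show it. A minimal sketch along those lines (the actual gist may differ; the styling and element choice here are my own) could look like this:
// Sketch of an on-page error reporter for debugging Googlebot rendering.
window.onerror = function (message, source, lineno, colno, error) {
  var pre = document.createElement('pre');
  // Large font so the low-resolution Fetch as Google screenshot stays readable.
  pre.style.fontSize = '32px';
  pre.style.whiteSpace = 'pre-wrap';
  pre.textContent = 'Error: ' + message + '\n' +
    (source || '') + ':' + lineno + ':' + colno +
    (error && error.stack ? '\n' + error.stack : '');
  // Fall back to documentElement in case the handler fires before <body> exists.
  (document.body || document.documentElement).appendChild(pre);
};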
Push this to your site and then do another fetch and render. The error preventing Google from running your app will now show. The Search Console resolution is pretty low, so you may have to increase the font size of the error and fetch again. Don't worry, Google doesn't mind repeated calls.
You'll probably find that Google's crawler can't process your code because you're using some ES6 feature it doesn't support. You can fix this by polyfilling. I've tried a couple of things such as https://polyfill.io/, which turned out to not really support Googlebot; while it might sometimes work, it is pretty unreliable. Instead, I recommend using babel-polyfill. It will increase your bundle size a little bit for everyone, but in my experience it provides the widest browser support with minimal headache. Just turn it on and you're done.
If you're using create-react-app this is the polyfills.js file I use that you could copy:
https://github.com/WoWAnalyzer/WoWAnalyzer/blob/2c67a970f8bd9026fa816d31201c42eb860fe2a3/config/polyfills.js#L1
Notice there are a lot of comments explaining all the issues the polyfill service introduces that you won't have to deal with if you use babel-polyfill.
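For reference, wiring up babel-polyfill in a create-react-app project is just an import at the very top of the entry file. A sketch (your entry file and component names may differ):
// src/index.js — load the polyfill before anything else so older engines
// (including Googlebot's renderer) get Promise, Array.from, etc.
// Installed with: npm install --save babel-polyfill
import 'babel-polyfill';

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));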
Because a React app is a single-page app, you need a sitemap file (you can find how to make one here), a 404 page, and an anchor element for every route, like so:
<a title="This is my Route One" href="https://myreactapp/routeOne">Route One</a>
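If you are using react-router-dom, its <Link> component already renders a real <a href>, so a nav like the sketch below (route paths assumed from the question) gives the crawler anchors it can follow:
// Sketch only — route paths are taken from the question and may not match the app exactly.
import React from 'react';
import { Link } from 'react-router-dom';

const Nav = () => (
  <nav>
    {/* Renders <a href="/crypto/algorithm/sha256"> when the router basename is /crypto */}
    <Link title="SHA-256" to="/algorithm/sha256">SHA-256</Link>
    <Link title="MD5" to="/algorithm/md5">MD5</Link>
  </nav>
);

export default Nav;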
The problem is that you're using a 404 page to capture incoming traffic to routes other than /. This means those routes serve a 404 status code (you can see this if you open Network in dev tools and try to visit one of those deep URLs). Google sees a 404 status in the response header and just gives up right away. You probably noticed that the "Not Found" message in Webmaster Tools popped up super-fast.
On a normal server, you would capture those routes and return a successful status code like 200, or a 301 redirect, and Google would continue crawling. However, because you're using GitHub Pages, you need to hack your way around it.
You should be able to do this by setting up an instant redirect from that 404 template to your index template; search engines generally treat an instant meta refresh as equivalent to a 301. To do this, replace the contents of your 404.html with something like this:
<html>
  <head>
    <script>
      sessionStorage.redirect = location.href; // we'll use this later
    </script>
    <meta http-equiv="refresh" content="0;URL='/crypto'">
  </head>
  <body></body>
</html>
Just make sure the file size of that 404.html is greater than 512 bytes or IE will discard it (damn M$...).
Lastly, you'll need to make sure your index.html captures the original route. To do so, use a script like this in the head of your index.html:
<script>
  (function(){
    var redirect = sessionStorage.redirect; // remember me?
    delete sessionStorage.redirect;
    if (redirect && redirect != location.href) {
      history.replaceState(null, null, redirect);
    }
  })();
</script>
For reference, I stole this clever hack from:
https://www.smashingmagazine.com/2016/08/sghpa-single-page-app-hack-github-pages/
I also do not see anything alarming in your code (although I don't think you need the baseUrl in your <Route />; I could be wrong, and I don't think that's the issue, but it may be worth eliminating if unnecessary).
Just a guess, but looking at the Network tab as I bounced around the links, I noticed the service worker. I am, admittedly, not super savvy when it comes to service workers (yet!), but googling a bit revealed that Google's crawlers do not yet support service workers, as asserted in this article, this article, and by Google. I also noticed that if I run a Lighthouse test on one of the links I reached via in-app navigation (for instance, I click on the /algorithm tab from the nav on the homepage and then run a Lighthouse test), I get the following errors:
There were issues affecting this run of Lighthouse: Chrome extensions negatively affected this page's load performance. Try auditing the page in incognito mode or from a Chrome profile without extensions.
and, more interestingly:
Lighthouse was unable to reliably load the page you requested. Make sure you are testing the correct URL and that the server is properly responding to all requests. Status code: 404.
...despite clearly seeing the page rendered in the browser. That seems suspect. So, if the service worker is part of how navigation is happening (it seems likely, based on the registerServiceWorker.js file in your repo lol), it may be the cause of your links not being found/followed.
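If you want to rule the service worker out, the stock create-react-app registerServiceWorker.js also exports an unregister() helper, so a quick test is to swap the call in the entry file. A sketch, assuming the default CRA setup (your file names may differ):
// src/index.js (sketch, assuming the stock create-react-app service worker module)
import { unregister } from './registerServiceWorker';

// Temporarily disable the service worker to check whether it affects crawling.
// Revert to the register() call once you've ruled it out.
unregister();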
Angular loads some fonts when starting.
https://fonts.googleapis.com/css?family=Source+Sans+Pro:300,400,700
When I start node-webkit while offline, it seems that trying to load the fonts slows down the app. Can I keep these fonts available offline without loading them from the network?
Or does Angular always look online for these fonts?
Thanks!
This is not Angular's concern. In HTML5 you can use an application cache (AppCache) for that.
As noted on http://www.w3schools.com/:
HTML5 introduces application cache, which means that a web application is cached, and accessible without an internet connection.
Create a file beside your index.html named resource.appcache.
Inside resource.appcache, add links to the files you want to cache:
CACHE MANIFEST
# v1.0.0
https://fonts.googleapis.com/css?family=Source+Sans+Pro:300,400,700
Then, in your HTML, link it like this:
<!DOCTYPE HTML>
<html manifest="resource.appcache">
...
</html>
We use Angular's ui-router in an application. We're running into a problem with caching. When we deploy changes, the old HTML is still used for partial views until the user does a hard refresh. What's worse, the user has to do a hard refresh in every state in order to get that partial view to update.
We use Grunt for our build, and have Grunt tasks that version our JavaScript, CSS, images, etc., so the new version is guaranteed to be used. However, I can't find any such Grunt task that does the same thing for the HTML pages.
We've tried setting the main HTML page to no-cache, but that hasn't seemed to help; plus, we do want caching to work in general, just not after a new deploy.
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
The only thing I can think of is to write a Grunt task that versions the HTML files, and then go through our state definitions and update every templateUrl to point to the appropriate version. To make this harder, we have views that are included in other views, not defined in a state, so we'd have to loop through all our .html files as well and make the appropriate updates.
Anyone else have issues with this? Any suggestions?
UI-Router has a service called $templateFactory that loads templates via $http / $templateCache in Angular, and $templateCache has a removeAll() method.
So in your app.run stage you can do something like:
yourApp.run(['$templateCache', function($templateCache) {
  $templateCache.removeAll();
}]);
Make sure you set some flag for every deployment though, so it doesn't clear the template cache on every refresh.
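A sketch of that flag check (the BUILD_ID value and the storage key are assumptions, e.g. something injected by your Grunt build) could look like this:
// Sketch: only clear the template cache when a new build is detected.
// BUILD_ID is assumed to be injected by the build (e.g. an Angular constant or a global).
yourApp.run(['$templateCache', '$window', function($templateCache, $window) {
  var deployedBuild = $window.localStorage.getItem('appBuildId');
  if (deployedBuild !== BUILD_ID) {
    $templateCache.removeAll();
    $window.localStorage.setItem('appBuildId', BUILD_ID);
  }
}]);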
Some people prefer appending a buildId to the end of HTML file requests:
templateUrl: 'path/to/myhtml.html?buildId=' + buildID;
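If you go that route, a small helper keeps the state definitions readable. A sketch, where buildID is assumed to come from your build step:
// Sketch: append the build id to every templateUrl so a new deploy busts the cache.
// buildID is assumed to be injected by the Grunt build (e.g. written into a JS file or a global).
function versioned(url) {
  return url + '?buildId=' + buildID;
}

yourApp.config(['$stateProvider', function($stateProvider) {
  $stateProvider.state('home', {
    url: '/',
    templateUrl: versioned('path/to/myhtml.html')
  });
}]);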
I am using Serverless to deploy my backend and front end. My front end uses Create React App. I believe the problem started after I made the following change:
<img className="svg-width" src="/img/Icons/photographer-camera.svg" alt="camera icon" />
<img className="svg-width" src="/img/icons/photographer-camera.svg" alt="camera icon" />
where I changed Icons/ to icons/. After that, I get the following issue:
Uncaught SyntaxError: Unexpected token <
In my S3 bucket I navigated to img/ and verified that the directory is also lowercase for icons.
The file the syntax error complains about is main.977eb738.js, under /static/js/main.977eb738.js on my domain. But when I go to my bucket, I don't see that JS file.
The content of the file it's complaining about is actually the index.html from public/index.html in the Create React App boilerplate:
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta name="theme-color" content="#000000">
    <script src="https://maps.googleapis.com/maps/api/js?key=MY_KEY&libraries=places"></script>
    <script src="https://js.stripe.com/v3/"></script>
  </head>
  <body>
    <noscript>
      You need to enable JavaScript to run this app.
    </noscript>
    <div id="root"></div>
  </body>
</html>
One more thing to note: this works fine locally and even on mobile. I thought this could be CloudFront caching, so I waited a full day, and I still cannot get to the bottom of this error.
I ran into the same issue. I tested in incognito and the site worked fine there after doing a cache invalidation the same way Michael stated in the first comment. It looks like it is browser caching alongside the CloudFront caching.
I was able to resolve the issue by clearing browser cookies/data from the last day.
I would recommend anyone who is uploading directly to an AWS S3 bucket to clear the CloudFront edge cache.
Using the AWS CLI, this can be done with the following line:
aws cloudfront create-invalidation --distribution-id YOURID --paths "/*"
To find the CloudFront distribution ID, navigate to CloudFront in the AWS console.
Read more here: Invalidating Files
In my case, my CloudFront distribution was blocking access to all /static/* files. Creating a CF behavior that whitelisted that path resolved the issue.
I faced a similar issue. I wasn't using Serverless (AWS Lambda).
What was happening was that inside my build/index.html, the link hrefs and the script src paths were somehow failing to resolve.
So, I had <link href="/static/css/main.866f5359.chunk.css" rel="stylesheet"> and I changed it to
<link href="https://s3-us-west-2.amazonaws.com/fullthrottle-labs-react-task/static/css/main.866f5359.chunk.css" rel="stylesheet">, similarly for scripts as well.
So, instead of giving relative paths in build/index.html, giving an absolute path did the trick for me.