So basically, I bought the custom domain nexus-cheats.com and went to connect it to my GitHub Pages site at archiemourad.github.io/Nexus (/Nexus is the homepage). I entered the custom domain, set up the DNS, and the DNS checks all passed (image below). The site is built with React.js.

Now, loading nexus-cheats.com brings me to a blank page. It "seems" to be working in a way: the tab title loads, but nothing else. The console shows a bunch of cookie-related warnings, but no errors.

When I go to nexus-cheats.com/Nexus I get the default GitHub Pages 404, plus two errors. The first is the blocked load of my favicon.ico (tab logo): Content Security Policy: The page’s settings blocked the loading of a resource at https://nexus-cheats.com/favicon.ico (“img-src”). The other is a 404 from the server: GET https://nexus-cheats.com/Nexus

However, after loading nexus-cheats.com/Nexus and going back to nexus-cheats.com, I get two more errors, both failures to load files from my React app: GET https://nexus-cheats.com/Nexus/static/js/main.89be2f5c.js and GET https://nexus-cheats.com/Nexus/static/css/main.1bf437ff.css. These, I assume, are the build assets my GitHub Pages site is running on. Does anyone know the problem here, or a solution?
DNS config: my A record is set to 185.199.108.153, and my www CNAME record points to archiemourad.github.io.
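For reference, GitHub's Pages docs list four apex A records rather than one; a zone-file-style sketch of the full setup (the three extra IPs come from GitHub's documentation, not from the original post) would be:

nexus-cheats.com.       A       185.199.108.153
nexus-cheats.com.       A       185.199.109.153
nexus-cheats.com.       A       185.199.110.153
nexus-cheats.com.       A       185.199.111.153
www.nexus-cheats.com.   CNAME   archiemourad.github.io.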
----edit: solved, but I kept all of my original text for posterity----
I'm getting this same exact error and situation right now.
I don't have answers yet, but it looks like React is resolving %PUBLIC_URL% to the wrong place now. This comes from my /public/index.html file:
<!DOCTYPE html>
<html lang="en">
<head>
...
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
...
<link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
...
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
</body>
</html>
For the record, I removed the irrelevant code and replaced it with the ellipses.
Locally, my network tab in my browser's developer tools shows:
http://localhost:3000/{project-name}/manifest.json
But when I host it exactly the way you have (and it was working properly before I added the custom domain), it returns:
{my-custom-domain}/{project-name}/manifest.json
The /public/index.html file has loaded, but it can't access the other files it calls correctly.
It seems that index.html is adding an extra segment to the address via %PUBLIC_URL%: it inserts the {project-name} between the custom domain and the files it is trying to access. My URL should read:
{my-custom-domain}/manifest.json
I can edit the values in the developer tools to remove that segment, and then the files load. But this still doesn't solve the underlying issue.
---------edit: solved the issue---------
I am unsure whether the above changes are necessary, but I did remove the %PUBLIC_URL% references from my /public/index.html file.
Now for the good part - the fix!
In your package.json, be sure to change the:
"homepage": "your-github-url"
to:
"homepage": "your-fancy-new-custom-domain"
Save it, then run the deploy script included in most of the resources I found (like this: https://create-react-app.dev/docs/deployment/#step-2-install-gh-pages-and-add-deploy-to-scripts-in-packagejson):
npm run deploy
This will build your project, push it to GitHub, and deploy it. Then just check that your GitHub Pages settings still match the ones in the image you originally posted, and it should work!
My problem is that Crawl in Google Search Console can't find the sub-routes of my React app.
The URL is https://huynhsamha.github.io/crypto. The crawler can fetch and render the homepage (route /) and static files such as /robots.txt and /favicon.ico, but it can't find the sub-routes that are rendered by React (it's an SPA using Redux), such as /algorithm/sha256. For example, https://huynhsamha.github.io/crypto/algorithm/sha256 can't be found by the crawler even though it is accessible in the browser.
This is a screenshot of what I've tried in Google Search Console.
Can anyone explain why this happens and how to fix it? I'm using react-router-dom with react-redux. My repository is on GitHub here.
Edit 1
I've also tried the answer https://stackoverflow.com/a/53966338/8828489 in this question, but it didn't work. I added the script to index.html (https://github.com/huynhsamha/crypto/blob/gh-pages/index.html), but Search Console still can't find the sub-routes, and it doesn't render any error on screen either.
Edit 2
I've also tried the answers https://stackoverflow.com/a/54040745/8828489 and https://stackoverflow.com/a/54048119/8828489 in this question, but they didn't work either. I created the 404.html file and added the scripts as those answers instruct, but it still didn't work.
Edit 3
I've also tried the answer https://stackoverflow.com/a/54044148/8828489 in this question by creating a simple sitemap.xml. Googlebot can find the file and discovers all the URLs defined in the sitemap, but it still cannot fetch and render the URLs mentioned above.
I found that when I opened https://huynhsamha.github.io/crypto/algorithm/sha256, I actually received a 404 as the response. I think your workaround for hosting an SPA on GitHub using the 404.html is the issue here. While we humans see your app served correctly in the browser, Googlebot doesn't care: it just looks at the response code, sees the 404, and gives up. You'll need a different workaround, one that doesn't involve using the 404.html as the entry point to your app directly.
Try following this workaround by rafrex instead. It redirects the browser to index.html via the 404.html while keeping the original route, and it claims that Googlebot registers this as a 301 instead of a 404. For your case that means adding the changes below to your site; pay attention to the script below the <!-- ------Single Page Apps GitHub Pages Workaround------ --> comment:
<!-- 404.html -->
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Cryptography</title>
<!-- ------Single Page Apps GitHub Pages Workaround------ -->
<script type="text/javascript">
// Single Page Apps for GitHub Pages
// https://github.com/rafrex/spa-github-pages
// Copyright (c) 2016 Rafael Pedicini, licensed under the MIT License
// ----------------------------------------------------------------------
// This script takes the current url and converts the path and query
// string into just a query string, and then redirects the browser
// to the new url with only a query string and hash fragment,
// e.g. http://www.foo.tld/one/two?a=b&c=d#qwe, becomes
// http://www.foo.tld/?p=/one/two&q=a=b~and~c=d#qwe
// Note: this 404.html file must be at least 512 bytes for it to work
// with Internet Explorer (it is currently > 512 bytes)
// If you're creating a Project Pages site and NOT using a custom domain,
// then set segmentCount to 1 (enterprise users may need to set it to > 1).
// This way the code will only replace the route part of the path, and not
// the real directory in which the app resides, for example:
// https://username.github.io/repo-name/one/two?a=b&c=d#qwe becomes
// https://username.github.io/repo-name/?p=/one/two&q=a=b~and~c=d#qwe
// Otherwise, leave segmentCount as 0.
var segmentCount = 1;
var l = window.location;
l.replace(
l.protocol + '//' + l.hostname + (l.port ? ':' + l.port : '') +
l.pathname.split('/').slice(0, 1 + segmentCount).join('/') + '/?p=/' +
l.pathname.slice(1).split('/').slice(segmentCount).join('/').replace(/&/g, '~and~') +
(l.search ? '&q=' + l.search.slice(1).replace(/&/g, '~and~') : '') +
l.hash
);
</script>
</head>
<body>
</body>
</html>
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="theme-color" content="#000000">
<meta name="description" content="Cryptography Algorithms: Secure Hash Algorithm (sha256, sha512, ...), Message Digest Algorithm (md5, ripemd160), HMAC-SHA, HMAC-MD, pbkdf2, Advanced Encryption Standard (AES), Triple Data Encryption Standard, (TripleDES, DES), RC4, Rabbit, ...">
<meta name="keywords" content="crypto, algorithms, secure hash, sha, sha512, sha256, message digest, md5, hmac-sha, aes, des, tripledes, pbkdf2, rc4, rabbit, encryption, descryption">
<meta name="author" content="huynhsamha">
<!-- Open Graph -->
<meta property="fb:app_id" content="440168923127908">
<meta property="og:url" content="https://huynhsamha.github.io/crypto">
<meta property="og:title" content="Cryptography Algorithms">
<meta property="og:description" content="Cryptography Algorithms: Secure Hash Algorithm (sha256, sha512, ...), Message Digest Algorithm (md5, ripemd160), HMAC-SHA, HMAC-MD, pbkdf2, Advanced Encryption Standard (AES), Triple Data Encryption Standard, (TripleDES, DES), RC4, Rabbit, ...">
<meta property="og:type" content="website">
<meta property="og:image" content="%PUBLIC_URL%/img/main.jpeg">
<meta property="og:site_name" content="Cryptography">
<meta property="og:locale" content="vi_VN">
<!-- Twitter Card -->
<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="#huynhsamha">
<meta name="twitter:creator" content="#huynhsamha">
<meta name="twitter:url" content="https://huynhsamha.github.io/crypto">
<meta name="twitter:title" content="Cryptography Algorithms">
<meta name="twitter:description" content="Cryptography Algorithms: Secure Hash Algorithm (sha256, sha512, ...), Message Digest Algorithm (md5, ripemd160), HMAC-SHA, HMAC-MD, pbkdf2, Advanced Encryption Standard (AES), Triple Data Encryption Standard, (TripleDES, DES), RC4, Rabbit, ...">
<meta name="twitter:image:src" content="%PUBLIC_URL%/img/main.jpeg">
<!--
manifest.json provides metadata used when your web app is added to the
homescreen on Android. See https://developers.google.com/web/fundamentals/engage-and-retain/web-app-manifest/
-->
<link rel="manifest" href="%PUBLIC_URL%/manifest.json">
<link rel="shortcut icon" href="%PUBLIC_URL%/favicon.ico">
<link rel="author" href="//github.com/huynhsamha">
<link rel="canonical" href="//huynhsamha.github.io/crypto">
<!--
Notice the use of %PUBLIC_URL% in the tags above.
It will be replaced with the URL of the `public` folder during the build.
Only files inside the `public` folder can be referenced from the HTML.
Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
work correctly both with client-side routing and a non-root public URL.
Learn how to configure a non-root public URL by running `npm run build`.
-->
<link href="//fonts.googleapis.com/css?family=Open+Sans:400,600,700&subset=vietnamese" rel="stylesheet">
<link rel="stylesheet" href="%PUBLIC_URL%/css/bootstrap.min.css">
<link rel="stylesheet" href="%PUBLIC_URL%/lib/font-awesome/css/font-awesome.min.css">
<!-- ------Single Page Apps GitHub Pages Workaround------ -->
<script type="text/javascript">
// Single Page Apps for GitHub Pages
// https://github.com/rafrex/spa-github-pages
// Copyright (c) 2016 Rafael Pedicini, licensed under the MIT License
// ----------------------------------------------------------------------
// This script checks to see if a redirect is present in the query string
// and converts it back into the correct url and adds it to the
// browser's history using window.history.replaceState(...),
// which won't cause the browser to attempt to load the new url.
// When the single page app is loaded further down in this file,
// the correct url will be waiting in the browser's history for
// the single page app to route accordingly.
(function(l) {
if (l.search) {
var q = {};
l.search.slice(1).split('&').forEach(function(v) {
var a = v.split('=');
q[a[0]] = a.slice(1).join('=').replace(/~and~/g, '&');
});
if (q.p !== undefined) {
window.history.replaceState(null, null,
l.pathname.slice(0, -1) + (q.p || '') +
(q.q ? ('?' + q.q) : '') +
l.hash
);
}
}
}(window.location))
</script>
<title>Cryptography</title>
</head>
<body>
<noscript>
You need to enable JavaScript to run this app.
</noscript>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
<script src="%PUBLIC_URL%/js/jquery-3.3.1.slim.min.js" type="text/javascript"></script>
<script src="%PUBLIC_URL%/js/popper.min.js" type="text/javascript"></script>
<script src="%PUBLIC_URL%/js/bootstrap.min.js" type="text/javascript"></script>
<!-- Google Adsense -->
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
</body>
</html>
More info and discussion on GitHub Pages' support for single-page apps here.
I poked around in your source code and don't see anything alarming; however, I found a few posts about similar issues (1) (2). The second seems particularly helpful, so I'll repeat it here. Shout out to #Zerotorescue on Reddit.
Open Google Search Console and go to Crawl -> Fetch as Google and do a fetch and render.
Add this to your site, either directly in your HTML file or as part of the bundle:
https://gist.github.com/mstijak/715fa2dd3f495a98386c3ebbadbabb8c
I recommend the former since that makes it easier to change if you need to make it more readable (no need to recompile your app).
Push this to your site and then do another fetch and render. The error preventing Google from running your app will now show. The Search Console resolution is pretty low, so you may have to increase the font size of the error and fetch again. Don't worry, Google doesn't mind repeated calls.
You'll probably find that Google's crawler can't process your code because you're using some ES6 feature it doesn't support. You can fix this by polyfilling. I've tried a couple of things, such as https://polyfill.io/, which turned out to not really support Googlebot; while it might sometimes work, it is pretty unreliable. Instead I recommend babel-polyfill. It will increase your bundle size a little for everyone, but in my experience it provides the widest browser support with minimal headache. Just turn it on and you're done.
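Turning babel-polyfill on is just an install plus a single import at the very top of your entry point; a sketch, assuming a stock create-react-app src/index.js:

npm install --save babel-polyfill

// src/index.js (stock create-react-app entry point, assumed) -- the polyfill import must come first
import 'babel-polyfill';
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));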
If you're using create-react-app this is the polyfills.js file I use that you could copy:
https://github.com/WoWAnalyzer/WoWAnalyzer/blob/2c67a970f8bd9026fa816d31201c42eb860fe2a3/config/polyfills.js#L1
Notice there are a lot of comments explaining all the issues the polyfill service introduces that you won't have to deal with if you use babel-polyfill.
Because a React app is a single-page app, you need a sitemap file (you can find out how to make one here), a 404 page, and an anchor for every route, like:
<a title="This is my Route One" href="https://myreactapp/routeOne">Route One</a>
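A minimal sitemap.xml following the sitemaps.org protocol could look like this (a sketch; the URLs are the homepage and one sub-route from the question):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://huynhsamha.github.io/crypto/</loc>
  </url>
  <url>
    <loc>https://huynhsamha.github.io/crypto/algorithm/sha256</loc>
  </url>
</urlset>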
The problem is that you're using a 404 page to capture incoming traffic to routes other than /. This means those routes serve a 404 status code (you can see this if you open Network in dev tools and try to visit one of those deep URLs). Google sees a 404 status in the response header and just gives up right away. You probably noticed that the "Not Found" message in Webmaster Tools popped up super-fast.
On a normal server, you would capture those routes and return a successful status code like 200 or 301 and Google would continue crawling. However, because you're using GitHub pages, you need to hack your way around it.
You should be able to do this by setting up an instant redirect from that 404 template to your index template. Browsers interpret instant redirects as 301s. To do this, replace the contents of your 404.html with something like this:
<html>
<head>
<script>
sessionStorage.redirect = location.href; // we'll use this later
</script>
<meta http-equiv="refresh" content="0;URL='/crypto'">
</head>
<body></body>
</html>
Just make sure the file size of that 404.html is greater than 512 bytes or IE will discard it (damn M$...).
Lastly, you'll need to make sure your index.html captures the original route. To do so, use a script like this in the head of your index.html:
<script>
(function(){
var redirect = sessionStorage.redirect; // remember me?
delete sessionStorage.redirect;
if (redirect && redirect != location.href) {
history.replaceState(null, null, redirect);
}
})();
</script>
For reference, I stole this clever hack from:
https://www.smashingmagazine.com/2016/08/sghpa-single-page-app-hack-github-pages/
I also do not see anything alarming in your code (although I don't think you need the baseUrl in your <Route />; I could be wrong, and I don't think that's the issue, but it may be worth removing if it's unnecessary).
Just a guess, but looking at the Network tab as I bounced around the links, I noticed the service worker. I am, admittedly, not super savvy when it comes to service workers (yet!), but a bit of googling revealed that Google's crawlers do not yet support service workers, as asserted in this article, this article, and by Google. I also noticed that if I run a Lighthouse test on one of the links I reached via in-app navigation (for instance, I click on the /algorithm tab from the nav on the homepage and then run a Lighthouse test), I get the following errors:
There were issues affecting this run of Lighthouse: Chrome extensions
negatively affected this page's load performance. Try auditing the
page in incognito mode or from a Chrome profile without extensions.
and more interesting:
Lighthouse was unable to reliably load the page you requested. Make
sure you are testing the correct URL and that the server is properly
responding to all requests. Status code: 404.
...despite clearly seeing the page rendered in the browser. That seems suspect. So, if the service worker is part of how navigation is happening (it likely is, based on the registerServiceWorker.js file in your repo), it may be the cause of your links not being found/followed.
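If you want to rule the service worker out, the stock create-react-app registerServiceWorker.js also exports an unregister function, so a quick test could look like this (a sketch, assuming the boilerplate file hasn't been modified):

// src/index.js
import { unregister } from './registerServiceWorker';

// ...existing ReactDOM.render(...) call stays as-is...
unregister(); // call this instead of registerServiceWorker()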
Locally, I can view my site easily and all theme changes.
However, once the site is deployed to Netlify, all I am greeted with is a blank white screen.
I tried toggling baseURL configurations to no avail. However, if I rename my _index.md file to index.md in the /content folder the contents of that file will display when deployed via Netlify.
There are no build errors in Netlify or locally.
My working files can be seen at the following GitHub repository.
Explanation Summary: The _index.md file at the root of your content directory is using your themes/gwynn/layouts/index.html template, but you are not telling it to display any content or data from your frontmatter.
themes/gwynn/layouts/index.html code:
<!DOCTYPE html>
<html>
<body>
{{ range first 10 .Data.Pages }}
<h1><a href={{ .Permalink }}>{{ .Title }}</a></h1>
{{ end }}
</body>
</html>
Nothing is showing up because you are only listing the pages in your site, and both pages under post have draft: true set in their frontmatter.
Solution to show Home page Title:
themes/gwynn/layouts/index.html
<!DOCTYPE html>
<html>
<head>
<title>{{ .Title }}</title>
</head>
<body>
<h1>{{ .Title }}</h1>
{{ range first 10 .Data.Pages }}
<h1><a href={{ .Permalink }}>{{ .Title }}</a></h1>
{{ end }}
</body>
</html>
To show the page links, you can set draft: false within content/post/first.md and content/post/second.md
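That is, the frontmatter of each post would end up something like this (a sketch; keep whatever title the post already has):

---
title: "First Post"
draft: false
---

Alternatively, hugo server -D renders drafts locally, but the deployed build will still skip them unless you pass --buildDrafts.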
Netlify doesn’t automatically know that you have a Hugo site. In the app.netlify.com interface, navigate to Site settings, then “Build & deploy” in the left-hand navigation.
In the Build command field, type hugo, and in the Publish directory, type public. I think this should be enough.
(If not, here’s a more detailed config from the Hugo site. That strikes me as too complicated, though.)
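If you'd rather keep those two settings in the repository instead of the UI, a minimal netlify.toml at the project root should be equivalent (a sketch):

[build]
  command = "hugo"
  publish = "public"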
Going further, and this is optional, I prefer to run the hugo command locally, which generates the whole site, and the resulting public directory becomes part of my repo. In that case, I leave the above Build command: field blank, but keep the Publish directory: public field.
I am trying to run the google-cdn plugin via Gulp (gulp-google-cdn) to convert Bower references in my HTML file into their CDN equivalents. gulp-google-cdn does not do anything, and enabling DEBUG shows: google-cdn Could not find satisfying version for angular-material ^1.0.5
My task (I use a subdirectory with tasks per file):
gulp.task('HTML:Release', function() {
return gulp.src('../src/*.html')
.pipe(googleCdn(require('../bower.json')))
.pipe(gulp.dest('../dist/') )
;
});
HTML:
<!DOCTYPE html>
<html ng-app="OntarioDarts" ng-cloak lang="en">
<head>
</head>
<body layout="row" ng-cloak>
<div layout="column" class="relative" layout-fill role="main">
<md-content flex md-scroll-y>
<ng-view></ng-view>
</md-content>
</div>
</body>
<!-- Load JavaScript Last for Speed. Load from CDN for cache speed -->
<!-- Angular JS -->
<script src="bower_components/angular/angular.js"></script>
<script src="bower_components/angular-material/angular-material.min.js"></script>
<script src="bower_components/angular-material-icons/angular-material-icons.min.js"></script>
The distribution file does not point Angular to the CDN; it still tries to use bower_components, even though the plugin did not complain that the files were not found.
One problem I found is that I have Angular set at ^1.5.0 in my bower.json. However, I was only using the default Google CDN, which does not currently have the 1.5.0 available. I changed the version in the bower.json file to be ^1.4.0, and then the file was changed to use the CDN with version 1.4.7.
The problem though is that the reference did not get changed to HTTPS://, but was left simply as src="//ajax.googleapis.com/ajax/libs/angularjs/1.4.7/angular.min.js"
Gulp-google-cdn does not do anything, and enabling the DEBUG, shows: google-cdn Could not find satisfying version for angular-material ^1.0.5
That's because the newest version available from the Google CDN is 1.0.4.
The problem though is that the reference did not get changed to HTTPS://, but was left simply as src="//ajax.googleapis.com/ajax/libs/angularjs/1.4.7/angular.min.js"
That's not necessarily a problem. That's a protocol-relative URL. If your page is served over HTTP, angular.min.js is fetched over HTTP. If your page is served over HTTPS, angular.min.js is fetched over HTTPS.
Unless you absolutely need angular.min.js to always be fetched over HTTPS you can just leave it like that.
EDIT: ... except for when you're trying to open a local HTML file in a browser. Then your protocol is file:// and the protocol relative URL will refer to your local file system. Which of course leads nowhere.
One way of fixing this would be to serve your HTML files through a locally running web server (e.g. with gulp-webserver). When your HTML pages come from e.g. http://localhost:8000/, all the protocol-relative URLs will be served over http:// as well.
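For example, a minimal gulp-webserver task could look like this (a sketch; ../dist matches the dest used in your task above):

var gulp = require('gulp');
var webserver = require('gulp-webserver');

// serve the built files so protocol-relative URLs resolve over http://
gulp.task('serve', function() {
  return gulp.src('../dist')
    .pipe(webserver({
      port: 8000,
      open: true // opens http://localhost:8000/ in the browser
    }));
});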
If you just want all the CDN URLs to be prefixed with https:// instead, here's a way to wrap the google-cdn-data object to achieve this:
var gulp = require('gulp');
var googleCdn = require('gulp-google-cdn');
var jp = require('jsonpath');
function protocol(proto, cdn) {
jp.apply(cdn, '$.*.url', function(url) {
return function(version) {
return proto + url(version);
};
});
return cdn;
}
gulp.task('HTML:Release', function() {
return gulp.src('../src/*.html')
.pipe(googleCdn(require('./bower.json'), {
cdn: protocol('https:', require('google-cdn-data'))
}))
.pipe(gulp.dest('../dist/') );
});
You'll need to run npm install --save-dev google-cdn-data jsonpath for this to work.
I am using Serverless to deploy my backend and front end. My front end uses create-react-app. I believe the problem started after I made the following change:
<img className="svg-width" src="/img/Icons/photographer-camera.svg" alt="camera icon" />
<img className="svg-width" src="/img/icons/photographer-camera.svg" alt="camera icon" />
where I changed Icons/ to icons/. Now I get the following issue:
Uncaught SyntaxError: Unexpected token <
In my S3 bucket I navigated to img/ and verified that the directory is also lowercase (icons).
The file named in the syntax error is main.977eb738.js, served from /static/js/main.977eb738.js on my domain, but when I go to my bucket I don't see that JS file.
The content the error is actually complaining about is the index.html from public/index.html in the create-react-app boilerplate:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="theme-color" content="#000000">
<script src="https://maps.googleapis.com/maps/api/js?key=MY_KEY&libraries=places"></script>
<script src="https://js.stripe.com/v3/"></script>
</head>
<body>
<noscript>
You need to enable JavaScript to run this app.
</noscript>
<div id="root"></div>
</body>
</html>
One more thing to note: this works fine locally and even on mobile. I thought this could be CloudFront caching, so I waited a full day, and I still cannot get to the bottom of this error.
I ran into the same issue. I tested in incognito and the site worked fine there after doing a cache invalidation the same way Michael stated in the first comment. It looks like it is browser caching alongside the CloudFront caching.
I was able to resolve the issue by clearing browser cookies/data from the last day.
I would recommend that anyone uploading directly to an AWS S3 bucket also clear the CloudFront edge cache.
Using the AWS CLI, this can be done with the following command:
aws cloudfront create-invalidation --distribution-id YOURID --paths "/*"
To find your CloudFront distribution ID, navigate to CloudFront in the AWS console.
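You can also list the distributions from the CLI; the --query expression below is just JMESPath to trim the output down to the ID and domain:

aws cloudfront list-distributions --query "DistributionList.Items[].{Id:Id,Domain:DomainName}" --output table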
Read more here: Invalidating Files
In my case, my CloudFront distribution was blocking access to all /static/* files. Creating a CF behavior that whitelisted that path resolved the issue.
I faced a similar issue, though I wasn't using Serverless (AWS Lambda).
What was happening was that, inside my build/index.html, the requests were failing on the <link> hrefs and the <script> src paths.
So, I had <link href="/static/css/main.866f5359.chunk.css" rel="stylesheet"> and I changed it to
<link href="https://s3-us-west-2.amazonaws.com/fullthrottle-labs-react-task/static/css/main.866f5359.chunk.css" rel="stylesheet">, similarly for scripts as well.
So, instead of root-relative paths in build/index.html, giving absolute paths did the trick for me.
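If you'd rather not hand-edit build/index.html after every build, the same effect should be achievable through create-react-app's homepage field (or a PUBLIC_URL environment variable), which prefixes the generated asset URLs at build time. A sketch using the bucket URL from this answer:

{
  "homepage": "https://s3-us-west-2.amazonaws.com/fullthrottle-labs-react-task"
}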