Hugo prefixes CDN URLs with localhost:[port] in local dev

I am very new to Hugo (less than a week).
I'm trying to use a CDN in local development, but the Hugo server prefixes CDN URLs with the base URL. I first tried putting the URL directly in the tag; then, while trying to solve this same problem, I found the method below.
In config.toml I have:
...
[params]
jqueryCDN = "https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"
...
Then, in layouts/home.html I have:
<script type="text/javascript" src=”{{ .Site.Params.jqueryCDN | safeURL }}”></script>
What gets rendered is the following (I have a test site already running on the default port, hence the unusual port):
http://localhost:35349/%E2%80%9Dhttps:/ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js%E2%80%9D
How do I get Hugo to render just the external URL?
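Note that the %E2%80%9D in the rendered URL is the percent-encoding of a typographic closing quote (”), which suggests the src attribute in the template was typed with curly quotes rather than straight ASCII quotes. A likely fix is simply:
<script type="text/javascript" src="{{ .Site.Params.jqueryCDN | safeURL }}"></script>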

Related

If I deploy my react app, will the link to a localhost still be valid, or will I also need to host the localhost app?

Basically what the title says. I have a Strapi app at localhost:1337 which I will fetch data from in React. I'm not very sure how localhost works, and therefore I want to know if the path will still be valid when I deploy the React app.
When you deploy your React app on any server, your URL http://localhost:1337/Dashboard
will change. In it, http://localhost:1337/ is the base URL (the domain name), which will change to that of the new server.
Your code will keep the same hard-coded value for that API, so you will have to rebuild your code each time you change your API (most people use low-cost hosting providers which only allow port 80 to be used). My advice is to move your endpoint (the backend URL) out of your code and into a JSON file, a .env file, etc. But what will work on most platforms is a variable defined in your public/index.html (not a best practice, but it will work), e.g.:
<html>
<head>
<!-- you will add this tag here; it will contain your backend URL -->
<script>
var backendUrl = "http://....";
</script>
<!-- some other code here -->
</head>
<body>
<div id="root"></div>
</body>
</html>
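A minimal sketch of how the React side might then read that global (assuming the backendUrl variable from the snippet above; the endpoint path is illustrative):

// public/index.html defined `var backendUrl = ...`, so it is available on
// window; fall back to localhost for local development.
const backendUrl = window.backendUrl || "http://localhost:1337";

fetch(backendUrl + "/Dashboard")
  .then((response) => response.json())
  .then((data) => console.log(data));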

Making React aware of custom-configured CDN headers

Our React app is served from static hosting using S3 and CloudFront.
We configured S3 and CloudFront to add CloudFront-Viewer-Country to the response headers of each request made to resources there. So, for instance, when our index.html makes a call to get the .js bundle from CloudFront, the returned headers include cloudfront-viewer-country: US (in my case).
My goal is to have the React app "wake up to life" already knowing the location of its user. I realize I can probably add some JavaScript to the index.html to keep/store it somehow, so that the React root component can pick up on it and pass it on to wherever it needs to be (probably the Redux state). But then I ask myself: how do I tap into the response headers received when the <script> tag finishes loading the bundle, in order to extract the custom header?
The index.html is pretty straightforward. Its body looks like this:
<body>
<div id="root"></div>
<script type="text/javascript" src="/myBundle.ac9cf87295a8f1239929.js"></script>
</body>
What do you recommend?
It isn't possible to access the response headers from the page load or script load. You will have to make a separate request to access the headers.
You could also use the browser's locale (navigator.languages) if you need this information for localization.
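A rough sketch of that separate request (same-origin here, so the header is readable; for cross-origin requests CloudFront would also need to expose it via Access-Control-Expose-Headers):

// Request any resource behind CloudFront and read the custom header from
// the response; a HEAD request avoids downloading the body again.
fetch("/myBundle.ac9cf87295a8f1239929.js", { method: "HEAD" })
  .then((response) => {
    const country = response.headers.get("cloudfront-viewer-country");
    console.log("Viewer country:", country); // e.g. "US"
  });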

How to deploy React on IIS?

When working on localhost, the app is assumed to be at the root of the local dev server:
localhost:50001/index.html
But when deploying to a remote IIS server, there are other web apps running there, and each individual app must be created as an "Application" (IIS terminology).
So, for example, the 'Default Web Site' is on port 80; other apps (also on port 80) have their own AppName:
MYSERVERNAME/App1/
MYSERVERNAME/App2/
MYSERVERNAME/MyReactApp/
So now, to get to my React app, I have an additional path:
http://MYSERVERNAME/MyReactApp/index.html
The index.html produced by 'npm run build' contains absolute paths.
To make my deployment work, I manually edited the index.html to contain relative paths
So for example, instead of:
<script type="text/javascript" src="/static/js/main.d17bed58.js"></script>
I added a dot (.) in front of all paths to get:
<script type="text/javascript" src="./static/js/main.d17bed58.js"></script>
This mostly works, and all scripts load initially. BUT I am not happy with the result, because any links and client-side routes (e.g. from react-router) that I click within the app revert to assuming the app is hosted at the root of the web server, i.e.:
http://MYSERVERNAME/
http://MYSERVERNAME/Home
http://MYSERVERNAME/MyWorkOrder/
http://MYSERVERNAME/MyWorkOrder/123456
Furthermore, if I type any of the links directly into the browser (or refresh the page), it will obviously fail.
To recap: I need to maintain the "true" path http://MYSERVERNAME/MyReactApp at all times when deploying to IIS. How do I do that?
From the docs:
Building for Relative Paths
By default, Create React App produces a build assuming your app is hosted at the server root.
To override this, specify the homepage in your package.json, for example:
"homepage": "http://mywebsite.com/relativepath",
This will let Create React App correctly infer the root path to use in the generated HTML file.
For example:
<script type="text/javascript" src="/static/js/main.xyz.js"></script>
will become:
<script type="text/javascript" src="/relativepath/static/js/main.xyz.js"></script>
If you are using react-router@^4, you can root <Link>s using the basename prop on any <Router>.
For example:
<BrowserRouter basename="/calendar"/>
<Link to="/today"/> // renders <a href="/calendar/today">
What I ended up doing:
1) After npm run build, change the absolute paths to relative paths within index.html (e.g. href="./etc..." and src="./etc...").
2) Use basename in <BrowserRouter basename="/MyReactApp"/> (as per the answer by @mehamasum; see the sketch below).
3) Finally, when a page refresh hits a non-existent server route, you need to redirect what would otherwise be a 404 to index.html, and let the client-side react-router library do its job. How? In IIS Manager, go to the IIS section, open Error Pages, double-click to edit the 404 status code, and in the 'Edit Custom Error Page' dialog choose 'Execute a URL on this site' and enter the absolute path /MyReactApp/index.html.
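A minimal sketch of how step 2 might look in the app's entry point (assuming a Create React App-style index.js and react-router v4; the file and component names are illustrative):

import React from "react";
import ReactDOM from "react-dom";
import { BrowserRouter } from "react-router-dom";
import App from "./App";

// basename roots every client-side route under the IIS Application path,
// so <Link to="/Home"> renders as /MyReactApp/Home.
ReactDOM.render(
  <BrowserRouter basename="/MyReactApp">
    <App />
  </BrowserRouter>,
  document.getElementById("root")
);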

Angular not working under SSL via Nginx

My nginx default file: https://github.com/NatuMyers/nginxSSL-setup/blob/master/default
I used that, and now my Node app doesn't allow Angular to work: it just serves the static index page, but the routing etc. doesn't work. When it was straight HTTP, it worked.
In the folder with the Node app I have app.js, node_modules, and /public, among other things.
In public I have the Angular packages, index.html, and partials.
When I call node app.js, it just serves index.html without the functionality. Here is a complete GitHub repo of the setup, minus the nginx content: https://github.com/NatuMyers/A.M.E.N.SQL-Stack
This is Ubuntu, with Nginx, on the DigitalOcean LAMP stack.
You are including angular.js with
<script src="http://code.angularjs.org/1.2.13/angular.js"></script>
When your browser loads assets for an HTTPS website, it will block scripts served over plain HTTP (mixed content). So you should change your link to have no protocol, just //, and the browser will use whatever protocol the rest of the page was loaded with:
<script src="//code.angularjs.org/1.2.13/angular.js"></script>

How to redirect crawlers' requests to pre-rendered pages when using Amazon S3?

Problem
I have a static SPA built with Angular and hosted on Amazon S3. I'm trying to make my pre-rendered pages accessible to crawlers, but I can't redirect the crawlers' requests, since Amazon S3 does not offer a URL rewrite option and its redirect rules are limited.
What I have
I've added the following meta-tag to the <head> of my index.html page:
<meta name="fragment" content="!">
Also, my SPA is using pretty URLs (without the hash # sign) with HTML5 push state.
With this setup, when a crawler finds my http://mywebsite.com/about link, it will make a GET request to http://mywebsite.com/about?_escaped_fragment_=. This is a pattern defined by Google and followed by other crawlers.
What I need is to answer this request with a pre-rendered version of the about.html file. I've already done this pre-rendering with PhantomJS, but I can't serve the correct file to crawlers because Amazon S3 does not have a rewrite rule.
On an nginx server, the solution would be to add a rewrite rule like:
location / {
    if ($args ~ "_escaped_fragment_=") {
        rewrite ^/(.*)$ /snapshots/$1.html break;
    }
}
But in Amazon S3, I'm limited by their redirect rules based on KeyPrefixes and HttpErrorCodes. The ?_escaped_fragment_= is not a KeyPrefix, since it appears at the end of the URL, and it gives no HTTP error since Angular will ignore it.
What I've tried
I started out trying dynamic templates with ngRoute, but later I realized that I can't solve this with any Angular solution, since I'm targeting crawlers that can't execute JavaScript.
With Amazon S3, I have to stick with their redirect rules.
I've managed to get it working with an ugly workaround. If I create a new rule for each page, I'm done:
<RoutingRules>
  <!-- each page needs its own rule -->
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>about?_escaped_fragment_=</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <HostName>mywebsite.com</HostName>
      <ReplaceKeyPrefixWith>snapshots/about.html</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
As you can see in this solution, each page needs its own rule. Since Amazon limits you to only 50 redirect rules, this is not a viable solution.
Another solution would be to forget about pretty URLs and use hashbangs. With these, my link would be http://mywebsite.com/#!about, and crawlers would request it as http://mywebsite.com/?_escaped_fragment_=about. Since the URL would start with ?_escaped_fragment_=, it could be captured with a KeyPrefix, and just one redirect rule would be enough. However, I don't want to use ugly URLs.
So, how can I have a static SPA in Amazon S3 and be SEO-friendly?
Short Answer
Amazon S3 (and Amazon CloudFront) offers no rewrite rules and only limited redirect options. However, you don't need to redirect or rewrite your URL requests. Just pre-render all HTML files and upload them following your website's paths.
Since a user browsing the webpage has JavaScript enabled, Angular will be triggered and will take control over the page, which results in a re-rendering of the template. With this, all Angular functionality will be available for this user.
Regarding the crawler, the pre-rendered page will be enough.
Example
Suppose you have a website named www.myblog.com with a link to another page at www.myblog.com/posts/my-first-post. Your Angular app probably has the following structure: an index.html file in your root directory that is responsible for everything, and the page my-first-post as a partial HTML file located at /partials/my-first-post.html.
The solution in this case is to use a pre-rendering tool at deploy time. You can use PhantomJS for this, but you can't use a middleware tool like Prerender, because you have a static site hosted on Amazon S3.
You need to use this pre-rendering tool to create two files: index.html and my-first-post. Note that my-first-post will be an HTML file without the .html extension, so you will need to set its Content-Type to text/html when you upload it to Amazon S3 (see the sketch below).
You will place the index.html file in your root directory and my-first-post inside a folder named posts to match your URL path /posts/my-first-post.
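As an illustration, the upload step could look like this with the AWS SDK for JavaScript (the bucket name is an assumption based on the example above; the Content-Type can equally be set in the S3 console):

var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
s3.putObject({
  Bucket: 'www.myblog.com',                // your S3 website bucket
  Key: 'posts/my-first-post',              // extensionless key matching the URL path
  Body: fs.readFileSync('my-first-post'),  // the pre-rendered snapshot
  ContentType: 'text/html'                 // so crawlers and browsers render it as HTML
}, function (err) {
  if (err) throw err;
  console.log('Uploaded posts/my-first-post as text/html');
});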
With this approach, the crawler will be able to retrieve your HTML file, and the user will be happy to use all the Angular functionality.
Note: this solution requires that all files be referenced using the root path. Relative paths will not work if you visit the link www.myblog.com/posts/my-first-post.
By root path, I mean:
<script src="/js/myfile.js"></script>
The wrong way, using relative paths, would be:
<script src="js/myfile.js"></script>
EDIT:
Below is a small JavaScript script that I've used to pre-render pages with PhantomJS. After installing PhantomJS and testing the script with a single page, add a step to your build process that pre-renders all pages before deploying your site.
var fs = require('fs');
var webPage = require('webpage');
var page = webPage.create();

// since this tool will run before your production deploy,
// your target URL will be your dev/staging environment (localhost, in this example)
var path = 'pages/my-page';
var url = 'http://localhost/' + path;

page.open(url, function (status) {
    if (status !== 'success') {
        throw 'Error trying to prerender ' + url;
    }
    var content = page.content;
    fs.write(path, content, 'w');
    console.log("The file was saved.");
    phantom.exit();
});
Note: this looks like Node.js code, but it isn't. It must be executed with the PhantomJS executable, not Node.
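For example, assuming the script above is saved as prerender.js, you would run:

phantomjs prerender.js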
