How to escape # character in HAProxy config? - angularjs

I'm trying to modularize my front end, which is in AngularJS; alongside it we are using HAProxy as a load balancer and K8s.
Each ACL in the HAProxy configuration is attached to a different service in K8s, and since we are using Angular with the hash-bang enabled, we use the fragment in the HAProxy configuration file as a way to identify the different modules.
Below is my HAProxy configuration, which is failing because I can't escape the # in the file, even after following the HAProxy documentation.
acl login-frontend path_beg /\#/login
use_backend login-frontend if login-frontend
acl elc-frontend path_beg /\#/elc
use_backend elc-frontend if elc-frontend
I have tried escaping it as /%23/login and /'#'/admin but without success.
Any idea would be greatly appreciated.

The fragment (everything following a # character) is defined in RFC 3986:
As with any URI, use of a fragment identifier component does not
imply that a retrieval action will take place. A URI with a fragment
identifier may be used to refer to the secondary resource without any
implication that the primary resource is accessible or will ever be
accessed.
It is used on the client side, so a client (a browser, curl, ...) does not send it with the request. For reference: Is the URL fragment identifier sent to the server?
So there is no point in routing/ACLing on it. The reason HAProxy provides an escape sequence for that character is that you may want to include it in a body, a custom header, and so on; but again, you will never get that part from the request line (the first line, containing the URI).
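If the goal is to route the modules at the proxy anyway, the ACLs have to match real path prefixes (which requires moving the modules onto real paths, or switching Angular to HTML5/pushState routing). A sketch, with the backend names assumed from the question:

```
# Sketch: route on real path prefixes instead of fragments,
# since the fragment never reaches HAProxy.
acl login-frontend path_beg /login
use_backend login-frontend if login-frontend

acl elc-frontend path_beg /elc
use_backend elc-frontend if elc-frontend
```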

What is really happening here is that the user requests / from HAProxy, and Angular, in the user's browser, then parses the #/login and #/elc part to decide what to do next.
I ran into a similar problem with my Ember app. For SEO purposes I split out my "marketing" pages and my "app" pages.
I then mounted my Ember application at /app and had HAProxy route requests to the backend that serviced my Ember app. A request for "anything else" (e.g. /contact-us) was routed to the backend that handled marketing pages.
/app/* -> server1 (Ember pages)
/ -> server2 (static marketing pages)
Since I had some URLs floating around out there on the web that still pointed to things like /#/login (when they should now be /app/#/login), I had to edit the index.html page served by my marketing backend and add JavaScript to it that parsed the URL. If it detected /#/login, it forced a redirect to /app/#/login instead.
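That redirect shim can be sketched roughly like this (the /app mount point comes from my setup; adjust it for yours):

```javascript
// Sketch: compute the redirect target for legacy fragment URLs
// like /#/login, which should now live under /app/#/login.
// The "/app" prefix is an assumption from my setup.
function legacyRedirectTarget(pathname, hash) {
  // Only rewrite fragment routes served from the marketing root.
  if (pathname === '/' && hash.indexOf('#/') === 0) {
    return '/app/' + hash;
  }
  return null; // anything else is left alone
}

// In index.html on the marketing backend:
// var target = legacyRedirectTarget(window.location.pathname, window.location.hash);
// if (target) window.location.replace(target);
```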
I hope that helps you figure out how to accomplish the same for your Angular app.

Related

Reduce initial server response time with Netlify and Gatsby

I'm running PageSpeed Insights on my website and one big error that I get sometimes is
Reduce initial server response time
Keep the server response time for the main document short because all
other requests depend on it. Learn more.
React If you are server-side rendering any React components, consider
using renderToNodeStream() or renderToStaticNodeStream() to allow
the client to receive and hydrate different parts of the markup
instead of all at once. Learn more.
I looked up renderToNodeStream() and renderToStaticNodeStream() but I didn't really understand how they could be used with Gatsby.
It looks like a problem others are having as well.
The domain is https://suddenlysask.com if you want to look at it.
My DNS records
Use a CNAME record on a non-apex domain. By using the bare/apex domain you bypass the CDN and force all requests through the load balancer. This means you end up with a single IP address serving all requests (fewer simultaneous connections), the server proxying to the content without caching, and a likely greater distance to the user.
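In zone-file terms, the difference looks roughly like this (the IP and CDN hostname are purely illustrative):

```
; apex record pointing straight at the load balancer - bypasses the CDN
suddenlysask.com.      A      203.0.113.10

; non-apex CNAME that rides the CDN edge network
www.suddenlysask.com.  CNAME  example.netlify.app.
```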
EDIT: Also, your HTML file is over 300KB. That's obscene. It looks like you're including Bootstrap in it twice, you're repeating the same inline <style> tags over and over with slightly different selector hashes, and you have a ton of (unused) utility classes. You only want to inline critical CSS if possible; serve the rest from an external file if you can't treeshake it.
Well, the behavior is unexpected. I ran PageSpeed Insights on your site and it gave me a warning on the first test, with an initial response time of 0.74 seconds. Then I used my developer tools to look at the initial response time on the document root, which was consistently between 300 and 400ms. So I ran the PageSpeed test again and the response was 140ms; the test passed. After that it was 120ms.
I really don't think there is a problem with the site. Still, if you want to experiment, I would recommend changing the server or hosting for once and trying something different. I don't know what kind of server the site is deployed on right now. You could try AWS S3 and CloudFront; that works well for me.

Private S3 + CloudFront react app: "XML file does not appear to have any style information associated with it"

This is a follow up question to the one found here: CloudFront + S3 Website: "The specified key does not exist" when an implicit index document should be displayed
I am trying to host a React single page app (static website) through S3 and I want to allow https access only (using a custom SSL). I have everything configured with CloudFront and my website is showing up at the CloudFront URL just fine. But when I navigate around the app, I get the error shown in the link above.
According to that post, the error is fixed by switching from a REST to a website endpoint. But in the process, you have to make your S3 bucket public. My question: is there a way to fix this error without switching to a website endpoint and, in the process, making all my S3 content public? Is there some kind of workaround within the AWS ecosystem where I can combine private S3 contents with a process that returns the html doc without the XML formatted error? According to this reference (https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff), this seems like it may not be possible, but I'm hoping someone can prove me wrong.
Thanks!
The error you're getting usually occurs when your application tries to access something it isn't privileged to.
Since you mentioned that the app loads normally but you get this error as you move around, it may be that a component is trying to load a private resource that you haven't covered in the policies you have defined.
My question: is there a way to fix this error without switching to a website endpoint and, in the process, making all my S3 content public?
Definitely! But you need to pinpoint the resource that is being accessed when you get the error. I would ask you to provide more info about that.
Lastly, if you switch to website endpoints, you won't be able to serve private S3 content; you'll have to make it all public. You can find more info about this here: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff

CDN serving private images / videos

I would like to know how CDNs serve private data such as images and videos. I came across this Stack Overflow answer, but it seems to be specific to Amazon CloudFront.
As a popular example, let's say the problem in question is serving content inside Facebook. There is access-controlled content at the individual-user level and also at the level of a group of users. Besides that, there is some publicly accessible data.
All logic of what can be served to whom resides on the server!
The first request to the CDN will go to the application server and get validated for access rights. But there is a catch to keep in mind:
assume the first request is successful; after that, anyone will be able to access the image with that CDN URL. I tested this with a restricted, user-uploaded Facebook image, and it was accessible via the CDN URL by others too, even after I logged out. So the image will remain accessible until the CDN cache expires.
I believe this should work: all requests first come to the main application server. After it determines whether access is allowed, it can redirect to the CDN server or show an access-denied error.
Each CDN works differently, so unless you specify which CDN you are looking at, it's hard to say more.
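The gatekeeping decision on the application server can be sketched as a small pure function (the ACL shape and helper names here are hypothetical, for illustration only):

```javascript
// Sketch: the app server validates access, then either hands back
// a redirect to the CDN URL or signals a 403. "acl" is a hypothetical
// map from media id to the list of users allowed to see it.
function resolveMediaRequest(user, mediaId, acl, cdnBase) {
  const allowed = (acl[mediaId] || []).includes(user);
  return allowed
    ? { status: 302, location: cdnBase + '/' + mediaId } // hand off to the CDN
    : { status: 403 };                                   // access denied
}
```

Note the caveat about CDN caching mentioned earlier: once the object is cached at the edge, the CDN URL itself stays reachable until the cache expires, regardless of this check.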

Using sw-precache with client-side URL routes for a single page app

How would one configure sw-precache to serve index.html for multiple dynamic routes?
This is for an Angular app that has index.html as the entry point. The current setup allows the app to be accessible offline only through /. So if a user goes to /articles/list/popular as an entry point while offline, they won't be able to browse it and will be given a you're-offline message (although when online they'd be served the same index.html file for all entry-point requests).
Can dynamicUrlToDependencies be used to do this? Or does this need to be handled by writing a separate SW script? Something like the following would do?
function serveIndexCacheFirst() {
  var request = new Request(INDEX_URL);
  return toolbox.cacheFirst(request);
}

toolbox.router.get(
  '(/articles/list/.+)|(/profiles/.+)(other-patterns)',
  serveIndexCacheFirst);
You can use sw-precache for this, without having to configure runtime caching via sw-toolbox or rolling your own solution.
The navigateFallback option allows you to specify a URL to be used as a "fallback" whenever you navigate to a URL that doesn't exist in the cache. It's the service worker equivalent to configuring a single entry point URL in your HTTP server, with a wildcard that routes all requests to that URL. This is obviously common with single-page apps.
There's also a navigateFallbackWhitelist option, which allows you to restrict the navigateFallback behavior to navigation requests that match one or more URL patterns. It's useful if you have a small set of known routes, and only want those to trigger the navigation fallback.
There's an example of those options in use as part of the app-shell-demo that's included with sw-precache.
In your specific setup, you might want:
{
  navigateFallback: '/index.html',
  // If you know that all valid client-side routes will begin with /articles
  navigateFallbackWhitelist: [/^\/articles/],
  // Additional options
}
Yes, I think you can use dynamicUrlToDependencies, as mentioned in the documentation of the directoryIndex option: https://github.com/GoogleChrome/sw-precache#directoryindex-string.

Can't configure varnish to work with cookies and drupal module

I'm using cookies so that mobile users can visit my site as desktop users. To do this, I give them a cookie - mob_yes.
Then, in a module, I use a Drupal hook to see if the cookie is set.
I can see that the cookie IS getting set, but in my module isset($_COOKIE["mob_yes"]) always returns false when using Varnish.
In /etc/varnish/default.vcl I have the following:
if (req.http.Cookie) {
  set req.http.Cookie = regsuball(req.http.Cookie, ";(mob_yes)=", "; \1=");
}
I'm really not sure what's going on here; I can only presume Varnish is stripping that cookie somewhere along the way. Does anyone have any idea what's going wrong?
Thanks,
What do you mean by
I can see that the cookie IS getting set
Do you mean that you can see it in the headers in Firebug (client side), and that you then also see it on the server side with tcpdump / varnishlog / the application?
The code snippet from your VCL is probably part of a commonly used way of preserving important cookies: add a space in front of the ones to keep, delete everything that doesn't have the ";[space]" combination, and remove the space at the end.
The result is used later on to generate the hash for a specific url+cookies request.
I think you should check whether your VCL removes cookies when the user is not logged in; stripping them is a common practice to increase the hit rate.
In VCL for Drupal, it's usually the part that checks for DRUPAL_UID.
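The full pattern that snippet is usually a fragment of looks roughly like this (a sketch based on the widely used Varnish cookie-whitelisting example; mob_yes is added to the keep-list here so it survives, and SESS* stands for Drupal's session cookie):

```
if (req.http.Cookie) {
  # prefix every cookie with ";" and normalise separators
  set req.http.Cookie = ";" + req.http.Cookie;
  set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
  # protect the cookies we want to keep with a leading space
  set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|mob_yes)=", "; \1=");
  # drop every cookie that was not protected
  set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
  set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
  if (req.http.Cookie == "") {
    unset req.http.Cookie;
  }
}
```

If mob_yes is missing from that whitelist in your default.vcl, Varnish will strip it before the request reaches Drupal, which would explain isset($_COOKIE["mob_yes"]) returning false.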
