I am currently using the MEAN stack for web development. My question is: how do I mix/redirect HTTP and HTTPS pages on the same website?
For most scenarios I am communicating with the back-end over plain HTTP. For some pages I absolutely need HTTP (for example, on the /shop route I use an iframe that displays info from other websites, and on some pages I need to HTTP GET from other API service providers). It seems I cannot achieve these with an HTTPS setup.
For some routes (such as /money) I am communicating with the back-end over HTTPS, and apparently you need HTTPS when money is involved.
My back-end has middleware set up so that if you request /money over HTTP it returns an error (currently a 301 Moved Permanently).
But I am not sure how to proceed with front-end development. For the Angular part, should I add some configuration so that when the route contains /money the whole page is reloaded over HTTPS, and should I explicitly make sure the links look something like this?
<a ng-href="https://www.domain.com/#!/money">money page</a>
But that seems like a lot of hard-coding to me.
My question is:
1. Am I thinking in the right direction?
2. Is it doable?
Thanks and any idea is greatly appreciated!
There are a few things to consider here.
First of all, yes, when money is involved you should always use HTTPS. The S stands for secure, so as you can imagine regular HTTP is not secure: everything is sent in plain text, and anyone who can see the request along the way can read and copy what's in there. For that reason you should always use HTTPS when sending things like financial data, passwords/login requests, private user data, etc.
You should also remember that you shouldn't load HTTP content on an HTTPS page; browsers will warn about insecure content or even block the request outright.
Anyway, on to the actual question. Yes, you are thinking in the right direction. When going from HTTP to HTTPS you will have to do a full reload of the page, but hardcoding https into every such link is not a great solution. Luckily Angular provides decorators for directives that allow us to change their behavior. You can use this to decorate ngHref so that it adds https on its own when certain conditions are met.
Decorators are not that easy to get started with, however (or at least they weren't the last time I read the docs), so here is one that does what you need and that you can edit to cover more cases:
myApp.config(function($provide) {
  $provide.decorator('ngHrefDirective', function($delegate) {
    var original = $delegate[0].compile;
    $delegate[0].compile = function(element, attrs, transclude) {
      // rewrite any ng-href that mentions "money" to an absolute https URL
      if (attrs.ngHref.indexOf('money') !== -1) {
        attrs.ngHref = 'https://example.com' + attrs.ngHref;
      }
      return original(element, attrs, transclude);
    };
    return $delegate;
  });
});
This will take any ng-href directive that contains the word money, and change the input data to include the full path with https. It will turn this:
<a ng-href="/money">Link that should be re-written</a>
<a ng-href="/other">Link that should not be re-written</a>
into this:
<a href="https://example.com/money">Link that should be re-written</a>
<a href="/other">Link that should not be re-written</a>
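If you'd rather not touch the links at all, another option is to catch the location change and force the full HTTPS reload there. A rough sketch, assuming hashbang URLs like in your example and that every secure route contains /money (adjust the check to your needs):
myApp.run(function($rootScope, $window) {
  $rootScope.$on('$locationChangeStart', function(event, newUrl) {
    // assumed rule: anything under /money must be served over HTTPS
    if (newUrl.indexOf('/money') !== -1 && $window.location.protocol !== 'https:') {
      event.preventDefault();
      // full page reload on the HTTPS origin, keeping the requested route
      $window.location.href = newUrl.replace(/^http:/, 'https:');
    }
  });
});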
I have inherited an Ionic app which uses ng-token-auth + devise_token_auth to handle authentication and the session between front and back.
What happens is quite strange. Sometimes (especially with slow connections) the request (or the response) gets lost, and after that I only get 401 HTTP errors.
I know that every time I send a request the token expires, but when the XHR request is cancelled (by the server I suppose, or by the browser, I don't know) the token expires without being replaced by the new one generated by the devise_token_auth gem.
I know Rails but I'm not familiar with Angular or Ionic, and I don't know exactly where to look.
After reading a lot of SO answers where no one seems to have my problem (which happens locally and in staging/production), I checked the following:
storage is set as localStorage.
config.batch_request_buffer_throttle = 20.seconds
there is no pattern among the cancelled requests; sometimes I perform a GET for the username, sometimes a POST or a PUT to a comment.
it's not a CORS problem, because that would happen either always or never (moreover, I'm using a proxy as explained in the Ionic blog).
Maybe it could be related to the Chrome provisional headers bug. But how can I be sure?
What puzzles me is that it happens only sometimes, not always (and there are no errors in the backend).
The only workaround I have found in the devise_token_auth documentation is to change config.change_headers_on_each_request to false, which avoids regenerating the token.
But I don't like this solution because I think it hides the real problem in an insecure way instead of fixing the token loss. Any suggestions?
Kindly check the following:
Version: which version of this gem (and ng-token-auth, jToker or Angular2-Token if applicable) are you using?
Request and response headers: these can be found in the "Network" tab of your browser's web inspector.
Rails Stacktrace: this can be found in the log/development.log of your API.
Environmental Info: How is your application different from the reference implementation?
This may include (but is not limited to) the following details:
Routes: are you using some crazy namespace, scope, or constraint?
Gems: are you using MongoDB, Grape, RailsApi, ActiveAdmin, etc.?
Custom Overrides: what have you done in terms of custom controller overrides?
Custom Frontend: are you using ng-token-auth, jToker, Angular2-Token, or something else?
I've been diving into authentication between Angular and Express, and decided on using token auth with JWTs and the npm jsonwebtoken package. I've got everything set up on the server side and am receiving the token on the client side, but now I need to know how to make it send the token with every request.
From what I've found, most resources out there say to use an $http interceptor to transform every outgoing request. But people at work have always used $httpProvider.defaults.headers.common["Auth"] = token in a .config block, which seems a lot more straightforward to me. Here's a blog explaining how to do it both ways.
The accepted answer on this Stack Overflow post says it would be better to use interceptors, but it doesn't give a reason why.
Any insight would be helpful.
After a bunch more research and a conversation on Reddit, it seems the best way to do it is through an interceptor. Doing the setup in the .config or .run blocks may be fine for checking whether the user is already authenticated when they first load the app (if there is a token in local storage), but it won't handle dynamic changes like logging out or logging in after the app is loaded. I'm pretty sure you could do it through the $http default headers, but you might as well just do it in one place.
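For reference, here is a minimal sketch of the interceptor approach. It assumes the token is kept in localStorage under the key 'token' and that the server expects an Authorization: Bearer header, so adjust both to your setup:
myApp.factory('authInterceptor', function($window) {
  return {
    request: function(config) {
      // look the token up on every outgoing request, so logins/logouts are picked up immediately
      var token = $window.localStorage.getItem('token');
      if (token) {
        config.headers = config.headers || {};
        config.headers.Authorization = 'Bearer ' + token;
      }
      return config;
    }
  };
});

myApp.config(function($httpProvider) {
  $httpProvider.interceptors.push('authInterceptor');
});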
Hopefully this helps someone in the future!
I'm trying to implement a simple interceptor that allows me to display a message along the lines of "cannot contact the server" in my Angular app. However as the API is on a different host I'm dealing with CORS pre-flight OPTIONS requests.
I've found that if the API is unavailable Chrome dev tools shows a 503 on the OPTIONS request but Angular's $http interceptor catches a 404 response to the subsequent GET request. I believe this is because the OPTIONS response did not contain the required CORS headers so the GET is actually never performed.
Is it possible to intercept the OPTIONS response? If all I see is a 404 I can't distinguish "server down" from "no such resource".
You can't intercept this request by design - the browser is "checking up" on you, making sure YOU should be allowed to make the request.
We've used three solutions to work around this:
If the problem is that you're using a development environment like Node.js and your domain names aren't matching (that is, if you normally wouldn't need to deal with this in production), you can use a proxy. The https://github.com/substack/bouncyBounceJS Node.js module is an easy-to-use option. Then your web service request domain will match the domain your page is on, and the check won't be triggered. (You can also use tricks like this in production, although it can easily be abused!)
Also for temporary use, you can use something like Fiddler or Charles to manipulate the request by faking the required headers, or tell your browser not to check them (--disable-web-security in Chrome).
If you have this problem in Production, you either need to legitimately fix it (adjust the Web service handler to add the required headers - there are only two), or find a way to make the request in a way that doesn't trigger the check. For instance, if you control both the source and target domains, you can put a script on the target that makes the requests to itself. Run this in an IFRAME, invisibly. Then you can use things like postMessage() to communicate back and forth. Large services like Facebook use "XHR bridges" like this for the same reason.
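For that last approach, the bridge page on the target domain can be something as simple as this sketch (the origin check and message shape are assumptions you would adapt):
// bridge page served from the target domain, loaded invisibly in an iframe by the source page
window.addEventListener('message', function(event) {
  // only accept messages from the source domain you trust (assumed origin)
  if (event.origin !== 'https://source.example.com') return;

  var xhr = new XMLHttpRequest();
  xhr.open('GET', event.data.path, true); // same-origin request, so no pre-flight check
  xhr.onload = function() {
    // hand the response back to the page that asked for it
    event.source.postMessage({ id: event.data.id, body: xhr.responseText }, event.origin);
  };
  xhr.send();
});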
My website generates some short URLs when users share a link (ex: http://futureo.us/l/ixjF).
These short URLs redirect the user to the original content I'm linking to. Before redirecting, the app renders a page that contains only Google Analytics javascript code.
Currently my handler code looks like this:
class PostHandler(handler.Handler):
def get(self, code):
#strip URL shortcode
code = code.strip('/')
#grab URL based on shortcode
url = scripts.urlshort.getURL(code)
if url:
self.render('tracking.html')
self.redirect(str(url))
else:
self.write('Code not FOUND.')
This solution isn't working: GA is not registering pageviews for these short links. I would also like to see who the referrers to these short links were.
Any ideas how I could fix this?
I believe your problem is that you're adding HTML tracking code to a response that also performs an HTTP redirect. The redirect is probably processed before the HTML, if the latter is evaluated at all.
It seems to me that the best solution would be to track the redirects on the server side rather than on the client side. As these are redirects anyway, you don't need client-only data such as time spent on page, page events, etc. Tracking the redirect would be most accurate and simplest if done in the Python code. (I don't know, though, of a way to use the Google Analytics tools for tracking these; for my uses I just track the redirects in an NDB model.)
Another solution, which might be slower for the user, is to avoid the HTTP redirect (self.redirect) and instead emit a client-side JavaScript redirect, which will be evaluated after the tracking code:
window.location = "{{url}}";
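If you go the client-side route, tracking.html could look roughly like this. This is only a sketch: it assumes the standard analytics.js snippet is already on the page, that the handler passes url into the template (and no longer calls self.redirect), and the 500 ms delay is just an arbitrary grace period for the hit to go out:
<!-- tracking.html (sketch) -->
<script>
  // record the pageview for the short link, then send the visitor on
  ga('send', 'pageview');
  setTimeout(function() {
    window.location = "{{url}}";
  }, 500);
</script>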
I have a REST service sitting at http://restservice.net. I am implementing a client for this service in Backbone. The client is simply an HTML file (for bootstrapping the application) and a bunch of JS files holding my Backbone.js code. I am hosting these files on another site, http://client.net.
My Backbone.js code is calling into http://restservice.net, but that's not allowed due to the same-origin policy. I have already looked at other SO questions that talk about how I can only talk to http://client.net.
Do I have to redirect every request through http://client.net? I see that as inefficient. What's the point in using a client-side MVC framework then? Am I missing something here?
You have two options, JSONP and CORS, and both demand that your http://restservice.net server is set up to support the protocol. Forcing Backbone to use JSONP simply requires passing an option to Backbone.sync. One way to do this:
sync: function(method, model, options){
options.dataType = "jsonp";
return Backbone.sync(method, model, options);
}
The problem with JSONP is that you can only make GET requests, so your REST API is effectively read-only. To get CORS working you simply need to configure your API server to send back the proper headers. This would be pretty liberal:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, GET, PUT, DELETE, OPTIONS
Here is a pretty good rundown on CORS. If you set that up, then things will pretty much work as usual.
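For example, if the API happened to be an Express app, a sketch of the idea would be (adapt it to whatever server you actually run):
// hedged sketch: send the CORS headers from an Express server and answer pre-flights directly
app.use(function(req, res, next) {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'POST, GET, PUT, DELETE, OPTIONS');
  if (req.method === 'OPTIONS') return res.status(200).end();
  next();
});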
If you don't have the ability to make changes to the server at http://restservice.net, then you have no choice but to proxy all the requests to that service. This is definitely inefficient, but implementing it is probably simpler than you would expect. One thing to consider is a reverse proxy.
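For instance, a minimal Node reverse proxy using the http-proxy module could look like this (a sketch; the /api prefix and port are assumptions):
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

http.createServer(function(req, res) {
  if (req.url.indexOf('/api/') === 0) {
    // forward API calls to the REST service; to the browser everything looks same-origin
    proxy.web(req, res, { target: 'http://restservice.net' });
  } else {
    // serve the Backbone client's static files here instead
    res.writeHead(404);
    res.end();
  }
}).listen(8080);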