We have a web app with an HTML/AngularJS front end and a Microsoft Web API back end. We require HTTPS for security reasons. Every article I've read about handling plaintext passwords and logging in basically comes down to "just use HTTPS and everything will be secure".
Recently, we were testing the app in-house and the Web API service happened to be down when the QA person was trying to log in. What happened next is what you see in the image below: the password was shown in plain text in the browser. QA, my boss, the company, God, and everyone in America are "having a cow" because of this.
The message displayed in the browser isn't something I coded; it appears to come from AngularJS, which is trying to do me a favor by showing me the failed API call and the object it was trying to pass to the API. In that case, it makes sense (I think) that Angular has that information.
Can anyone please help me understand what happened here? And what is considered the proper way to address this? I assume I can add some JavaScript code to encrypt the password on the client side first, but that also seems like it would be super easy for a hacker to intercept on the client side. So what's the correct approach to take to keep things secure on the client?
What happened is that the user (you) and the browser (on your machine) live in the same trust boundary.
You just typed that password into the browser. The browser only hides it in the input box to prevent shoulder-surfing; it does not really attempt to hide from you something you just typed.
If you open the browser's dev tools, you can see anything that is sent over the wire via HTTP. Anyone outside your trust boundary cannot see it, because HTTPS encrypts everything on the wire.
It's hard to tell without looking at the code, but I ran into a similar issue:
app.config(['$qProvider', function ($qProvider) {
  // Stop AngularJS (1.6+) from reporting "Possibly unhandled rejection" errors,
  // which is what can dump the failed request (and its payload) into the error output.
  $qProvider.errorOnUnhandledRejections(false);
}]);
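If you also want to be sure your own error handling never dumps the credentials, here is a rough sketch of an $http interceptor that blanks out the password on a failed request before re-throwing it (the 'password' field name is an assumption about your login payload):
app.factory('redactCredentials', ['$q', function ($q) {
  return {
    responseError: function (rejection) {
      // If the failed request carried credentials, blank them out so any
      // later logging of the rejection doesn't expose the raw password.
      if (rejection.config && rejection.config.data && rejection.config.data.password) {
        rejection.config.data.password = '***';
      }
      return $q.reject(rejection);
    }
  };
}]);

app.config(['$httpProvider', function ($httpProvider) {
  $httpProvider.interceptors.push('redactCredentials');
}]);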
It seems keeping all the browsers happy is a challenging task, what with all the security they are adding and the complexities of certificates.
I have an SPA (Vue.js) that uses oidc-client.js to implement OIDC, communicating with an identity provider (IdentityServer4).
First thing to note is that everything works if I run both client and server on localhost.
It is when I deploy the Identity Server to a Staging Server inside our network that things go awry.
So, the hostname of the IdP now differs from that of the SPA (which would be normal in production).
After much work, I've got everything working except IE11 (yep IE).
I had to do several things to get me there such as:
solve Chrome's SameSite cookie issue
create self-signed certificates and install the root certificate in the Trusted Certificates store
add Babel configuration and core-js on the client, so IE doesn't throw errors when promises come into play
So, it's been a long road, yet still, I have to deal with this (see animation):
I just can't quite figure out why IE is doing that.
It is not possible to use the dev tools to see any info.
The logs at the server do not contain any information that seems relevant.
Has anyone else seen these "browser symptoms" in IE?
Happy to provide more information (code, logs etc.) if people think that will help. Just didn't want to dump all that in the initial question, as many people don't like that.
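The one piece worth showing up front is the oidc-client wiring, which is roughly the standard UserManager setup with silent renew enabled. This is only a sketch with placeholder values, not my real authority, client id, URIs, or scopes:
import { UserManager, WebStorageStateStore } from 'oidc-client';

// Placeholder values; the real authority, client id, and URIs differ.
const userManager = new UserManager({
  authority: 'https://idp.staging.local',
  client_id: 'spa-client',
  redirect_uri: 'https://spa.staging.local/callback.html',
  response_type: 'code',
  scope: 'openid profile api',
  // silent-refresh.html is loaded in a hidden iframe to renew tokens.
  silent_redirect_uri: 'https://spa.staging.local/silent-refresh.html',
  automaticSilentRenew: true,
  userStore: new WebStorageStateStore({ store: window.localStorage })
});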
Here are a couple of Fiddler screenshots. The first is from Chrome:
The second one is from IE11.
For some reason, the Silent Refresh is being invoked over and over again with IE11.
I think I can see what is happening, but not sure how to fix it.
There appear to be two calls to the Authorize endpoint which fail, conspicuously missing the .AspNetCore.Antiforgery cookie. This results in two invocations of silent-refresh.html.
Then, for some reason, there is some kind of GET request to the base URL of the IdP, and immediately on the heels of that request is a request to the Authorize endpoint which does have the .AspNetCore.Antiforgery cookie.
The ship is set straight until the next call to the Authorize endpoint, which is the beginning of the next cycle.
However, with Chrome, after the user is logged in, the next call to the Authorize endpoint does contain the cookie.
So, I guess it is the missing cookie which is the issue.
Perhaps this has something to do with the code which I used from this post to solve the Chrome samesite cookie issue?
Cheers
I'm trying to write a VSCode extension where users could log into Google AppEngine with a google account, and I need to get their SACSID cookie to make appengine requests.
So I'm opening a browser window at
https://accounts.google.com/ServiceLogin?service=ah&passive=true&continue=https://appengine.google.com/_ah/conflogin%3Fcontinue%3Dhttp://localhost:3000/
(generated by google.appengine.api.users.create_login_url)
The user logs in and is redirected to my local webserver at
localhost:3000/_ah/conflogin/?state={state}
Now I try to forward the request to my AppEngine app (since it knows how to decode the state parameter), so I do a request to
https://my-app.appspot.com/_ah/conflogin/?state={state}
basically just replacing localhost with the actual app.
But it doesn't work, presumably because the domain is different. I assume this is on purpose, for security.
Is there any way I can make this work?
Not ideal, but the only solution I've found is to have an endpoint on my GAE instance that does the redirection. Then I can set that as the continue URL when starting the authentication process:
https://accounts.google.com/ServiceLogin?service=ah&passive=true&continue=https://appengine.google.com/_ah/conflogin%3Fcontinue%3Dhttps://my-app.appspot.com/redirect?to=http://localhost:3000
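A minimal sketch of what that redirect endpoint has to do, shown here as a generic Node/Express-style handler (the real GAE app may well be Python; the route path and the to parameter simply mirror the continue URL above):
import express from 'express';

const app = express();

// Bounce the browser on to whatever URL was passed in 'to'
// (e.g. the local server the VSCode extension is listening on).
app.get('/redirect', (req, res) => {
  const target = String(req.query.to || '');
  if (!target) {
    res.status(400).send('missing "to" parameter');
    return;
  }
  res.redirect(302, target);
});

app.listen(8080);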
I think you should focus your attention on the protocols you are using, since the cookie name depends on the protocol (HTTP: ACSID, HTTPS: SACSID); that's the security perspective so far, as I see it.
Seeing the error you are facing now would be helpful to understand the problem better. It would also help to know how you are performing the call to the API, and to see the code you are using.
Contemplating building an Angular 2 front end for my website. My question is not necessarily related to Angular, but I want to provide full context.
Application logic that displays content to the user would shift to the client. So on the server side, I would need to expose data via a RESTful JSON feed. What worries me is that someone can completely bypass my front end and execute requests against the service with various parameters, effectively scraping my database. I realize some of this is possible by scraping HTML, but exposing a service with nicely formatted data makes it a no-brainer.
Is there a way to protect the RESTful service from this? In other words, is there a way to ensure such service would only respond to my Angular 2 application call? Authentication certainly isn't a solution here - I don't want to force visitors to authenticate and the scraper could very well authenticate and get access, anyway.
I would recommend JWT authorization. One such implementation is OAuth. Basically, you get a JSON Web Token (JWT), signed by an authority you trust, that describes the user and what resources they can access on your API.
If the request doesn't include an Authorization token, your API rejects it.
If the token has been tampered with by someone trying to grant themselves privileges after it was signed by the authorization authority, your API rejects it.
It is a pretty cool piece of kit.
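As a rough sketch of the idea, here is a generic Express-style middleware using the jsonwebtoken package (the route, the secret, and the payload shape are placeholders, not anything specific to your app):
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
const SIGNING_SECRET = process.env.JWT_SECRET || 'placeholder-secret';

// Reject any request without a valid, untampered token.
function requireJwt(req: express.Request, res: express.Response, next: express.NextFunction) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : '';
  try {
    // Throws if the signature doesn't match or the token has expired.
    (req as any).user = jwt.verify(token, SIGNING_SECRET);
    next();
  } catch {
    res.status(401).json({ error: 'invalid or missing token' });
  }
}

app.get('/api/articles', requireJwt, (_req, res) => {
  res.json([{ id: 1, title: 'Example article' }]);
});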
This site has information about OAuth implementations in different languages; hopefully your favorite is listed.
Some light bedtime reading.
There is no obvious way to do it that I know of, but a lot of people seem to be looking at Amazon S3 as a model. If you put credentials in your client code, then anyone getting the client code can see them.

I might suggest that you could write the server to pass a time-limited token back to the browser along with the client code. The client code would be required to pass it back to the server for access. This would prevent anyone from writing their own client code, as only client code sent by the server would work, though only for some period of time. The user might occasionally get timeouts, but that depends on how strict you make the token timeouts. Of course, even this kind of thing could be hacked by someone making a client request to get a copy of the token to use with their own client API, but at that point you should be proud that someone is trying so hard to use your API!

I have not tried to write such a thing, so I don't have any practical experience with the issue. I have wondered about it myself, but don't have enough experience with this architecture to see what, if anything, others have been doing. What do the AngularJS forums suggest?
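A sketch of that time-limited-token idea, using an HMAC-signed expiry timestamp as the token the server hands out alongside the client code (the secret and the 15-minute lifetime are assumptions):
import { createHmac, timingSafeEqual } from 'crypto';

const SECRET = 'placeholder-server-secret';  // stays on the server, never shipped to the client
const TOKEN_LIFETIME_MS = 15 * 60 * 1000;    // assumed 15-minute window

// Issued alongside the client code; the token embeds its own expiry time.
function issueToken(now = Date.now()): string {
  const expires = now + TOKEN_LIFETIME_MS;
  const sig = createHmac('sha256', SECRET).update(String(expires)).digest('hex');
  return `${expires}.${sig}`;
}

// The API checks the signature and the expiry before serving any data.
function verifyToken(token: string, now = Date.now()): boolean {
  const [expires, sig] = token.split('.');
  if (!expires || !sig || Number(expires) < now) return false;
  const expected = createHmac('sha256', SECRET).update(expires).digest('hex');
  if (sig.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}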
Additional References: Best Practices for securing a REST API / web service
I believe the answer is "No".
You could do some security-by-obscurity type stuff. Your REST API could expose garbled data, and you could have some function "hidden" in your code un-garble it. Obviously this isn't foolproof, though: if you expose data on a public site, it's out there regardless of server or client rendering.
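Purely to illustrate (and not recommended as real protection), the "garbling" could be as trivial as reversing the JSON string and base64-encoding it, with the matching un-garble helper tucked away in the client bundle (ASCII payloads assumed):
// Server side (Node): garble the JSON payload before sending it.
function garble(json: string): string {
  return Buffer.from(json.split('').reverse().join('')).toString('base64');
}

// Client side (browser): the "hidden" helper that restores the payload.
function ungarble(blob: string): string {
  return atob(blob).split('').reverse().join('');
}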
I'm working on a Windows Phone 7 application that makes a REST service call. The third party that hosts the web services has an invalid certificate in the current environment. When I hit the URL in Firefox, I get a warning about the cert and am asked if I want to continue. I'm also using the Poster FF extension to test the call. It works with Poster if I first accept the invalid cert in Firefox; if I don't, then Poster won't make the request.
In my WP7 emulator, I can't make the request at all; I get a 404 at the EndGetResponse method. I'm making the same request as in Poster, so I know there is nothing wrong with the request. I have successfully hit another web service using the same code (no certs involved), so I don't think it's the code. The only thing I can think of is that WP7 doesn't allow requests to a host with an invalid cert. Has anyone had experience with this situation? Is there any way around it?
Is there a way I can tell my app to accept all communication, even if there is an invalid cert?
There is sadly no way to do this on the phone. Ordinarily (i.e., on the desktop), this simple line of code will disable certificate checking:
System.Net.ServicePointManager.ServerCertificateValidationCallback = (se, cert, chain, sslError) => { return true; };
If you look at the ServicePointManager on the phone, there's no callback to hook into. It's a massive pain in the arrrrse.
Have you considered writing to the service owner and asking why they're being bad internet citizens? (essentially, what you're seeing here is web security in action, for better or worse)
As Matt says, you might be able to code a simple relay on a web server. It doesn't have to be a special service, but maybe just a web page that does the call for you and spits out RAW text or XML. Your phone client just GETs this page and picks through the response manually.
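A sketch of such a relay, written here as a small Node/Express handler rather than anything phone-specific (the upstream URL is a placeholder; the relay itself needs to sit behind a certificate the phone already trusts):
import express from 'express';
import https from 'https';

const app = express();

// Placeholder for the third-party service with the bad certificate.
const UPSTREAM_URL = 'https://third-party.example.com/api/data';

// The phone GETs this page; the relay makes the real call and returns the raw text/XML.
app.get('/relay', (_req, res) => {
  https.get(UPSTREAM_URL, { rejectUnauthorized: false }, upstream => {
    let body = '';
    upstream.on('data', chunk => (body += chunk));
    upstream.on('end', () => res.type('text/plain').send(body));
  }).on('error', () => res.status(502).send('upstream error'));
});

app.listen(8080);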
Where there's a will there's a way.
Luke
You need to install the root CA cert of the issuing party on the phone.
You can do this by emailing the root CA to the user of the phone. They click on the attachment, and it will prompt them to ask if they want to install the certificate on the phone.
Once you have done that, your requests should go through.
I don't believe there is a way to do this programmatically in your app, however.
I'm not aware of a way to install additional certificates on the phone.
In this situation I'd create a proxy service between your app and the 3rd party site and have your app call that. If you need to, you could put the proxy behind a valid cert.
I'm working on building a Silverlight application where we want a client to be able to hit a URL like:
http://{client}.domain.com/
and log in, where the {client} part is their business name. So, for example, Google's would be:
http://google.domain.com/
What I was wondering is whether anyone has been able, in Silverlight, to use this subdomain model to make decisions on the call to the web server, so that you can switch to a specific database to run a query. Unfortunately, this is quite necessary for the project, as we are trying to make it easy for their employees to get their company-specific information from our software.
Wouldn't it work to put the service on a specific subdomain itself, such as wcf.example.com, and then set up a cross-domain policy file on the service to allow access to it?
As long as that works, you could just load the Silverlight app on the proper subdomain and then pass that subdomain to your service and let it do its thing.
Some examples of this below:
Silverlight Cross Domain Services
Silverlight Cross Domain Policy Helpers
On the server side you can check the HTTP 1.1 Host header to see how the user came to your server and do the necessary customization based on that.
I don't think you can do this with Silverlight alone; I know you can't do it without problems with JavaScript, Ajax, etc. That is because a subdomain is, for security reasons, treated differently by browsers than a sub-page.
What about the following idea: add a rewrite rule to your web server software, so that if http://google.domain.com is called, the web server itself rewrites the URL to something like http://www.domain.com/google/ (or better: http://www.domain.com/customers/google/). Would that help?
Georgi:
That would help if it were static, but alas, it's all going to be dynamic. My hope was to have a single deployment of the application and use the http://google.domain.com/ idea to switch to the correct database for the user. I recall doing this once when we built an ASP.NET website, using the domain context to figure out what skin to use, etc.
Ates: Can you explain more about what you are saying? It sounds like you are close to what I am trying to come up with. Have you seen a tutorial for this?
The only other way I have come up with to make this work is to have a metabase that, when the user logs in, switches them to the appropriate database as required... I was also thinking that telling client X to hit
http://ClientX.domain.com/ would have been sweeter than telling them to hit http://www.domain.com/ and log in. Having them hit their own name, with the site personalized for them right from the login screen, would be much more appealing to the client base.
#Richard B: No, I can't think of any such tutorial that I've seen before. I'll try to be more verbose.
The server-side approach in more detail:
Direct *.example.com to the same IP in your DNS settings.
The backend app that handles login checks the Host HTTP header (e.g. the "HTTP_HOST" server variable on some platforms). That would contain the exact subdomain.example.com that the client used to reach your server. Extract the subdomain part and continue (a rough sketch follows below)...
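A rough sketch of that server-side lookup, written as a generic Node/TypeScript handler purely to illustrate the idea (the tenant-to-database mapping here is invented, and your real stack will differ):
import express from 'express';

const app = express();

// Invented mapping from tenant subdomain to connection string.
const tenantDatabases: Record<string, string> = {
  google: 'Server=db1;Database=GoogleTenant;',
  clientx: 'Server=db1;Database=ClientXTenant;'
};

app.get('/login', (req, res) => {
  const host = String(req.headers.host || '');        // e.g. "google.domain.com"
  const subdomain = host.split('.')[0].toLowerCase();  // e.g. "google"
  const connectionString = tenantDatabases[subdomain];
  if (!connectionString) {
    res.status(404).send('Unknown tenant');
    return;
  }
  // ...use connectionString to query the tenant-specific database...
  res.send(`Login page for ${subdomain}`);
});

app.listen(8080);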
There can also be a client-side-only approach. I don't know much about Silverlight, but I'm assuming you should be able to interface Silverlight with JavaScript. You could read document.location with JavaScript and pass it to your Silverlight applet, whereupon further data-fetching logic would rely on the subdomain passed in by JavaScript.
#Ates:
That is what we did when we wrote the ASP.NET system... we pointed a slew of *.example.com hosts at the web server and handled it using the HTTP headers. The hold-up comes when dealing with WCF pushing the info between the client and the server... the service can only exist on one domain...
So, for example, when you have {client}.example.com and {sandbox}.example.com, the WCF service can't be registered to both. It also cannot be registered to just *.example.com or example.com, so that's where the catch-22 comes in. Everything else I have prior knowledge of handling.
I recall a method by which an application can "spoof" another domain name in certain instances. I take it that in this case I would need such a configuration? Much to research yet, I believe.