How to handle ERR_INSECURE_RESPONSE? - angularjs

My AngularJS app uses $http.get() with HTTPS URLs. If the server is using a self-signed certificate, Chrome rejects the request and logs the error ERR_INSECURE_RESPONSE to the console.
I would like to capture this specific error and prompt the user to configure their server with a valid certificate.
I've tried $http.error and $httpProvider.interceptors to get information about this error, but no relevant information is available in the error callback's parameters.
I understand that it is Chrome rejecting the request rather than the server, but from Angular, is there any way to detect that Chrome has rejected the request with ERR_INSECURE_RESPONSE?

I had the same issue in my Ionic project, but there is no way to overcome it client-side; this is a security issue. Go to your server's SSL configuration and change the DH parameters to a 2048-bit key; that solved the issue for me.
Note: my server is Amazon S3.

I've been struggling with this for a while, trying to figure out how to get the type of error in the $http error callback and show a helpful message to users. However, there seems to be no way to determine this specific error. The only way to spot it is to check whether response.status is equal to -1 (which happens when the request was interrupted for some reason), but it can also be -1 in other cases, such as a missing internet connection. So eventually I edited the error message shown on status -1 to explain to the user what might be wrong.
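For illustration, a minimal sketch of that status check in an $http error callback; the URL path and the showErrorMessage helper are placeholders of mine, not part of the original setup:

// Hypothetical request to the user-supplied server; server_url comes from user input.
$http.get(server_url + '/api/ping').then(
  function (response) {
    // Request succeeded
  },
  function (response) {
    // status -1 means the request never completed: a rejected self-signed
    // certificate is one possible cause, but so are missing connectivity,
    // CORS failures and timeouts.
    if (response.status === -1) {
      showErrorMessage('Could not reach the server. Check your connection ' +
        'and make sure the server uses a valid (not self-signed) certificate.');
    }
  }
);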
Also, in my case users have to enter the server address, so in addition I added a check that it is not an IP address (an IP address cannot have a valid SSL certificate, only a self-signed one, which would cause the aforementioned error):
if (server_url.match(/^(?:[0-9]{1,3}\.){3}[0-9]{1,3}/)) {
    // Show an error message like "only domain names are allowed"
}
Hope that helps.

Related

Network request failed from fetch in reactjs app

I am using fetch in a NodeJS application. Technically, I have a ReactJS front-end calling the NodeJS backend (as a proxy), and the proxy then calls out to backend services on a different domain.
However, from logging errors from consumers (I haven't been able to reproduce the issue myself) I see that a lot of these proxy calls (using fetch) throw an error that just says Network Request Failed, which is of no help. Some context:
This only occurs on a subset of all calls (let's say 5% of traffic)
Users that encounter this error can often make the same call again some time later (in the next couple of minutes/hours/days) and it will go through
From Application Insights, I can see no correlation between browsers, locations, etc.
Calls often return fast, in under 100 ms
All calls are HTTPS; none are HTTP
We have a fetch polyfill from fetch-ponyfill that takes over if fetch is not available (Internet Explorer); it is wired up roughly as in the sketch below. I tested this package itself and the calls went through fine. I should also mention that this error occurs on browsers that do support fetch, so I don't think the ponyfill is the cause.
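For reference, fetch-ponyfill is normally pulled in like this (a sketch based on the package's documented usage, not code taken from the actual app; the URL is a placeholder):

// fetch-ponyfill returns fetch (plus Request/Response/Headers) without
// touching the global scope, so it can stand in where native fetch is missing.
const { fetch, Request, Response, Headers } = require('fetch-ponyfill')();

fetch('https://backend.example.com/api/health')
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(err => console.error('Network request failed:', err));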
Fetch settings for all requests (combined in the sketch after this list):
Method is set per request, but I've seen it fail with different types (GET, POST, etc.)
Mode is set to 'same-origin'. I thought this was odd, since we are sending a request from one domain to another, but I tried setting it differently and it didn't change anything. Also, why would some requests work for some users but not for others?
Body is set per request, based on the data being sent.
Headers are usually just Accept and Content-Type, both set to JSON.
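Put together, a single proxy call looks roughly like this (a sketch; the URL and payload are placeholders, not the actual application code):

// Hypothetical proxy call illustrating the settings listed above.
fetch('https://backend.example.com/api/resource', {
  method: 'POST',              // set per request (GET, POST, ...)
  mode: 'same-origin',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ id: 123 })   // set per request
})
  .then(res => res.json())
  .catch(err => console.error('Network request failed:', err));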
I have tried researching this topic before, but most posts I found referenced React Native applications running on iOS, where you have to set some security permissions in the plist file to allow HTTP requests, or something to do with App Transport Security.
I have implemented logging at specific points for the data in Application Insights, and I can see that fetch() was called but then() was never reached; it went straight to the .catch(). So it's not even reaching the code that parses the response, because apparently no response came back (we then parse the JSON response and call other functions, but like I said, it doesn't even reach that point).
Which is also odd, since the response never comes back, yet the call fails (often) within 100 ms.
My suspicions:
Some consumers have some sort of browser add-on that is messing with the request. However, I run with uBlock Origin and HTTPS Everywhere and have not seen this error, and I'm not sure what else could be modifying requests in a way that makes them fail immediately.
The call goes through and then reaches an Azure Application Gateway, which might fail for some reason (too many connected clients, not enough ports, etc.) and return a response that immediately fails the fetch call without running the .then() on the response.
For #2, I remember tracing a network call that failed with Network Request Failed: it made it through the proxy -> made it through the Application Gateway -> hit the backend services -> the backend services sent a response. I am currently requesting access to the backend service logs to verify this on some more recent calls (last time I did this it was through a screen share with a backend developer), and hopefully to clear up the path back to the client (the ReactJS application). I do remember, though, that it reached the backend services successfully.
So I'm honestly not sure what's going on here. Does anyone have any insight?
Based on your excellent description and detective work, it's clear that the problem is between your Node app and the other domain. The other domain is throwing an error, and your proxy has no choice but to say that there's an error on the server. That's why it's always surfacing as a 500-series error, the Network Request Failed error that you're seeing.
It's an intermittent problem, so the error is inconsistent. It's a waste of your time to keep looking at the browser, because the problem is created beyond it, either in your proxy translating the request or on the remote server. You have to find that error.
Here's what I'd do...
Implement brute-force logging in your Node app. You can use Bunyan, or Winston, or just require('fs') and write out to a file when an error occurs, then look at the results. Only log when the response code from the other server is in the 400 or 500 range. Log the request object and the response object.
Something like this with Bunyan:
const bunyan = require('bunyan');
const log = bunyan.createLogger({ name: 'proxy' });

fetch(urlToRemoteServer)
  .then(res => {
    // Log error responses from the other domain before parsing the body
    if (res.status >= 400) log.info({ request: req, status: res.status });
    return res.json();
  })
  .then(body => whateverElseYoureDoing(body))
  .catch(err => {
    // The remote response is not in scope here, so log our request and the error
    log.info({ request: req, err: err });
  });
where res is the response we just got from the other domain and req is our request to them.
The logs on your Azure server will then have the entire request and response, and from those you can find commonalities and (🤞) the cause of the problem.

ERR_HTTP2_PROTOCOL_ERROR after authentication is done

I'm trying to use ITfoxtec.Identity.Saml2.MvcCore in a .NET Core 3.1 web application with an in-house IdP.
It works great on our test server (Windows Server 2012, hosted in IIS), but I can't get it to work on any other server.
This is what happens:
The initial call to the website is correctly identified as unauthenticated and the user is sent to the IdP, where they log in as usual. The SAML token is then posted back to the web application's assertion consumer service, where everything seems to do what it's supposed to: saml2AuthnResponse.Status has the status code Saml2StatusCodes.Success and the log file says "AuthenticationScheme: saml2 signed in". It then reads the ReturnUrl parameter and logs something like "Executing RedirectResult", but then it just stops. Nothing in the log file, nothing in the IIS logs. The user is met with the message
This site can’t be reached
...
ERR_HTTP2_PROTOCOL_ERROR
In short, every controller that has the [Authorize] attribute gives the ERR_HTTP2_PROTOCOL_ERROR error. When I remove all [Authorize] attributes the application works fine, although without authentication.
I've also tried the example TestWebAppCore application from ITfoxtec.Identity.Saml2's GitHub page and it gives the same error. It works on our 2012 test server but nowhere else.
Any ideas that I can try?
I think you need to trace the calls to see the actual HTTP requests and responses sent between the browser and the server. I usually use Fiddler for tracing requests/responses; remember to enable HTTPS tracing in Fiddler.
My first thought is that the problem has something to do with cookies, but that is only a guess...
You might be on to something. We disabled HTTP/2 on the server and were greeted instead by this message:
Bad Request - Request Too Long
HTTP Error 400. The size of the request headers is too long.
It uses 5 cookie chunks for the SAML data, for a total of 19,941 bytes, which is a bit too much. I've tried to make the application save the session data in classic session objects instead, but I can't seem to get it to work.
This is what I added to Startup.cs:
In ConfigureServices:
services.AddMvc()
    .AddSessionStateTempDataProvider();

services.AddSession(options =>
    options.Cookie.IsEssential = true
);

services.Configure<CookiePolicyOptions>(options =>
{
    options.CheckConsentNeeded = context => false;
    options.MinimumSameSitePolicy = SameSiteMode.None;
});
In Configure:
app.UseSession();
But it still fills up the header with cookies. What am I doing wrong? Is there another way to make the session cookies smaller?

UNAUTHORIZED_CLIENT ABP Framework

My client site is served from 10.0.0.70; the API is served from localhost:44376 on the same machine.
10.0.0.70:4200 opens, but when I click login it redirects to http://localhost:44376/account/login, which returns 500 Internal Server Error.
I am getting an UNAUTHORIZED_CLIENT error.
You probably changed the appsettings endpoints from localhost to 10.0.x after running the DbMigrator, so your client is still registered with the localhost:4200 redirect URI.
That's why you're getting the UNAUTHORIZED_CLIENT error. I assume you are at the beginning of the project; you can delete your database and run the DbMigrator again with your updated settings.
You can also check the application logs for the exact error messages; IdentityServer errors are logged in detail in the log file.
This usually happens when the CORS URL defined in the ClientCorsOrigins database table is not valid, e.g. https://www.domain.co.za is valid while https://www.domain.co.za/app is not. To accurately identify the cause of this error, open the logs in the Identity API; in my case the CORS URL was invalid.

IdentityServer4 Silent Renew Manually without client library

I'm writing authentication for Flutter, which doesn't have direct access to any of the JS clients, so I'm trying to handle all of the silent renew plumbing myself. I'm having a number of problems and I can't find anyone who has done it, so I figured I'd ask here:
When I create my iframe for silent renew I use the check_session path. This works fine, and I can send in my post message of "<client_id> <session_state>". However, 100% of the time I get back a response message of event.data == "changed". What am I doing wrong here?
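For context, the OP's check_session iframe (per the OIDC Session Management spec) expects a message of the form "client_id session_state" and answers "unchanged", "changed" or "error"; a minimal sketch of that exchange, with placeholder values and an element ID of my own:

// RP-side monitor: ask the OP's check_session iframe whether the session we
// obtained at sign-in is still the current one. All values are placeholders.
const opOrigin = 'https://localhost:44401';   // IdentityServer origin
const clientId = 'Admin';
const sessionState = '...';                   // session_state from the authorize response

const opFrame = document.getElementById('check-session-iframe');

window.addEventListener('message', (event) => {
  if (event.origin !== opOrigin) return;      // only accept messages from the OP
  // event.data is "unchanged", "changed" or "error"
  console.log('check_session result:', event.data);
});

opFrame.contentWindow.postMessage(clientId + ' ' + sessionState, opOrigin);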
When I do try to perform a renew without a prompt, I can't figure out what URL to put into the hidden iframe to make it work. The client uses code flow by default, so I'm passing a URL that looks like this:
https://localhost:44401/connect/authorize?client_id=Admin&redirect_uri=http%3A%2F%2Flocalhost%3A51190%2Fcallback.html&response_type=code%20token&scope=openid+profile+email+offline_access&code_challenge_method=S256&code_challenge=&prompt=none&state=
This always returns login_required. I also tried id_token for the response_type, but no dice there either; I just get a massively long error, and the IdentityServer console says the grant_type is invalid.
So what's the trick to getting this working?

How is "No 'Access-Control-Allow-Origin' header" a server issue if I can turn on a CORS extension and it works?

I am accessing a server that I can change in any way; it is only available to me.
GETs/POSTs work in curl, but I get an error in my Angular web app.
I read a ton of posts about this, and after nothing seemed to work I installed the CORS extension for Chrome, added *://*/*, and now I have to turn it on any time I want to access the server. But it works.
Most of the posts say this happens because the server does not allow access from outside origins. So I did some more digging and found the W3 CORS-enabled site, which specifies that a filter must be added.
However, when I get the error I can open the network panel and see that the response came back exactly as I expected, so why did I get an error?
This makes it seem like Chrome is the one refusing access.
Why must the server be changed to allow this?
Does this mean anyone with this Chrome extension can access my server?
It seems like it should be possible to configure a header in my $http.get that would allow this, but everyone keeps saying it's the server...
Cross-domain calls are not allowed by default. When the browser makes a call to a website or web API on a different domain than the one open in the browser, it includes an HTTP header, Origin, in the request. The server looks at this header and, if that origin is white-listed, includes the Access-Control-Allow-Origin header in the response. For non-simple requests this happens first in a preflight request using the HTTP OPTIONS method, before the actual GET/POST call. So for CORS to work, the server has to allow the client's domain; only then will the browser let the page read the response.
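To make that concrete, here is a minimal server-side sketch (Express is only an assumed example, it is not mentioned in the question; the allowed origin is a placeholder for wherever the Angular app is served from):

const express = require('express');
const app = express();

// Allow the Angular app's origin to call this API and read the responses.
app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'http://localhost:4200');
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type, Accept');
  if (req.method === 'OPTIONS') return res.sendStatus(204);  // answer the preflight
  next();
});

app.get('/api/data', (req, res) => res.json({ ok: true }));

app.listen(3000);

Note that the Chrome extension only injects a similar header into responses inside your own browser, which is why it works for you but does not change what the server sends to anyone else.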
