I'm encountering a pretty strange issue in IE11 where the browser is overriding the Authorization header in my requests even though I am setting it via AngularJS.
Basically, I have an HTTP interceptor registered for all requests that looks like this:
AuthInterceptorService.request = function (config) {
    // Attach the bearer token to every outgoing request.
    config.headers.Authorization = "Bearer " + bearerToken;
    return config; // a request interceptor must return the config object
};
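For completeness, here is a minimal sketch of how an interceptor like this is typically registered in AngularJS. The module name 'app' is an assumption; the factory name matches the snippet above:
angular.module('app').config(['$httpProvider', function ($httpProvider) {
    // Register the interceptor by name so it runs for every $http request.
    $httpProvider.interceptors.push('AuthInterceptorService');
}]);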
This works great in all browsers (even IE, under certain conditions). I have my app set up in IIS to allow anonymous authentication, and I have Basic/integrated authentication disabled for this subsite; however, the parent configuration has Windows authentication enabled.
What happens occasionally is that the browser makes a request to the root URL for a static file (say, /favicon.ico). This request is denied with a 401. The browser responds with negotiated authentication and gets the favicon. At this point, all other browsers still let my code set the Authorization header, but once this integrated authentication happens in IE, the Authorization header seems to get stuck: no matter what my code does, the Authorization header always uses integrated authentication. This causes all requests to my API to fail because no bearer token is present.
I was able to work around the favicon issue by specifying a more local favicon (where static files can be served anonymously), but I am wondering if there is a less hacky solution to this issue. Can I somehow convince IE to let me set the Authorization header even if Windows authentication has taken place on a previous request?
Note: I found this question which seems to be related (maybe the same underlying cause).
If you look at the Negotiate Operation Example in RFC 4559, it describes the SPNEGO pseudo-mechanism that IE uses to negotiate the security mechanism when authenticating with IIS.
The first time the client requests the document, no Authorization header is sent, so the server responds with:
S: HTTP/1.1 401 Unauthorized
S: WWW-Authenticate: Negotiate
The client will obtain the user credentials using the SPNEGO GSSAPI mechanism type to identify the user and generate a GSSAPI message to be sent to the server with a new request, including the following Authorization header:
C: GET dir/index.html
C: Authorization: Negotiate a87421000492aa874209af8bc028
The server will decode the gssapi-data and pass this to the SPNEGO GSSAPI mechanism in the gss_accept_security_context function. If the context is not complete, the server will respond with a 401 status code with a WWW-Authenticate header containing the gssapi-data.
S: HTTP/1.1 401 Unauthorized
S: WWW-Authenticate: Negotiate 749efa7b23409c20b92356
The client will decode the gssapi-data, pass this into gss_init_security_context, and return the new gssapi-data to the server.
So I don't think it's possible for you to intervene while the negotiation takes place, as the process is handled internally by the browser.
I added an additional API to Duende IdentityServer 6.2 as described here. Then I tried to access it from a sample app, using a typed HttpClient with their own AccessTokenManagement library (aka Identity.Model), pretty much following their simple example. I use the Authorization Code flow; everything is pretty much simple and default.
It works well as long as both server and client are on the same dev machine under localhost. As soon as I publish IdentityServer to IIS, the API stops working, while the rest still works well (I can be authenticated, and I see in Fiddler that token exchanges work normally).
The call to the API consists of two calls:
1. Calling /connect/token using the refresh token; the server returns an access token.
2. Calling my endpoint using this new access token.
The flow fails at step 1: the call to /connect/token is already unauthorized, and I can't understand why. The "good" and "bad" calls look the same; I cannot see any differences. The previous call a moment earlier to /connect/userinfo consists of the same two steps, and it works. Logs on both server and client give no clues.
No reverse proxies, just a plain simple URI. Automatic key management is enabled and the keys are in a SQL table shared between dev and the published server. ASP.NET Core Data Protection is enabled and its keys are shared as well.
Relevant parts of the logs are below. I noticed that "No endpoint entry found for request path" is specific to IdentityServer and doesn't actually mean the endpoint was not found; it was found but not processed. I also noticed richer response headers in the bad request, and a log entry about "Cookies signed in" in the good request, but I'm not sure what that means or whether it's relevant.
I'm running out of ideas.
Bad response from IIS while trying to get a new access token:
Proper response while developing:
///////Relevant part of log for BAD request
|Duende.AccessTokenManagement.OpenIdConnect.UserAccessAccessTokenManagementService|Token for user test#test.com needs refreshing.
|Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationHandler|AuthenticationScheme: cookie was successfully authenticated.
|Duende.AccessTokenManagement.OpenIdConnect.UserTokenEndpointService|refresh token request to: https://auth.mysite.org/connect/token
|Duende.AccessTokenManagement.OpenIdConnect.UserAccessAccessTokenManagementService|Error refreshing access token. Error = Unauthorized
|System.Net.Http.HttpClient.IdsService.ClientHandler|Sending HTTP request POST https://auth.mysite.org/mycontroller/myaction
|System.Net.Http.HttpClient.IdsService.ClientHandler|Received HTTP response headers after 117.7278ms - 401
///////Same part of GOOD request
|Duende.AccessTokenManagement.OpenIdConnect.UserAccessAccessTokenManagementService|Token for user test#test.com needs refreshing.
|Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationHandler|AuthenticationScheme: Cookies was successfully authenticated.
|Duende.AccessTokenManagement.OpenIdConnect.UserTokenEndpointService|refresh token request to: https://localhost:5001/connect/token
|Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationHandler|AuthenticationScheme: Cookies signed in.
|System.Net.Http.HttpClient.IdsService.ClientHandler|Sending HTTP request POST https://localhost:5001/mycontroller/myaction
|System.Net.Http.HttpClient.IdsService.ClientHandler|Received HTTP response headers after 1994.9611ms - 200
///////Server log during BAD request
Duende.IdentityServer.Hosting.EndpointRouter No endpoint entry found for request path: "/mycontroller/myaction"
Duende.IdentityServer.Hosting.LocalApiAuthentication.LocalApiAuthenticationHandler HandleAuthenticateAsync called
Duende.IdentityServer.Hosting.LocalApiAuthentication.LocalApiAuthenticationHandler AuthenticationScheme: "IdentityServerAccessToken" was not authenticated.
Duende.IdentityServer.Hosting.LocalApiAuthentication.LocalApiAuthenticationHandler AuthenticationScheme: "IdentityServerAccessToken" was challenged.
Okay, found it. Thankfully, I looked at Fiddler's WebView and saw a familiar picture!
Then I found this topic. The solution was disabling Basic authentication in the IIS settings. The access token request carries a Basic authentication header (the client credentials), and it seems IIS intercepts it. It's still a bit unclear why the other parts of the flow worked.
When trying to use $.ajax to fetch some content from other websites into my website, I got this error:
Failed to load https://www.pinterest.com/: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8100' is therefore not allowed access.
I know that if the target website doesn't allow localhost:8100 to fetch its data, I cannot fetch it client-side in the browser.
However, I found that mobile apps (not mobile browsers, but Android/iOS applications) do not have this issue; they can simply get the website content using their built-in HTTP GET functions.
So I want to ask: why do mobile apps not encounter the CORS issue, when they can fetch the web content simply with the built-in HTTP GET function?
Thanks.
CORS is enforced by the browser to fulfill the security standard browsers have to meet. It does not affect requests made programmatically from any language, like a curl call in bash.
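For instance, the same request that the browser blocks runs fine from a server-side script, because there is no browser around to enforce CORS. A minimal sketch, assuming Node 18+ where fetch is built in (the URL is the one from the question):
// No CORS check happens here: CORS is enforced by browsers, not by the HTTP protocol.
fetch('https://www.pinterest.com/')
    .then(res => console.log('status:', res.status))
    .catch(err => console.error(err));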
This is how CORS works, based on Wikipedia:
The browser sends the OPTIONS request with an Origin HTTP header. The value of this header is the domain that served the parent page. When a page from http://www.example.com attempts to access a user's data in service.example.com, the following request header would be sent to service.example.com: Origin: http://www.example.com.
The server at service.example.com may respond with any of the following (see the sketch after this list):
An Access-Control-Allow-Origin (ACAO) header in its response indicating which origin sites are allowed. For example Access-Control-Allow-Origin: http://www.example.com
An error page if the server does not allow the cross-origin request
An Access-Control-Allow-Origin (ACAO) header with a wildcard that allows all domains: Access-Control-Allow-Origin: *
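To make that exchange concrete, here is a minimal sketch of a server answering the preflight by hand with the headers listed above, using the plain Node http module (the port and the allowed origin are made-up values):
const http = require('http');
http.createServer((req, res) => {
    if (req.method === 'OPTIONS') {
        // Preflight: tell the browser which origin, methods and headers are allowed.
        res.writeHead(204, {
            'Access-Control-Allow-Origin': 'http://www.example.com',
            'Access-Control-Allow-Methods': 'GET, POST, DELETE',
            'Access-Control-Allow-Headers': 'Content-Type'
        });
        res.end();
        return;
    }
    // Actual request: the ACAO header must be present on this response as well.
    res.writeHead(200, {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': 'http://www.example.com'
    });
    res.end('{"ok":true}');
}).listen(8080);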
The way CORS works means it is optional: browsers enforce it to prevent JavaScript AJAX calls from performing malicious requests, but other kinds of consumers built by hand don't need to enforce it.
Consider this example:
You are the owner of somesite.com
Users authenticate to your site using the traditional cookie method
The user logs into anothersite.com, built by an attacker. This site has the following code:
<script>fetch('http://somesite.com/posts/1', { method: 'DELETE' });</script>
... effectively performing a request to your site and doing bad things.
Happily, the browser performs a preflight request when it sees a cross-domain request like this, and if your site does not respond saying that requests coming from anothersite.com are OK, you are protected by default from a potential attack.
This is why CORS only makes sense in the context of a browser. JavaScript you send to the browser cannot (at least not easily) circumvent CORS, because the only API that allows you to perform requests from the browser is written in stone. Additionally, there is no local storage and there are no cookies outside of the browser.
Corollary: enforcing CORS is a deliberate action of the requester, or whatever is making the requests for you, not the sender. JavaScript APIs in browsers enforce it; other languages don't need to, for the reasons explained.
When running on a device, your files are served over the file:// protocol, not http://, and your origin will therefore not exist. That's why the request from the native device does not trigger CORS.
I'm having an issue with a web app I'm building. The web app consists of an Angular 4 frontend and a .NET Core RESTful API backend. One of the requirements is that requests to the backend need to be authenticated using SSL mutual authentication, i.e., client certificates.
Currently I'm hosting both the frontend and the backend as Azure app services and they are on separate subdomains.
The backend is set up to require client certificates by following this guide which I believe is the only way to do it for Azure app services:
https://learn.microsoft.com/en-us/azure/app-service/app-service-web-configure-tls-mutual-auth
When the frontend makes requests to the backend, I set withCredentials to true, which, according to the documentation, should also work with client certificates.
The XMLHttpRequest.withCredentials property is a Boolean that indicates whether or not cross-site Access-Control requests should be made using credentials such as cookies, authorization headers or TLS client certificates. Setting withCredentials has no effect on same-site requests.
Relevant code from the frontend:
const headers = new Headers({ 'Content-Type': 'application/json' });
const options = new RequestOptions({ headers, withCredentials: true });
let apiEndpoint = environment.secureApiEndpoint + '/api/transactions/stored-transactions/';
return this.authHttp.get(apiEndpoint, JSON.stringify(transactionSearchModel), options)
.map((response: Response) => {
return response.json();
})
.catch(this.handleErrorObservable);
On Chrome this works: when a request is made, the browser prompts the user for a certificate, the certificate gets included in the preflight request, and everything works.
For all the other major browsers, however, this is not the case. Firefox, Edge and Safari all fail the preflight request because the server shuts down the connection when they don't include a client certificate.
Browsing directly to an API endpoint makes every browser prompt the user for a certificate, so I'm pretty sure this is specifically about how most browsers handle preflight requests with client certificates.
Am I doing something wrong? Or are the other browsers doing the wrong thing by not prompting for a certificate when making these requests?
I need to support browsers other than Chrome, so I have to solve this somehow.
I've seen similar issues solved by having the backend allow rather than require client certificates. The only problem is that I haven't found a way to actually do that with Azure App Services; it's either require or don't require.
Does anyone have any suggestions on how I can move on?
See https://bugzilla.mozilla.org/show_bug.cgi?id=1019603 and my comment in the answer at CORS with client https certificates (I had forgotten I’d seen this same problem reported before…).
The gist of all that is, the cause of the difference you’re seeing is a bug in Chrome. I’ve filed a bug for it at https://bugs.chromium.org/p/chromium/issues/detail?id=775438.
The problem is that Chrome doesn’t follow the spec requirements on this, which mandate that the browser not send TLS client certificates in preflight requests; so Chrome instead does send your TLS client certificate in the preflight.
Firefox/Edge/Safari follow the spec requirements and don’t send the TLS client cert in the preflight.
Update: The Chrome screen capture added in an edit to the question shows an OPTIONS request for a GET request, and a subsequent GET request — not the POST request from your code. So perhaps the problem is that the server forbids POST requests.
The request shown in https://i.stack.imgur.com/GD8iG.png is a CORS preflight OPTIONS request the browser automatically sends on its own before trying the POST request in your code.
The Content-Type: application/json request header your code adds is what triggers the browser to make that preflight OPTIONS request.
It’s important to understand the browser never includes any credentials in that preflight OPTIONS request — so the server the request is being sent to must be configured to not require any credentials/authentication for OPTIONS requests to /api/transactions/own-transactions/.
However, from https://i.stack.imgur.com/GD8iG.png it appears that server is forbidding OPTIONS requests to that /api/transactions/own-transactions/. Maybe that’s because the request lacks the credentials the server expects or maybe it’s instead because the server is configured to forbid all OPTIONS requests, regardless.
So the result of that is, the browser concludes the preflight was unsuccessful, and so it stops right there and never moves on to trying the POST request from your code.
Given what’s shown in https://i.stack.imgur.com/GD8iG.png it’s hard to understand how this could actually be working as expected in Chrome — especially given that no browsers ever send credentials of any kind in the preflight requests, so any possible browsers differences in handling of credentials would make no difference as far as the preflight goes.
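To illustrate the point above that the preflight carries no credentials, here is a sketch of letting OPTIONS requests through before any authentication runs. This is deliberately written as an Express example rather than the ASP.NET Core / Azure App Service setup from the question, and the route path and allowed origin are made-up values:
const express = require('express');
const app = express();
// Answer CORS preflights before any authentication middleware runs.
app.options('/api/*', (req, res) => {
    res.set({
        'Access-Control-Allow-Origin': 'https://frontend.example.com',
        'Access-Control-Allow-Methods': 'GET, POST',
        'Access-Control-Allow-Headers': 'Content-Type, Authorization'
    });
    res.sendStatus(204);
});
// ...authentication middleware and the real /api routes go below this point...
app.listen(3000);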
I'm making a React app and trying to use Auth0 to authenticate. After trying to log in, it returns this:
XMLHttpRequest cannot load https://my-domain.auth0.com/usernamepassword/login. Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Credentials' header in the response is '' which must be 'true' when the request's credentials mode is 'include'. Origin 'http://localhost:3000' is therefore not allowed access. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
I thought it would be related to this: CORS problems with Auth0 and React, but I have both http://localhost:3000 and http://localhost:3000/login in the 'Allowed Origins (CORS)' field in Auth0's settings (and yes, I'm using the correct client ID as well).
I tried putting http://localhost:3000/ and http://localhost:3000/login in the 'Allowed Callback URLs' (I don't know exactly what that does), but that didn't work either.
When I use the social connection (Google), it lets me log in after putting http://localhost:3000/login in the Allowed Callback URLs.
But it still won't work for a normal (non-social) user logging in.
Any help?
If it makes a difference:
The Auth0 logs show entries for the social login, but there are no logs at all when I try to connect otherwise.
I think this may be related: I also get the following warning every time I load the page:
There was an error fetching the SSO data. This could simply mean that there was a problem with the network. But, if a "Origin" error has been logged before this warning, please add "http://localhost:3000" to the "Allowed Origins (CORS)" list in the Auth0 dashboard: ...(link to my dash)
I get a 404 from the gravatar website
Also I get these errors (may not be related):
Refused to set unsafe header "accept-encoding"
Refused to set unsafe header "user-agent"
Something was wrong with the client in Auth0. I don't know what it was, but I built an Angular 4 app, connected it to the same client in Auth0, and got the same errors. I then tried deleting the client in Auth0 and making a new one, and now it works. I have no idea what was causing the error, but creating a new client and connecting to that one fixed the issue.
I followed the instructions in the readme of the stormpath-sdk-react GitHub repository to set up a basic login form. The form displays, but I am immediately greeted by errors in the console:
XMLHttpRequest cannot load https://{redacted}.apps.stormpath.io/me. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 403.
I get an identical error for the login endpoint.
The Client API Guide indicates that client endpoints have to be configured to allow traffic from a particular domain, but does not provide any instructions for how to do this:
Applications that use the Client API have two relevant configuration parameters, both found on your Application’s page in the Stormpath Admin Console:...
Authorized Callback URIs: This list should include any URIs that your users will be returned to after they have completed authentication with an outside provider, for example as a part of the social login flow. For example, if you do not specify a redirect URI when you kick off the social login flow, the user will be redirected to the first URI in this list.
Authorized Origin URIs: This list should include the application’s URL, or whatever URL will be included in the Origin header of requests sent to the Client API.
What do I need to do to get this working?
To fix this, you can log in to https://api.stormpath.com, navigate to Applications > My Application, and modify the Authorized Origin URIs to include http://localhost:3000.
Stormpath seems to use a setup pretty similar to many API services. Like the directions say, go to your Stormpath Admin Console and put your hostname (http://localhost:3000) in the relevant fields for both settings quoted above (Authorized Callback URIs and Authorized Origin URIs).
Doing so tells the Stormpath API to allow requests coming from your application's origin.
The Cross-Origin Resource Sharing (CORS) mechanism gives web servers cross-domain access controls, which enable secure cross-domain data transfers.
Since the server serving your React app differs from the server you are requesting data from (Node or something else), the browser enforces CORS (even a different subdomain or port counts as a different origin).
index.js (server):
const express = require('express');
const cors = require('cors');
const app = express();
app.use(cors()); // enable CORS for all routes, allowing any origin by default
// ...the rest of the middleware and routes...
For more info about using cors, see the cors package on npm.
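If you want to allow only your frontend's origin instead of every origin, the cors package accepts an options object. A minimal sketch; the http://localhost:3000 value is taken from the error message above, so adjust it to your own host:
app.use(cors({ origin: 'http://localhost:3000' }));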