WCF RIA Services queries that throw exceptions have caching problems - Silverlight

We have a problem with HTTP Response caching when using WCF RIA Services with Silverlight.
Server side, we have a simple DomainService GET method with no caching specified, like this:
[OutputCache(OutputCacheLocation.None)]
public IQueryable<SearchResults> GetSearchResults(string searchText);
This throws a DomainException when the user is not authenticated (i.e. when the forms authentication cookie expires). This is by design.
But when the user is re-authenticated and the query is called again with the same 'searchText' parameter, the query never reaches the server (no breakpoint is hit; Fiddler shows no HTTP request being sent).
I think this is because, when the exception is thrown on the server, the HTTP response has its 'Cache-Control' header set to 'private', so when the client performs the same query later (once the user is logged in), the browser doesn't even send the request to the server.
If we enter a different search parameter, the query is re-executed without a problem.
Is there any way of ensuring the HTTP response always specifies no caching, even when the call does not return normally?
UPDATE1
The problem only occurs when deployed to IIS; when testing from Visual Studio with either Cassini or IIS Express, it works fine.
UPDATE2
I updated the question to reflect new knowledge.

You shouldn't be throwing a DomainException for authorization errors. Due to the way Silverlight handles faults, these responses can still be cached by your browser. Instead, throw an UnauthorizedAccessException from your DomainService and that should fix the caching error on the client.
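A minimal sketch of that change, reusing the method from the question (the authentication check via ServiceContext and the data context are illustrative assumptions, not from the question):

[OutputCache(OutputCacheLocation.None)]
public IQueryable<SearchResults> GetSearchResults(string searchText)
{
    // UnauthorizedAccessException signals an authorization failure
    // instead of a DomainException fault the browser may cache
    if (!this.ServiceContext.User.Identity.IsAuthenticated)
    {
        throw new UnauthorizedAccessException("User is not authenticated.");
    }

    // Hypothetical data context standing in for the real search query
    return this.ObjectContext.SearchResults
        .Where(r => r.Text.Contains(searchText));
}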

Related

Network request failed from fetch in reactjs app

I am using fetch in a NodeJS application. Technically, I have a ReactJS front-end calling the NodeJS backend (as a proxy), and then the proxy calls out to backend services on a different domain.
However, from logging errors from consumers (I haven't been able to reproduce this issue myself) I see that a lot of these proxy calls (using fetch) throw an error that just says Network Request Failed, which is of no help. Some context:
This only occurs on a subset of all calls (let's say 5% of traffic)
Users that encounter this error can often make the same call again some time later (within the next couple of minutes/hours/days) and it will go through
From Application Insights, I can see no correlation between browsers, locations, etc
Calls often return fast, like < 100 ms
All calls are HTTPS; none are HTTP
We have a fetch polyfill from fetch-ponyfill that takes over if fetch is not available (Internet Explorer). I tested this package itself and the calls went through fine. As mentioned, the error also occurs on browsers that do support fetch, so I don't think this is the cause.
Fetch settings for all requests
Method is set per request, but I've seen it fail on different types (GET, POST, etc)
Mode is set to 'same-origin'. I thought this was odd, since we are sending a request from one domain to another, but I tried setting it differently and it didn't change anything. Also, why would the same requests work for some users but not for others?
Body is set per request, based on the data being sent.
Headers are usually just Accept and Content-Type, both set to JSON.
I have tried researching this topic before, but most posts I found referenced React Native applications running on iOS, where you have to set some security permissions in the plist file to allow HTTP requests, or something to do with transport security.
I have implemented logging at specific points in Application Insights, and I can see that fetch() was called but then() was never reached; it went straight to .catch(). So it never even reaches the code that parses the response, because apparently no response came back (we then parse the JSON response and call other functions, but as I said, it doesn't even reach this point).
Which is also odd, since no response ever comes back, yet the call fails (often) within 100 ms.
My suspicions:
Some consumers have some sort of add-on for their browser that is messing with the request. Although I run with uBlock Origin and HTTPS Everywhere, I have not seen this error. I'm not sure what else could be modifying requests in a way that would cause them to fail immediately.
The call goes through and reaches an Azure Application Gateway, which might fail for some reason (too many connected clients, not enough ports, etc.) and return a response that immediately fails the fetch call without running the .then() on the response.
For #2, I remember I had traced a network call that failed and returned Network Request Failed: it made it through the proxy -> made it through the Application Gateway -> hit the backend services -> the backend services sent a response. I am currently requesting access to backend service logs in order to verify this on some more recent calls (last time I did this through a screen share with a backend developer), and hopefully clear up the path back to the client (the ReactJS application). I do remember, though, that it made it to the backend services successfully.
So I'm honestly not sure what's going on here. Does anyone have any insight?
Based on your excellent description and detective work, it's clear that the problem is between your Node app and the other domain. The other domain is throwing an error, and your proxy has no choice but to report that there's an error on the server. That's why it's always a 500-series error, the Network Request Failed error that you're seeing.
It's an intermittent problem, so the error is inconsistent. It's a waste of your time to continue to look at the browser because the problem will have been created beyond that, either in your proxy translating that request or on the remote server. You have to find that error.
Here's what I'd do...
Implement brute-force logging in your Node app. You can use Bunyan or Winston, or just require('fs') and write out to a file when an error occurs. Then look at the results. Only log when the response code from the other server is in the 400 or 500 range. Log the request object and the response object.
Something like this with Bunyan:
fetch(urlToRemoteServer, requestOptions)
    .then(res => {
        // Log any 4xx/5xx response from the other domain before parsing it
        if (res.status >= 400) {
            log.info({request: requestOptions, response: res});
        }
        return res.json();
    })
    .then(body => whateverElseYoureDoing(body))
    .catch(err => {
        // Network-level failure: no response object exists on this path
        log.info({request: requestOptions, err: err});
    });
where requestOptions describes our request to the other domain. Note that inside .catch() there is no response to log; if the error is a bad status code from the other server, log it in the first .then() before parsing.
The logs on your Azure server will then have the entire request and response. From these you can find commonalities and (🤞) the cause of the problem.

ERR_HTTP2_PROTOCOL_ERROR after authentication is done

I'm trying to use ITfoxtec.Identity.Saml2.MvcCore on a .NET Core 3.1 web application using an in-house IdP.
It works great on our test server (Windows Server 2012, hosted in IIS), but I can't get it to work on any other server.
This is what happens:
The initial call to the website is correctly identified as unauthenticated, and the user is sent to the IdP, where they log in as usual. The SAML token is then posted back to the web application's assertion consumer service, where everything seems to do what it's supposed to: saml2AuthnResponse.Status has the status code Saml2StatusCodes.Success and the log file says "AuthenticationScheme: saml2 signed in". It then reads the ReturnUrl parameter and logs something like "Executing RedirectResult", but then it just stops. Nothing in the log file, nothing in the IIS logs. The user is met by the message
This site can’t be reached
...
ERR_HTTP2_PROTOCOL_ERROR
In short, every controller that has the [Authorize] attribute gives the ERR_HTTP2_PROTOCOL_ERROR error. When I remove all [Authorize] attributes, the application works great, although without authentication.
I've also tried the example TestWebAppCore application from ITfoxtec.Identity.Saml2's GitHub page, and it gives the same error. It works on our 2012 test server but nowhere else.
Any ideas that I can try?
I think you need to trace the calls to see the actual HTTP requests and responses sent between the browser and the server. I usually use Fiddler for tracing requests/responses. Remember to enable HTTPS tracing in Fiddler.
My first thought is that the problem can have something to do with cookies. But it is only a guess...
You might be on to something. We disabled HTTP/2 on the server and were greeted instead by this message:
Bad Request - Request Too Long
HTTP Error 400. The size of the request headers is too long.
It uses 5 cookie chunks for the SAML data, for a total of 19941 bytes, which is a bit too much. I've tried to make the application save the session data in classic session objects instead, but I can't seem to get it to work.
This is what I added to Startup.cs:
In ConfigureServices:
services.AddMvc()
    .AddSessionStateTempDataProvider();

services.AddSession(options =>
    options.Cookie.IsEssential = true
);

services.Configure<CookiePolicyOptions>(options =>
{
    options.CheckConsentNeeded = context => false;
    options.MinimumSameSitePolicy = SameSiteMode.None;
});
In Configure:
app.UseSession();
But it still fills up the header with cookies. What am I doing wrong? Is there another way to make the session cookies smaller?
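If the oversized cookie is the authentication ticket itself, one option might be to keep the ticket server-side so the cookie only carries a small key. A hedged sketch, not something from this thread: ITicketStore and CookieAuthenticationOptions.SessionStore are real ASP.NET Core APIs, but MemoryCacheTicketStore is an illustrative class name and the "saml2" scheme name is only inferred from the log line quoted above.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.Extensions.Caching.Memory;

// Stores authentication tickets in server memory; the cookie holds a GUID key
public class MemoryCacheTicketStore : ITicketStore
{
    private readonly IMemoryCache _cache =
        new MemoryCache(new MemoryCacheOptions());

    public Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        var key = Guid.NewGuid().ToString();
        _cache.Set(key, ticket, TimeSpan.FromHours(1));
        return Task.FromResult(key);
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket)
    {
        _cache.Set(key, ticket, TimeSpan.FromHours(1));
        return Task.CompletedTask;
    }

    public Task<AuthenticationTicket> RetrieveAsync(string key)
    {
        return Task.FromResult(_cache.Get<AuthenticationTicket>(key));
    }

    public Task RemoveAsync(string key)
    {
        _cache.Remove(key);
        return Task.CompletedTask;
    }
}

// In ConfigureServices, attach the store to the cookie handler that was
// registered for the saml2 scheme (scheme name may need adjusting):
services.PostConfigure<CookieAuthenticationOptions>("saml2", options =>
    options.SessionStore = new MemoryCacheTicketStore());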

$http.post cross domain request not working in Internet explorer (Network Error 0x80070005, Access is denied)

When I make an $http.post request with the "withCredentials" property set to true, the request works fine in Chrome and Firefox. However, I'm getting the error below in IE:
XMLHttpRequest: Network Error 0x80070005, Access is denied.
I noticed that if I enable the "Access data sources across domains" setting in IE, the error gets resolved. However, I need to find an alternative solution, because I obviously can't ask users to enable that setting.
I noticed that an $http.get request to the same domain works in IE with no issue; the issue is only with the $http.post request. The OPTIONS request gets a 500 Internal Server Error, and I see the request and response headers below:
Note:
I do have the necessary custom headers, and I can see them in Chrome when the OPTIONS request succeeds. The headers that I see in Chrome are listed below:
Could you please let me know if I'm missing something that would make the request work in IE without having to enable "Access data sources across domains"?
Internet Explorer 9 doesn't support cookies in CORS requests. The withCredentials property of the $http arguments attempts to send cookies. I don't think there's any way to fix it with headers. IE10+ should work by default, just be sure that you are not in compatibility mode. CORS isn't fully implemented in IE10 either, but the type of request you are trying to do should work.
You didn't mention what the nature of your web app is, but it impacts the type of workaround you will need for IE9. If possible, see if you can refactor your code to use a GET request instead (again, I don't know what you are trying to do via AJAX so this may be impossible).
You may be able to use Modernizr or something similar to detect if the browser supports CORS. If it is not supported, send the request without AJAX and have a page refresh.
Another alternative if you really want to use AJAX is to set up a proxy on your web server, i.e. the server on the same domain. Instead of making the cross-origin request directly, you make the AJAX request to your same-origin server, which then makes the request to the cross-origin server for you. The server won't have CORS issues. This solution assumes, of course, that you have some server-side scripting going on such as PHP, Node or Java.
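If the same-origin server happens to be .NET, a minimal forwarding action might look like the sketch below (the ASP.NET Web API flavor is an assumption, as are the route and target URL; the same pattern applies in PHP, Node or Java):

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class ProxyController : ApiController
{
    // One shared HttpClient for all forwarded calls
    private static readonly HttpClient client = new HttpClient();

    // The browser POSTs to api/proxy on its own origin, so no CORS or
    // withCredentials is involved; the server relays the call instead.
    [HttpPost]
    public async Task<HttpResponseMessage> Post()
    {
        var upstream = new HttpRequestMessage(
            HttpMethod.Post,
            "https://other-domain.example/api/endpoint") // placeholder URL
        {
            Content = Request.Content // relay the original request body
        };
        return await client.SendAsync(upstream);
    }
}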

working with $http.post function

I want to save data using AngularJS and a REST API. I am sending an object in the data parameter.
I tried both the direct $http.post() method and the $http() method, but neither works.
The error is always "405 Method Not Allowed".
I am running on my local machine.
Edit:
Eventually, after some modifications (I specified "localhost:xxx" before 'api/abc'), I now get the error "The requested resource does not support the http method 'POST'".
The reason is that the API you're using does not support POST requests to the URL you're trying to POST to.
More info from http://www.checkupdown.com/status/E405.html below:
All Web servers can be configured to allow or disallow any method. For example if a Web server is 'read-only' (no client can modify URL resources on the Web server), then it could be set up to disallow the PUT and DELETE methods. Similarly if there is no user input (all the Web pages are static), then the POST method could be disallowed. So 405 errors can arise because the Web server is not configured to take data from the client at all.
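As a concrete illustration: the follow-up error message ("The requested resource does not support the http method 'POST'") is the standard ASP.NET Web API 405 response, which suggests the route exists but the controller has no POST action. A hypothetical controller sketch (names and types are assumptions, not from the question):

using System.Web.Http;

public class AbcController : ApiController
{
    // GET api/abc: the only verb the controller answered before
    public IHttpActionResult Get()
    {
        return Ok();
    }

    // Without an action like this, POST api/abc returns
    // "The requested resource does not support the http method 'POST'"
    [HttpPost]
    public IHttpActionResult Post([FromBody] object value)
    {
        // save the posted object here...
        return Ok(value);
    }
}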

Force.com callout: Is there a way to get the full response from the target server

When calling a web service from Force.com, I am getting:
System.CalloutException: Web service callout failed: Unexpected
element. Parser was expecting element
'http://schemas.xmlsoap.org/soap/envelope/:Envelope' but found ':HTML'
The network guys at the other end have asked to see the full response that Salesforce is getting from their server.
Is there a way to achieve that? I have tried running with debug level 'Finest' from execute anonymous, but that yields the same little message with no further detail.
The message you are getting is generated when Salesforce tries to parse the response, and unfortunately the raw response isn't logged.
The parsing error happens because instead of a SOAP message response you are getting an HTML page. This usually happens when you are accessing a service that is protected behind a firewall, which means you may be able to see the service when browsing on your computer. But remember that Salesforce is outside your firewall, so any communication from Salesforce to your service will be blocked.
There are a couple of ways to address this, but this wiki topic from Salesforce best covers the options:
http://www.salesforce.com/us/developer/docs/api/Content/sforce_api_om_outboundmessaging_security.htm
The above is specific to outbound messaging but essentially the technology issues are the same.
Don't forget that Apex includes an HttpRequest class that works at a lower level than the SOAP APIs. You should be able to write a test method that sends a hard-coded XML request to the server and dumps the HttpResponse so you can see it.
Adding my own best answer, based on some internet research:
You can use an external tool like Runscope as a web service proxy to automatically forward requests, pass through responses, and view the XML SOAP messages. This is not a native solution on SFDC, but it does the job.
https://www.runscope.com/
The issue is that Force.com is trying to parse a SOAP response that is actually just HTML. This sometimes happens when an error occurred server-side and the response is meant for a browser to display, rather than being an exception report sent back as a properly formatted SOAP response.
If they can't figure out why they aren't sending back a consumable SOAP response, you can try using other tools (outside of Force.com) to make the same web service call from your browser and see what the HTML actually says on return.
