Auto propagate traces in Envoy - distributed

Based on the documentation, Envoy is capable of generating and propagating traces to a Jaeger service cluster.
It also states that
in order to fully take advantage of tracing, the application has to propagate the trace headers that Envoy generates while making calls to other services.
So assume a client calls -> Service A -> which calls Service B, with Service A proxied behind Envoy.
If Service A calls Service B, this call from A to B would also have to go through Envoy, right? So wouldn't the trace ID that Envoy originally generated when the client called Service A be propagated to Service B?
Why does the application (Service A) need to forward these headers?

When doing tracing in a service mesh, behind proxies, the trace ID generated upon the initial client call is propagated automatically only as long as the call goes from proxy to proxy.
So:
Client sends request
Request hits some ingress proxy. A trace ID is generated
Request is routed to proxy A in front of Service A. The trace ID is propagated from the ingress proxy to proxy A
Request hits microservice A. The trace ID is propagated to here in the headers.
Microservice A now has full control over the request. If the service discards all the HTTP headers when making its outgoing request to Service B, then the trace ID will also be discarded.
To get around this, Microservice A just needs to know which headers carry the trace IDs, how to append to them, and to keep enough state to make sure they make it onto outgoing requests. Then you'll get a full transaction chain.
Without the service propagating the headers, tracing basically gives you each path that ends in a microservice. Still useful, but not as complete a picture.
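To make that concrete, here's a minimal sketch of what Service A has to do, assuming a Node/Express service that calls Service B with node-fetch; SERVICE_B_URL and the routes are placeholders, and the header names are the B3/Zipkin set that Envoy generates and propagates:

const express = require('express');
const fetch = require('node-fetch');

const app = express();

// Trace headers Envoy generates and propagates (B3/Zipkin set).
const TRACE_HEADERS = [
  'x-request-id',
  'x-b3-traceid',
  'x-b3-spanid',
  'x-b3-parentspanid',
  'x-b3-sampled',
  'x-b3-flags',
  'x-ot-span-context',
];

const SERVICE_B_URL = 'http://service-b.local'; // placeholder

app.get('/work', async (req, res) => {
  // Copy the inbound trace headers, untouched, onto the outbound call.
  const headers = {};
  for (const name of TRACE_HEADERS) {
    if (req.headers[name]) headers[name] = req.headers[name];
  }
  const upstream = await fetch(SERVICE_B_URL + '/downstream', { headers });
  res.json(await upstream.json());
});

app.listen(8080);

Without those few lines, the call from A to B shows up in the tracing backend as the start of a brand-new trace instead of a child of the client's original one.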

My interpretation of this is that we need to keep in mind that communication between services needs to support forwarding ("passing along") the trace IDs so that tracing works correctly.
So it warns us against situations where:
Client calls -> Service A #using http request with trace ID in header.
Service A -> Service B #using a tcp request that does not support headers, so the trace ID header is lost.
This situation could break or limit tracing functionality.
On the other hand, if we have a situation where:
Client calls -> Service A #using http request with trace ID in header.
Service A -> Service B #using an http request; the trace ID is forwarded to Service B.
This situation allows the trace ID header to be present in both connections, so the trace can be logged and then viewed in the tracing service dashboard. Then we can explore the path taken by the request and view the latency incurred at each hop.
Hope it helps.

Related

Network request failed from fetch in reactjs app

I am using fetch in a NodeJS application. Technically, I have a ReactJS front-end calling the NodeJS backend (as a proxy), and then the proxy calls out to backend services on a different domain.
However, from logging errors from consumers (I haven't been able to reproduce this issue myself) I see that a lot of these proxy calls (using fetch) throw an error that just says Network Request Failed, which is of no help. Some context:
This only occurs on a subset of all total calls (let's say 5% of traffic)
Users that encounter this error can often make the same call again some time later (next couple minutes/hours/days) and it will go through
From Application Insights, I can see no correlation between browsers, locations, etc
Calls often return fast, like < 100 ms
All calls are HTTPS; none are HTTP
We have a fetch polyfill from fetch-ponyfill that will take over if fetch is not available (Internet Explorer). I did test this package itself and the calls went through fine. I also mentioned that this error does occur on browsers that do support fetch, so I don't think this is the cause.
Fetch settings for all requests (a rough reconstruction is sketched after this list):
Method is set per request, but I've seen it fail on different types (GET, POST, etc)
Mode is set to 'same-origin'. I thought this was odd, since we were sending a request from one domain to another, but I tried setting it differently and it didn't affect anything. Also, why would the same requests work for some users but not for others?
Body is set per request, based on the data being sent.
Headers is usually just Accept and Content-Type, both set to JSON.
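Put together, a call as described above looks roughly like this (a hypothetical reconstruction, not the actual code; proxyUrl and payload are placeholders, and method/body vary per request):

fetch(proxyUrl, {
  method: 'POST', // set per request (GET, POST, etc.)
  mode: 'same-origin',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(payload), // set per request, based on the data being sent
})
  .then(res => res.json())
  .catch(err => console.error(err)); // the branch that fires with "Network Request Failed"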
I have tried researching this topic before, but most posts I found referenced React native applications running on iOS, where you have to set some security permissions in the plist file to allow HTTP requests or something to do with transport security.
I have implemented logging at specific points in Application Insights, and I can see that fetch() was called, but then() was never reached; it went straight to the .catch(). So it's not even reaching the code that parses the response, because apparently no response came back (we then parse the JSON response and call other functions, but like I said, it doesn't even reach this point).
Which is also odd, since the response never comes back, yet the call fails (often) within 100 ms.
My suspicions:
Some consumers have some sort of add-on for their browser that is messing with the request. Although, I run with uBlock Origin and HTTPS Everywhere and I have not seen this error. I'm not sure what else could be modifying requests in a way that would cause them to immediately fail.
The call goes through, which then reaches an Azure Application Gateway, which might fail for some reason (too many connected clients, not enough ports, etc) and returns a response that immediately fails the fetch call without running the .then() on the response.
For #2, I remember I had traced a network call that failed and returned Network Request Failed: Made it through the proxy -> made it through the Application Gateway -> hit the backend services -> backend services sent a response. I am currently requesting access to backend service logs in order to verify this on some more recent calls (last time I did this, I did it through a screenshare with a backend developer), and hopefully clear up the path back to the client (the ReactJS application). I do remember though that it made it to the backend services successfully.
So I'm honestly not sure what's going on here. Does anyone have any insight?
Based on your excellent description and detective work, it's clear that the problem is between your Node app and the other domain. The other domain is throwing an error and your proxy has no choice but to say that there's an error on the server. That's why it's always throwing a 500-series error, the Network Request Failed error that you're seeing.
It's an intermittent problem, so the error is inconsistent. It's a waste of your time to continue to look at the browser because the problem will have been created beyond that, either in your proxy translating that request or on the remote server. You have to find that error.
Here's what I'd do...
Implement brute-force logging in your Node app. You can use Bunyan or Winston, or just require('fs') and write out to some file when an error occurs. Then look at the results. Only log when the response code from the other server is in the 400 or 500 range. Log the request object and the response object.
Something like this with Bunyan:
const bunyan = require('bunyan');
const log = bunyan.createLogger({ name: 'proxy' });

fetch(urlToRemoteServer)
  .then(res => {
    // Log the exchange when the remote server answers with a 4xx/5xx.
    if (res.status >= 400) log.info({ request: req, response: res });
    return res.json();
  })
  .then(data => whateverElseYoureDoing(data))
  .catch(err => {
    // The request to the remote server failed outright.
    log.info({ request: req, err: err });
  });
where the res in this case is the response we just got from the other domain and req is our request to them.
The logs on your Azure server will then have the entire request and response. From this you can find commonalities and (🤞) the cause of the problem.

REST service request (>8 KB) fails

A Java server exposes REST services using Apache CXF 3.1.10. When calling a GET service with a URL longer than 8 KB, the service returns an error.
The REST server uses a JAXRSServerFactoryBean that launches a Jetty server. I cannot find a way to make the server accept requests larger than 8 KB.
GET requests have a query size limit, on both the client and the server side (check this for details: maximum length of HTTP GET request?).
Maybe you should move to POST services. Or, if you control both the client and the server, you may use the request body. (That is allowed for GET requests, but some clients/servers do not support it.)
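As a rough client-side illustration of the first option (a sketch only; the endpoint, veryLongListOfIds, and handleResults are placeholders, and it assumes the server exposes a matching POST resource), the oversized query moves into the request body, which is not subject to the URL length limit:

const params = { ids: veryLongListOfIds }; // data that used to live in the query string

fetch('https://example.com/api/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(params), // the body is not capped at 8 KB
})
  .then(res => res.json())
  .then(handleResults);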

Difference between API calls from the frontend and API calls from the backend to an external backend server

I was struggling to make API calls to an Apache server from my Angular app running in node-express.
I was unable to call the Apache server with POST calls in spite of setting the CORS filter in most of the ways available.
So someone suggested that rather than making calls from AngularJS (frontend), I make them from NodeJS (the backend server), which serves the Angular (frontend) code.
So kindly assist me with this: what exactly is the difference between making API calls from the frontend to a server versus from the backend that serves that same frontend?
What factors make one preferable over the other?
Is it a proxy or CORS issue that affects frontend-based API calls?
Thanking all in advance
Shohil Sethia
CORS is a policy that is voluntarily enforced by the browser (Chrome, Firefox, etc.). The decision to allow or deny a request is based on the presence of a certain header (Access-Control-Allow-Origin: *) in a response from the server. There is no equivalent policy in a server-side setting, so you are free to make cross-origin requests all day.
From enable-cors.org:
[CORS] prevents JavaScript from making requests across domain boundaries
This is why I usually build a small server API in Node to grab data from external third-party servers; a minimal sketch follows the lists below.
When the user makes a request on the frontend, the request is sent to the backend function with optional parameters which the end user specified.
Depending on the parameters supplied, different functions might be run before the backend queries the third party API.
3rd party API response is returned to the backend.
Backend either passes the response along or does more stuff before passing the response along.
Then the frontend does stuff with the data based on the response received (e.g. there were fewer than 5 results, so adding pagination is not necessary).
If developed this way, you gain access to the following, which all benefit your application/website.
Keep any necessary credentials on the server. (extremely important)
Obtain logs.
Validate on both the server side and the client side for an added layer of security.
Use the server to filter sensitive results if necessary before they reach the frontend.
Vary which parts of the heavy lifting are done on the server vs the device in order to improve the application performance.
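Here's a minimal sketch of that pattern, assuming an Express backend with node-fetch; the route, the third-party endpoint, and the environment variable are placeholders:

const express = require('express');
const fetch = require('node-fetch');

const app = express();

app.get('/api/items', async (req, res) => {
  // The credential stays on the server and is never shipped to the browser.
  const apiKey = process.env.THIRD_PARTY_API_KEY;
  try {
    const upstream = await fetch(
      'https://thirdparty.example.com/items?q=' + encodeURIComponent(req.query.q || ''),
      { headers: { Authorization: 'Bearer ' + apiKey } }
    );
    const data = await upstream.json();
    // Filter or reshape sensitive fields here before they reach the frontend.
    res.json(data);
  } catch (err) {
    console.error(err); // server-side log of the failure
    res.status(502).json({ error: 'upstream request failed' });
  }
});

app.listen(3000);

The browser only ever talks to /api/items on its own origin, so CORS never comes into play; the cross-origin hop happens server-side, where the policy does not apply.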

multiple DNS queries in one web page request

I am working on a web proxy. The logic is: the client sends a request to the proxy, the proxy sends the same request to the server, and sends the answer back to the client.
For example, I want to visit www.baidu.com. I get "Host: www.baidu.com" in the GET packet, which is used to send a DNS request; then I get the IP of "www.baidu.com" and establish the socket between the proxy and the server.
The question is: when I use Wireshark to capture normal packets (without the proxy), I find that there are more DNS queries when visiting "www.baidu.com" than just the query for www.baidu.com. It also queries nsclick.baidu.com and suggestion.baidu.com on different sockets. But there is no signal telling me to initiate these DNS queries, unlike the query for "www.baidu.com", which I can initiate when I detect "Host:". Can someone help me? Thank you.
This is probably not how it should work in the first place.
Imagine I hit www.baidu.com in my browser, which sends traffic via your proxy. At that moment, www.baidu.com is the only name your proxy has to look up.
When my browser receives the HTML chunk for this request, the received HTML/JS code then issues requests for some images, which come from nsclick.baidu.com. Similarly, requests for other resources (CSS, JS, images) can be made. In turn they all again go through your proxy, and there you will do your usual DNS query, once per new Host header you see.
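The proxy therefore never needs advance notice of nsclick.baidu.com: each resource request arrives with its own Host header, and that header is the trigger for its DNS lookup. Here's a minimal sketch of that per-request flow, assuming a plain HTTP forward proxy in Node (the port is a placeholder):

const http = require('http');
const dns = require('dns');

http.createServer((clientReq, clientRes) => {
  // Each request the browser routes through us carries its own Host header:
  // first www.baidu.com, later nsclick.baidu.com, suggestion.baidu.com, ...
  const host = clientReq.headers['host'];
  dns.lookup(host, (err, address) => {
    if (err) {
      clientRes.writeHead(502);
      return clientRes.end('DNS lookup failed');
    }
    // Relay the request to the resolved address and pipe the answer back.
    const url = new URL(clientReq.url, 'http://' + host); // proxies receive absolute URLs
    const upstream = http.request(
      { host: address, port: 80, path: url.pathname + url.search,
        method: clientReq.method, headers: clientReq.headers },
      upstreamRes => {
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
      }
    );
    upstream.on('error', () => { clientRes.writeHead(502); clientRes.end(); });
    clientReq.pipe(upstream);
  });
}).listen(8080);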

How often is the Silverlight Access policy accessed?

As you may well know, it is required to host an access policy (clientaccesspolicy.xml) on your web server if you want SL apps to perform HTTP requests, or to host an access server on port 943 for socket connections.
My app makes many short requests, and latency is important. I want to know if this access policy file is accessed once for every new HTTP request, or if it is accessed for the first request and its result cached on the client. It would be quite costly for me to have two web requests (one for the policy, one for the HTTP GET) for each HTTP request I create.
One easy way to test this is to use Fiddler and watch for requests to the policy file. The documentation also specifies that the cross-domain policy file is requested only once per application session. This means that the runtime will only request it once and store the result in memory for the Silverlight session.
