Everything works perfectly when I run locally. When I deploy my app to App Engine, for some reason, even the simplest request gets timeout errors. I even implemented retries and, while I made some progress, it's still not working well.
I don't think it matters, since I don't have the problem running locally, but here's the code I use with the request-retry module:
request({
    url: url,
    maxAttempts: 5,
    retryDelay: 1000, // 1 s delay between attempts
}, function (error, res, body) {
    if (!error && res.statusCode === 200) {
        resolve(body);
    } else {
        console.log(c.red, 'Error getting data from url:', url, c.Reset);
        reject(error);
    }
});
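Since the snippet above calls resolve and reject, it presumably lives inside a Promise executor. As an illustration of the same retry pattern without the request-retry dependency (a sketch; the retry helper and its names are mine, not from the original code), a generic helper might look like:

```javascript
// Generic retry helper (sketch): retries a Promise-returning function
// up to maxAttempts times, waiting retryDelay ms between attempts.
function retry(fn, maxAttempts, retryDelay) {
  return fn().catch(err => {
    if (maxAttempts <= 1) throw err; // out of attempts: surface the error
    return new Promise(res => setTimeout(res, retryDelay))
      .then(() => retry(fn, maxAttempts - 1, retryDelay));
  });
}
```

A call such as `retry(() => fetchData(url), 5, 1000)` (where `fetchData` is a hypothetical Promise-returning request function) then mirrors the `maxAttempts: 5, retryDelay: 1000` configuration above.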
Any suggestions?
Also, I can see these errors in the Debug view:
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
────────────────────
The process handling this request unexpectedly died. This is likely to cause a new process to be used for the next request to your application. (Error code 203)
Error code 203 means that Google App Engine detected that the RPC channel closed unexpectedly and shut down the instance. The request failure is caused by the instance shutting down.
The other message, about a request causing a new process to start in your application, is most likely caused by the instances shutting down. This message appears when a new instance starts serving a request: as your instances were dying due to error 203, new instances were taking their place, serving your new requests and emitting that message.
An explanation for why it works on Google Compute Engine (or locally) is that the App Engine component causing the error is not present in those environments.
Lastly, if you are still interested in solving the issue with App Engine and are entitled to GCP support, I suggest contacting the Technical Support team. The issue seems exclusive to App Engine, but I can't say more about the underlying reason, which is why I'm suggesting contacting support. They have more tools available and will be able to investigate the issue more thoroughly.
Related
I can't get an Axios "get" request working for a front-end/back-end pair after moving the code from CentOS 7 to a CentOS 8 instance. The code in question works just fine on a different AWS EC2 instance. I can't make it work on the new EC2 instance running Rocky Linux v8.5.
When I catch the exception to look at the error, I see a most unhelpful:
Error: Network Error
I can find NO information about what the complaint was or how to fix it. I can't get ANY useful information about what is causing the issue. I'm sure it's something stupid and easy to fix -- it would be much easier if I could somehow get the technology stack to tell me what the issue is.
I use the axios component to access a Node Express service running on the same instance that hosts the React app. The service is listening to https on port 7003. The React app calls this server, and returns data provided by the service.
I use axios for all communication between the React app and the rest of the world, so I need to fix this.
I use VisualStudio Code (VSC) to develop my React and NodeJS code.
When I exercise the service using wget, it seems to work:
$ wget "https://my.domain.name.com:7003/getEnvironment"
--2022-02-24 21:46:46-- https://my.domain.name.com:7003/getEnvironment
Resolving my.domain.name.com (my.domain.name.com)... 172.30.2.147
Connecting to my.domain.name.com (my.domain.name.com)|172.30.2.147|:7003... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11205 (11K) [application/json]
Saving to: ‘getEnvironment’
getEnvironment 100%[===================================>] 10.94K --.-KB/s in 0s
2022-02-24 21:46:46 (220 MB/s) - ‘getEnvironment’ saved [11205/11205]
I notice that wget says it's using "HTTP" even though I've given it "https" in the command-line.
I run the front-end in VSC using the React/VSC development server. That server listens on port 3003.
I've turned on cors for the service, and it is listening on port 7003 as expected. That's why the wget works.
The back-end (service) code looks something like this:
...
var cors = require('cors');
...
var app = express();
...
app.use(cors({origin: true, credentials: true}));
...
The front-end code that is failing looks like this:
checkStatus(response) {
    if (response.status >= 200 && response.status < 300) {
        return response.data;
    } else {
        const error = new Error(`HTTP Error ${response.statusText}`);
        error.status = response.statusText;
        error.response = response;
        console.log(error);
        throw error;
    }
}

privateLoadEnvironmentUsingURL(url) {
    return axios
        .get(url)
        .then((response) => { return this.checkStatus(response); })
        .catch((error) => {
            console.log(error);
            throw error;
        });
}
I've exercised this with both front-end and service in VSC. The axios call is failing and so far as I can tell is doing so before ever invoking the service.
It takes a while to fail, leading me to suspect a timeout is in play. I see no indication in the Node.js service code that the request is actually hitting the service.
It therefore appears that the "preflight" negotiation is blocking this call. Turning on cors is pretty much all I know how to do -- I don't have deep insight into cors.
I've been waving voodoo chickens at this code all afternoon to no avail.
How do other developers discover how to fix problems like this when the technology stack presents so little information about what is actually happening?
How do I get this working?
I found and solved the problem: a network configuration issue completely outside axios/Node.js/React. The fact remains that I think it should somehow be possible for a developer to get at least a hint from the exception raised by axios.
For those who are interested, the problem turned out to be the AWS Security Group configuration for the new system. I had to open port 7003 in the Security Group, allowing access from my local IP address, in order for the request to be forwarded to the platform.
I suppose I should have thought of this sooner -- VSC spawns a special Chrome browser on my local system with its own private tunnel and such. It appears that that browser instance running on my local machine makes the request against port 7003. The AWS Security Group was blocking that port, and so the request never made it to the server.
I identified the issue by doing the wget from my local machine rather than from the new target EC2 instance. That failed, and then I attempted to connect with telnet. When the latter could not connect, I knew it was a Security Group issue.
The bottom line is that it is sometimes too easy to forget that ALL React code runs in the browser. I know that's obvious, but its implications sometimes are not.
I am using fetch in a NodeJS application. Technically, I have a ReactJS front-end calling the NodeJS backend (as a proxy), and then the proxy calls out to backend services on a different domain.
However, from logging errors from consumers (I haven't been able to reproduce this issue myself) I see that a lot of these proxy calls (using fetch) throw an error that just says Network Request Failed, which is of no help. Some context:
This only occurs on a subset of all calls (let's say 5% of traffic)
Users that encounter this error can often make the same call again some time later (next couple minutes/hours/days) and it will go through
From Application Insights, I can see no correlation between browsers, locations, etc
Calls often return fast, like < 100 ms
All calls are HTTPS, none are HTTP
We have a fetch polyfill from fetch-ponyfill that takes over if fetch is not available (Internet Explorer). I tested this package itself and the calls went through fine. As noted, the error also occurs on browsers that do support fetch, so I don't think the polyfill is the cause.
Fetch settings for all requests
Method is set per request, but I've seen it fail on different types (GET, POST, etc)
Mode is set to 'same-origin'. I thought this was odd, since we are sending a request from one domain to another, but setting it differently didn't affect anything. Also, why would the same requests work for some users but not others?
Body is set per request, based on the data being sent.
Headers is usually just Accept and Content-Type, both set to JSON.
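Put together, the request options described above look roughly like this (a reconstruction; the function name and payload are mine, not from the actual code):

```javascript
// Sketch of the fetch options described above; values are placeholders.
function buildOptions(method, payload) {
  return {
    method,                // set per request (GET, POST, ...)
    mode: 'same-origin',   // as noted above: odd for a cross-domain call
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json',
    },
    // Body is set per request, based on the data being sent.
    body: payload === undefined ? undefined : JSON.stringify(payload),
  };
}
```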
I have tried researching this topic before, but most posts I found referenced React native applications running on iOS, where you have to set some security permissions in the plist file to allow HTTP requests or something to do with transport security.
I have implemented logging at specific points in Application Insights, and I can see that fetch() was called, but then() was never reached; it went straight to .catch(). So it's not even reaching the code that parses the request, because apparently no response came back (we then parse the JSON response and call other functions, but like I said, it doesn't even reach this point).
Which is also odd, since the request never comes back, but it fails (often) within 100 ms.
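One way to get more detail than the bare "Network Request Failed" message is to wrap fetch so the elapsed time and raw error are recorded at the failure point (a sketch; timedFetch and onError are hypothetical names, not part of the original code):

```javascript
// Sketch: wraps a fetch implementation to report how long the call ran
// and the raw error, then rethrows so callers behave as before.
function timedFetch(fetchImpl, url, options, onError) {
  const started = Date.now();
  return fetchImpl(url, options).catch(err => {
    onError({
      url,
      elapsedMs: Date.now() - started,
      name: err.name,       // e.g. TypeError for low-level network failures
      message: err.message,
    });
    throw err;
  });
}
```

Logging the error's `name`/`message` plus the elapsed time to Application Insights would at least distinguish instant failures (likely blocked before leaving the browser) from slow ones (likely timeouts downstream).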
My suspicions:
Some consumers have some sort of add-on for their browser that is messing with the request. Although I run uBlock Origin and HTTPS Everywhere and have not seen this error, I'm not sure what else could be modifying requests in a way that would cause them to fail immediately.
The call goes through, which then reaches an Azure Application Gateway, which might fail for some reason (too many connected clients, not enough ports, etc) and returns a response that immediately fails the fetch call without running the .then() on the response.
For #2, I remember I had traced a network call that failed and returned Network Request Failed: Made it through the proxy -> made it through the Application Gateway -> hit the backend services -> backend services sent a response. I am currently requesting access to backend service logs in order to verify this on some more recent calls (last time I did this, I did it through a screenshare with a backend developer), and hopefully clear up the path back to the client (the ReactJS application). I do remember though that it made it to the backend services successfully.
So I'm honestly not sure what's going on here. Does anyone have any insight?
Based on your excellent description and detective work, it's clear that the problem is between your Node app and the other domain. The other domain is throwing an error and your proxy has no choice but to say that there's an error on the server. That's why it's always throwing a 500-series error, the Network Request Failed error that you're seeing.
It's an intermittent problem, so the error is inconsistent. It's a waste of your time to continue to look at the browser because the problem will have been created beyond that, either in your proxy translating that request or on the remote server. You have to find that error.
Here's what I'd do...
Implement brute-force logging in your Node app. You can use Bunyan, or Winston, or just require('fs') and write out to a file when an error occurs. Then look at the results. Only log when the response code from the other server is in the 400 or 500 range, and log both the request object and the response object.
Something like this with Bunyan:
fetch(urlToRemoteServer)
    .then(res => {
        // Only log when the remote server answers with a 4xx/5xx
        if (res.status >= 400) {
            log.info({request: req, response: res});
        }
        return res.json();
    })
    .then(res => whateverElseYoureDoing(res))
    .catch(err => {
        // Network-level failure: no response object exists here
        log.info({request: req, err: err});
    });
where the res in this case is the response we just got from the other domain and req is our request to them.
The logs on your Azure server will then have the entire request and response. From this you can find commonalities and (🤞) the cause of the problem.
We have a full-stack app running with a React front-end UI and a Node-based Sails API and Postgres database on the backend. The backend is set to run on a Docker virtual machine. So here's the problem: the UI is up and running fine as is the Docker server and VM as far as we can tell. However, our user login functionality has stopped connecting with our server for some reason. The POST request that is supposed to create a user returns a 404 Not Found error, never actually hitting the /users/create endpoint as it should.
We have tried completely reinstalling docker and the virtual machine, clearing out the cache and starting from scratch. The error does not change. We have also compared the code of the machines that are receiving this error with a computer that is NOT receiving the error and there is no difference in the actual server, endpoint or docker files.
Potentially worth mentioning: our Sails server in Docker is not able to find and run our Grunt file, which it used to do (and still does on one computer). But that file isn't doing anything directly related to the server and, we don't think, should be causing the 404, so it's probably not relevant.
Another problem: prior to installing a CORS Chrome extension that allows same-origin requests, we were just getting a CORS error that prevented the request from even going out. The extension seems to have fixed this for now, and it's likely also unrelated, but maybe worth mentioning anyway.
This is our create user JavaScript (React) function in our user reducer.
export const registrationEpic = action$ =>
    action$.ofType(USER_REGISTRATION)
        .mergeMap(action => ajax.post(`${process.env.REACT_APP_API_URL}/users/create`, action.payload)
            .map(data => {
                let response = data.response;
                if (response.user && response.token) {
                    return userRegistrationFulfilled();
                } else {
                    return Observable.of(userRegistrationRejected(["Unknown error. Please, refresh page and try again."]));
                }
            })
            .catch(error => {
                console.log(error);
                let errors = sailsErrors.parseValidationErrors(error.response);
                return Observable.of(userRegistrationRejected(!!errors ? errors : ["Unknown error. Please, refresh page and try again."]));
            }));
I don't think this code is particularly relevant, though -- this functionality seems fine and was previously working. The problem seems to be in the Sails server - Docker connection which isn't "visible".
I expect that when users enter their information, they will be logged in via the create endpoint of our server (like it used to work). Now, the data is being submitted, but the console is just returning a 404 not found error on the create endpoint of our Sails server.
In the last two days my GAE app sometimes responds quickly, but most of the time it loads slowly or loads forever.
Same application version in another domain works just fine.
I checked my logs for errors and was surprised to see a lot of HTTP 503 responses from Unknown Origin. (The error details were shown in a screenshot.)
Any Idea?
Thanks
Your app seems to be configured for warmup requests. Unfortunately, your app is responding to the requests for /_ah/start with a 503, which causes the process to be terminated (and a new process started, which will make your app seem very slow).
The relevant message is:
Process terminated because it failed to respond to the start request with an HTTP status code of 200-299 or 404.
You probably want to remove the -warmup from the inbound_services: section of your app.yaml, or configure a warmup handler on /_ah/start.
https://cloud.google.com/appengine/docs/php/warmup-requests/configuring
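For reference, warmup requests are enabled by a section like this in app.yaml; deleting the warmup entry disables them (a sketch of the relevant fragment only, not a full config):

```yaml
# app.yaml fragment (sketch): removing "- warmup" here
# disables warmup requests for the app.
inbound_services:
- warmup
```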
I cleared the Memcache and the problem went away. ^_^
I created an App Engine backend to serve HTTP requests for a long-running process. The backend process works as expected when the query references a small input, but times out when the input is large. The query parameter is the URL of an App Engine Blobstore blob, which is the input data for the backend process. I thought the whole point of using App Engine backends was to avoid the timeout restrictions that App Engine frontends have. How can I avoid getting a timeout?
I call the backend like this, setting the connection timeout length to infinite:
HttpURLConnection connection = (HttpURLConnection) new URL(url + "?" + query).openConnection();
connection.setRequestProperty("Accept-Charset", charset);
connection.setRequestMethod("GET");
connection.setConnectTimeout(0); // 0 = no limit, but this only covers connection setup, not reads
connection.connect();
InputStream in = connection.getInputStream();
int ch;
while ((ch = in.read()) != -1)
    json = json + String.valueOf((char) ch);
System.out.println("Response Message is: " + json);
connection.disconnect();
The traceback (edited for anonymity) is:
Uncaught exception from servlet
java.net.SocketTimeoutException: Timeout while fetching URL: http://my-backend.myapp.appspot.com/somemethod?someparameter=AMIfv97IBE43y1pFaLNSKO1hAH1U4cpB45dc756FzVAyifPner8_TCJbg1pPMwMulsGnObJTgiC2I6G6CdWpSrH8TrRBO9x8BG_No26AM9LmGSkcbQZiilhC_-KGLx17mrS6QOLsUm3JFY88h8TnFNer5N6-cl0iKA
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:142)
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:43)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:417)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:296)
at org.someorg.server.HUDXML3UploadService.doPost(SomeService.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
As you can see, I'm not getting a DeadlineExceededException, so I think something other than Google's limits is causing the timeout, which also makes this a different issue from similar Stack Overflow posts on the topic.
I humbly thank you for any insights.
Update 2/19/2012: I see what's going on, I think. I should be able to have the client wait indefinitely using a GWT (or any other client-side async framework) async handler for any client request to complete, so I don't think that is the problem. The problem is that the file upload calls the _ah/upload App Engine system endpoint, which then (once the blob is stored in the Blobstore) calls the upload service's doPost backend to process the blob. The client request to _ah/upload is what is timing out, because the backend doesn't return in a timely fashion. To make this timeout go away, I attempted to make the _ah/upload service itself a public backend accessible via http://backend_name.project_name.appspot.com/_ah/upload, but I don't think Google allows a system service (like _ah/upload) to be run as a backend. My next approach is to have _ah/upload return immediately after triggering the backend processing, and then call another service to get the original response I wanted, after processing is finished.
The solution was to start the backend process as a task and add it to the task queue, then return a response to the client rather than waiting for the backend task to finish (which can take a long time). If I could have assigned _ah/upload to a backend, that would also have solved the problem, since the client's async handler could wait forever for the backend to finish, but I do not think Google permits assigning system servlets to backends. The client will now have to poll for the persisted backend process response data, as Paul C mentioned, since tasks cannot respond like a normal servlet.