GAE urlfetch returning 500 uncaught exception in production - google-app-engine

from google.appengine.api import urlfetch
totango_url = "https://sdr.totango.com/pixel.png"
totango_url2 = "https://app.totango.com/images/accounts-users.png"
result = urlfetch.fetch(totango_url, validate_certificate=None)
print result.status_code
In production, requests to totango_url fail and the logs indicate (with no error_detail):
DownloadError: Unable to fetch URL: https://sdr.totango.com/pixel.gif
I ran the following curl commands; both HTTPS Totango URLs work fine from my local setup:
curl -v "https://sdr.totango.com/pixel.gif"
curl -v "https://app.totango.com/images/accounts-users.png"
The SSL certificates are valid and the same for both URLs.
Using urlfetch.fetch on both URLs also returns 200 from my (local) datastore console.
However, the urlfetch.fetch call to https://sdr.totango.com/pixel.png fails in production with the above error.
Also, I ran the same code in the Google Cloud Playground (tweaking the sample App Engine application) and get a 200 response for totango_url2, while totango_url returns a 500. Both have the same SSL certificate, I think.
Is there some IP whitelisting/firewall issue with App Engine in production that I need to take care of?

This sounds more like an issue on the remote side. If you're able to fetch that image from one place but not another, that speaks to the remote site doing some sort of filtering, possibly by IP address.
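If you want a little more signal out of the failure before chasing whitelisting, here is a minimal sketch for the Python 2.7 runtime (not the asker's exact code; the 30-second deadline is an arbitrary choice) that logs the DownloadError and sets an explicit deadline:

from google.appengine.api import urlfetch
import logging

def probe(url):
    try:
        # deadline and validate_certificate are assumptions, not values from the question
        result = urlfetch.fetch(url, deadline=30, validate_certificate=True)
        logging.info("%s -> %d", url, result.status_code)
        return result.status_code
    except urlfetch.DownloadError:
        # In production this usually means a timeout or a connection reset, which
        # would be consistent with the remote side filtering App Engine's IP ranges.
        logging.exception("urlfetch failed for %s", url)
        return None

probe("https://sdr.totango.com/pixel.png")
probe("https://app.totango.com/images/accounts-users.png")

If the second URL keeps succeeding while the first one times out, that points further at filtering on sdr.totango.com rather than anything in your app.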

Related

Is there a correct way to set up the ngrok config file to skip the browser warning page

Here is a sample of the ngrok config file I'm using for the tunnel:
authtoken: somevalue
version: "2"
tunnels:
  sometunellName:
    proto: http
    addr: 5555
    schemes:
      - http
      - https
    host_header: rewrite
    request_header:
      add:
        - "ngrok-skip-browser-warning:true"
log_level: debug
log_format: json
log: ngrok.log
Trying several other common headers didn't give any different result.
The "ngrok-skip-browser-warning:true" header has to be added in the browser as the ngrok cloud side of things has to see it to skip the browser warning. With your config, you've added it in the ngrok cloud so only your app is seeing it.
~ an ngrok employee
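To make that concrete (an illustration only; the tunnel URL below is a placeholder, not something from the question): the header has to be present on the request the client itself sends, which in a browser app means setting it on the fetch/XHR call rather than in ngrok.yml. The same idea shown from a Python client:

import requests

TUNNEL_URL = "https://example.ngrok-free.app/"  # placeholder tunnel URL

# The client sends the header itself; the agent config cannot add it on the browser's behalf.
resp = requests.get(TUNNEL_URL, headers={"ngrok-skip-browser-warning": "any-value"})
print(resp.status_code)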

Getting 503 service unavailable after 30 sec on browser console - React App

I am getting a 503 after exactly 30 seconds while exporting all user data from the React app.
export const get = (
  url: string,
  queryParams: Object = {},
  extraHeaders: Object = {},
  responseType: string = 'text',
  callback?: number => void
): Promise<*> =>
  superagent
    .get(url)
    .timeout({
      response: 500000, // max wait for the server to start responding
      deadline: 600000  // max time for the entire request to complete
    })
    .use(noCache)
    .set('Accept-Language', (i18n.language || 'en').split('_')[0])
    .set(extraHeaders)
    .responseType(responseType)
    .query(queryParams)
    .on('progress', e => {
      if (callback) {
        callback(e.percent)
      }
    })
Technology stack used: Akka HTTP (backend), React (front end), Nginx (Docker image).
I have tried to access the Akka API directly with a curl command; the request executed successfully in 2.1 minutes and the data was exported to a .csv file.
Curl command : curl --request GET --header "Content-Type: text/csv(UTF-8)" "http://${HOST}/engine/export/details/31a0686a-21c6-4776-a380-99f61628b074?dataset=${DATASET_ID}" > export_data.csv
NOTE: In my local environment I am able to export all records from the React UI in 2.5 minutes.
But this issue occurs on the TEST site, which is set up with Docker images for this application.
Error At Browser Console:
GET http://{HOST}/engine/export/details/f4078a63-85bc-43ac-b9a9-c58f6c8193da?dataset=mexico 503 (Service Unavailable)
Uncaught (in promise) Error: Service Unavailable
at Request.<anonymous> (vendor.js:1)
at Request.Emitter.emit (vendor.js:1)
at XMLHttpRequest.t.onreadystatechange (vendor.js:1)
This happens on the PRODUCTION and TEST sites; the only difference between local and the test site is the Docker images.
Could you please help me with this?
Thank you in advance.
On your local machine you have plenty of resources. On your remote host, responding with 503, you have exceeded capacity in one of four resource types:
CPU
RAM
DISK
Network
These are ordered by least expensive to most expensive. Both Disk and Network are typically off-bus, with network orders of magnitude slower than any other access type.
On your local machine I am guessing you have exclusive access, so locked resources that need cleanup are a non-issue. On the remote host you have arbitrated (non-exclusive) access, and your requests run concurrently with everyone else's. It could be something as simple as running out of file handles/file descriptors to satisfy your query because the back-end hosts do not clean up orphaned connections fast enough.
If you have nailed down all of the differences between your two configurations and there are none (just local vs. remote), then you are left with the resource problem of other users on the system.
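If the file-descriptor theory is worth checking, one quick way to see the limits the backend process is actually running with is the sketch below (Python, assuming you can exec into the backend container; it is not part of the question's stack):

import resource

# Soft and hard limits on open file descriptors for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-file soft limit:", soft)
print("open-file hard limit:", hard)

Comparing that output between your local environment and the TEST containers is a cheap way to rule this particular resource ceiling in or out.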

JWT Auth token works in Homestead but not on production server

So here is the deal. I have an Ionic app with "satellizer" and "angular-jwt" which communicates with a Laravel 5 backend using barryvdh/laravel-cors and tymondesigns/jwt-auth. This combo works fine in Homestead in combination with ionic serve: an authentication token is created, stored in localStorage, and validates with Laravel. The request looks as follows:
[IONIC SERVE] http://192.168.1.54:8100/#/auth
[POST] http://192.168.10.10/api/v1/authenticate?email=[email]&password=[password]
Returns [the_token]
[GET] http://192.168.10.10/api/v1/authenticate/user?token=[the_token]
Returns the user object from Laravel
As soon as I change the API URL to my live server:
[POST] http://domaintomyvps.com/api/v1/authenticate?email=[email]&password=[password]
Returns [the_token]
[GET] http://domaintomyvps.com/api/v1/authenticate/user?token=[the_token]
Returns {"error":"token_not_provided"}
The authentication works fine and a token is returned, but when sending the GET request I get the error "token_not_provided".
Then the strange thing happens: when trying the same request from Chrome's Postman, the token validates and the user object is returned.
My Homestead runs as a Vagrant box in VirtualBox with Nginx on Ubuntu. My production server is Debian with Apache on a VPS. The Laravel installations are identical regarding settings and keys. The database is exactly the same (mysqldump), and it works, since Postman gets a successful result.
Can anyone guide me in the right direction, or has anyone had the same problem? Do you need any more information regarding the setup or code?
This solved my problem.
Adding the following to my Apache .htaccess stops Apache from removing the Authorization header from the request. I found this answer in the following thread:
# Pass the incoming Authorization header through so PHP still sees the token
RewriteCond %{HTTP:Authorization} ^(.*)
RewriteRule .* - [E=HTTP_AUTHORIZATION:%1]
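For context (an illustration, not the asker's code): tymondesigns/jwt-auth typically reads the token either from the token query parameter or from an Authorization: Bearer header, and it is the header variant that Apache drops without the rewrite above. A rough sketch of the two request styles, using the URLs from the question:

import requests

BASE = "http://domaintomyvps.com/api/v1"
token = "the_token"  # value returned by the authenticate call

# 1) Token in the query string (what the question shows)
r1 = requests.get(BASE + "/authenticate/user", params={"token": token})

# 2) Token in the Authorization header (what satellizer/angular-jwt normally sends)
r2 = requests.get(BASE + "/authenticate/user",
                  headers={"Authorization": "Bearer " + token})

print(r1.status_code, r2.status_code)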

appcfg appengine 502 Proxy error in localhost

I am trying to upload some data to my local datastore in App Engine.
The command I am using is the following:
appcfg.py upload_data --config_file="C:\config.yml" --filename="C:\mycsv.csv" --url=http://localhost:8888/remote_api --kind=MyEntity
The problem is that I'm working behind my company proxy, and I get the following ERROR even when trying to connect to the localhost server:
Error Code: 502 Proxy Error. The ISA Server denied the specified Uniform Resource Locator (URL). (12202)
It seems the authentication is OK, but somehow the proxy tries to filter my connection to my own computer.
Any ideas about how I can solve this?
Thanks.
Remove/disable the proxy settings of your network, then try the above command.
I was facing a similar issue and it was resolved when I disabled my proxy settings.
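If disabling the proxy system-wide is not an option, and the proxy happens to be picked up from the HTTP(S)_PROXY environment variables (an assumption; ISA proxies are often configured at the system level, in which case this will not help), you can try launching the same upload with those variables cleared and localhost marked as exempt. A rough sketch:

import os
import subprocess

# Copy the environment, drop the proxy variables, and exempt localhost.
env = dict(os.environ)
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    env.pop(var, None)
env["NO_PROXY"] = "localhost,127.0.0.1"

# Same command as in the question, run without the proxy settings.
subprocess.call(
    'appcfg.py upload_data --config_file="C:\\config.yml" '
    '--filename="C:\\mycsv.csv" --url=http://localhost:8888/remote_api '
    '--kind=MyEntity',
    shell=True,
    env=env,
)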

How does App Engine (or its embedded server) process different URLs

I created a simple App Engine project that accesses a RESTful web service hosted at http://commerce.qa.mycomp.com (mycomp should be replaced with my actual company name).
I am using the Jersey Client to make the client request. I do a POST request to the above URL.
When I run the application locally, it always returns a 404 Not Found response. As an experiment, I did a POST request to http://www.bbc.co.uk/news/ and this works fine, returning 200 as the status.
I decoupled my app from App Engine and ran it in a separately configured Tomcat server, and there it works fine and returns a 200 status code. I think App Engine uses Jetty as its server. Does Jetty have any bugs processing URLs like commerce.qa.mycomp.com? I ask because URLs that start with www.any.com seem to work fine.
The two code snippets shown below do not work when run within App Engine locally (nor when I host the app on appspot).
// Snippet 1: Jersey client
Client client = Client.create();
WebResource service = client.resource("http://commerce.qa.mycomp.com/rest");
ClientResponse response = service
        .header("Content-Type", "text/xml; Charset=utf-8")
        .header("Authorization", "Basic dwt3hkl553lsfsfssf3")
        .post(ClientResponse.class, "does not need to be actual xml");

// Snippet 2: plain java.net.HttpURLConnection
URL url = new URL("http://commerce.qa.mycomp.com/rest");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setDoOutput(true);
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "text/xml; Charset=utf-8");
conn.addRequestProperty("Authorization", "Basic dwt3hkl553lsfsfssf3");
OutputStream os = conn.getOutputStream();
os.write("no need to be actual xml".getBytes());
os.flush();
System.out.println("Response Code: " + conn.getResponseCode());
But when run on Tomcat, it just works.
My installations are:
Google App Engine Java SDK 1.6.1
Google Plugin for Eclipse 3.7
jersey-client-1.12, jersey-core-1.12, jersey-json-1.8
Please share thoughts.
