I've got a Django app deployed on Google App Engine (standard) which works fine when I access it via a browser (any browser). However, I cannot run any tests against it using curl from the terminal or using the Postman app. When I try running curl, it times out:
curl --verbose -X GET "https://SERVER.appspot.com"
Note: Unnecessary use of -X or --request, GET is already inferred.
* Rebuilt URL to: https://SERVER.appspot.com/
* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to SERVER.appspot.com (x.x.x.x) port 443 (#0)
* Operation timed out after 300225 milliseconds with 0 out of 0 bytes received
* Curl_http_done: called premature == 1
* Closing connection 0
curl: (28) Operation timed out after 300225 milliseconds with 0 out of 0 bytes received
Any idea what I'm doing wrong? The same app runs fine locally, and I can run Postman / curl against it there. If I try the same (i.e. curl or Postman) with http:// instead of https://, it works just fine (yet https:// works when using a browser).
What do I need to do to get it to work from the terminal?
Turned out to be an SSL 'trust' issue. One needs to add SSL support for each sub-domain mapped to an App Engine instance. It turns out that if you haven't done this correctly (in my case one of the CA intermediate certificates was missing from the .crt bundle), your browser may still treat the site as secure, but curl / Postman will fail, or rather time out without telling you much. I was able to run some tests on my sub-domain here: https://www.ssllabs.com/ssltest
It pointed out that the CA certificates supplied were incomplete and rated the site B. Fixing this immediately fixed the curl and Postman issues (and I assume Postman internally uses curl? I digress).
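For anyone debugging the same thing, the chain a server actually sends can also be inspected from the command line (a sketch; SERVER.appspot.com is the placeholder host from the question):

openssl s_client -connect SERVER.appspot.com:443 -servername SERVER.appspot.com -showcerts
# "Verify return code: 0 (ok)" means the chain is complete; code 21
# ("unable to verify the first certificate") usually means an intermediate
# certificate is missing from the served bundle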
Related
Since Salesforce is disabling TLS 1.0 on July 22nd, I was testing as suggested in the URL below: https://help.salesforce.com/articleView?id=Salesforce-disabling-TLS-1-0&language=en_US&type=1
We interact with Salesforce from a backend server via the REST API. Initially we do a grant_type=password request to get the access token and instance_url. It is pointed at https://login.salesforce.com/services/oauth2/token and that works now. As suggested in the help article, I tried pointing it at https://tls1test.salesforce.com/services/oauth2/token instead, and it gives me the error {"error":"unsupported_grant_type","error_description":"grant type not supported"}. Trying it from the Linux command line in the format below gives me the same response:
curl -v https://tls1test.salesforce.com/services/oauth2/token -d "grant_type=password&client_id=<>&client_secret=<>&username=username%40domain.com&password=<>" -H "Content-Type:application/x-www-form-urlencoded"
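Note that getting a JSON error body back means the TLS handshake itself succeeded. To check the protocol side in isolation, curl can be asked to negotiate a specific TLS version (a sketch, assuming a curl build new enough to support --tlsv1.2; the credential placeholders are the same as above):

curl -v --tlsv1.2 https://tls1test.salesforce.com/services/oauth2/token -d "grant_type=password&client_id=<>&client_secret=<>&username=username%40domain.com&password=<>" -H "Content-Type: application/x-www-form-urlencoded"
# if the verbose output shows a TLSv1.2 handshake completing, the endpoint
# accepts TLS 1.1+, and the unsupported_grant_type error is an
# application-level response rather than a TLS problem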
The question "Google compute engine console return 399 error code" already asks my question, but the solution suggested there does not work for me. Since that thread is a little old, I'm starting a new one.
I am trying to do a wget using:
wget https://console.developers.google.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
I see the error:
Resolving console.developers.google.com (console.developers.google.com)... 216.239.32.27
Connecting to console.developers.google.com (console.developers.google.com)|216.239.32.27|:443... connected.
HTTP request sent, awaiting response... 399 Internal Server Error
2014-08-26 20:02:18 ERROR 399: Internal Server Error.
I am new to Linux commands, so I wanted to know if I am missing something obvious.
The address works when I use Chrome's downloader but fails with wget for me as well. I have never seen this behaviour before.
You can also use cURL to download files. I used the -v switch and got a DNS error (no idea why, though note that the hostname below reads googlO.com rather than google.com, which would explain the DNS failure):
curl -v http://console.developers.googlO.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
We cannot download with traditional tools; we have to use the gsutil utility provided by Google, which also makes automation possible.
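For example (a sketch, assuming gsutil is installed, using the bucket and object path from the question):

gsutil cp gs://m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz .
# copies the object from the m-lab bucket into the current directory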
You need to use the following URI pattern:
http://storage.googleapis.com/<bucket>/<object>
In this case, you can download that file using the command:
wget http://storage.googleapis.com/m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
I upgraded to GAE 1.7.7 today and found out that task queues stopped working on my development setup.
I'm using https in my development environment through an nginx instance set up to proxy connections from fakedomain.local:80 and fakedomain.local:443 to localhost:8080 (where GAE listens).
With this setup, task queues end up being created to execute at fakedomain.local:80. This used to work because the request would be picked up by nginx, but version 1.7.7 of the development server seems to have a port registry and won't serve a request unless the port is known (if I understand google.appengine.tools.devappserver2.Dispatcher._resolve_target correctly). Of course, GAE listens on port 8080, so my tasks marked to run on fakedomain.local:80 never get executed (GAE logs this error: An error occured while sending the task "task1" (Url: "...") in queue...).
I tried patching dispatcher.py:577 so that instead of raising a ServerDoesNotExistError when the port is not known, it just uses the default server. With this modification I can get the task queues running again, but I'd rather use a solution that doesn't involve changing GAE's code.
How can I make GAE register ports 80 and 443 in version 1.7.7? Alternatively, is there a way I could specify the complete target URL for the task (i.e. fakedomain.local:8080/my_task instead of just /my_task)?
taskqueue.add(target=taskqueue.DEFAULT_APP_VERSION, ...)
will run it on your default app version, which should do exactly what you want. The target resolves as follows:
taskqueue.DEFAULT_APP_VERSION =>
app_identity.get_default_version_hostname() =>
environ['DEFAULT_VERSION_HOSTNAME'] =>
'%s:%s' % (environ['SERVER_NAME'], server_port)
I'm working on a JS-based project that runs on GAE; part of the code gets the user's avatar via OAuth from Facebook, Twitter or Google. I'm trying to write tests in Mocha to cover this, but I'm running into some problems.
The code works when I test it in the front end, and the way I envisaged it to work would be to use ZombieJS to run the app on GAE's dev_appserver.py, fire the OAuth functions, fill in the appropriate auth stuff and then complete the test by returning the image URL.
However, the first hurdle I've hit is that NodeJS's server does not seem to allow GAE's server to run on the same IP address. For example:
exec 'dev_appserver.py .', ->
  console.log arguments
This returns the error 'Address already in use'. How can I get around this, apart from running it on a different machine? Is it possible to tell NodeJS not to reserve the whole IP address, just a port? I'm running GAE on 8080 and it works fine when it isn't invoked by NodeJS.
The second problem is ZombieJS. I'm trying to figure out a way to listen for new windows being opened and, essentially, tail the browser's console. I've started two discussions on the Google group, but no one has responded yet (https://groups.google.com/forum/?hl=en#!topic/zombie-js/cJklyMbwxRE and https://groups.google.com/forum/?hl=en#!topic/zombie-js/tOhk_lZv5eA).
While the latter isn't as important, as I can find ways around it (I hope), the former is the main issue, so I'd greatly appreciate any direction on how to resolve the address conflict.
Here's my NodeJS script:
exec = (require 'child_process').exec
fs = require 'fs'
should = require 'should'
yaml = require 'yaml'
Zombie = require 'zombie'
common = require '../../static/assets/js/common'

url = 'ahmeds.local'
browser = new Zombie()
config = null
consoleCb = 'function consoleSuccess(){console.log("success",arguments)}function consoleFailure(){console.log("failure",arguments)}'

browser.debug = true
browser.silent = false

fs.readFile '../../config.yaml', (error, data) ->
  config = yaml.eval data.toString 'ascii'
  # start the GAE dev server bound to the fake domain
  exec 'cd ../../ && dev_appserver.py -a ' + url + ' .', ->
    console.log arguments
  # browser.visit config.local.url, ->
  browser.visit 'http://' + url + ':8080', ->
    # inject the console callbacks, then exercise the avatar helper
    browser.evaluate consoleCb
    browser.evaluate 'profileImage("facebook",consoleSuccess,consoleFailure)'
    console.log browser.window.console.output
I have only limited familiarity with NodeJS, but I just tested running a NodeJS server and the App Engine local dev server on the same machine, and it works just fine. Without seeing your NodeJS code, I'm guessing you're also trying to run NodeJS on port 8080, and so the App Engine server complains when it's started (8080 is the default, and you noted it's the port you are using).
Try passing --port=8081 (or some other port) to your invocation of dev_appserver.py and it should resolve the conflict.
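That would make the setup from the question look something like this (a sketch, assuming ZombieJS is then pointed at the new port):

dev_appserver.py --port=8081 -a ahmeds.local .
# then visit http://ahmeds.local:8081 from ZombieJS instead of :8080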
Nothing in the code you've shown (other than the invocation of dev_appserver) should even be listening on any port (unless zombie implements a "server" for remote debugging or something like that). It looks like the port conflict is coming from somewhere else.
Note that zombie's own Mocha test framework does set up an express server, so if you're using it or code lifted from it, that might be doing it.
What does netstat have to say about who's binding to what port?
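For example, on Linux (a sketch; run as root to see other users' process names):

netstat -tlnp | grep 8080
# shows which PID/program currently holds the listening TCP socket on 8080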
I've been running nagios for about two years, but recently this problem started appearing with one of my services.
I'm getting
CRITICAL - Socket timeout after 10 seconds
for a check_http -H my.host.com -f follow -u /abc/def check, which used to work fine. No other services are reporting this problem. The remote site is up and healthy, and I can do a wget http://my.host.com/abc/def from the Nagios server, which downloads the response just fine. Also, check_http -H my.host.com -f follow on its own works just fine, i.e. it's only when I use the -u argument that things break. I also tried passing a different user agent string, no difference. I tried increasing the timeout, no luck. I tried with -v, but all I get is:
GET /abc/def HTTP/1.0
User-Agent: check_http/v1861 (nagios-plugins 1.4.11)
Connection: close
Host: my.host.com
CRITICAL - Socket timeout after 10 seconds
... which does not tell me what's going wrong.
Any ideas how I could resolve this?
Thanks!
Try using the -N option of check_http.
I ran into similar problems, and in my case the web server didn't terminate the connection after sending the response (https was working, http wasn't). check_http tries to read from the open socket until the server closes the connection, and if that never happens, the timeout occurs.
The -N option tells check_http to receive only the headers, not the content of the page / document.
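So the check from the question would become (untested; just the original command plus -N):

check_http -H my.host.com -f follow -u /abc/def -N
# -N / --no-body: read only the headers, so the check no longer waits for
# the server to close the connection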
I tracked my issue down to the security providers configured in the most recent version of openSUSE.
From summaries of other web pages, it appears to be a problem with an attempt to use the TLSv2 protocol, which does not appear to work correctly, or is missing something in the default configuration to allow it to work.
To work around the problem, I commented out the security provider in question in the JRE security configuration file:
#security.provider.10=sun.security.pkcs11.SunPKCS11
The security.provider.<n> number may be different in your configuration, but essentially the SunPKCS11 provider is the one at issue.
This configuration is normally found in
$JAVA_HOME/lib/security/java.security
of the JRE that you are using.
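To find the exact provider number on your system (a sketch, assuming $JAVA_HOME points at the JRE in use):

grep -n 'security.provider' "$JAVA_HOME/lib/security/java.security"
# prints the numbered provider list; comment out the sun.security.pkcs11 entry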
Fixed with this URL in nrpe.cfg (on Debian 6.0 Squeeze using nagios-nrpe-server):
command[check_http]=/usr/lib/nagios/plugins/check_http -H localhost -p 8080 -N -u /login?from=%2F
For whoever is interested: I stumbled into this problem too, and in my case it ended up being mod_itk on the web server.
A patch is available, though it seems it is not included in the current CentOS or Debian packages:
https://lists.err.no/pipermail/mpm-itk/2015-September/000925.html
In my case, the /etc/postfix/main.cf file was not configured correctly.
My mail relay host was not defined, and the relay restrictions were also very restrictive.
I had to add:
relayhost = mailrelay.ext.example.com
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
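After editing main.cf, something like the following can verify and apply the change (a sketch, assuming a standard Postfix install; postfix reload needs root):

postconf relayhost smtpd_relay_restrictions   # print the effective values
postfix reload                                # re-read main.cf without a restart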