I have set up Varnish on my CentOS server, which runs my Drupal site.
Browsing to any page returns a blank page due to a 503 Service Unavailable error.
I have read many questions and answers about intermittent 503s, but this occurs constantly. I can still browse to the site using www.example.com:8080 .
I am running CentOS 6 using this VCL:
https://raw.githubusercontent.com/NITEMAN/Varnish_VCL_samps-hacks/master/varnish3/drupal-base.vcl
I have also tried https://fourkitchens.atlassian.net/wiki/display/TECH/Configure+Varnish+3+for+Drupal+7 .
Not sure where to even start in debugging this.
ADDITIONAL INFO:
NITEMAN's answer below provides some really helpful debugging suggestions.
In my case it was something very simple: I had left the default 127.0.0.1 in my default.vcl. Changing this to my real external IP got things working. I hope that is the correct thing to do!
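For anyone hitting the same thing, the relevant part of the backend definition ended up looking roughly like this (a sketch: the IP below is a placeholder, not my real address):

```vcl
# Sketch of the fix in default.vcl: point the backend at the IP
# Apache actually listens on. 203.0.113.10 stands in for the real
# external IP; the port matches the :8080 Apache listener.
backend default {
    .host = "203.0.113.10";   # was "127.0.0.1"
    .port = "8080";
}
```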
As you're running my sample VCL, it should be easy to debug (try each step separately):
Make sure Apache is listening on 127.0.0.1:8080 (it may be listening on another IP and not on the local loopback). netstat -lpn | grep 8080 should help.
Raise the backend timeouts (in case the server is very slow, although the defined timeouts are already huge). Requires a Varnish reload.
Disable the health probe (Varnish may be marking the backend as sick). Comment out the probe basic block and the .probe line in backend default. Requires a Varnish reload.
Disable Varnish's logic by uncommenting the first return (pipe); in sub vcl_recv. Requires a Varnish reload.
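To disable the health probe, the change is roughly this (a sketch based on the sample VCL's layout, not an exact diff):

```vcl
# probe basic {            # commented out: stop health-checking entirely
#     .url = "/";
#     .interval = 5s;
# }

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    # .probe = basic;      # commented out so the backend is never marked sick
}
```

Remember to reload Varnish afterwards.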
When debugging, you should also provide:
the output of varnishadm debug.health
varnishlog output for a sample request
Hope it helps!
I'm developing on a shared local server with some other people. This server has one Apache/PHP instance, but then it has multiple "sites-available" (VirtualHost) for different people.
I would like to get Xdebug working so that each of us can use it independently. The problem is that if we enable Xdebug and, for example, my IDE is connected to the server and I set a breakpoint, it will stop when another person browses the page, even if it is in his/her own VirtualHost.
Any hints on how to properly set this up?
Edit:
Forgot to mention that the webserver is running in Docker.
This is the current configuration of XDebug:
# automatically start debugger on every request
xdebug.remote_enable=1
xdebug.remote_autostart=1
xdebug.remote_port=9000
# send all debug requests to 127.0.0.1
xdebug.remote_connect_back=0
xdebug.remote_host=host.docker.internal
# log all Xdebug requests to see if it is working correctly
xdebug.remote_log=/var/log/debug.log
Thanks.
This should not happen at all, unless you have xdebug.remote_autostart turned on and have hard-coded xdebug.remote_host (instead of using xdebug.remote_connect_back). You really don't want to hard-code xdebug.remote_host in a multi-user environment.
Debugging sessions are only initialised when the XDEBUG_SESSION_START parameter is detected (which is what the browser extension sets, or when it is added to the GET/POST parameters), and on the continued requests of that session.
There is also no such concept as:
my IDE is connected to the server
Upon every request, Xdebug (if set to trigger with the cookie) will connect to the IDE. It connects using the xdebug.remote_host setting, or the inferred IP address if xdebug.remote_connect_back is enabled. At the end of the request, that connection is severed. You can use xdebug.remote_log=/tmp/xdebug.log to create a log file, which will indicate when connections are being made and whether they work.
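For a shared server, the settings would look roughly like this (a sketch using Xdebug 2 setting names; the log path is an assumption):

```ini
; Do not start a session on every request; wait for the
; XDEBUG_SESSION cookie set by the browser extension.
xdebug.remote_enable=1
xdebug.remote_autostart=0
; Connect back to whichever client made the request, so each
; developer's IDE only ever receives his/her own sessions.
xdebug.remote_connect_back=1
xdebug.remote_port=9000
; Log connection attempts to verify it is working.
xdebug.remote_log=/var/log/xdebug.log
```

Note that inside Docker the connect-back address Xdebug sees may be the bridge interface rather than the developer's machine, so this part may need adjusting for the setup in the question.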
All,
I am trying to install Solr 7.2.1. While the installation works, I am not able to run Solr successfully. Whenever I try to run it, I get the following error.
SolrCore Initialization Failures {{core}}: {{error}}
Please check your logs for more information
I am not sure exactly what the error is. I don't see anything in the logs either; all I see are some INFO messages.
Please advise.
Running solr.cmd start
or solr.cmd stop -p 8983
doesn't cause any issues.
I am running Solr on port 8982 instead of the usual 8983; not sure if that makes a difference.
As mentioned by others, this problem is related to the JavaScript front end, not to the Solr server. The same thing happened to me in Chrome, and clearing the site data worked for me.
Clear site data by opening developer tools and going to Application -> clear storage
Important Note:
The usage showed 0 B both before and after clearing the site data, but after clearing it the Solr home page came up without any error.
CTRL + F5 did the trick for me :-)
The issue is with IE only; I have faced the same error. There was nothing in the logs either. I finally enabled JavaScript in IE, closed all browser instances, and tried again, and it is now working perfectly for me.
Please refer to the image below for reference.
UPDATE
App Engine SDK 1.9.24 was released on July 20, 2015, so if you're still experiencing this, you should be able to fix it simply by updating. See jpatokal's answer below for an explanation of the exact problem and solution.
Original Question
I have an application I'm working with and running into troubles when developing locally.
We have some shared code that checks an auth server for our apps using urllib2.urlopen. When I develop locally, my app's request from App Engine is rejected with a 404, but the same request succeeds just fine from a terminal.
I have App Engine running on localhost:8000 and the auth server on localhost:8001.
import urllib2
url = "http://localhost:8001/api/CheckAuthentication/?__client_id=dev&token=c7jl2y3smhzzqabhxnzrlyq5r5sdyjr8&username=amadison&__signature=6IXnj08bAnKoIBvJQUuBG8O1kBuBCWS8655s3DpBQIE="
try:
    r = urllib2.urlopen(url)
    print(r.geturl())
    print(r.read())
except urllib2.HTTPError as e:
    print("got error: {} - {}".format(e.code, e.reason))
which results in got error: 404 - Not Found from within AppEngine
It appears that App Engine is adding the scheme, host and port to the PATH portion of the URL I'm trying to hit, as this is what I see on the auth server:
[02/Jul/2015 16:54:16] "GET http://localhost:8001/api/CheckAuthentication/?__client_id=dev&token=c7jl2y3smhzzqabhxnzrlyq5r5sdyjr8&username=amadison&__signature=6IXnj08bAnKoIBvJQUuBG8O1kBuBCWS8655s3DpBQIE= HTTP/1.1" 404 10146
and from the request header we can see the whole scheme and host and port are being passed along as part of the path (header pieces below):
'HTTP_HOST': 'localhost:8001',
'PATH_INFO': u'http://localhost:8001/api/CheckAuthentication/',
'SERVER_PORT': '8001',
'SERVER_PROTOCOL': 'HTTP/1.1',
Is there any way to stop the App Engine dev server from hijacking this request to localhost on a different port? Or am I misunderstanding what is happening? Everything works fine in production, where our domains are different.
Thanks in advance for any assistance helping to point me in the right direction.
This is an annoying problem introduced by the urlfetch_stub implementation. I'm not sure what gcloud sdk version introduced it.
I've fixed this by patching the gcloud SDK until Google does, which means this answer will hopefully be irrelevant shortly.
Find and open urlfetch_stub.py, which can often be found at ~/google-cloud-sdk/platform/google_appengine/google/appengine/api/urlfetch_stub.py
Around line 380 (depends on version), find:
full_path = urlparse.urlunsplit((protocol, host, path, query, ''))
and replace it with:
full_path = urlparse.urlunsplit(('', '', path, query, ''))
more info
You were correct in assuming the issue was a broken PATH_INFO header. The full_path here is being passed after the connection has already been made.
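Here is a sketch of what the one-line patch changes, shown with Python 3's urllib.parse (the SDK itself uses the Python 2 urlparse module; the URL below is made up):

```python
from urllib.parse import urlunsplit

# Absolute form -- what the broken SDK versions put in the request line:
absolute = urlunsplit(('http', 'localhost:8001',
                       '/api/CheckAuthentication/', 'token=abc', ''))
# Origin (relative) form -- what the patch restores:
relative = urlunsplit(('', '', '/api/CheckAuthentication/', 'token=abc', ''))

print(absolute)  # http://localhost:8001/api/CheckAuthentication/?token=abc
print(relative)  # /api/CheckAuthentication/?token=abc
```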
disclaimer
I may very easily have broken proxy requests with this patch. Since I expect Google to fix it, I'm not going to go too crazy about it.
To be very clear, this bug ONLY affects LOCAL app development; you won't see it in production.
App Engine SDK 1.9.24 was released on July 20, 2015, so if you're still experiencing this, you should be able to fix this simply by updating.
Here's a brief explanation of what happened. Until 1.9.21, the SDK was formatting URL fetch requests with relative paths, like this:
GET /test/ HTTP/1.1
Host: 127.0.0.1:5000
In 1.9.22, to better support proxies, this changed to absolute paths:
GET http://127.0.0.1:5000/test/ HTTP/1.1
Host: 127.0.0.1:5000
Both formats are perfectly legal per the HTTP/1.1 spec; see RFC 2616, section 5.1.2. However, while that spec dates to 1999, there are apparently quite a few HTTP request handlers that do not parse the absolute form correctly, and instead just naively concatenate the host and the path together.
So in the interest of compatibility, the previous behavior has been restored. (Unless you're using a proxy, in which case the RFC requires absolute paths.)
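To see why naive handlers break, here is a sketch (not App Engine or SDK code) of parsing both request-target forms from RFC 2616 section 5.1.2 correctly:

```python
from urllib.parse import urlsplit

def path_from_target(target):
    # Origin form ("/test/?a=1") is already a path; absolute form
    # ("http://127.0.0.1:5000/test/?a=1") needs scheme and host stripped.
    if target.startswith('/'):
        return target
    parts = urlsplit(target)
    path = parts.path or '/'
    return path + ('?' + parts.query if parts.query else '')

print(path_from_target('/test/?a=1'))                       # /test/?a=1
print(path_from_target('http://127.0.0.1:5000/test/?a=1'))  # /test/?a=1
```

A handler that instead concatenates the Host header with the request target produces the doubled http://localhost:8001/... path seen in the question's logs.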
The OPTIONS/POST request is failing inconsistently with a console error of ERR_TIMED_OUT. We see the issue only sometimes; otherwise the request gets a proper response from the back end. When it times out, the request doesn't even reach the server.
I have done some research and found that, because of the browser's limit of six connections per host, a request may wait for a connection to be released. But I don't see any other requests pending; all the other requests have completed.
In the timeline I can always see that the request stalled for 20.00 seconds; the duration is almost always the same. The timeline shows only that it stalled, nothing else.
The status of the request shows failed: ERR_CONNECTION_TIMED_OUT. Please help.
The Network Timing
Console Error
I've seen this issue when I use an authenticated proxy server and usually a refresh of the page fixes it.
Are you using an authenticated proxy server where you are seeing this behavior? Have you tried on a pc with direct access (i.e. without proxy) to the Internet?
I got the same problem when I switched to another ISP. I thought I would only have to enter my new ID and password, but that wasn't the case.
I have an ADSL modem with a dry loop.
All others services were fine (DNS resolution, IP telephony, FTP, etc).
I ran a lot of tests (disabling the firewall, trying other browsers, trying under Linux, resetting the modem to factory defaults, etc.); none of them were successful.
To resolve the ERR_TIMED_OUT problem, I had to adjust the MTU and MRU values. I set them to 1458 rather than 1492, which is the default value.
It works for me. Maybe some ISPs use different values. Good luck.
I'm quite positive this is an Xdebug issue and not a PHPStorm issue, but to be clear up front: I am using PHPStorm locally to debug PHP code residing on a remote server. I have Xdebug set up on the server and am using the following config in php.ini on the server:
zend_extension=/home/httpd/php_extensions/xdebug-2.1.4/modules/xdebug.so
xdebug.remote_enable=1
xdebug.remote_port=9000
xdebug.remote_connect_back=1
xdebug.idekey=PHPSTORM-XDEBUG
I have set up PHPStorm as my local debugger. I use Xdebug Helper in Chrome or easy Xdebug in Firefox to initiate Xdebug (my problem occurs regardless of which I use). In general, debugging works fine: I can set breakpoints, step through code, see variables, etc.
The problem is that certain requests never receive a response from the server, which I have verified with Charles Web Debugging Proxy. This always happens on the same specific requests, regardless of whether breakpoints are set. The requests that don't receive a response are all similar: they call a PHP script that minifies and concatenates multiple JavaScript files and echoes the result.
To troubleshoot, I've enabled xdebug logging by adding this to php.ini:
xdebug.remote_log=/home/httpd/xdebug.log
When I grep the log for the name of the PHP file hit as the endpoint of these problematic requests, I get zero results (unless I have explicitly added breakpoints to that endpoint). When I do add breakpoints to that endpoint (minify.php), I can step through it in PHPStorm just fine, and it seems to make it through the code, even through echoing out the minified and concatenated JS; yet as far as my local machine can tell, the response is never sent from the server.
Any idea what's going on here? It's really hampering my ability to use xdebug. Thanks.
It's likely that cookies don't propagate to those requests. I would suggest checking whether you can set xdebug.remote_autostart and whether Xdebug then tries to connect.
Thanks for the response. That wasn't the problem, but it led me to dig further. It turns out to be a bug where Xdebug crashes when PHP calls exit(). Using the latest release candidate fixed the problem, as described here:
http://bugs.xdebug.org/view.php?id=815
Thanks!