Google App Engine 500 internal server error on post requests - google-app-engine

I created a Google App Engine instance running code based on Vert.x and am trying to handle POST requests.
If I launch the application locally, everything works fine, but every POST request to the deployed App Engine host ends with a 500 Internal Server Error.
Why? Did I forget something?
For example:
curl -w "%{http_code}" -d "url=test" -X POST "http://localhost:80/hashposttest"
curl -w "%{http_code}" -d "url=test" -X POST "https://nifty-memory-268407.appspot.com/hashposttest"
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>500 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered an error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
500
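No answer is recorded for this question, but App Engine's generic 500 page hides the real exception, which only appears in the application logs. A minimal diagnostic sketch, assuming the Cloud SDK is installed and using the project ID from the question:

```shell
# Stream the logs of the default service to see the actual stack trace
# behind the generic 500 page (project ID taken from the question's URL).
gcloud app logs tail -s default --project nifty-memory-268407
```

A frequent culprit for apps that work locally but return 500 on App Engine is a server bound to a hard-coded port instead of the one App Engine injects via the PORT environment variable.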

Related

Solarium return Solr HTTP error : OK (404)

I use Solarium to access Solr from Symfony. It works without problems on my computer and the dev computer, but not on the prod server.
On the prod server, Solr is running with the same configuration, same port, and same logins.
Do you have any idea what the problem could be?
Here is the error:
Solr HTTP error: OK (404)
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Not Found</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Not Found</h2>
<hr><p>HTTP Error 404. The requested resource is not found.</p>
</BODY></HTML>
Problem solved. There was a misconfigured proxy on the Windows server.
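Since the resolution was a misconfigured proxy, one way to confirm that hypothesis from the prod server is to compare the same request with and without proxying. A sketch (the hostname, port, and core name are placeholders, not from the question):

```shell
# Same ping request, first through the environment's proxy settings,
# then with all proxies disabled. solr-prod:8983/mycore are hypothetical.
curl -s -o /dev/null -w "%{http_code}\n" "http://solr-prod:8983/solr/mycore/admin/ping"
curl -s -o /dev/null -w "%{http_code}\n" --noproxy '*' "http://solr-prod:8983/solr/mycore/admin/ping"
# If only the second call returns 200, a proxy is intercepting the traffic.
```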

How to restore postgreSQL database for odoo to google cloud?

$ curl -F 'master_pwd=superadmin_passwd' -F backup_file=https://storage.cloud.google.com/smart02-bucket-1/Smart.zip -F 'copy=true' -F 'name=Smart1' http://34.91.12.120/web/database/restore
After executing this line, this is the result:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>Session expired (invalid CSRF token)</p>
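No answer is recorded for this question, but the error suggests the request never reached the restore controller. A hedged sketch of what usually works, assuming the database manager is enabled on that server: Odoo's /web/database/restore endpoint expects backup_file as an actual file upload (curl's @ syntax), not a URL string, so the backup has to be downloaded first.

```shell
# Download the backup first. The storage.cloud.google.com URL from the
# question normally requires browser authentication; adjust the download
# step if the object is not publicly readable.
curl -L -o /tmp/Smart.zip "https://storage.cloud.google.com/smart02-bucket-1/Smart.zip"

# Post it as a real multipart file upload to the full restore path.
curl -F 'master_pwd=superadmin_passwd' \
     -F 'backup_file=@/tmp/Smart.zip' \
     -F 'copy=true' \
     -F 'name=Smart1' \
     "http://34.91.12.120/web/database/restore"
```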

How to configure the timeout for the Apache Camel Jetty component

I use Talend Open Studio 5.6 ESB and am building an Apache Camel route. The end of my route is:
.removeHeaders("CamelHttpPath")
.removeHeaders("CamelHttpUrl")
.removeHeaders("CamelServletContextPath")
.to("jetty:http://toOverRide?bridgeEndpoint=false&throwExceptionOnFailure=false&useContinuation=false&httpClient.timeout=120000&httpClient.idleTimeout=120000")
Before this, I override the URL in the Jetty component to call a remote service. The service takes 30 seconds to reply, but the route closes the connection first and sends a 503 error. How can I increase the timeout?
Camel log:
[WARN ]: org.apache.camel.component.jetty.CamelContinuationServlet - Continuation expired of exchangeId: ID-A1995-62398-1480423883621-0-1
[WARN ]: org.apache.camel.component.jetty.CamelContinuationServlet - Cannot resume expired continuation of exchangeId: ID-A1995-62398-1480423883621-0-1
Response:
HTTP/1.1 503 Service Unavailable
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 1325
Server: Jetty(8.y.z-SNAPSHOT)
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 503 Service Unavailable</title>
</head>
<body>
<h2>HTTP ERROR: 503</h2>
<p>Problem accessing /sync/mockTmcWithLog/utilisateurs/30000. Reason:
<pre> Service Unavailable</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>
Here is an example that raises both the continuation timeout and the HTTP client timeout:
.to("jetty:http://toOverRide?continuationTimeout=900000&httpClient.timeout=900000")

What Solr configuration is required to fetch an HTML page and parse it?

I've been consulting one tutorial after another and have spent oodles of time searching.
I installed Solr from scratch and started it up:
bin/solr start
I successfully navigate to the Solr admin UI. Then I create a new core:
bin/solr create -c core_wiki -d basic_configs
I look at the help for the bin/post command.
bin/post -h
...
* Web crawl: bin/post -c gettingstarted http://lucene.apache.org/solr -recursive 1 -delay 1
...
So I try to make a similar call... but I keep getting a FileNotFoundException.
bin/post -c core_wiki http://localhost:8983/solr/ -recursive 1 -delay 10
/usr/lib/jvm/java-7-openjdk-amd64/jre//bin/java -classpath /home/ubuntu/src/solr-5.4.0/dist/solr-core-5.4.0.jar -Dauto=yes -Drecursive=1 -Ddelay=10 -Dc=core_wiki -Ddata=web org.apache.solr.util.SimplePostTool http://localhost:8983/solr/
SimplePostTool version 5.0.0
Posting web pages to Solr url http://localhost:8983/solr/core_wiki/update/extract
Entering auto mode. Indexing pages with content-types corresponding to file endings xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
Entering recursive mode, depth=1, delay=10s
Entering crawl at level 0 (1 links total, 1 new)
SimplePostTool: WARNING: Solr returned an error #404 (Not Found) for url: http://localhost:8983/solr/core_wiki/update/extract?literal.id=http%3A%2F%2Flocalhost%3A8983%2Fsolr&literal.url=http%3A%2F%2Flocalhost%3A8983%2Fsolr
SimplePostTool: WARNING: Response: <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/core_wiki/update/extract. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
SimplePostTool: WARNING: IOException while reading response: java.io.FileNotFoundException: http://localhost:8983/solr/core_wiki/update/extract?literal.id=http%3A%2F%2Flocalhost%3A8983%2Fsolr&literal.url=http%3A%2F%2Flocalhost%3A8983%2Fsolr
SimplePostTool: WARNING: An error occurred while posting http://localhost:8983/solr
0 web pages indexed.
COMMITting Solr index changes to http://localhost:8983/solr/core_wiki/update/extract...
SimplePostTool: WARNING: Solr returned an error #404 (Not Found) for url: http://localhost:8983/solr/core_wiki/update/extract?commit=true
SimplePostTool: WARNING: Response: <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/core_wiki/update/extract. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
Time spent: 0:00:00.041
I'm still fairly new to Solr indexing. Any hints that could point me in the right direction would be appreciated.
It seems that the request handler named /update/extract is missing from your configuration.
The ExtractingRequestHandler is not incorporated into the Solr war file; it is provided as a Solr plugin, and you have to load it (and its dependencies) explicitly. (Apache Solr Wiki)
It should be defined in solrconfig.xml, like:
<requestHandler name="/update/extract" class="org.apache.solr.handler.extraction.ExtractingRequestHandler"/>
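Declaring the handler is not enough on its own: the extraction contrib jars also have to be on the classpath. The stock solrconfig.xml shipped with Solr 5.x loads them with lib directives like the following (paths assume the default install layout and may need adjusting):

```xml
<!-- Load Solr Cell (ExtractingRequestHandler) and its Tika dependencies. -->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
```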

Indexing .tar.gz files in Solr 5.3.1: HTTP Error 405 POST not supported

Under a Solr 5.3.1 installation with /update working as expected, I tried to index a .tar.gz file with the /update/extract request handler:
curl "http://localhost:8983/solr/#/myfirstcore/update/extract?literal.id=adocument&commit=true" -H 'Content-type:application/octet-stream' --data-binary "#encapsulate.tar.gz"
But I receive the following:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 405 HTTP method POST is not supported by this URL</title>
</head>
<body><h2>HTTP ERROR 405</h2>
<p>Problem accessing /solr/admin.html. Reason:
<pre> HTTP method POST is not supported by this URL</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
In the admin panel, the /update/extract handler is specified as:
/update/extract
class:org.apache.solr.handler.extraction.ExtractingRequestHandler
version:5.3.1
description:Add/Update Rich document
src:null
And Solr was generally installed according to these directions: Digital Ocean: Installing Solr 5.2.1 on Ubuntu 14.4
Given the above error message, how can I configure Solr to index zipped files (including .tar.gz)? The use case is to associate content with taxonomy metadata stored in JSON format by zipping them together. This way Solr indexes documents and their associated taxonomy metadata together, and follow-on partial-update commands are not needed.
Solution: change
curl "http://localhost:8983/solr/#/myfirstcore/update/extract?literal.id=adocument&commit=true" -H 'Content-type:application/octet-stream' --data-binary "#encapsulate.tar.gz"
to
curl "http://localhost:8983/solr/myfirstcore/update/extract?literal.id=adocument&commit=true" -H 'Content-type:application/octet-stream' --data-binary "#encapsulate.tar.gz"
The #/myfirstcore in the first URL is a fragment used by the browser-based admin UI, so curl actually POSTed to /solr/admin.html, which does not accept POST. With the corrected URL, a query for id=adocument returns 1 hit. That it didn't pick up the fields is a separate issue.
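To double-check the result, the indexed document can be queried back by the literal id that was set on the extract request (same host and core as the question):

```shell
# Query for the id supplied via literal.id=adocument on the extract call.
curl "http://localhost:8983/solr/myfirstcore/select?q=id:adocument&wt=json"
```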
