With a route like this
<route id="proxy">
<from uri="jetty:http://0.0.0.0:9092/Domain?matchOnUriPrefix=true"/>
<to uri="http4://localhost:8080/Domain?bridgeEndpoint=true&throwExceptionOnFailure=false"/>
</route>
The response from the proxy is not GZIP encoded even when the response from localhost is.
Response from localhost:8080
HTTP/1.1 202 Accepted
Server: Apache-Coyote/1.1
Content-Encoding: gzip
Date: Sat, 10 Sep 2016 15:39:31 GMT
Vary: Accept-Encoding
Content-Type: multipart/mixed
Transfer-Encoding: chunked
Response from localhost:9092
HTTP/1.1 202 Accepted
Content-Type: multipart/mixed
Server: Apache-Coyote/1.1
Vary: Accept-Encoding
Transfer-Encoding: chunked
The HTTP4 component seems to uncompress the GZIP stream and remove the Content-Encoding header, even though bridgeEndpoint is set to true?
When I build the same proxy with the http or jetty component in the to uri
<to uri="http://localhost:8080/ReferenceDomain.svc?bridgeEndpoint=true&throwExceptionOnFailure=false"/>
or
<to uri="jetty:http://localhost:8080/ReferenceDomain.svc?bridgeEndpoint=true&throwExceptionOnFailure=false"/>
it works as expected.
What am I missing/doing wrong?
(I am using Camel 2.15.1)
It might be too late for an answer, but I recently stumbled upon the same issue: the Content-Encoding header is removed when proxying requests through Camel. Initially I thought something was wrong with the Camel HTTP component, but it is apparently the Apache HTTP Client.
If you leave the default configuration for the Apache HTTP Client builder in Camel, it comes with an interceptor that automatically decodes gzip'ed content and strips the Content-Encoding header from the response, so Camel never even gets a chance to read the header. Check the contentCompressionDisabled property of HttpClientBuilder.
So my solution was to override the default HttpClientBuilder to disable content compression, e.g.
import java.util.Map;

import org.apache.camel.component.http4.HttpComponent;
import org.apache.http.impl.client.HttpClientBuilder;

public class CustomHttp4Component extends HttpComponent {

    @Override
    protected HttpClientBuilder createHttpClientBuilder(final String uri, final Map<String, Object> parameters,
                                                        final Map<String, Object> httpClientOptions) throws Exception {
        HttpClientBuilder builder = super.createHttpClientBuilder(uri, parameters, httpClientOptions);
        // If left enabled, the HTTP client decompresses the entity and removes the
        // Content-Encoding header from the response. Camel already has logic to
        // decompress when the header is set, so we leave decompression to Camel
        // and disable it in the Apache HTTP client.
        builder.disableContentCompression();
        return builder;
    }
}
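To actually use the overridden component, it needs to be registered under the scheme the route refers to. This is only a minimal sketch (not from the original answer), assuming a plain Java setup and the CustomHttp4Component class above; in a Spring XML setup you would instead declare the component as a bean with id http4 so Camel resolves it from the registry.
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class ProxyApplication {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Register the customized component for the "http4" scheme before adding
        // any route that uses http4:// endpoints, so the proxy keeps the original
        // Content-Encoding header from the backend.
        context.addComponent("http4", new CustomHttp4Component());
        // Routes would be added here via context.addRoutes(...) before starting.
        context.start();
    }
}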
I'm upgrading an app to Mule 3.7.0 and migrating to the new HTTP connector implementation; the app uses the HTTP transport and Jersey to upload a file. Before upgrading I was able to upload a file using the following configuration
<http:connector name="HttpConnector" >
<service-overrides messageFactory="org.mule.transport.http.HttpMuleMessageFactory"
sessionHandler="org.mule.session.NullSessionHandler" />
</http:connector>
<flow name="UploadFlow">
<http:inbound-endpoint address="http://0.0.0.0:8095/sds" connector-ref="HttpConnector"/>
<jersey:resources>
<component>
<spring-object bean="FileUploadResource" />
</component>
</jersey:resources>
</flow>
where FileUploadResource is
@POST
@Path("module/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
public Response uploadModule(@FormDataParam("file") final InputStream is,
        @FormDataParam("file") FormDataContentDisposition fileDetails) throws IOException {
    String filename = fileDetails.getFileName();
    .....
}
The updated configuration is as follows
<http:listener-config name="HttpListenerConfig" host="0.0.0.0" basePath="/sds" port="8095"/>
<flow name="UploadFlow">
<http:listener config-ref="HttpListenerConfig" path="/*"/>
<jersey:resources>
<component>
<spring-object bean="FileUploadResource" />
</component>
</jersey:resources>
</flow>
and FileUploadResource is unchanged. When attempting to upload a file, I receive an HTTP 400 Bad Request error. What is the correct way to migrate this functionality to the new implementation? Thanks in advance.
The upload request is as follows:
Host: 192.168.29.129:8095
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://192.168.29.129:8090/mule/
Content-Length: 56068
Content-Type: multipart/form-data; boundary=---------------------------12776546320886
Origin: http://192.168.29.129:8090
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Hard to know what the problem is without logs. However, why use Jersey for this? You can upload files in a multipart request simply by using the HTTP connector. Check this page for information on how to build multipart requests: https://docs.mulesoft.com/mule-user-guide/v/3.7/http-request-connector
If you still have compelling reasons for staying with Jersey, make sure that multipart support is enabled for Jersey (take into account that Jersey was upgraded in Mule 3.6).
I solved this by setting parseRequest="false" on the http listener
<http:listener config-ref="HttpListenerConfig" path="/*" parseRequest="false"/>
On the server side I have the following filter in Apache, which allows all methods and all origins by default:
<filter>
<filter-name>CorsFilter</filter-name>
<filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>CorsFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Using Angular $http, one POST works, but another one fails. The request that fails talks to another app on the same Apache.
Cross-Origin Request Blocked: The Same Origin Policy disallows
reading the remote resource at http://localhost:..
(Reason: CORS header 'Access-Control-Allow-Origin'
does not match 'http://localhost:8100, http://localhost:8100').
But the response does contain the Access-Control-Allow-Origin header:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
access-control-allow-credentials: true, true
access-control-allow-origin: http://localhost:8100, http://localhost:8100
Vary: Origin
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Thu, 22 Oct 2015 04:35:29 GMT
Where did the 'http://localhost:8100, http://localhost:8100' come from? Do you think it is an Angular $http problem or an Apache problem?
Access-Control-Allow-Origin accepts either '*' or a single origin for its value. You can't put a comma-separated list there.
The browser is matching the origin (http://localhost:8100) against 'http://localhost:8100, http://localhost:8100' and not getting a match.
You have a similar problem on the access-control-allow-credentials line just before it. It looks like you are running the code that inserts your CORS headers twice.
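As an illustration only (the question does not show the server-side code), here is a small Java sketch of emitting the CORS headers from a single place; the class name and the hard-coded origin are assumptions, not taken from the post.
import javax.servlet.http.HttpServletResponse;

// Hypothetical helper: call it from exactly one layer (either the Tomcat
// CorsFilter or the application code, not both).
public final class CorsHeaders {
    public static void write(HttpServletResponse response, String requestOrigin) {
        if ("http://localhost:8100".equals(requestOrigin)) {
            // setHeader() replaces any value written earlier, while addHeader()
            // appends; two layers each adding the header is how "true, true" and
            // the doubled origin end up in one response.
            response.setHeader("Access-Control-Allow-Origin", requestOrigin);
            response.setHeader("Access-Control-Allow-Credentials", "true");
        }
    }
}
Alternatively, keep only the Apache CorsFilter and remove the header-writing code from the application, or vice versa, so the headers are produced in one place.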
I have an app which downloads a file from a server, receiving it in TCP packets, and I want to find the path of the file on the server. With Wireshark I can read some information in the first packet, such as the date, domain, and file name; as the path I read path=/, but the file isn't at domain.com/filename (404). Is there any way to get the real path of the file on the server?
Edit:
Everything I could make sense of in the first packet:
HTTP/1.1 200 OK
Date: Sat, 30 Aug 2014 14:35:55 GMT
Server: Apache/2.2.3 (CentOS)
X-Powered-By: PHP/5.3.24
Set-Cookie: frontend=m90hqgtsu70hk9pprd39sllqk4; expires=Sat, 30-Aug-2014 25:35:55 GMT; path=/; domain=www.exaple.com; HttpOnly
Content-Disposition: attachment; filename="xxx.y"
Content-Length: 46458848
Connection: close
Content-Type: application/octet-stream
The request:
GET /index.php/rest/server?method=download&sessionId=xxx&userId=a#a.com&deviceToken=xxx&sku=filename&version=2 HTTP/1.1
Connection: Keep-Alive
Accept Encoding: gzip
Accept-Language: it-IT,en,*
User-Agent: Mozilla/5.0
Host: www.domain.com
The file is being downloaded using HTTP (read RFC 2616). The packet you are looking at is a response. The domain and path information you are looking for is not in the response, it is in the request instead:
GET /index.php/rest/server?method=download&sessionId=xxx&userId=a#a.com&deviceToken=xxx&sku=filename&version=2 HTTP/1.1
Connection: Keep-Alive
Accept Encoding: gzip
Accept-Language: it-IT,en,*
User-Agent: Mozilla/5.0
Host: www.domain.com
So the URL to request the file would be http://www.domain.com/index.php/rest/server?method=download&sessionId=xxx&userId=a#a.com&deviceToken=xxx&sku=filename&version=2.
The filename you see in the response is the actual filename for the file. But not all responses will include such a filename, so be prepared for that. If there is no Content-Disposition header (or it does not have a filename attribute), look for a name attribute in the Content-Type header. If none, you will have to parse the request URL (see RFC 3986) looking for a filename in its Path component (in the above URL, that is /index.php/rest/server).
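As a rough sketch of that fallback order (not part of the original answer), assuming you already have the raw header value and request target as strings; the class and method names are made up and the parsing is deliberately simplistic:
import java.net.URI;

public final class FilenameGuess {
    // e.g. fromContentDisposition("attachment; filename=\"xxx.y\"") -> "xxx.y"
    public static String fromContentDisposition(String headerValue) {
        int i = headerValue.indexOf("filename=");
        if (i < 0) {
            return null; // no filename attribute present
        }
        return headerValue.substring(i + "filename=".length()).replace("\"", "").trim();
    }

    // e.g. fromRequestTarget("/index.php/rest/server?method=download") -> "server"
    public static String fromRequestTarget(String requestTarget) {
        String path = URI.create(requestTarget).getPath();
        return path.substring(path.lastIndexOf('/') + 1);
    }
}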
The domain and path pieces you see in the response are not related to the file at all. They belong to a cookie (see RFC 6265) that is used to persist server-side data between HTTP requests.
If the server does not voluntarily provide the path you are looking for, there is no way to find it out. The file it sends might not even be on disk; it might be generated data or data cached in application memory.
If the response does not contain the path (and it is unlikely that it does, because no server I know of would send it), you can't do anything to find it.
I'd like to obtain the JSON contents of a URL by performing an HTTP (GET) request (unless there's another way to do it...). What is the best way to perform an HTTP request over WiFi? I'm writing this in C/C++, so I'm wondering about a way to do an HTTP request programmatically.
I'm trying to read some JSON from a page I have. I'm not sure how to properly perform the HTTP request. The code I'm trying is as follows:
Serial.println(client.print("GET /private/tweets_json.php?count=1&screen_name=rawd_dev HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n"));
The 'client' is verified to have been connected earlier in the code. However, I get a 501 response:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>501 Method Not Implemented</title>
</head><body>
<h1>Method Not Implemented</h1>
<p>GET to /index.html not supported.<br />
</p>
<p>Additionally, a 404 Not Found
error was encountered while trying to use an ErrorDocument to handle the request.</p>
</body></html>
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Vary: Accept-Encoding
Date: Thu, 05 Sep 2013 03:15:12 GMT
Server: Google Frontend
Alternate-Protocol: 80:quic
I'm not sure what is causing this. How should it be done?
There is a new video on YouTube demonstrating the strength of edge caching in the GAE architecture, and at this particular point in the video they demonstrate how easy it is to leverage:
http://www.youtube.com/watch?v=QJp6hmASstQ#t=11m12
Unfortunately it's not that easy...
I'm looking to enable edge caching using the webapp2 framework provided by Google.
I'm calling:
self.response.pragma = 'Public'
self.response.cache_expires(300)
but it seems to be overridden by something else.
The headers I get are:
HTTP/1.1 200 OK
Pragma: Public
Cache-Control: max-age=300, no-cache
Expires: Sat, 23 Feb 2013 19:15:11 GMT
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
X-AppEngine-Estimated-CPM-US-Dollars: $0.000085
X-AppEngine-Resource-Usage: ms=39 cpu_ms=64
Date: Sat, 23 Feb 2013 19:10:11 GMT
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Cache-Control: no-cache, must-revalidate
Vary: Accept-Encoding
Server: Google Frontend
Content-Length: 600
I'm using ndb top level:
app = ndb.toplevel(webapp2.WSGIApplication(...
I tried the techniques explained here, but they don't seem to apply to webapp2:
http://code.google.com/p/googleappengine/issues/detail?id=2258#c14
I also looked at this post too:
https://groups.google.com/d/topic/webapp2/NmHXoZZSVvo/discussion
I tried to set everything manually with no success. Something is overriding my cache settings.
Is there a way to make it work with webapp2? Any other option is welcome.
EDIT: I'm using a URL with a version prefix (http://version.appname.appspot.com), and that's probably the cause of my problem.
This should be all you need:
self.response.cache_control = 'public'
self.response.cache_control.max_age = 300
Check the Caching Details documentation for more information; maybe you broke some of the rules. Here is the relevant part:
A response can be stored in Cloud CDN caches only if all of the following are true:
It was served by a backend service with caching enabled.
It was a response to a GET request.
The status code was 200, 203, 300, 301, 302, 307, or 410.
It has a Cache-Control: public directive.
It has a Cache-Control: s-maxage, Cache-Control: max-age, or Expires header.
It has either a Content-Length header or a Transfer-Encoding header.
Additionally, there are checks that will block caching of responses. A response will not be cached if any of the following are true:
It has a Set-Cookie header.
Its body exceeds 4 MB.
It has a Vary header with a value other than Accept, Accept-Encoding, or Origin.
It has a Cache-Control: no-store, no-cache, or private directive.
The corresponding request had a Cache-Control: no-store directive.
I'm guessing that you're mixing up two related but distinct ideas.
The first idea, which the video you link to talks about, is arranging to have certain files in your app served by a pool of App Engine servers that specialize in serving static content. This is faster than having your app serve these files, since there won't be a delay to start up a new instance of your app to serve a static file. (Strongly consider serving up your .js and .css this way.) This static serving facility is controlled entirely at app update (upload) time, via declarations you make in app.yaml (or appengine-web.xml for Java apps).
The second idea is arranging, via HTTP response headers, for pages that your app emits to be cacheable by caches outside of app engine.
If you declare files as static, you have some control over additional HTTP response headers that get served along with the file. See the documentation on configuring static files.