I am exploring mountebank and have a case where I need to analyse a gzipped JSON request in order to create a predicate that returns the appropriate response. Can I unzip a JSON request and analyse the JSON with mountebank?
It does sound possible using Injection: you should be able to require zlib in your JavaScript function, use it to gunzip the payload, parse the result as JSON, and then return a response as you see fit.
Depending on what you want to return though, you may need to use a combination of Predicate Injection (where a simple true/false determines whether or not the stub responds) and Response Injection (where you can tailor the response depending on the content of the payload).
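For illustration, here is a rough sketch of what such a predicate injection might look like. It assumes the imposter runs in binary mode (so request.body arrives base64-encoded), uses the older function (request, logger) injection signature, and checks a made-up payload field called "type"; adjust the signature and field names to your mountebank version and payload.

function (request, logger) {
    // Hedged sketch: gunzip the body, parse the JSON, and answer true/false
    // so mountebank knows whether this stub should respond.
    var zlib = require('zlib');
    try {
        var raw = zlib.gunzipSync(Buffer.from(request.body, 'base64'));
        var payload = JSON.parse(raw.toString('utf8'));
        logger.info('decoded payload type: ' + payload.type);
        return payload.type === 'order';    // example field and value only
    } catch (e) {
        logger.error('could not gunzip/parse the body: ' + e.message);
        return false;
    }
}

A response injection would look much the same, except that instead of a boolean it would typically return a response object (statusCode, headers, body) built from the parsed payload.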
Sorry for the late reply - but I thought I would add an answer.
Please see https://groups.google.com/forum/#!topic/mountebank-discuss/lvJq9PdIRlo for an update. There is now an open ticket to add support for this.
I haven't found resources online to solve my problem.
I'm creating an app with React Native that fetches and shows news articles from my database.
At the top of the page, there are some buttons with filters inside, for example:
one button "energy",
one button "politics"
one button "people"
one button "china"
etc...
Every time I press one of those buttons, the corresponding filter is stored in an array "selectedFilters", and I want to query my database to only show articles that match those filters.
Multiple filters can be selected at the same time.
I know one way of doing it, with a POST request:
await fetch('187.345.32.33:3000/fetch-articles', {
  method: 'POST',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically, what I should use is a GET request.
But I don't know how to send an Array with GET request.
I read online that I can pass multiple parameters in my URL (for example: arr[0]=selectedFilters[0]&arr[1]=...), but the fact is I never know in advance how many items will be in my array.
And also I'm not sure if I could write exactly the same way as my POST request above, but with GET:
await fetch('187.345.32.33:3000/fetch-articles', {
  method: 'GET',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
or if I can only pass items in the URL, but does this work?
await fetch(`187.345.32.33:3000/fetch-articles?arr[0]=${selectedFilters[0]}`, {
Or even better if something like this could work:
await fetch(`187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
Thanks for your help
You should definitely use a GET request if your purpose is to fetch the data.
One way of passing the array through the URL is to create a comma-separated string with all the filters (for example with join). This way you would not need to know in advance how many elements are in the array. The server can then read the string from the URL and split it on the commas.
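For example (a sketch, assuming the placeholder host from the question and an Express-style server, neither of which is confirmed here), the client could join the filters into one query parameter and the server could split it back:

// Client: join the selected filters into one comma-separated query parameter.
const query = encodeURIComponent(selectedFilters.join(','));
const response = await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${query}`, {
  method: 'GET'
});
const articles = await response.json();

// Server (Express-style sketch): split the string back into an array.
// app.get('/fetch-articles', (req, res) => {
//   const filters = req.query.filters ? req.query.filters.split(',') : [];
//   // ...query the database with `filters` and respond with the articles...
// });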
One more method you can try is to keep a filters array on the server side for the session. You can then use a POST/PUT request to modify that array as the user adds or removes filters. Finally, you can use a plain GET request to fetch the news, as the server will already have the filters for that session.
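A rough sketch of that flow, assuming a hypothetical /session-filters endpoint and cookie-based sessions (neither of which comes from the question):

// 1. Tell the server the current filters for this session (hypothetical endpoint).
await fetch('http://187.345.32.33:3000/session-filters', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include',                  // send the session cookie along
  body: JSON.stringify(selectedFilters)
});

// 2. Later, fetch the news with a plain GET; the server already knows the filters.
const response = await fetch('http://187.345.32.33:3000/fetch-articles', {
  method: 'GET',
  credentials: 'include'
});
const articles = await response.json();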
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically, what I should use is a GET request.
Yes, you do read that everywhere. It's wrong (or at best incomplete).
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” (Fielding, 2009)
It may help to remember that on the HTML web, POST was the only supported method for requesting changes to resources, and the web was catastrophically successful.
For requests that are effectively read only, we should prefer to use GET, because general purpose HTTP components can leverage the fact that GET is safe (for example, we can automatically retry a safe request if the response is lost on an unreliable network).
I'm not sure if I could write exactly the same way as my POST request above, but with GET
Not quite exactly the same way
A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain. -- RFC 9110
The right idea is to think about this in the framing of HTML forms; in HTML, the same collection of input controls can be used with both GET and POST. The difference is what the browser does with the information.
Very roughly, a GET form is used when you want to put the key value pairs described by the submitted form into the query part of the request target. So something roughly like
await fetch(`187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
  method: 'GET'
});
Although we would normally want to be using a URI Template to generate the request URI, rather than worrying about escaping everything correctly "by hand".
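For instance, a sketch using the built-in URLSearchParams to take care of the escaping (a URI Template library would be the more general tool referred to above):

// Repeat the key once per filter so the query reads filters=a&filters=b&...
const params = new URLSearchParams();
selectedFilters.forEach(f => params.append('filters', f));

const response = await fetch(`187.345.32.33:3000/fetch-articles?${params.toString()}`, {
  method: 'GET'
});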
However, there's no rule that says general purpose HTTP components need to support arbitrarily long URIs (for instance, Internet Explorer used to have a limit just over 2000 characters).
To work around these limits, you might choose to support POST - it's a tradeoff, you lose the benefits of safe semantics and general purpose cache invalidation, you gain that it works in extreme cases.
I am trying to create a mock server for handling functional tests in Karate. For that purpose I need to match certain incoming requests based on certain elements like method, path and presence of "Authorization" header in the incoming requests.
The condition I have is something like:
methodIs('get') && pathMatches('/mypath')
I need to write a condition for the presence of an "Authorization" header in the request.
As per documentation, we can use:
karate.get('requestHeaders.Authorization[0]') == 'foo'
However, when I am trying to use the above, it isn't working. I checked for the presence of requestHeaders.Authorization[0], but that is being returned as null. My idea was to modify the above to something like karate.get('requestHeaders.Authorization[0]') == '#notnull'.
I ended up trying headerContains('Authorization',''), which seems to be working - however I am not sure if that is the right way to check for the presence of that specific header. Is there any other (better) way to do this?
We are planning to improve this in a future release: https://github.com/karatelabs/karate/issues/1962
Meanwhile, I would have thought that a simple JavaScript condition like requestHeaders.Authorization would work. For example:
Scenario: requestHeaders.Authorization && methodIs('get')
Unfortunately it is case-sensitive, which is what we plan to fix in a future release. You could use an OR condition for the time being.
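For instance, something along these lines (a sketch that simply extends the example above; the duplicated check just covers both casings of the header name until the fix lands):

Scenario: (requestHeaders.Authorization || requestHeaders.authorization) && methodIs('get') && pathMatches('/mypath')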
For more explanation, refer: https://stackoverflow.com/a/55823180/143475
I'm new to AngularJS and I am currently building a webapp using a Django/Tastypie API. This webapp works with posts and an API call (GET) looks like :
{
    "title": "Bootstrap: wider input field - Stack Overflow",
    "link": "http://stackoverflow.com/questions/978...",
    "author": "/v1/users/1/",
    "resource_uri": "/v1/posts/18/"
}
To request these objects, I created an Angular service which embeds resources declared like the following one:
Post: $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/')
Everything works like a charm but I would like to solve two problems :
How to properly replace the author field by its value? In other words, how to resolve, as automatically as possible, every reference field?
How to cache this value to avoid several calls on the same endpoint?
Again, I'm new to AngularJS and I might have missed something in my comprehension of the $resource object.
Thanks,
Maxime.
With regard to question one, I know of no trivial, out-of-the-box solution. I suppose you could use custom response transformers to launch subsidiary $resource requests, attaching promises from those requests as properties of the response object (in place of the URLs). Recent versions of the $resource factory allow you to specify such a transformer for $resource instances. You would delegate to the global default response transformer ($httpProvider.defaults.transformResponse) to get your actual JSON, then substitute properties and launch subsidiary requests from there. And remember, when delegating this way, to pass along the first TWO, not ONE, parameters your own response transformer receives when it is called, even though the documentation mentions only one (I learned this the hard way).
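Here is a minimal sketch of that transformer approach, assuming $http and ConfigurationService are injected as in the question's service and that the author URI can simply be appended to the API base URL (both assumptions, not confirmed by the question):

var Post = $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/', {}, {
  get: {
    method: 'GET',
    transformResponse: function (data, headersGetter) {
      // Delegate to the default transformers to obtain parsed JSON, passing
      // along BOTH parameters (data and headersGetter), as noted above.
      var defaults = $http.defaults.transformResponse;
      var post = (angular.isArray(defaults) ? defaults : [defaults])
        .reduce(function (d, transform) { return transform(d, headersGetter); }, data);
      // Replace the author URI with a promise for the author object.
      post.author = $http.get(ConfigurationService.API_URL_RESOURCE + post.author)
        .then(function (response) { return response.data; });
      return post;
    }
  }
});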
There's no way to know when every last promise has been fulfilled, but I presume you won't have any code that will depend on this knowledge (as it is common for your UI to just be bound to bits and pieces of the model object, itself a promise, returned by the original HTTP request).
As for question two, I'm not sure whether you're referring to your main object (in which case $scope should suffice as a means of retaining a reference) or these subsidiary objects that you propose to download as a means of assembling an aggregate on the client side. Presuming the latter, I guess you could do something like maintaining a hash relating URLs to objects in your $scope, say, and have the success functions on your subsidiary $resource requests update this dictionary. Then you could make the response transformer I described above check the hash first to see if it's already got the resource instance desired, getting the $resource from the back end only when such a local copy is absent.
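A sketch of that URL-to-object hash (the names here are illustrative only):

// Reuse in-flight or resolved requests for a given URI instead of re-fetching.
var authorCache = {};    // e.g. kept on a service, or on $scope as described above

function getAuthor(uri) {
  if (!authorCache[uri]) {
    authorCache[uri] = $http.get(ConfigurationService.API_URL_RESOURCE + uri)
      .then(function (response) { return response.data; });
  }
  return authorCache[uri];
}

The response transformer sketched earlier could then call getAuthor(post.author) instead of issuing an unconditional $http.get.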
This is all a bunch of work (and round trips) to assemble resources on the client side when it might be much easier just to assemble your aggregate in your application layer and serve it up pre-cooked. REST sets no expectations for data normalization.
It's regarding content length.
I know its use: it tells the receiver how much data I am sending.
It's available with both requests and responses. In Java, we have the setContentLength() method for the response; we can set the content length to some value using this method, and the browser is going to read only up to that point in the body.
For the request, we don't have any setContentLength() method, at least not in Java. So I believe the browser sets it based on the data the request contains. Now let's say I modify this request in between and change a parameter value to some bigger value; how do I change the content length? If I don't change the content length, the server doesn't read the complete body: it only reads up to the content length and ignores the rest of the body content.
I hope my question is clear.
Thanks.
PS: I am changing the request in a C filter in the web server before the request goes to the application server.
I'm not sure it's a good idea to change the content length of a request, and I don't think you can achieve it simply. The only way I can think of is to parse the content of the request, strip the unneeded data, and then generate a new request that mimics the original but contains the modified data, with a content length recomputed to match the new body. It also depends on whether you are using POST or GET requests.
I am working on an ISAPI Filter to strip certain content out of responses. I need to collect all the body of the response before I do the processing, as the content I'm stripping could overlap send buffers.
To do this I'd like to buffer the response content with each SF_NOTIFY_SEND_RAW_DATA notification until I get to the last one, then send the translated data. I would like to know the best way to determine which SF_NOTIFY_SEND_RAW_DATA notification is actually the last. If I wait until the SF_NOTIFY_END_OF_REQUEST notification, then I don't know how to send the data I've buffered.
One approach would be to use the content-length. This would require I detect the end of the headers. It would also require assuming the content-length header is correct (is that guaranteed?). Since HTTP doesn't even require a content-length header, I'm not even sure it will always be there. There seems like there should be an easier way.
I'm assuming the response is not chunked, so I am not handling dechunking before I do the response change. Also, when I do the modifications to the response body, the size of the response body will not change, so I do not need to go back and update the content-length.
I eventually found some good discussions via google.
This post answers my questions, as well as raising issues a more complicated filter would have to address: http://groups.google.com/group/microsoft.public.platformsdk.internet.server.isapi-dev/browse_thread/thread/85a5e75f342fad2b/cbb638f9a85c9e03?q=HTTP_FILTER_RAW_DATA&_done=%2Fgroups%3Fq%3DHTTP_FILTER_RAW_DATA%26start%3D20%26&_doneTitle=Back+to+Search&&d&pli=1
The filter I have is buffering the full response into its own buffer and then using SF_NOTIFY_END_OF_REQUEST to send the contents. The modification it makes does not change the size, and it precludes the possibility that the response is chunked, so in my case the filter is relatively simple.