My JMeter version is apache-jmeter-5.4.1.
I am trying to set up an HTTP Request sampler against a React-based website, something like this:
HTTP Request - GET http://YYY.YYY.YYYY/141719 (With Retrieve Embedded Resources checked)
When I run this, I can see that JMeter captures the embedded resource requests (secondary requests) such as *.css and *.js files.
Second set of embedded resource requests:
However, one of these secondary requests, bundle.xxxxxxx.js, triggers another set of embedded resource requests to the server which retrieve further *.js files as part of the request initiator chain.
The name of this file itself is randomly generated at build time, e.g. bundle.0787f963ab0ac67dd7d4.js.
The browser, of course, parses this bundle.xxxxxxx.js immediately and fetches all of its embedded resources/requests (including chunk.*.js).
My problem is how to replicate this behaviour in JMeter so that the second set of embedded resource requests is also triggered. At the moment I can only capture the first set, which does not give me true load test results, because the second set sends more traffic to the server. Is there a way to recursively retrieve all embedded resources?
Our application under test is based on React JS.
As per the JMeter project main page:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
There are several initiator types:
parser (index, styles, fonts, etc.) - these are picked up by the embedded resources downloading functionality. All of the types below need to be handled some other way:
redirect
script
other
So if you need to mimic HTTP requests which originate from JavaScript, you will have to replicate the logic of that JavaScript code in JMeter.
There is a minimal chance that you will be able to copy and paste the JavaScript into a JSR223 PostProcessor; most likely it relies on objects like navigator, window or XMLHttpRequest, so it is highly likely that you will have to re-write it in Groovy. Once you have enough data to properly build the HTTP Request samplers, you will most probably need to put them under a Parallel Controller.
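For illustration only, a minimal Groovy sketch of such a JSR223 PostProcessor (attached to the bundle.xxxxxxx.js sampler) could look like the following. It assumes the chunk file names appear as literal chunk.<hash>.js strings in the bundle response, which may not be true for your build; the regular expression and variable names are made up.
// JSR223 PostProcessor, language: groovy
// 'prev' (previous SampleResult) and 'vars' (JMeter variables) are bindings provided by JMeter

// Response body of the bundle.xxxxxxx.js request
def body = prev.getResponseDataAsString()

// Hypothetical pattern: anything that looks like <something>chunk.<hex>.js
def chunks = body.findAll(/[\w-]*chunk\.[0-9a-f]+\.js/).unique()

// Expose the chunk names as JMeter variables: chunkUrl_1, chunkUrl_2, ...
chunks.eachWithIndex { name, i ->
    vars.put("chunkUrl_" + (i + 1), name)
}
vars.put("chunkUrl_matchNr", String.valueOf(chunks.size()))
The chunkUrl_* variables could then feed HTTP Request samplers, for example via a ForEach Controller or under the Parallel Controller mentioned above; the extraction logic itself depends entirely on how your bundle references its chunks.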
Another option is to go for the WebDriver Sampler and use real browsers for your tests, but be aware that browsers are very resource-intensive compared to HTTP Request samplers.
How do I pass properties in Camel to different Camel contexts? My current architecture processes 4 different kinds of messages (A, B, C, D) and uses the same routes for all of them; only while saving does it change the DB table names based on the message type. Now I have a requirement to save only a few values from the exchange object for a particular message type. I'm thinking of setting a property in a route and, if the message type is 'E', directing it to another route. But how do I pass the property value to a different Camel context?
I don't know if you mean application properties (such as in Java property files) or an Exchange property as in Camel's Exchange object that wraps a message.
However, it sounds like the latter, since application properties are normally not passed around.
Exchange properties are just part of the Camel wrapper around a message during processing. If you send a message during route processing to another endpoint, as with .to(endpoint), normally only the message is sent and the Exchange is thrown away.
from(endpoint)
    .setProperty("myProperty", constant(value))
    .to("activemq:queue:myQueue");
    // myProperty is no longer available at myQueue
There are of course exceptions; it depends on the endpoint type. For example, when sending to direct endpoints (Camel's synchronous in-memory endpoints), the Exchange survives. But direct endpoints do not work across different Camel contexts. For other endpoint types like HTTP, JMS, etc., the properties are lost.
Therefore, if you want to set a "message variable" that is passed on to other endpoints, especially across different Camel contexts, you have to set a message header.
from(endpoint)
    .setHeader("myHeader", constant(value))
    .to("activemq:queue:myQueue");
    // myHeader is still available at myQueue
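On the consuming side, a route in a second CamelContext can then read that header and branch on it. Below is a minimal sketch; the queue name and the direct: endpoints are placeholders, and the type-'E' branching merely mirrors the scenario from the question.
import org.apache.camel.builder.RouteBuilder;

public class TypeEConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Deployed in a different CamelContext; the header set in the first
        // context travels with the JMS message and is readable here.
        from("activemq:queue:myQueue")
            .choice()
                .when(header("myHeader").isEqualTo("E"))
                    .to("direct:saveSelectedValues")   // hypothetical route for type 'E'
                .otherwise()
                    .to("direct:saveFullMessage");     // hypothetical default route
    }
}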
While reading about the File API, and wanting to write data directly from an indexedDB database to the client disk instead of first building and holding a large blob in RAM to download to disk, there are a few basic items I'm not understanding.
In the MDN documents these two statements are found:
In Gecko, privileged code can create File objects representing any local file without user interaction.
If you want to use the DOM File API in chrome code, you can do so without restriction. In fact, you get one bonus feature: you can create File objects specifying the path of the file on the user's computer. This only works from privileged code, so web content can't do it.
Where exactly does one write chrome code and/or Gecko privileged code? Is this beyond a web extension? I've read about and experimented with extensions, so I'm not asking specifically about how to access them.
I'm not concerned about a 'normal' web page and server accessing the client disk. I know that it's not permitted, in order to protect the individual.
I'm interested in what can be done offline through the browser (with the aid of web extensions and/or a separate profile granting special permissions, but without node.js, Electron, etc.) by an individual who knowingly wants to use the browser to do what perhaps should have been built in the OS rather than the browser.
Put another way, if I want to use the browser just to run my JavaScript code to perform tasks entirely offline on my own machine, where is the privileged code written that gives access to these types of APIs which aren't subject to the security restrictions of a normal web page?
Is it still JavaScript, or is it C++ in these areas?
Thank you.
This old question provides a link to an extension that uses the File API to write to disk in a way that appears to bypass the creation of a large blob of data. It's six years old but appears to contain what is needed, at least to get started.
I'm not referring to their trying to get around using indexedDB, but just to the fact that this type of extension could allow writing each object from the database directly to the client disk without first having to generate a large blob to download.
Attempt at employing Andrew Swan's suggestion
I'm trying to put the pieces together but have reached a point where I'm not sure how to continue. I wrote the code below in the background script of an extension. In attempting to employ Andrew Swan's suggestion, the plan is to initiate a GET request for a text/csv file, which is intercepted and replaced by data extracted from the database and written to the GET request by the stream filter.
First, make a GET request to a bogus url and listen for the response, as follows:
let request = new XMLHttpRequest();
request.open("GET", url);
request.setRequestHeader("Content-Type", "text/csv");
request.onreadystatechange = () =>
  {
    // Report the request status back to the content script over the port.
    portFromCS.postMessage( { 'func' : 'disp_result', 'args' : { 'msg' : "request.status :", 'value' : request.status + ' : ' + request.statusText } } );
  };
request.send(null);
Second, intercept the request and write to the GET, as follows:
browser.webRequest.onBeforeRequest.addListener(
listener,
{ urls : ["<all_urls>"] },
["blocking"] );
function listener( details )
{
let filter = browser.webRequest.filterResponseData(details.requestId);
let decoder = new TextDecoder("utf-8");
let encoder = new TextEncoder();
filter.onstart = event =>
{
let str = decoder.decode(event.data, {stream: true});
str = '' +
'HTTP/1.1 200 OK \r\n' +
'Content-Length: 17 \r\n' +
'Content-Type: text/csv \r\n\r\n\r\n' +
'This is a string.';
filter.write(encoder.encode(str));
filter.disconnect();
};
}
The message sent from the background script in the request.onreadystatechange function is received in the content script, and the request.status is '0'.
The filter.onstart is used because the ondata event will never fire since the url is bogus. Also, that means there will be no converting of data from the url, but only the writing of new data through the filter.
The str data is written and received by the request, but only as responseText and not as a response header. The request.status remains '0' instead of '200'.
It seems that the response header can't be changed except in onHeadersReceived, which, it appears, will never take place for a bogus url. However, I tried this on a real url and, even though the event fired, an error of "webRequest.HttpHeaders is not a function" was thrown. I had "responseHeaders" in the webRequest extraInfoSpec at that time.
My questions are:
Can a response header be written to set the request.status to '200' and then start writing the database data through an async function in small blocks as retrieved?
Can the Content-Disposition section of the response header be set such that it automatically starts the download of the response text, allows the user to select the file name and save location, and stays "open" so that writing to the file continues as the data is extracted from the database and passed to the GET request through filter.write()?
Thank you.
Conclusion
It was a good idea but I don't think it is possible for at least two reasons.
One is that webRequest doesn't appear to intercept a downloads.download() call at all, or any download event; so you can't intercept a download, and an event with a Content-Disposition of 'attachment' is needed to even try to write to it with a stream. I could intercept a forced click on an anchor tag href, but no other events fired beyond onBeforeRequest.
The other is that a response header can't be modified until an onHeadersReceived event, which means the fake URL has to return something; you can't just cancel it in onBeforeRequest. So this wouldn't work offline. But even if you let it process online against an existing URL that returns a response header, it won't accept a modification. I tried repeatedly to modify the response header and it just won't work. I tried an XMLHttpRequest GET and can intercept the events that fire but can't modify the response header; so I can't set Content-Disposition to 'attachment', with or without a file name, to start a download. I can write to it, but that's no good unless it is going to download what is written. It would be fine if the written content were going to a web page.
Also, if you redirect the URL along the way to anything other than a webRequest-acceptable URL, the remaining events won't be interceptable. So, if you redirect to an object URL in onBeforeRequest, you won't intercept the response headers stage in webRequest, but you can view them in the onreadystatechange event of the XMLHttpRequest.
So the upshot is that it appears the response headers cannot be modified, even though the MDN Web Docs say it is possible. And this idea of using a webRequest stream filter to stream data generated on the client, or extracted from an indexedDB database, as opposed to building one large blob for download, won't work, because you can't intercept a download or change the response headers to trigger a download into which to write via the stream filter.
It was an interesting idea, though. I still wonder whether the download would remain 'open', so to speak, while the data was being written on the client and passed in blocks or chunks. Perhaps, if the part of the response headers that states how data is to be passed and received were also modified, it would work.
For now, I am no longer pursuing this approach. One of the Web Docs or bug reports stated that it is planned to allow a data URL to be intercepted. Perhaps, for an offline download to the client, that would be preferable to a fake URL.
If anyone gets this to work, please let us know. Thank you.
A couple of terms:
"Gecko" is the rendering engine on which Firefox (and a few other applications like Thunderbird) is built
"Chrome" in this context means the browser user interface and features, as opposed to the contents of a web page being displayed by the browser.
In Firefox, much of the browser chrome is implemented in Javascript. The code that implements the user interface needs to be able to do things that normal web pages cannot do (such as reading and writing the local filesystem). Therefore, this code runs with different privileges than Javascript that runs as part of a web page. The terms "privileged code", "chrome privileged code", "Gecko privileged code" are all different ways to describe the same thing: Javascript code that is built in to the browser and has access to capabilities that web pages do not have.
Prior to the Firefox Quantum (version 57) release, Firefox extensions were allowed to run privileged Javascript code. As you might imagine, this was fraught with problems for security, performance, and stability, among other things. With WebExtensions, extensions now run with the same level of privilege as regular web content (ie, they do not execute with elevated privileges). Some browser features are exposed to extensions through extension APIs.
So, if you're interested in what you can do from an extension, any documents on MDN that reference privileged code are effectively irrelevant. There are not currently any APIs available to WebExtensions that would allow you to directly access the filesystem, but there is an open bug to add this capability. (That bug has existed for quite some time, but I suspect there will be progress relatively soon...)
This may be a naive question, but I was planning to create a new channel just before the existing channel timed out, to make sure that my client was never without a channel. I thought I was being pretty clever until I read this caveat in the Google Channel API docs:
One Client Per Channel Per Page
A client can only connect to one channel per page. If an application needs to send multiple types of data to a client, aggregate it on the server side and send it to appropriate handlers in the client’s socket.onmessage callback.
I'm new to this, but it's not obvious to me how the channel uniquely identifies the page to which it is connected. Is there something in the javascript channel.open() call that identifies the page it is being called in?
Thanks.
The channel javascript creates a hidden iframe with a given id (in production). The communication takes place within that iframe. The javascript code will always access that iframe (and hence that channel).
When you close the socket and channel, the hidden iframe is destroyed. Afterwards you can create a new channel for the page.
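As a rough sketch of both points (the token plumbing and handler names below are placeholders, not part of the Channel API), you close the old socket, open a replacement channel for the page, and dispatch the aggregated message types inside socket.onmessage:
// Assumes the Channel API client script (/_ah/channel/jsapi) is already loaded
// and that the server issues a fresh token for this client/page.
function openChannel(token) {
  var channel = new goog.appengine.Channel(token);
  var socket = channel.open();

  socket.onmessage = function(message) {
    // The server aggregates different data types into one channel;
    // dispatch them to the appropriate handlers here.
    var data = JSON.parse(message.data);
    if (data.type === 'chat') {
      handleChat(data);        // hypothetical handler
    } else if (data.type === 'status') {
      handleStatus(data);      // hypothetical handler
    }
  };

  socket.onclose = function() {
    // The old channel (and its hidden iframe) is gone at this point;
    // fetch a new token from the server and open a new channel for the page.
    requestNewToken(function(newToken) {   // hypothetical helper
      openChannel(newToken);
    });
  };

  return socket;
}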
I have already built a pass web service. Next, I need to send a push notification when my pass is updated. The Updating a Pass section of the PassKit Programming Guide does not cover this in detail. Can you explain it in detail?
The requirements and protocol for push notifications are documented in the Push Notification Programming Guide.
There are a few special considerations for Passbook:
All Pass push requests must be sent to the production APNS server (gateway.push.apple.com on port 2195)
You must use your Pass Type ID certificate and key to authenticate with the APNS server (do not use App APNS certificates)
There is no need to handle device registrations; you simply use the pushToken that your web service received when the device registered the pass
The payload should be empty - e.g. {"aps":""}
alert, badge, sound and custom property keys are all ignored - the push's only purpose is to notify Passbook that your web service has a fresh pass. The notification text is determined by the changeMessage key in pass.json and the differences between the old and the new .pkpass bundles
The changeMessage string should contain %@ if you wish the content of the value key to be displayed. Change messages may have static text in addition to the %@ placeholder, such as: "changeMessage":"New updates: %@". If no %@ is provided, a generic message based on the kind of pass is displayed, e.g. "Store card changed" (see the pass.json fragment after this list).
As of iOS9, if you modify more than one field at a time, only one generic message will be displayed on the lock screen.
You still need to regularly query the feedback service and purge expired/invalid pushTokens from your database
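For illustration, a fragment of pass.json with a changeMessage on one field might look like this (the field itself is just an example):
{
  "formatVersion": 1,
  "storeCard": {
    "primaryFields": [
      {
        "key": "balance",
        "label": "Balance",
        "value": 25.00,
        "currencyCode": "USD",
        "changeMessage": "Your balance changed to %@"
      }
    ]
  }
}
When the value of that field differs between the old and the new .pkpass bundle, Passbook shows the changeMessage with %@ replaced by the new value.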
Note that push updates can be implemented independently of your web service. Apple provides some sample Objective-C code in Listing 5-1 here.